The course is approaching the halfway point; the midterm review and evaluations will happen soon.
Questions / Answers / Tips:
Q (Tina): Any tips to find/evaluate/confirm performance issues?
A: If you suspect a performance regression, usually you want to confirm that suspicion — time the activity several times on the current code, then time the same activity on a revision from before the suspected regression hit, and compare.
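The before/after timing approach above can be sketched in a few lines. This is a minimal sketch, not Review Board tooling; `suspect_operation` is a hypothetical stand-in for whatever activity you think regressed, and you would run the script once on each revision.

```python
# Hedged sketch: confirm a suspected regression by timing the same
# operation several times on each revision and comparing medians.
# suspect_operation is a hypothetical stand-in for the real activity.
import timeit

def median_time(func, runs=5):
    """Time func several times and return the median, to dampen noise."""
    times = sorted(timeit.timeit(func, number=1) for _ in range(runs))
    return times[len(times) // 2]

def suspect_operation():
    # Placeholder workload; substitute the page load or query you suspect.
    sum(i * i for i in range(100000))

elapsed = median_time(suspect_operation)
print('median time: %.4fs' % elapsed)
```

Run this once on the current code and once on the older revision; a consistent gap in the medians is much stronger evidence than a single timing.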
Q (Tina): Specifically on page load and retrieving large amounts of data, are there any standards for Review Board that we should be testing against?
A: Yes. Install django-debug-toolbar, and then I’ll give you configuration to put in your settings_local.py. It will help you with seeing page load times, SQL queries, etc. You can also turn on log profiling in your Log Settings, and then pass ?profiling=1 to the URL. It will write a reviewboard.prof file.
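For reference, a typical django-debug-toolbar setup looks something like the following. This is a hedged sketch using the toolbar's standard setting names, not the exact configuration Christian hands out, and it assumes settings_local.py is read in a context where INSTALLED_APPS and MIDDLEWARE_CLASSES are already defined.

```python
# settings_local.py -- hedged sketch of a typical django-debug-toolbar
# setup; the exact snippet provided for Review Board may differ.
DEBUG = True
INTERNAL_IPS = ('127.0.0.1',)  # toolbar only shows for these client IPs

# Append the toolbar to the values the main settings file defines.
INSTALLED_APPS += ('debug_toolbar',)
MIDDLEWARE_CLASSES += (
    'debug_toolbar.middleware.DebugToolbarMiddleware',
)
```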
Q (Tina): Any past records of the benchmarks/performance stable revisions of reviewboard that we can use as reference?
Q (Michelle): I was wondering how the midterm evaluation is going to work if we haven’t sent many code reviews yet (assuming that’s how you monitor our progress).
A: That’s a big part, yes. Your interactions with us over IRC or the mailing list are also part of the (anecdotal) evidence we might use. But yes, patches are really helpful. A WIP patch of whatever you’ve currently got is a good idea. I stressed that last week, and I’ll stress it again — putting up WIP patches is a *great idea*.
Q (Sampson): Aamir, Michelle, and I had our projects changed last week, and we’ve been pretty much doing research up until now. I know I haven’t written much code yet — just reading docs and the RB codebase. How will that be reflected in our evaluation?
A: The project changes will be factored in. Keep in mind that the mid-term eval doesn’t really affect the grade; it’s just a way for you to get a sense of how you’re doing. You still have some time before we start evaluating, and if you haven’t yet reached a point where you can produce a patch, you should be focused on getting through that roadblock.
Q (Karl): What’s the RB way to give users feedback if a non-critical problem occurred with a form submission? My reason for asking is, if someone creates a repository and wants to associate an SSH key with it, I can’t think of a reason to make the form invalid if everything _but_ the association works. It doesn’t look like RB uses Django’s messaging middleware (it’s installed though) — is there something else I should use?
A: Produce an error — it’s still a failure. Configuring the repository should involve a successful repository creation, hosting service authentication, repository validation check, and SSH key association.
Q (Karl): Right, but should the Repository not get saved?
A: It should not get saved, correct.
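The all-or-nothing flow described above can be sketched in plain Python: run every step, collect failures, and only save when everything passed. This is an illustrative sketch, not Review Board's actual form code; the step functions and `create_repository` helper are hypothetical.

```python
# Hedged sketch of the all-or-nothing repository creation flow:
# every validation step must succeed before the Repository is saved.
# The step names and functions are hypothetical stand-ins.

def create_repository(form_data, steps, save):
    """Run each (name, step) pair; save only if all steps succeed."""
    errors = []
    for name, step in steps:
        try:
            step(form_data)
        except Exception as e:
            # Collect the failure; the form is re-rendered with errors
            # and nothing is saved.
            errors.append('%s: %s' % (name, e))
    if errors:
        return (False, errors)
    save(form_data)
    return (True, [])

# Example: an SSH key association failure blocks the save.
saved = []
def ok(data):
    pass
def ssh_fail(data):
    raise RuntimeError('key rejected')

steps = [('create', ok), ('auth', ok), ('validate', ok), ('ssh key', ssh_fail)]
success, errors = create_repository({'path': '/repo'}, steps, saved.append)
# success is False, errors mentions the SSH step, and saved stays empty.
```

In Django terms, the same effect comes from raising a ValidationError during form cleaning, which marks the form invalid before any save happens.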
Q (Jesus): Is datastore.js using the Prototype JS framework to deal with method binding?
A: No, we use jQuery and not Prototype.
Q (Jesus): Well, what is the code here doing (specifically, the ‘self’ object)?
Q (John): Mike suggested I work on ‘review request’ as my next command. Could anyone expound on what exactly it does and what type of args it takes?
A: Steven is already working on ‘review request’ and you should instead look at ‘rb patch’.
Q (Allyshia): Were any automated tests written for the existing ReviewBot pep8 tool? i.e. to ensure the static analysis tool is working properly? Or was it manual testing by uploading reviews and making sure inline comments appear?
A: All testing has been manual thus far. Basically, don’t test the pep8 script. Just test that your tool runs pep8 properly.