Meeting Minutes for October 21st, 2012


The course is approaching the half-way point; the midterm review and evaluations will be done soon.

Questions / Answers / Tips:

Q (Tina): Any tips to find/evaluate/confirm performance issues?

A: If you suspect a performance regression, you usually want to confirm that suspicion: time the activity a few times, and then time the same activity on a revision from before the suspected regression hit.
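As a rough illustration of that before/after comparison (a hypothetical helper, not Review Board tooling), the idea can be as simple as averaging a few wall-clock measurements of the activity on each revision:

```javascript
// Hypothetical timing helper: run a function several times and report
// the average wall-clock time in milliseconds.
function timeIt(fn, runs) {
    runs = runs || 5;
    var samples = [];

    for (var i = 0; i < runs; i++) {
        var start = Date.now();
        fn();
        samples.push(Date.now() - start);
    }

    return samples.reduce(function(a, b) { return a + b; }, 0) / runs;
}

// Example: time a stand-in for the activity being measured. You'd run
// the same measurement on the suspected-bad and known-good revisions
// and compare the averages.
var avgMs = timeIt(function() {
    for (var i = 0; i < 1000000; i++) {}
});
console.log('average: ' + avgMs + 'ms over 5 runs');
```

A handful of runs smooths out caching and scheduling noise; a large, consistent gap between the two revisions is what confirms the regression.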

Q (Tina): Specifically on page load, retrieving large amounts of data, are there any standards for Review Board that we should be testing for?

A: Yes. Install django-debug-toolbar, and then I’ll give you configuration to put in your settings file. It will help you see page load times, SQL queries, etc. You can also turn on log profiling in your Log Settings, and then pass ?profiling=1 to the URL; it will write a profiling output file.

Q (Tina): Are there any past records of benchmarks or performance for stable revisions of Review Board that we can use as a reference?

A: No, we don’t have real benchmarks. A lot of work has gone into RB lately to improve performance in general, though, so right now it’s about as fast as it’s ever been. Additionally, the WebKit developer tools and Firebug have JavaScript profilers for client-side benchmarking.

Q (Michelle): I was wondering how the midterm evaluation is going to work if we haven’t sent many code reviews yet (assuming that’s how you monitor our progress).

A: That’s a big part, yes. Your interactions with us over IRC or the mailing list are also part of the (anecdotal) evidence we might use. But yes, patches are really helpful. A WIP patch of whatever you’ve currently got is a good idea. I stressed that last week, and I’ll stress it again — putting up WIP patches is a *great idea*.

Q (Sampson): Aamir, Michelle, and I had our projects changed last week, and we’ve been pretty much doing research up until now. I know I’ve not written much code yet, just looking at docs and at the RB codebase. How will that be reflected in our evaluation?

A: The project changes will be factored in. Keep in mind that the mid-term eval doesn’t really affect the grade; it’s just a way for you to get a sense of how you’re doing. You still have some time before we start evaluating, and if you haven’t yet reached a point where you can produce a patch, you should be pushing hard to get through that roadblock.

Q (Karl): What’s the RB way to give users feedback if a non-critical problem occurred with a form submission? My reason for asking is, if someone creates a repository and wants to associate an SSH key with it, I can’t think of a reason to make the form invalid if everything _but_ the association works. It doesn’t look like RB uses Django’s messaging middleware (it’s installed though) — is there something else I should use?

A: Produce an error — it’s still a failure. Configuring the repository should involve a successful repository creation, hosting service authentication, repository validation check, and SSH key association.

Q (Karl): Right, but should the Repository not get saved?

A: It should not get saved, correct.

Q (Jesus): Is datastore.js using the Prototype JS framework to deal with method binding?

A: No, we use jQuery and not Prototype.

Q (Jesus): Well, what is the code here doing (specifically, the ‘self’ object)?

A: ‘ready’ is just a function we define on that object. The concept we use is that we don’t operate on an object until it’s been loaded from the server. ‘ready’ handles that: it takes a callback that’s called once we know we have the data, and if we already have the data, it calls the callback directly. The ‘self’ object is a JavaScriptism. In JavaScript, ‘this’ doesn’t always point to what you’d expect. When we don’t have the ability to say “preserve ‘this’” when calling a function, we keep our own reference, and defining ‘self’ to be ‘this’ is a common way to do that.
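The idiom described above can be sketched like this (a simplified, hypothetical example, not the actual datastore.js code; `fetchFromServer` is a stand-in for the AJAX call):

```javascript
// Hypothetical async load; stands in for the AJAX call to the server.
function fetchFromServer(callback) {
    setTimeout(function() {
        callback({ id: 42 });
    }, 0);
}

function Resource() {
    this.loaded = false;
    this.data = null;
}

// ready(): don't operate on the object until it's loaded. If the data
// is already here, call back immediately; otherwise load it first.
Resource.prototype.ready = function(callback) {
    if (this.loaded) {
        callback(this.data);
        return;
    }

    // Inside fetchFromServer's callback, 'this' would no longer point
    // at this Resource, so we keep our own reference to it.
    var self = this;

    fetchFromServer(function(data) {
        self.loaded = true;
        self.data = data;
        callback(self.data);
    });
};
```

Modern code would reach for `Function.prototype.bind` or an arrow function instead, but `var self = this;` was the common way to preserve the object reference at the time.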

Q (John): Mike suggested I work on ‘review request’ as my next command. Could anyone expound on what exactly it does and what type of args it takes?

A: Steven is already working on ‘review request’ and you should instead look at ‘rb patch’.

Q (Allyshia): Were any automated tests written for the existing ReviewBot pep8 tool? i.e. to ensure the static analysis tool is working properly? Or was it manual testing by uploading reviews and making sure inline comments appear?

A: All testing has been manual thus far. Basically, don’t test the pep8 script. Just test that your tool runs pep8 properly.

