Meeting Minutes for Nov 4th, 2012


(ChipX86) I hope everyone saw David’s e-mail about Testing Done. This is very important. Treat that field like it’s part of your code: something to spend time on. We won’t commit things that we’re not sure have been tested well, and we actually go through every step you outline. Don’t write what you think you’d have done; we get patches from other contributors that say they’ve tested, when it’s clear the code has never worked. Keep that field up-to-date as you make changes to your code, and go through the testing each time. Unit tests will help us validate that your code is good, too.

(ChipX86) The review UI code many of you have had to hand-apply is now in master, so you should be able to just update master now and merge it into your branch. You’ll also need to upgrade your Djblets, since some non-backwards-compatible changes had to be made.

(Smacleod) It really helps me to review revisions to your changes when you provide a good description of what has changed. As you update your diffs, use the box that lets you specify what has been updated to explain your changes in detail. Keep the description as something we can use for the final commit.

Questions / Answers / Tips:

Q (Johns187): For rb patch, I imagine I’ll need to test against other SCM clients. For example, SVN has the notion of a ‘basedir’ that helps me find the proper -pX argument for patch. Can someone explain a good way of testing a bunch of different clients (SVN, Perforce, Mercurial, etc.)? And do you have any tips on obtaining the correct -pX arg for rb patch?

A: Best way is to create repositories and try them out. SVN and Mercurial are easy to get going. Perforce too, actually. You can get a free download. In Perforce’s case, you’ll just be using plain ol’ patch though. ChipX86 said he’ll talk to a friend about getting the right -pX argument. You may have to accept the arg via the command options. Focus on Git, SVN and Mercurial. Perforce doesn’t work like the others.
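One heuristic for guessing the -pX argument, sketched below (this is an illustrative approach, not the project’s actual implementation): strip leading components from a path that appears in the diff until the remainder exists in the working tree.

```python
import os


def guess_p_level(diff_path, max_strip=5):
    """Guess the -pX argument for `patch` by stripping leading
    components from a path found in the diff until what's left
    exists on disk. Returns the strip count, or None if nothing
    matches within max_strip levels.
    """
    parts = diff_path.split('/')

    for p in range(min(max_strip, len(parts))):
        candidate = os.path.join(*parts[p:])

        if os.path.exists(candidate):
            return p

    return None
```

For a Git-style diff path like `a/src/main.c`, this would return 1 when `src/main.c` exists in the current directory, matching `patch -p1`.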

Q (Allyshia): When using a 3rd party tool as a part of a plugin (in my case for ReviewBot), like JSLint or any other, how do we proceed for keeping those libraries up to date? What do we currently do with the PEP8 static analysis tool?

A: pep8 is itself a Python package, so it can be managed using the regular Python packaging tools. As for JSLint, we’re just going to have to update it manually and release new versions of the tool.

Side note: this brought up a legal issue with JSLint if it were packaged with RB. RB’s activity may qualify as evil, and thus conflict with JSLint’s “MIT plus no evil” clause. As a result, we’re going to break JSLint support off into a separate Python package, which will be installable separately.

Q (Yangtina): 1. What’s the general practice for marking issues opened on a review request as resolved/fixed? Can we just use them as a checklist and mark them as resolved as we code, or do we have to wait until we submit a new revision that actually addresses those problems?

A: Definitely use them as your checklist, super helpful.

Q (Yangtina): 2. For testing, is there a general rule of thumb for what kind of changes need test cases to be implemented, and what kind just need manual testing?

A: If it’s something that can be easily unit tested, you should do it. Things that manipulate data, such as API calls, are easily testable. Right now that’s all Python. We don’t have JavaScript unit tests in just yet, but I’ll be doing that pretty soon here. Some things are harder to test, like UI. I don’t believe we’re on the Selenium boat yet. If something looks like it’s ripe for automated testing, you’ll hear about it in a review.
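As a sketch of the kind of data-manipulating code that is easy to unit test (the helper here is hypothetical, not a real Review Board function):

```python
import unittest


def normalize_bug_ids(raw):
    """Split a comma- or space-separated string of bug IDs into a
    sorted, de-duplicated list. Hypothetical helper used only to
    illustrate an easily testable unit.
    """
    ids = [part.strip() for part in raw.replace(',', ' ').split()]
    return sorted(set(ids))


class NormalizeBugIdsTests(unittest.TestCase):
    def test_mixed_separators(self):
        # Duplicates collapse; results sort as strings.
        self.assertEqual(normalize_bug_ids('12, 7 12'), ['12', '7'])

    def test_empty(self):
        self.assertEqual(normalize_bug_ids(''), [])
```

Pure functions like this need no server, browser, or fixtures, which is what makes them “ripe for automated testing” compared to UI code.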

Q (Tahnok): How should password storage work? I will need to ask the user to input their buildbot username/password (unless they have ssh passwordless auth set up), but I want to make sure I’m storing that information properly.

A: We want to keep as much configuration as possible in the Review Bot admin panel on the Review Board side. An option is to look into AES encryption using pycrypto (which is already a dependency). It’s two-way encryption. You can tie it to settings.SECRET_KEY. We should encourage ssh-based access. The issue here is that the creds must be sent through the queue to the worker, which doesn’t have the secret key. The queue *should* be secure. I think we’re going to have to have some sort of on-worker-machine configuration for this, unless we just store and send user/pass in plain text. This is a tricky problem and will be discussed outside the meeting.
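A minimal sketch of the “tie it to settings.SECRET_KEY” idea, assuming SHA-256 key derivation (the derivation scheme and SECRET_KEY value are assumptions; the actual AES step via pycrypto is only indicated in a comment):

```python
import hashlib

# Hypothetical stand-in for Django's settings.SECRET_KEY.
SECRET_KEY = 'example-secret-key'


def derive_aes_key(secret_key):
    """Derive a fixed-length 32-byte key from the Django SECRET_KEY,
    suitable for AES-256. The encryption itself would then use
    pycrypto, along the lines of:

        cipher = AES.new(key, AES.MODE_CFB, iv)
        ciphertext = iv + cipher.encrypt(password)

    with a fresh random IV per credential, so the stored value can
    be decrypted later (two-way, unlike a password hash).
    """
    return hashlib.sha256(secret_key.encode('utf-8')).digest()
```

Note that this only helps on the Review Board side; as discussed above, the worker doesn’t have SECRET_KEY, which is why on-worker configuration may still be needed.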

Q (Tahnok): We’re going to be including buildbot as a dependency for reviewbot?

A: No, it should just talk to buildbot, and so Review Bot has two pieces as well. The extension shouldn’t have any knowledge of buildbot. buildbot try is a command-line utility like post-review, so we’ll have to have it as a dependency; the worker can handle the buildbot dependency.

Q (slchen): I realized a potential problem with truncating .rst and .md files before passing them to the docutils / Markdown APIs: truncating after the first 20 (or 5) lines can cause certain structures (such as tables in RST) to become ill-formatted / syntactically invalid when they are passed off to those APIs.
The docutils API handles it as follows: 1) output the ill-formatted table in a <pre> block, 2) append an error message immediately after this <pre> block. In the sample .rst and .md files this hasn’t caused a problem yet, because the cropping area of the thumbnail is only about ~10 lines, so you never see the error messages farther down in the HTML.
It’s a sticky situation, because neither library has an API for parsing these text-ish files one “structure” at a time, so doing it dynamically (i.e. truncating the file after the last “structure” past a certain line number) would require adding to the docutils / Markdown APIs (which I assume is not worth our dev time investment).
My proposed solution is as follows: set a higher upper bound for the truncation (truncate after 200~300 lines), which still leaves a constant processing time for converting the text-ish files to HTML through the APIs, and we can reasonably assume that in 99% of use cases people won’t upload a .rst file with a table that spans 300 lines.
But it’s a design decision: can we do better?

A: It’s fine, though I am curious what benchmarks look like for rendering such files. You can use our log_time stuff around the rendering code to measure (examples in …). A nice improvement would be to stick the rendered results in memcached, so that way it’s only rendered once, optimistically. Also have a look at cache_memoize.
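A sketch of the truncate-then-cache idea discussed above. The renderer and the in-process cache are stand-ins (djblets’ real helper is cache_memoize backed by memcached; its exact signature isn’t shown in these minutes):

```python
# Toy in-process dict standing in for memcached.
_cache = {}


def cache_memoize_sketch(key, lookup):
    """Return the cached value for `key`, computing and storing it
    via `lookup()` on a miss. Mirrors the shape of a cache-memoize
    helper, but is only an illustration.
    """
    if key not in _cache:
        _cache[key] = lookup()

    return _cache[key]


def render_thumbnail(text, max_lines=300):
    """Truncate to a generous upper bound before rendering, so the
    processing time stays constant even for huge files.
    """
    truncated = '\n'.join(text.splitlines()[:max_lines])

    # A real implementation would hand `truncated` off to docutils
    # or the Markdown renderer; here we just wrap it.
    return '<pre>%s</pre>' % truncated
```

With a cache key derived from the file’s identity and revision, the expensive render runs once, and every later thumbnail request is a cache hit.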

Side note on extensions and hooks: the hooks have some good documentation. Check out: …

Other Notes:

  • ChipX86 will be away or less active in the next couple of days, and will most likely be unavailable for the next two meetings. He’ll be active during the usual times, except that from the 10th through the 12th he’ll be out during the day, and from the 15th through the 19th he is on vacation and will likely only poke his head in for short times in the evenings.
