Sunday, July 22, 2012

CACE: computer-aided code evaluation of CDK code

Computer-aided code evaluation (CACE) is an important part of scientific code development projects. There are many ways to do peer review of source code (Maven, Gerrit, ...), and I won't go into details here. Instead, I focus on CDK's Nightly build system.

Nightly reports
Making sure the source code compiles is one of the most basic requirements. Given that it compiles, we get a full report with a lot of information:


The top of the report contains useful links: a precompiled binary Java ARchive (jar file), the latest git commit, the source code, and the JavaDoc. Also very useful is the keyword list, which acts as an index to CDK functionality, using @cdk.keyword hashtags in the class JavaDoc.
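
To illustrate, a class JavaDoc might carry such keyword tags as follows. This is only a sketch: the class name and the keyword values are made up for illustration, and only the @cdk.keyword tag itself is the convention described above.

/**
 * Tool that computes a simple property of a molecule.
 * (Hypothetical class, for illustration only.)
 *
 * @cdk.keyword descriptor
 * @cdk.keyword molecular property
 */
public class MolecularPropertyTool {
    // implementation omitted in this sketch
}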


Unit testing
Below the horizontal bar are the code evaluation reports. First are the results for the unit tests (for which JUnit is used):


In the middle we get the JUnit test results for each module separately, and behind the 'Stable' link there is a summary, giving a quick glance at all modules:


Full reports are again available for the individual modules, but we also get statistics per module on the number of unit tests run, the number of failures and errors, and the number of methods not tested.
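
As a rough sketch of what such a module test looks like (the test class below is hypothetical and not an actual CDK test; the CDK classes and methods it touches are the standard data classes), a JUnit test could read:

import org.junit.Assert;
import org.junit.Test;
import org.openscience.cdk.Atom;
import org.openscience.cdk.AtomContainer;
import org.openscience.cdk.interfaces.IAtomContainer;

public class AtomContainerSketchTest {

    @Test
    public void testAtomCount() {
        // Build a minimal atom container with a single carbon atom
        IAtomContainer container = new AtomContainer();
        container.addAtom(new Atom("C"));
        // The module report counts tests like this one, plus failures and errors
        Assert.assertEquals(1, container.getAtomCount());
    }
}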

JavaDoc quality
For the JavaDoc we also run evaluations. For this we use OpenJavaDocCheck, for which, as I learned later, alternative solutions are available too. The front page section of Nightly looks like:


The summary is quite like that of the unit testing, and a single report for a module looks like this:


Many of these tests are general JavaDoc checks, but we also have CDK-specific tests, such as those shown below (along with the summary at the bottom of the page):


There are a lot of small code fixes here for those who would like to contribute to the CDK project and learn some git skills along the way.
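
A sketch of what such a small fix can look like (the class and method are invented for illustration): adding the missing @param and @return descriptions that the JavaDoc checks complain about.

import org.openscience.cdk.interfaces.IAtom;
import org.openscience.cdk.interfaces.IAtomContainer;

// Hypothetical helper, for illustration only
public class HeavyAtomCounter {

    /**
     * Counts the heavy (non-hydrogen) atoms in an atom container.
     *
     * @param  container the atom container to inspect
     * @return the number of non-hydrogen atoms
     */
    public static int countHeavyAtoms(IAtomContainer container) {
        int count = 0;
        for (IAtom atom : container.atoms()) {
            if (!"H".equals(atom.getSymbol())) count++;
        }
        return count;
    }
}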

Code evaluation
We use PMD for general code evaluation, which is most useful in computer-aided code evaluation: it often highlights the more interesting bits of code and, importantly, those bits where errors may occur. Another set of tests looks at code readability, which is very important too, as it allows your peers to review your code more efficiently. The Nightly front page for PMD looks very much the same as for the other parts:


For example, we get warnings like these:


Here we get reports about various things, for example, about short variable names like 'st'. Really short variable names often make the code harder to read, because they are less informative: is 'ac' referring to the old or the new atom container?
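
A sketch of the kind of rename such a warning nudges us towards (the surrounding class, method, and the descriptive names are invented for illustration):

import java.util.StringTokenizer;
import org.openscience.cdk.AtomContainer;
import org.openscience.cdk.interfaces.IAtomContainer;

// Hypothetical example, for illustration only
public class VariableNamingSketch {

    public static void parse(String line) {
        // Hard to review: what do 'ac' and 'st' stand for?
        IAtomContainer ac = new AtomContainer();
        StringTokenizer st = new StringTokenizer(line);

        // Easier to review: the names carry their meaning
        IAtomContainer newContainer = new AtomContainer();
        StringTokenizer lineTokenizer = new StringTokenizer(line);
    }
}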

We also get a warning about incorrect use of the StringBuffer.append() method, indicating where we can improve the code (making it faster in this case). And we see a CDK-specific test here (sources are here), warning us about a bad practice: method signatures should take the data model interfaces rather than the implementation classes.
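
A sketch of what fixing both warnings can look like (the class and methods are hypothetical; only the append() chaining and the interface-versus-implementation distinction come from the report):

import org.openscience.cdk.AtomContainer;
import org.openscience.cdk.interfaces.IAtomContainer;

// Hypothetical example, for illustration only
public class PmdWarningSketch {

    // Flagged: the parameter is the implementation class,
    // and append() receives a freshly concatenated String
    public static String describe(AtomContainer container) {
        StringBuffer buffer = new StringBuffer();
        buffer.append("Atoms: " + container.getAtomCount());
        return buffer.toString();
    }

    // Improved: accept the data model interface and chain append() calls
    public static String describeBetter(IAtomContainer container) {
        StringBuffer buffer = new StringBuffer();
        buffer.append("Atoms: ").append(container.getAtomCount());
        return buffer.toString();
    }
}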

Conclusion
As will be clear, the Nightly reports provide a wealth of information to help with code review. I hope this post has popularized this useful resource a bit more, and I invite you to visit it frequently. For example, it is a useful tool to validate your own code before you send it in for review. For the latter it is useful to know that you do not have to install a full Nightly to do this. Mind you, for larger patch-writing efforts we can set up a Nightly crontab on a specific branch, as we have done frequently before.

But you can also run these code evaluations from the command line with:

$ ant clean dist-all test-dist-all jarTestdata
$ ant -Dmodule=io qa-module

This will run the JavaDoc, JUnit, and PMD tests, and store the results in the reports/ subfolder.

2 comments:

  1. Nice post. It looks like standard/qsarmolecule needs a bit of a tidy-up. Is it possible to get a summary of the PMD reports that avoids duplicates? The UniversalIsomorphismTester takes up a fifth of the errors, mainly with its g1/g2 variables. I agree on the variable names, but if they were changed, would this mess up git blame?

    We've got a Jenkins instance for doing some automated builds but don't really use it at the moment as most projects are a single developer. What I did like though is the test history graph/user test ranking (e.g. https://hudson.ch.cam.ac.uk/job/Jumbo6/). With the CDK nightly is it a custom script to summarise the test results?

    Replies
    1. John, yes, many modules are up for tidying up indeed. It's typically quite relaxing and rewarding work.

      There are three sections for PMD warnings, but the first includes everything. The other two are mere subsets.

      In the case of code improvements I do not care so much about git blame. But it would be very useful to have the variable name changes in a single commit.

      Jenkins is nice indeed, and it is used for Bioclipse too. Nightly is a custom script; some of the reports are custom indeed. Also note that we have custom tests for various things. When we move over to another system, these should be ported or replaced with something equivalent.
