Friday, March 4, 2011

Software Quality Verifier Framework


I completed my MSc thesis a couple of weeks back. My project was the development of a Software Quality Verification Framework. Given the value of detecting bugs early in the software life cycle, the framework addresses both white box and black box testing.

White box testing
White box testing is implemented in two phases.
1. Commit time analysis - Code is automatically analyzed when a developer tries to commit new code or code changes to the repository. Quality verification is performed by running tools against a predefined set of rules. The commit is rejected if the code does not conform to the rules, and the developer is shown the reasons for rejection in the svn client interface. This functionality is implemented using svn hooks; a rough sketch of such a hook is shown after this list.

2. Offline analysis - A more thorough analysis is performed offline, for example in the context of a nightly build. The results are displayed on a dashboard that shows various analytics and provides drill-downs from violations to code level. Automatic emails can be configured to inform various stakeholders of the overall health of the system and developer technical debt. This is implemented using a tool named Sonar (http://www.sonarsource.org/).
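The thesis code and its exact rule set aren't reproduced here, but as a minimal illustration, a pre-commit hook along the following lines can perform the commit-time check. The rule used (rejecting Java files that contain debug print statements) is a placeholder, not one of the framework's actual rules; the svnlook invocations are standard Subversion hook machinery.

```python
#!/usr/bin/env python
"""Hypothetical svn pre-commit hook sketch: reject commits that violate
a predefined rule. Anything written to stderr is relayed back to the
developer's svn client when the hook exits non-zero."""
import subprocess
import sys

def svnlook(subcmd, repos, txn, *args):
    # 'svnlook ... -t TXN' inspects the in-flight transaction, not a revision.
    cmd = ['svnlook', subcmd, '-t', txn, repos] + list(args)
    return subprocess.check_output(cmd).decode('utf-8', 'replace')

def main():
    repos, txn = sys.argv[1], sys.argv[2]
    violations = []
    for line in svnlook('changed', repos, txn).splitlines():
        action, path = line[0], line[4:]
        if action == 'D' or not path.endswith('.java'):
            continue  # ignore deletions and non-Java files
        content = svnlook('cat', repos, txn, path)
        for lineno, text in enumerate(content.splitlines(), start=1):
            # Placeholder rule: ban debug prints in committed code.
            if 'System.out.println' in text:
                violations.append('%s:%d: debug print statement' % (path, lineno))
    if violations:
        sys.stderr.write('Commit rejected by quality verifier:\n')
        sys.stderr.write('\n'.join(violations) + '\n')
        return 1
    return 0

if __name__ == '__main__':
    sys.exit(main())
```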

Black box testing
It was identified during the research that quite a number of tools have recently become available for evaluating a software product while it runs, without looking at the source code from which it was built. Different tools evaluate a product on different aspects such as memory usage (corruptions, leaks), IO usage, operating system calls, performance, and access right violations. However, there is no tool that combines the results generated by these individual tools to automatically produce a product health profile, the way Sonar does with white box testing tools. Using the individual tools manually to perform tests has two main problems:
1. Using the tools requires expertise and is laborious.
2. There is no way to record or automate a troubleshooting (or evaluation) procedure once it has been identified.

I thought about different solutions to this. Noting that almost all of these tools generate textual output in the form of a log file, I decided to implement a way to automatically extract the information of interest in a given context from those log files and to generate reports for consumption by various parties such as project managers, developers and technical leads. The outcome was a simple scripting language based on mind maps. Developers can write scripts in this language to extract information from various log files, derive conclusions from it and generate reports.
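The scripting language itself is a topic for a later post, but conceptually each script does something like the following Python sketch: scan a tool's log for patterns of interest, derive a conclusion, and emit a short report. The Valgrind-style log line and the pass/fail threshold here are assumptions chosen purely for illustration.

```python
"""Illustrative sketch of what a log-analysis script does conceptually:
extract items of interest from a tool's log, derive a conclusion,
and generate a report. The log format and threshold are invented."""
import re

# Matches Valgrind-style leak summary lines, e.g. "definitely lost: 1,024 bytes"
LEAK_PATTERN = re.compile(r'definitely lost: ([\d,]+) bytes')

def extract_leaks(log_text):
    """Pull leaked-byte counts out of a memory checker's log."""
    return [int(m.group(1).replace(',', '')) for m in LEAK_PATTERN.finditer(log_text)]

def derive_conclusion(leaked_bytes, threshold=1024):
    """Turn raw extracted numbers into a verdict (hypothetical threshold)."""
    total = sum(leaked_bytes)
    verdict = 'FAIL' if total > threshold else 'PASS'
    return total, verdict

def generate_report(log_path):
    with open(log_path) as f:
        leaks = extract_leaks(f.read())
    total, verdict = derive_conclusion(leaks)
    return 'Memory health: %s (%d bytes definitely lost in %d leak records)' % (
        verdict, total, len(leaks))

if __name__ == '__main__':
    print(generate_report('valgrind.log'))
```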

Following is the architecture of the framework. I will blog more about the framework later.
