Back in early 2011, when we were deciding where and how to split GUIdancer into two tools, we thought hard about what we wanted to achieve with our Open Source activities. We firmly decided that Jubula users should have everything they need to write, execute and analyze tests. We wanted our Open Source contribution to be a good, well-rounded tool, not something missing necessary features. Nevertheless, we did want to reserve some attractive features for GUIdancer, the idea being that for a small amount of money you can add some extra capabilities to your testing project and process.
As it turns out, many of the features we kept closed source as part of the commercial tool are things that tend to become nice-to-haves once you've got more than just a few tests up and running. Once tests start getting bigger, aspects like Test Style (similar to Checkstyle) and Mylyn integration become really interesting. And if you're using Jubula in CI processes and your tests are gaining in importance, then it's nice to have reporting capabilities (you know, for the managers ;) ) and to get some information on code coverage. If you've already used Jubula successfully in one project, then maybe next time you'll think about starting to test even earlier with the Model-Based approach.
But these are things that come later, and newer users often ask us: what can I do with all of this? Well, in a short blog series, we're going to describe how the various aspects of GUIdancer are designed to be used, and how they help us with our work in the Jubula project and in customer projects. This first entry looks at how we use BIRT reports in our daily, weekly and monthly work.
GUIdancer and BIRT – the basics
Jubula contains various options for analyzing single test runs: a test result report appears dynamically during test execution, and individual test reports can be reopened later, complete with information on error types and screenshots. This is indispensable in a test tool; I need to be able to see what went wrong so it can be fixed. However, what it doesn't give me is any kind of view over time. Questions such as:
How many tests ran successfully / failed / didn’t run this week / month?
How many tests have been added over the last period of time?
How has the code coverage developed since adding the new tests?
can’t be answered with single test runs; the results need to be accumulated and displayed in an easily readable manner. This is the aim of the BIRT integration in GUIdancer.
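To make "accumulated" concrete, here is a minimal sketch in Java of the kind of aggregation such a report performs: grouping individual run results into weekly pass/fail counts. The record type, field names and data are purely illustrative assumptions, not GUIdancer's actual results schema or API.

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TestRunSummary {

    // Hypothetical, simplified record of one test run: when it ran and whether it passed.
    record TestRun(LocalDate date, boolean passed) {}

    // Group individual runs by ISO calendar week and count passes and failures per week.
    static Map<Integer, long[]> summarizeByWeek(List<TestRun> runs) {
        Map<Integer, long[]> byWeek = new TreeMap<>();
        for (TestRun r : runs) {
            int week = r.date().get(WeekFields.ISO.weekOfWeekBasedYear());
            // counts[0] = passed, counts[1] = failed
            long[] counts = byWeek.computeIfAbsent(week, k -> new long[2]);
            counts[r.passed() ? 0 : 1]++;
        }
        return byWeek;
    }

    public static void main(String[] args) {
        // Three invented runs spread over two weeks.
        List<TestRun> runs = List.of(
                new TestRun(LocalDate.of(2012, 3, 5), true),
                new TestRun(LocalDate.of(2012, 3, 6), false),
                new TestRun(LocalDate.of(2012, 3, 12), true));
        summarizeByWeek(runs).forEach((week, c) ->
                System.out.println("week " + week + ": " + c[0] + " passed, " + c[1] + " failed"));
    }
}
```

A real report would of course read the runs from the test results database and render charts rather than print text, but the core idea is the same: many single results folded into one time-based view.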
One of the additional features that GUIdancer provides compared to Jubula is the creation of BIRT reports on test progress. Some default reports are provided with the installation, but you can customize them or create your own. Creating a report using the GUIdancer Integrated Test Environment is straightforward (and you will find it documented in [...]