Eclipse Testing Day - What you missed
27.09.2013 09:08 by Alexandra Schladebeck
Wow – if you missed the Eclipse Testing Day, you missed a stunning event. Sure, as an organizer and speaker I’m likely to be biased, but this year’s testing day resonated not just with me but with everyone I spoke to as well. Here’s what you missed:
Keynote – Flying Sharks with Eclipse m2m
Florian, Clemens and Petra kicked off the day with a fun and technologically interesting keynote. Using M2M technology, they can control a flying shark via a Vaadin UI and even test the shark's flight with a JUnit test that passes when the shark flies into the beach early warning system. One lucky audience member got to fly Sharky himself from his tablet. The good news: you can see this talk again at EclipseCon Europe – and I'd highly recommend doing so.
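To make the idea concrete, here is a minimal sketch of what such an end-to-end flight test could look like. The real demo drove a physical shark over an M2M stack; everything below (the simulated telemetry, the method names, the distance threshold) is an illustrative assumption so the test logic runs stand-alone, not the actual code from the keynote.

```java
// Hypothetical sketch: the "shark" is simulated so the pass condition
// (shark reaches the beach early warning system) can be checked locally.
public class SharkFlightTest {

    // Simulated telemetry: distance (in metres) between the shark and the
    // early-warning sensor, sampled once per tick.
    static double[] simulateFlight(double startDistance, double speedPerTick, int ticks) {
        double[] telemetry = new double[ticks];
        double distance = startDistance;
        for (int i = 0; i < ticks; i++) {
            distance -= speedPerTick;               // shark flies toward the sensor
            telemetry[i] = Math.max(distance, 0.0); // clamp at the sensor itself
        }
        return telemetry;
    }

    // The pass condition from the talk, restated: the test succeeds once the
    // shark's distance to the warning system drops to zero.
    static boolean reachedWarningSystem(double[] telemetry) {
        for (double d : telemetry) {
            if (d <= 0.0) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        double[] flight = simulateFlight(10.0, 2.5, 6);
        System.out.println("Shark reached sensor: " + reachedWarningSystem(flight));
    }
}
```

In a real JUnit test the simulated telemetry would be replaced by live position data from the M2M layer, with the same boolean check as the assertion.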
Energy Testing Mobile Applications
One of the aspects of mobile testing that came up consistently throughout the day was that non-functional quality characteristics are hugely important. Claas' research found that one in six Android apps has bugs related to energy consumption, and comments about poor energy usage drag app ratings down. He showed us his work on JouleUnit, including a live webcam feed of a running test in his test lab in Dresden, where tests monitor the energy usage of an app. His aim is to provide energy labels for apps – like we already have for fridges and washing machines. His survey on energy usage can be found at survey.jouleunit.org.
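The core idea – measure power during a test run and grade it against a budget – can be sketched in a few lines. This is not JouleUnit's actual API; the sampling arrays, method names and label thresholds are all invented for illustration.

```java
// Hypothetical sketch of energy grading: average power from sampled
// battery voltage/current, then a toy "energy label" in the spirit of
// the fridge and washing-machine labels mentioned in the talk.
public class EnergyBudgetSketch {

    // Average power in watts from paired voltage (V) and current (A) samples.
    static double averagePowerWatts(double[] volts, double[] amps) {
        double sum = 0.0;
        for (int i = 0; i < volts.length; i++) {
            sum += volts[i] * amps[i];  // instantaneous power per sample
        }
        return sum / volts.length;
    }

    // Illustrative label thresholds (not from any standard).
    static char label(double watts) {
        if (watts < 0.5) return 'A';
        if (watts < 1.0) return 'B';
        return 'C';
    }

    public static void main(String[] args) {
        double[] v = {3.7, 3.7, 3.7};    // battery voltage samples
        double[] a = {0.10, 0.12, 0.08}; // current-draw samples during the test
        double p = averagePowerWatts(v, a);
        System.out.println("avg power: " + p + " W, label " + label(p));
    }
}
```

A test-lab setup like the one Claas demonstrated would feed real measurements into a check like this and fail the build when the app exceeds its energy budget.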
Georg Hansbauer from Testbirds gave us a concise overview of the principles and benefits of crowdtesting. Functional testing across fragmented devices is a potentially large problem in app development, and usability is another non-functional requirement that has become considerably more important for mobile technology. Apps that aren't subject to security restrictions can use crowdtesting to access target groups and devices to complement traditional QA activities.
Case studies for GUI test automation for mobile apps
After Georg, I was up. I presented our experiences with automated GUI testing of mobile apps. Using three examples, we looked at writing and executing tests, continuous integration, and cross-platform testing. The conclusions: automated testing is feasible and useful for mobile applications; continuous integration is just as important as for desktop applications (although considerably more complex); and "cross-platform" is far from what we know in the desktop world.
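One common way to get partial cross-platform reuse is to write the test logic against an abstract driver and plug in a platform-specific implementation underneath. The sketch below illustrates that pattern only; the interface, the widget IDs and the stub driver are invented for this post and don't come from any particular tool.

```java
// Hypothetical sketch: shared test logic behind an abstract driver.
interface AppDriver {
    void tap(String widgetId);
    String readText(String widgetId);
}

// Stub "platform" so the sketch runs stand-alone; a real suite would back
// this interface with an Android or iOS automation layer instead.
class FakeDriver implements AppDriver {
    private String statusText = "logged out";

    public void tap(String widgetId) {
        if (widgetId.equals("loginButton")) {
            statusText = "logged in";  // simulate a successful login
        }
    }

    public String readText(String widgetId) {
        return statusText;
    }
}

public class CrossPlatformLoginTest {

    // The shared flow: identical on every platform, even though the
    // drivers (and, in practice, workflows and widgets) differ underneath.
    static boolean loginFlowPasses(AppDriver driver) {
        driver.tap("loginButton");
        return driver.readText("statusLabel").equals("logged in");
    }

    public static void main(String[] args) {
        System.out.println("login flow passed: " + loginFlowPasses(new FakeDriver()));
    }
}
```

In practice this abstraction leaks – as noted above, navigation and platform guidelines differ enough that only part of a suite can be shared – but it shows why "technically feasible" and "a single test suite" are not the same thing.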
Mobile testing – new wine in old skins?
Just before lunch, Frank Simon from Bluecarat gave us a thought-provoking talk about mobile testing. He encouraged us to leave old-fashioned definitions of testing behind (where testing is something that happens only at the end, and only with an executable object). The mobile space is something new, and we shouldn't fall into the 12 pitfalls he described by assuming that we can simply carry over what we know and do from the desktop world.
Straight after lunch we had the panel discussion. Frank Simon did an excellent job of moderating, and the panel members provided an hour and a half of valuable and interesting discussion. Just some of the points we talked about included:
- That people are less tolerant of errors in mobile apps than in desktop software
- That non-functional quality attributes (such as performance, usability and security) are given more weight by users of apps
- That process and methodology are just as important in mobile development – but process especially tends to be forgotten
- That everyone is sure of the importance of mobile technologies – but that this is not always reflected in budgets and resources
- What the business value of mobile applications actually is
- That current standards (ISTQB, CMMI) are also valid for mobile development processes
I found myself wondering whether the focus on quality in mobile applications will have repercussions for desktop software. Can we expect users of other technologies to become less tolerant of errors and more quality-aware because of their experiences with mobile technology? Time will tell, I guess…
Obviously, there’s no way a few bullet points can cover the richness of the discussion – trust me, you’re sorry you missed it!
Cross platform development and testing with Tabris
It was lovely to welcome Jochen Krause from EclipseSource to the testing day to talk about Tabris – the extension for RAP that produces native widgets on Android and iOS from a single source. I was particularly interested in their findings for cross-platform testing, which confirm our own experiences: cross-platform testing is technically feasible, but in practice workflows, navigation, use of components and usability guidelines differ considerably.
The final talk of the day took us on the search for the elusive Monkey King. Richard Süselbeck from Developer Garden presented another complement to the quality assurance process – monitoring apps in the field. The problem of device fragmentation becomes smaller if you can concentrate on the devices your app-store customers are actually using. The challenge is then to react quickly and flexibly, fixing the issues that are found before they turn into bad ratings.
Thoughts – it’s the same, but it’s different
Over the course of the day we asked ourselves and each other this question: is testing for mobile different, or the same? The answer seems to be twofold: The processes we know from testing for any other technology remain the same. Testing must be a part of the development process long before there is a test object to execute. Testing should be based on risk, target group and business aims. The testing methodologies also remain the same. But there is a difference for mobile testing – and that is in the details. We can’t assume that steps, activities and priorities can simply be transferred from other projects. There are whole new dimensions such as marketplace integration, localization and energy consumption that need to be included in the test strategy. And there are already known dimensions that may carry more weight such as system variation, usability and security.
Over the well-earned beer in the evening, it was great to hear the feedback from the participants and speakers. I said it on the day, and I’ll say it again – I learned a lot! The mix of topics – all presented by great speakers – made the day fly by. Thanks to everyone who made it possible!