Automated Testing Keeps the World Turning
I recently came across a blog post written by a former developer at Oracle.
The author highlighted the trials and tribulations of maintaining and modifying the codebase of the Oracle Database (version 12). This version, by the way, is not some legacy piece of software; it is the current major version, running in production all over the world. According to the author, the codebase is close to 25 million lines of C code and is described as an “unimaginable horror”. It is riddled with thousands of flags, multitudes of macros, and extremely complex routines added by generations of developers trying to meet various deadlines. The author describes this as a common experience for an Oracle developer at the company (their words, not mine):
- Start working on a new bug.
- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.
- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.
- Go home. Come the next day and work on something else. The tests can take 20 to 30 hours to complete.
- Go home. Come the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.
- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.
- Rinse and repeat for another two weeks until you get the mysterious incantation of flags right.
- Finally, one fine day, you would succeed with zero failing tests.
- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.
- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.
- After 2 weeks to 2 months, when everything is complete, the code would finally be merged into the main branch.
Millions of tests
As I read this post, I started getting a little nervous: I am a software developer, and I currently interact with an Oracle 12 database. It quickly became clear that those millions of tests were acting as a true line of defense against the release of disastrous software. It was interesting to read that the author was diligent in adding tests not just because it was his job, but out of concern that future developers would break the bug fix being added. There are probably many ways of streamlining the development lifecycle described by the author, and maybe that is a topic for another blog post, but having hundreds or thousands of failing tests is infinitely better than the alternative: shipping those defects to users undetected.
The black box
Product owners and business stakeholders are not the only people in an organization who may view their codebase as a black box; developers may as well. As the number of lines of code grows over time, no amount of institutional knowledge can cover all the nuances and complexities that new features introduce. When new developers are added to a project, the value of having automated tests becomes even more pronounced. Organizations should strive to remove the perception of the code being a “black box” as much as possible. Failure to do so will result in a loss of confidence in the quality and viability of the products and features being developed, not to mention place an unnecessary burden on test teams.
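One practical way to chip away at the black box is to pin down the current behavior of poorly understood code with characterization tests. Below is a minimal sketch in Python; `compute_discount` is a hypothetical stand-in for any legacy routine whose rules nobody fully remembers, and the expected values would be captured from observed behavior rather than from a spec.

```python
import unittest

def compute_discount(order_total, customer_tier):
    # Hypothetical stand-in for an undocumented legacy routine.
    if customer_tier == "gold" and order_total > 100:
        return round(order_total * 0.15, 2)
    if order_total > 500:
        return round(order_total * 0.05, 2)
    return 0.0

class CharacterizationTests(unittest.TestCase):
    """Pin down what the code *currently* does, not what we think it
    should do. Any future change that alters this behavior now fails
    loudly instead of silently."""

    def test_gold_tier_large_order(self):
        # Expected value captured by running the routine, not from a spec.
        self.assertEqual(compute_discount(200, "gold"), 30.0)

    def test_small_order_gets_no_discount(self):
        self.assertEqual(compute_discount(50, "silver"), 0.0)

if __name__ == "__main__":
    unittest.main()
```

Each pinned behavior turns one corner of the black box into executable documentation that a new developer can read and trust.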
Legacy Systems
I have heard horror stories from colleagues and friends about maintaining “legacy” systems. One question that comes to mind: if the system is still being modified, is it really a legacy system, or just an old system that is still updated? In my experience, when a bug fix or a new feature is added to an older system, the cost of not automating the corresponding tests most often shows up as some other, seemingly unrelated bug being introduced. Automating tests can be very difficult, especially in old systems, where there may not be any testing framework that can easily be plugged in. In such cases I would advocate writing your own testing framework (a minimal sketch follows below). It may not be as sophisticated as the commercial or open-source packages, but it will help immensely. Test automation in older systems can also ensure a smooth retirement of obsolete features that keep the codebase bloated and more complex than it needs to be.
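As a rough illustration of how little it takes to get started, here is a minimal, hypothetical harness in Python; the same few dozen lines could be written in whatever language the old system is built in.

```python
import traceback

_tests = []

def test(fn):
    # Decorator that registers a function as a test case.
    _tests.append(fn)
    return fn

def run_all():
    # Run every registered test; report and count failures.
    passed, failed = 0, 0
    for fn in _tests:
        try:
            fn()
            passed += 1
        except Exception:
            failed += 1
            print(f"FAIL: {fn.__name__}")
            traceback.print_exc()
    print(f"{passed} passed, {failed} failed")
    return failed == 0

# Example usage: plain asserts are enough to start with.
@test
def addition_still_works():
    assert 1 + 1 == 2

if __name__ == "__main__":
    raise SystemExit(0 if run_all() else 1)
```

Because the runner exits non-zero on failure, even this tiny harness can gate a nightly build script.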
Flaky Tests
In my experience, having more automated tests is better than having fewer. Over time, some tests may become redundant as new ones are added. This is not the worst problem to have and can generally be managed as technical debt. What is worse is the presence of flaky tests: tests that flip between passing and failing from day to day with no clear explanation. They are a significant time sink and erode confidence in the product being developed. Getting rid of such tests should be a top priority. Ideally, they should be rewritten and made more robust, but in some cases they may be removed entirely, as they do not reliably prove that the software is working as it should. A simple way to flush them out is sketched below.
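One simple, hypothetical way to confirm that a suspect test really is flaky before rewriting or deleting it is to rerun it many times in isolation and check whether the verdict is stable. The sketch below assumes tests are plain Python callables, as in the harness sketched earlier.

```python
def flakiness_report(test_fn, runs=50):
    # Rerun one test repeatedly. A deterministic test should pass
    # (or fail) all `runs` times; a mixed result means it is flaky.
    failures = 0
    for _ in range(runs):
        try:
            test_fn()
        except Exception:
            failures += 1
    if failures == 0:
        verdict = "stable pass"
    elif failures == runs:
        verdict = "stable fail"
    else:
        verdict = f"FLAKY: {failures}/{runs} runs failed"
    print(f"{test_fn.__name__}: {verdict}")
    return failures
```

A test that depends on timing, test order, or shared state will usually betray itself with a mixed verdict here, which makes the rewrite-or-remove decision much easier to defend.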
Conclusion
Test automation is not new, but it still appears to be lacking in many systems. Organizations should embrace the cost of test automation as part of the cost of developing new features and modifying existing ones. Doing so will help build confidence in ensuring quality for product owners as well as developers.
Happy testing!
‘Agile’… ‘Lean’… ‘FitNesse’… ‘Fit’… ‘(Win)Runner’… ‘Cucumber’… ‘Eggplant’… ‘Lime’… As 2018 draws near, one might hear a few of these words bantered around the water cooler as part of a trendy seasonal discussion topic: our personal New Year’s resolutions to get back into shape and eat healthy. While many of my well-intentioned colleagues are indeed well on their way to a healthier 2018, these words actually came up during a strategy session I attended recently – one that, despite the fact that many of them are names of foods, did not cover new diet and exercise trends for 2018. Instead, the session’s agenda focused on another trendy discussion topic in our office as we close out 2017 and flip the calendar over to 2018: software test automation.
“SOFTWARE TEST AUTOMATION?!?” you ask?
“SERIOUSLY – cucumbers and limes and fitness(e)?!?”
This thought came to mind after the planning session and gave me a chuckle. I thought, “If a complete stranger walked by our meeting room and heard these words thrown around, what would they think we were talking about?”
This humorous thought resonated further when recently – and rather coincidentally – a client asked me for a high-level, summary explanation of how I would implement automated testing on a software development project. It was a broad and rather open-ended question – not meant to be technical in nature or to solicit a solution. Rather: how would I, with a background in Agile Business Analysis and Testing (i.e. I am not a developer), go about kickstarting and implementing a test automation framework for a particular software development project?
This all got me thinking. I’ve never seen an official survey, but I assume that most people employed in, or with an interest in, software development could provide a reasonable, well-informed response if asked to define software test automation, the benefits of automated testing, and how the practice delivers business value. I believe, however, that there is a substantial dividing line between understanding the general concepts of test automation and successfully implementing a high-quality, sustainable automated testing suite. In other words, those who are considered experts in this domain are truly experts: they possess a unique, sought-after skill set and are very good at what they do. There really isn’t any middle ground, in my opinion.
My reasoning here is that getting from ‘Point A’ (simply understanding the concepts) to ‘Point B’ (implementing and maintaining an effective and sustainable test automation platform) is often an arduous, laborious effort that, unfortunately, does not always result in success. At a fundamental level, the journey to a successful test automation practice involves the following:
- Financial investment: As with any software development quality assurance initiative, test automation requires a significant financial investment in both tools and personnel. The notion, however – as with any other reasonable investment – is that the upfront cost should provide a solid return down the line if the venture is successful. This is not simply a two-point ‘spike’ user story assigned to someone to research the latest test automation tools. To use a poker metaphor: if you are ready to commit, then you should go all-in.
- Time investment: How many software development project teams have you heard say that they have extra time on their hands? Surely not many, if any at all. Kicking off an automated testing initiative also requires a significant upfront time investment. Resources otherwise assigned to standard analysis, development or testing tasks will need to shift roles and contribute to the automated testing effort. Researching and learning the technical aspects of automated testing tools, along with the actual effort to design, build out and execute a suite of automated tests, requires an exceptional team effort. Reassigning team tasks will initially reduce the team’s velocity, although – as with the financial investment – the hope is significant time savings and improved quality in later sprints as larger deployments and major releases draw near.
- Dedicated resources with unique, sought-after skill sets: In my experience, it is usually the highest-rated employees with the most institutional/system knowledge and experience who are called on to manage and drive automated testing efforts. These employees are also more than likely the most expensive, as the roles require a unique technical and analytical skill set, along with significant knowledge of the corresponding business processes. Because these organizational ‘all-stars’ will initially be focused solely on the test automation effort, other quality assurance tasks will inherently assume added risk. This risk needs to be mitigated to prevent a reduction in quality elsewhere in the organization.
It turns out that the coincidental internal discussion and the timely client question – along with the ongoing ‘Point A to Point B’ challenge in the QA domain – led to a documented, bulleted response to the client’s question. Let’s call it an Agile test automation best-practices checklist. The list below provides several concepts and ideas an organization can use to incorporate test automation into its current software testing/QA practice. Since I was familiar with the client’s organization, personnel and product offerings, I could provide a bit more detail than a generic answer would allow. The idea here is not the ‘what’ – you will not find any specific automation tools mentioned. Instead, this list covers the ‘how’: the process-oriented concepts of test automation, along with the associated benefits of each.
This list should provide your team with a handy starting point, or a ‘bridge’ between Point A and Point B. If your team can identify with many of the concepts in this list and relate them to your current testing process and procedures, then further pursuing an automated testing initiative should be a reasonable option for your team or project.
More importantly, non-technical members of a software development team (e.g. BA, Tester, ScrumMaster) can use this list as a tool and foundation to start the conversation: to decide whether automated testing fits your established process and procedures, whether it will provide a return on investment, and, if you do embark down the test automation path, to ensure that you continue to progress as applications, personnel and teams mature, grow and inevitably change. Understand these concepts and when to apply them, and you can learn more about cucumbers, limes and eggplants as you progress further down the test automation path:
To successfully implement and advance an effective, sustainable automated testing initiative, I make every effort to follow a strategy that combines proven Agile test automation best practices with personal, hands-on project and testing experience. As such, this is not an all-inclusive list, but rather one IT consultant’s answer to a client’s question:
For folks new to the world of test automation – and for those who had absolutely no idea that ‘Cucumber’ is not only a healthy vegetable but also the name of an automated testing tool – I hope this blog entry is a good start for your journey. For the ‘experts’ out there, please respond and let me know if I missed any important steps or tasks, or how you might do things differently. After all, we’re all in this together, and the more knowledge is spread throughout the IT world, the more we can enhance our processes.
So, if you’ll excuse me now, I’m going to go ahead and plan out my 2018 New Year’s resolution exercise regimen and diet. Any additional thoughts on test automation will have to wait until next year.