Since this is the first post for my blog, it seems appropriate to start by defining test automation and then adding some scope to rein it in. In the broadest terms, test automation is the process of consistently obtaining and verifying results with minimal (or no) human interaction. This means that the tests could be physical or virtual. An example of a physical test in a manufacturing environment would be a sorting machine that checks for over-sized or misshapen parts. In the virtual realm, automated tests come in numerous forms and functions such as unit, integration, regression, and performance. My personal experience lies predominantly with software, so my posts will most frequently deal with that aspect of automation.
Types of Software Automation
As mentioned earlier, software development has multiple flavors of automated tests. Each type has a specific role that generally indicates where it belongs in the SDLC (Software Development Life Cycle). The following are the main groupings I use when discussing tests, though it is not an exhaustive list. My intent in providing these items is to help establish a common terminology I can refer back to in future articles.
Unit Tests
The most fundamental tests in the automation realm are the unit tests. Most often written by developers, these tests are used to verify small segments of functionality. For example, several tests may be written to check the values returned by a method. To limit sources of error, unit tests often mock data and interfaces with other modules.
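As a rough sketch, a unit test along these lines might look like the following in Python. The `OrderService` and its tax provider are hypothetical, invented purely for illustration; the mocking uses the standard library's `unittest.mock` so the test never touches a real database.

```python
import unittest
from unittest.mock import Mock

# Hypothetical service under test: computes an order total using a
# tax-rate provider that would normally query a database.
class OrderService:
    def __init__(self, tax_provider):
        self.tax_provider = tax_provider

    def total(self, subtotal):
        # Return the subtotal plus tax, rounded to cents.
        rate = self.tax_provider.get_rate()
        return round(subtotal * (1 + rate), 2)

class OrderServiceTest(unittest.TestCase):
    def test_total_applies_tax(self):
        # Mock the tax provider so the test has no external dependency.
        provider = Mock()
        provider.get_rate.return_value = 0.07
        service = OrderService(provider)
        self.assertEqual(service.total(100.00), 107.00)
        provider.get_rate.assert_called_once()
```

Because the provider is mocked, a failure here points squarely at the method's own logic rather than at the database layer. A suite like this runs with `python -m unittest`.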
Integration Tests
Integration tests are used to confirm functionality between classes and modules within an application. These tests are typically designed to mimic production use of the tested sections, but will not necessarily represent actual use cases. A developer might create an integration test to confirm that the reporting module is able to interface with a new database adapter via API (Application Programming Interface) calls rather than using the UI as a user would.
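To make that example concrete, here is a minimal sketch of such a test. The `SqliteAdapter` and `ReportingModule` are hypothetical names; the key point is that, unlike the unit test above, nothing is mocked here, so the test exercises the real seam between the two modules (an in-memory SQLite database stands in for the new adapter's backing store).

```python
import sqlite3
import unittest

# Hypothetical adapter wrapping a real (in-memory) SQLite database.
class SqliteAdapter:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

    def insert_sale(self, region, amount):
        self.conn.execute("INSERT INTO sales VALUES (?, ?)", (region, amount))

    def totals_by_region(self):
        cur = self.conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region")
        return dict(cur.fetchall())

# Hypothetical reporting module that consumes the adapter's API.
class ReportingModule:
    def __init__(self, adapter):
        self.adapter = adapter

    def summary(self):
        return {region: round(total, 2)
                for region, total in self.adapter.totals_by_region().items()}

class ReportingIntegrationTest(unittest.TestCase):
    def test_reporting_reads_through_adapter(self):
        adapter = SqliteAdapter()
        adapter.insert_sale("east", 100.0)
        adapter.insert_sale("east", 50.5)
        adapter.insert_sale("west", 75.0)
        # Exercise the real adapter, not a mock: this is the integration point.
        report = ReportingModule(adapter).summary()
        self.assertEqual(report, {"east": 150.5, "west": 75.0})
```

Notice that the test drives the modules through their API calls only; no UI is involved.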
Business Process Tests
It can be argued that all tests are business process tests (or should be). I tend to use this term to distinguish between tests that follow a use case or user path through the software and those that are focused on verifying specific pieces of functionality. Business process tests can be written using an API, but often include UI automation to better represent user interaction. Being able to login and create a post through the API confirms the backend operation of a site, but it doesn’t tell you that actual users can’t log in because the username field is hidden.
Regression Tests
Just hearing the name sends chills down the spines of developers and QA alike. Regression tests are often the most tedious and time-consuming tests in our arsenal because they are focused on validating all of the functionality within an application without making any assumptions along the way. These are the tests in which every field validation is confirmed and all edge cases are tested.
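A common shape for this kind of exhaustive field validation is a table-driven test. The `valid_username` rules below are invented for illustration (3 to 20 characters, alphanumeric or underscore, no leading digit); the point is how each boundary gets its own pinned-down case.

```python
import unittest

# Hypothetical field validator of the kind a regression suite checks
# exhaustively: 3-20 characters, alphanumeric or underscore, and the
# first character must not be a digit.
def valid_username(name):
    if not 3 <= len(name) <= 20:
        return False
    if name[0].isdigit():
        return False
    return all(c.isalnum() or c == "_" for c in name)

class UsernameRegressionTest(unittest.TestCase):
    # Table of edge cases: each boundary and rule is tested explicitly,
    # assuming nothing about the others.
    CASES = [
        ("bob", True),        # minimum length
        ("ab", False),        # one below minimum
        ("a" * 20, True),     # maximum length
        ("a" * 21, False),    # one above maximum
        ("1bob", False),      # leading digit rejected
        ("bob_1", True),      # underscore and digit allowed inside
        ("bob!", False),      # punctuation rejected
    ]

    def test_edge_cases(self):
        for name, expected in self.CASES:
            with self.subTest(name=name):
                self.assertEqual(valid_username(name), expected)
```

The tedium the section describes shows up as the length of that table: a real regression suite repeats this pattern for every field in the application.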
Performance Tests
Load tests, stress tests, scalability tests. Whatever you call them, performance tests always require some form of automation unless you have an enormous QA team. These automated processes are designed to push an application to its limits and expose flaws before they become a pain point. They are typically run under controlled conditions and monitored for changes between runs.
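As a minimal sketch of the idea, the harness below fires concurrent requests at a target and reports latency statistics that can be compared between runs. Real suites would use a dedicated tool such as JMeter, Locust, or k6; the `run_load` helper and `fake_endpoint` here are hypothetical.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Toy load-test harness: run `action` `requests` times across `workers`
# threads and collect per-call latencies for run-to-run comparison.
def run_load(action, requests=100, workers=10):
    def timed_call(_):
        start = time.perf_counter()
        action()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_call, range(requests)))

    return {
        "requests": requests,
        "mean": statistics.mean(latencies),
        # 95th percentile: the tail latency most users will notice.
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
    }

# Stand-in for the system under test; a real run would hit an endpoint.
def fake_endpoint():
    time.sleep(0.001)

if __name__ == "__main__":
    print(run_load(fake_endpoint))
```

Keeping the conditions controlled (same worker count, same request count, same environment) is what makes the mean and p95 numbers meaningful when you compare them between runs.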
Since this was meant to be a blog post and not a book, I'm going to wrap this up right here. I hope I've given you something to chew on and established some common ground we can explore in later posts. Please let me know if there is anything you would like clarified, or if your understanding of a term differs from mine. While I'm writing articles to share the knowledge I have, I'd be very interested in hearing alternate concepts and discussing them. I've found the best way to improve is to be open to learning from all available sources.