Tag Archives: Manual testing

Be An Eternal Student

No, I’m not talking about spending your entire life jumping from one major to another so you don’t have to graduate. When I say, “Be an eternal student,” I am advising you to keep learning throughout your life, both actively and passively. Don’t let opportunities pass you by. Just because you don’t need to know something right now doesn’t mean that it won’t come in handy later.

One question I get from many people is how I know all of the things I do. I tell them that I just pick it up as I go. Growing up, I was surrounded by mechanics, electricians, carpenters, and other tradesmen. If they needed help, or were helping me, I paid attention and picked up some tricks from them that help me around the house today. With my career, it’s been a bit different. Until recently, I didn’t have a lot of computer people around to talk to, so I had to spend a lot of time tinkering on my own and doing research. I learned a lot of ways not to do things; I learned some bad habits; and I gained some excellent insight into how our magical toys work from the ground up.

Aside from the hands-on, trial-and-error approach, I have also invested a significant amount of time reading, watching videos, and taking courses on various topics. Usually I try to focus on things I am either actively working on or expect to work on in the near future, but sometimes I throw in something brand new or just plain fun to keep me excited about learning. Recently I did this by taking all of the courses in the Docker Path on Pluralsight.com. While I started the courses for fun, I quickly found that what I was learning could be implemented in my current projects, which was an added bonus since I got to practice what I learned and improve my working environment.

It has been experiences like that, along with some unpleasant bills for replacing things I didn’t know how to fix, that helped me realize the importance of not growing stale or letting my aptitude for learning atrophy just because I already know how to do my job. I was also lucky to have grown up around other perpetual students who gave me a solid understanding of how to acquire knowledge. I have found that the keys to learning are very simple:

  1. Find something you are curious about or need to learn.
  2. Gather resources that cover the topic.
    1. Talk to people who already do or know what you need.
    2. Read books and articles about the topic.
    3. Watch videos about it.
    4. Look for someone teaching a course that you can sign up for.
    5. Experiment on your own.
  3. Do something with what you have learned.
    1. Complete a project using what you learned.
    2. Share what you learned with someone else.
  4. Appreciate yourself for learning something.
  5. Repeat.

If that sounds easy, that’s because it is, most of the time. I use this approach in my daily life for everything from plumbing to performance testing applications. Granted, I will never be a master plumber (it just isn’t my calling), but I also don’t need to call one when I need to unclog a drain or replace a faucet. When it comes to computers and software, there is always something new to learn regardless of your level of mastery. This is part of the reason it is important to be an eternal student. If being armed with new tools and ideas isn’t enough to fuel your desire to learn, remember that once you stop growing you begin to become stale and obsolete. Don’t let your potential sit idle. Take the time and spend the effort to find out exactly what you are capable of. You might even find out that you can do anything you set your mind to.


“I have never let my schooling interfere with my education.” – Mark Twain


How to Configure a Virtual Ubuntu Server

Virtual machines are useful tools for both development and testing of applications. The following video shows how to set up an Ubuntu server using Oracle VirtualBox.
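If you prefer the command line, roughly the same setup can be scripted with VBoxManage, VirtualBox’s CLI. The sketch below drives it from Python (my choice here, not something the video prescribes); it assumes VirtualBox is installed with VBoxManage on your PATH, and the VM name, memory, disk size, and ISO path are placeholders to adjust for your own environment.

    import subprocess

    VM_NAME = "ubuntu-server"           # placeholder VM name
    ISO_PATH = "ubuntu-server.iso"      # path to an Ubuntu Server ISO you have downloaded
    DISK_PATH = VM_NAME + ".vdi"

    def vbox(*args):
        """Run a single VBoxManage command and fail loudly if it errors."""
        subprocess.run(["VBoxManage", *args], check=True)

    # Create and register the VM, then give it memory, CPUs, and NAT networking.
    vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")
    vbox("modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2", "--nic1", "nat")

    # Create a 20 GB virtual disk and attach it, plus the install ISO, to a SATA controller.
    vbox("createmedium", "disk", "--filename", DISK_PATH, "--size", "20480")
    vbox("storagectl", VM_NAME, "--name", "SATA", "--add", "sata", "--controller", "IntelAhci")
    vbox("storageattach", VM_NAME, "--storagectl", "SATA", "--port", "0", "--device", "0",
         "--type", "hdd", "--medium", DISK_PATH)
    vbox("storageattach", VM_NAME, "--storagectl", "SATA", "--port", "1", "--device", "0",
         "--type", "dvddrive", "--medium", ISO_PATH)

    # Boot headless and finish the Ubuntu installer through the VirtualBox console.
    vbox("startvm", VM_NAME, "--type", "headless")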

There Is No Right/Wrong Way: Writing Tests

Over the years, I have encountered and practiced many different ways of creating, documenting, and executing tests. I have also worked with a number of testers, including myself at times, who harbored strong opinions about how tests should be handled. As a result, I’ve found that the “right way” to handle tests depends upon your environment and your team.

If we look at the methodology and maturity of any team’s development life cycle, we will see that certain approaches to testing are better suited than others. For example, a waterfall team with a high turnover rate will get more value out of highly detailed, scripted tests because the product knowledge base among testers will be smaller. Alternately, an Agile team dedicated to a project/product will likely benefit from more generalized tests and exploratory testing. A company’s culture, regulatory bodies, and procedures also play into how testing is performed and documented.

With this in mind, it is important for testers to keep an open mind and consider their processes carefully. Small changes can result in major savings in time and costs. A good example of this would be changing the test team’s charter from exhaustive to risk-based testing. This change shifts their focus from reviewing every possible combination of data, paired with a complete review of the system’s functionality, to examining the changes and identifying the sections of the software with the greatest need for testing, which typically results in a significant reduction in test time. As a counterpoint to that example, there are situations in which the team may need to perform more in-depth testing, such as when there is a major refactoring and the project is disturbingly light on unit tests.

The point of my ramblings here is that as professionals, we should focus on learning, improving our skills, and finding solutions to our pain points rather than arguing over whose methodology is better. The reason there are so many ways to approach testing is that each situation is different and cannot be handled by a one-size-fits-all solution. Keep learning, and remember that there is no right or wrong way; there are just different styles.

 

Limbering Up QA: Adapting to an Agile Process

In the days of Waterfall, testers were seen as the guardians of quality and the protectors of user experience. They were the last line of defense to prevent a flawed product from being released. This meant that QA needed to have all of the requirements provided to them so they could prepare their tests for when the product was finally complete and they could begin their process. Sure, the deadlines were usually tight because the release couldn’t be moved and development ran long, but that was just part of the thrill. QA were sometimes viewed as an obstacle rather than an aid, but they remained strong and provided their sworn services for client and company.

All of this changed with the dawning of Agile.

The once-powerful QA is now faced with shorter deadlines, stories instead of specifications, and seemingly incomplete features being submitted for test. Worse yet, the developers are encouraged to write unit tests that automate a chunk of what the tester once handled. How can we possibly work like this? It’s utter madness.

It may seem like madness and chaos, but there is also method and rhythm to be found in the new processes. The first step is to stop fighting the current and dive off the waterfall instead. After taking the plunge, testers can learn to navigate the flow of sprints and iterations. Granted, this is often easier said than done, but any habit can be kicked and new ones formed. It just requires time, effort, and a willingness to change.

Unfortunately, two of those three things, time and a willingness to change, are often missing from the testers’ inventory. I’m sure that a number of those reading this might be offended by that statement, but bear with me. No one will argue with the time part, but everyone trips on the willingness to change. It’s natural for people to find a comfort zone and settle in. It’s also natural to be startled and scared by change. In my experience, testers are often leery of Agile because it seems to value speed over quality, but this couldn’t be farther from the truth. Agile places emphasis on quality, but it does so by building quality in rather than straining out errors after development.

This change in both thought and method is the key to producing software quickly without compromising quality. The catch is that it requires developers and testers to step out of their silos and work closely together. They should also include product owners and other stakeholders in their discussions to keep everyone aligned.

The idea is to inject the knowledge of QA from the beginning rather than waiting until everything is done. While this may sound like a pitch for TDD or BDD, that is only a piece of the picture. Sometimes it is already too late for a feature to maintain quality once it gets to development. This case is most often found with legacy modifications or a stakeholder pet project that seems “simple enough” but hasn’t been fully evaluated for ripples or pitfalls. This isn’t because someone missed something; it is a result of each role’s standard thought process:

  • Developers tend to think in terms of “How can I make this work?” They are focused on solving the puzzle.
  • Product owners focus on the value of the feature to clients. Most POs leave the technical concepts to the dev team.
  • Stakeholders are usually focused on how the feature will improve their positions or increase the company’s profits.

When testers are left out of the planning stages, teams sacrifice an opportunity to reduce bugs and head off troubled projects before they are sent to development. A tester’s mindset generally includes looking at contingencies, interactions, and risks within a system. Even if the person doesn’t know the system, asking the right question during planning can shine a light on a major issue that may have been glossed over, such as “Have we considered how the new shipping system handles an order including both physical and downloadable content?”

After planning, QA should be helping to translate business process knowledge from the POs into paths through the software and tests for the new feature. A common method of doing this is writing Given-When-Then (or similar) scenarios. These scenarios then form a basis for both the automated and manual tests used to confirm the functionality during development and prior to release.
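As a hypothetical example, here is the shipping question from earlier expressed as a plain pytest-style test (pytest is my choice for illustration, not something prescribed above), with the Given/When/Then steps called out in comments. The Cart and Item classes are stand-ins invented for this sketch, not a real system’s API.

    class Item:
        def __init__(self, name, price, downloadable=False):
            self.name = name
            self.price = price
            self.downloadable = downloadable

    class Cart:
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

        def requires_shipping(self):
            # Any non-downloadable item means a physical shipment is needed.
            return any(not item.downloadable for item in self.items)

    def test_mixed_order_still_requires_shipping():
        # Given an order containing both physical and downloadable content
        cart = Cart()
        cart.add(Item("T-shirt", 20.00))
        cart.add(Item("E-book", 8.00, downloadable=True))

        # When the system evaluates the order's shipping requirements
        needs_shipping = cart.requires_shipping()

        # Then the physical item should still trigger the shipping workflow
        assert needs_shipping is True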

In short, testers should make an effort to be involved throughout the project cycle rather than sweeping up at the end. By doing this, they can help set the stage for successful projects and avoid the stress of being the roadblock or bearer of bad news just before release. While this is a major change, it is one worth embracing.

Automate All the Things!

“Automate all the things” has become a common battle cry heard when discussing software testing. Those who follow that banner will often support it with claims of improved ROI, faster deployment times, and lower personnel costs. In an idealized situation, this would all be true, but reality rarely deals in Utopian terms.

As a developer turned tester, I recognize the importance of good testing practices and thorough test coverage for producing a quality product. As applications become more complex, the possibility of introducing bugs into a seemingly unrelated module increases. Granted, this becomes less likely with a well-planned and well-executed architecture, but I have met few developers fortunate enough to never deal with legacy systems (usually some of the most intricate and fragile balls of mud known to man). Since it is always better to err on the side of caution, someone needs to make sure that new features are working as expected and that the rest of the system is also functioning as it should. Enter the manual testers.

Management teams often consider their developers too precious a resource to have them spend time testing applications and instead bring in the “QA Team”. This team of typically less technical employees is usually charged with reviewing and verifying all of the functionality in the application prior to release. While this sounds reasonable at first glance, QA typically gets the application two days before release because development ran into snags or was given a short deadline that would look better at the quarterly meeting. This means that if the company intends to actually get the testing done, the number of testers required becomes a payroll issue. The alternative is to automate all of the tests and thus trade many salaries for a smaller team of developers (wait, aren’t they a precious resource?). The problem with this strategy is that it is usually implemented with the concept of a “one-time cost”, which is rarely the case. In fact, the only successful automation efforts are those that account for the regular maintenance tests require to remain current with the applications.

Another issue with the “Automate Everything” concept is that some things can be tested more efficiently by hand. Computers may work faster than human employees, but they are either slower to adjust to changes or require a significant amount of time and expense to develop. With this in mind, it becomes obvious that the best strategy for ensuring quality is to develop a team with the skills and authority to implement a blended solution.

In my experience, a team that will produce the most successful results is cross-functional and aligned towards a common goal. Bear with me if that sounded like a case of jargon dropping; it wasn’t. To keep the scope narrow, I will focus on the developers and QA. This team doesn’t deal in hand-offs between members. Instead, they share the whole cycle. Many testers may not understand the code that resides within the bowels of the program, but they do know what it is expected to do. It is their job to provide this insight to the developers prior to development to reduce the chances of the finished product diverging from the expected one. Armed with the knowledge of what is expected, the developers should implement appropriate unit tests while writing the new features to make sure that each portion of the application performs as expected. On the other end of the development cycle, the testers are charged with confirming the functionality prior to release.

Ok, you are probably wondering what happened to the automation portion, or you think that it was dumped solely onto the developers. You would be wrong. Unit and integration testing are important and should be done as part of development, but they may not cover the full spectrum of tests and requirements. Burdening the development team with writing regression suites would bring their production of new features to a crawl. Likewise, manual testers will rarely be able to complete such tests within deadlines and would probably be bored to tears performing such tedious tasks repeatedly. This is the proper place for automation.

A good automation strategy identifies the tests that provide the most benefit to the organization and can also be reasonably automated. Scripting a test that accesses the UI and clicks on every button on the screen is a waste of resources. The only thing this test catches is a lazy developer who didn’t even check to see if his code worked. While it can be argued that there is value there, I think such an issue would show itself in other ways. Better automation candidates would be the main business process flows through the application because they are usually time-consuming to run manually and will cover the majority of the application’s uses. Having these paths automated will not only save time, but will also free the manual testers to explore fringe cases and usability considerations, which machines either don’t handle well or offer little ROI on.
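To illustrate, here is a minimal sketch of what automating one such main flow might look like using Selenium WebDriver with Python (my choice of tooling, not one prescribed above). The URL and element locators are hypothetical placeholders rather than a real application, and a production version would add explicit waits and run inside a proper test framework.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Drive one end-to-end business flow (search, add to cart, check out)
    # instead of scripting a click on every button in the UI.
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com")    # hypothetical application URL
        driver.find_element(By.ID, "search-box").send_keys("widget")
        driver.find_element(By.ID, "search-button").click()
        driver.find_element(By.CSS_SELECTOR, ".product .add-to-cart").click()
        driver.find_element(By.ID, "checkout").click()

        # Verify the flow reached the order confirmation step.
        assert "Order confirmation" in driver.page_source
    finally:
        driver.quit()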