
The Dangers of Tribal Knowledge

On her first day, Jayne spent a lot of time collecting information from teammates to become familiar with the company’s systems and procedures. Since she appreciated the hands-on approach being taken with getting her settled in, she made her own notes and never stopped to ask if there was a corporate wiki or central document system where this information was stored. Six months later, Jayne had become a valuable member of the team, had gained significant knowledge from her teammates, and was given a product enhancement project. While reviewing the code for the necessary changes, she noticed some oddly designed features. She asked her team lead, Tim, where the original specification of the app could be found so that she could understand why the application had been written that way. Tim gave her a confused look as he said, “You’ll have to talk to Carrie. She’s the only one left from the team that wrote that. I think she’s in DevOps now.”

If you have run into a situation like this, then you have already experienced the effects of Tribal Knowledge. It arises when enough people know how something works that documenting it is considered a waste of time. This shared knowledge is fine for situations that everyone deals with every day; however, relying on it for more obscure information can lead to lost knowledge, increased project time, and decreased morale. Let’s check in with Jayne to see how this plays out.

After learning where Carrie now sits, Jayne heads over to ask about the application.



“I’ve been assigned to enhance the shipment tracking system. I’m looking for the original specification, design plans, notes, whatever documentation there is on it and Tim told me to talk to you.”

“Good luck with that. I was only on the team for about two weeks before release and I never saw any documentation. We had meetings where we kicked some ideas around and then we went off and made it work. Josh and Kate were the ones that really knew that system. We’ve been hoping it didn’t break since they left.”

All too often (especially in IT), a source of knowledge will change jobs or companies and that resource will be lost. If a team relies too heavily on tribal knowledge, it becomes likely that they will fall into the trap of reinventing the wheel or relearning old lessons. This cycle wastes both time and money. In the worst case, it also depletes the morale of the team members assigned.

The easiest way to avoid the negative effects of tribal knowledge is through documentation. I expect that some people cringed after reading that and/or pictured writing tomes explaining every intricacy. Documentation doesn’t have to be tedious nor does it have to be extensive. Sufficient documentation to avoid a situation like Jayne’s could have been a short comment explaining the unusual approach in the code. Alternatively, it could have been kept with the design records in an archive of stories or specifications. The key to good documentation is to eliminate the unnecessary and focus on what is important. Obvious functionality such as basic field validation (e.g., number fields do not accept letters) can be ignored, while business logic should be called out in some way. If a developer needs to come up with a “clever” solution to an issue, there should be a comment in the code to explain why it was necessary and possibly how it works. Whether you write traditional specifications, Agile stories, or take notes in a team wiki, a few minutes of writing can save hours of review and frustration.
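To make that concrete, here is a sketch of what such a comment might look like. Everything in it is invented for illustration (the shipment service, the caching behavior, and the ticket number SHIP-412 are hypothetical), but the shape is the point: the comment records *why* the odd approach exists, so the next Jayne doesn't have to hunt down the last surviving team member.

```python
import time

def fetch_tracking_status(order_id, attempts=3):
    """Return the tracking status for an order.

    NOTE: We deliberately read the status more than once. The (hypothetical)
    upstream shipment service caches stale rows for a moment after an update,
    so a single read can return the pre-update status. Re-reading until the
    value is stable was cheaper than fixing the cache (see ticket SHIP-412).
    """
    status = _read_status(order_id)
    for _ in range(attempts - 1):
        time.sleep(0.01)  # tiny delay so the upstream cache can settle
        fresh = _read_status(order_id)
        if fresh == status:
            return status  # two matching reads: value is stable
        status = fresh
    return status

def _read_status(order_id):
    # Stand-in for the real service call.
    return "IN_TRANSIT"
```

A two-line version of that docstring pasted into a design wiki would have answered Jayne's question in seconds.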


Adventures in LoadRunner 2: Dangerous Data

Given my experience with creating automated regression tests, one would think that I’d remember the dangers of improperly planned data. Unfortunately, I am susceptible to getting wrapped up in how a function works and chasing that first success, and I sometimes forget to check whether the feature behaves differently when the data varies. While I take comfort in not being alone in making this mistake, I still regret not seeing it coming sooner.

With any automation, developers should always prepare a plan for providing reliable and predictable data. Without that data, the automation will become “flaky” and produce undependable results. As an example, I was recently running a performance test that we had been working with for a few weeks and that had been running smoothly. We decided it was time to start ramping up the load, so we expanded the product data available. Unfortunately, we did not realize that the new products had a setting that caused them to trigger a different process flow. As a result, our tests began failing, and because the data was being selected at random, we couldn’t determine whether we had hit a product limitation or an issue in the test. We lost several hours to debugging this situation.

An important lesson I have learned through this and similar experiences is that automation requires a data strategy that takes into account how the tests will be run and what their purpose is. In the case mentioned above, the solution is to generate a controlled set of products that will result in a predictable path through the software. We will also need to regulate any changes to the data load or develop an automated system to produce the desired entries in the required state before the test runs. It may sound obvious but it really is more complicated than you might think.
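The "controlled set of products" idea can be sketched in a few lines. This is a hypothetical illustration, not our actual tooling: the field names (`expedited`, `sku`) are invented, and the key moves are (1) every generated product is pinned to the setting that selects the process flow under test, and (2) the random selection is seeded so a "random" run is reproducible when you need to debug it.

```python
import random

def make_product(index, expedited=False):
    # Pin the flag that drives the process flow, so every product
    # follows the path the test expects. Field names are illustrative.
    return {
        "sku": f"PERF-{index:05d}",
        "price": 10.0,
        "expedited": expedited,
    }

def build_test_data(count, seed=42):
    # Seed the RNG so the "random" ordering is identical across runs;
    # a failure can then be replayed with the exact same data.
    rng = random.Random(seed)
    products = [make_product(i, expedited=False) for i in range(count)]
    rng.shuffle(products)
    return products

products = build_test_data(100)
assert all(not p["expedited"] for p in products)  # one predictable flow
```

Had our expanded product load been generated this way, the new setting could never have slipped into the pool unnoticed, and a failing run would have been reproducible on demand.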

At least now I know what one of my future articles will be about: developing a data strategy.

Initial Review of RedwoodHQ

I recently saw a post in the Test Automation community on LinkedIn suggesting that we should stop writing custom frameworks. The reasoning behind the suggestion was that it was time-consuming to reinvent the wheel when there was an open source project, RedwoodHQ, that already had everything you need. Like most of the commenters, my first thought was that I was just reading a sales brochure for another “silver bullet” automation solution. The author, however, insisted that it was not a sales pitch since his framework was free; he just wanted to raise awareness of a new tool available to the community.

I have long been a proponent of developing one’s own framework to build understanding of the underlying operations. In some ways it’s like Jedi training: you might start out using a lightsaber you were given, but you will not become a master until you have constructed your own. In my experience, using pre-existing frameworks can limit an automator’s growth by obscuring the inner workings. Some frameworks will also limit a user’s ability to perform tasks because the developer didn’t consider that use case.

With that in mind, I decided to look at RedwoodHQ and see what it actually is. It wouldn’t be fair of me to write off someone else’s work without at least taking it for a spin. After reading a bit more about it, I downloaded the application and set it up on a VM. The installation process was smooth and I was connected to the web interface and poking around in the sample test in minutes.

Someone with very little knowledge of Selenium WebDriver could develop a test with RedwoodHQ right out of the box, and they may even be able to begin learning by looking through the action example code. I wouldn’t, however, recommend keeping the original actions in the long term. If for no other reason, they need to be replaced because the current Selenium code uses hard sleeps and possibly relies on implicit waits.

With that said, don’t write off this framework yet. You’ll note that I recommend replacing the actions. This is extremely easy to do using the built-in web IDE and you can drop in any jars you like to use with your tests. This means that if you have built your own framework, you could compile it and reference your own classes and functions within RedwoodHQ. I am currently considering doing just this.

The efforts I have put into my personal framework have been focused on building functions around Selenium WebDriver to make developing page/component objects easier and, by extension, the tests easier to write. In this way, I was not limited to using a specific test framework (although I tend to use TestNG most often). I have now realized that, unlike other automation tools and solutions, RedwoodHQ operates at the same level as a test and reporting framework and will not interfere with working at the lower levels.
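For readers unfamiliar with the page/component-object idea mentioned above, here is a minimal sketch. `FakeDriver` is a stand-in for a real WebDriver (its method names are invented for this example); the point is that the page object wraps locator plumbing in helpers so the test reads as intent:

```python
class FakeDriver:
    """Stand-in for a real Selenium WebDriver; methods are illustrative."""
    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text

class LoginPage:
    # Locators live in one place, not scattered through the tests.
    USER = "css:#username"
    PASS = "css:#password"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_into(self.USER, user)
        self.driver.type_into(self.PASS, password)
        return self  # returning self keeps test steps chainable

driver = FakeDriver()
LoginPage(driver).login("jayne", "s3cret")
```

Because helpers like these sit below the test framework, they plug in equally well under TestNG, or, as it turns out, under RedwoodHQ.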

Since this is only an initial review, I can’t speak to how it integrates with a CI environment but I am willing to give this tool a chance. Once I have adapted my Selenium framework to RedwoodHQ, I am considering submitting it as a replacement for the existing actions. I believe the tool would benefit from having actions that utilize explicit waits and provide more than just basic functionality.

The take-home from this is that I learned that RedwoodHQ is not another “silver bullet” automation nightmare. It is a test development, management, and reporting framework that won’t limit you to working within its methods. My initial opinion is “Well done!”

Testing Is Overrated

As I sit here on the morning of this sunny April 1st, I find myself thinking about the actual importance of testing. We’ve all heard the speeches from QA about how important their job is to the company and the product. How they keep the developers honest and reduce the risk of bugs reaching the customers. Seriously? All they do is hold up release while complaining that some obscure path through the product results in a 5 second delay every 50th time it is performed. Who needs that? The project team certainly has better things to do than listen to that kind of nit-picking.

Now I suggest you think about how much time we can save on projects if we can eliminate the testing phases. In some companies, testing can take up half the project because some module changed and now we have to go through regression testing. As if the developers didn’t already make sure it worked. Of course, the savings for some companies won’t be as significant since they are already practicing the superior process of using hard dates that result in QA only being able to look at the product for a day or less before going live. In today’s fast-moving environment, every second counts when getting a new offering or feature out the door.

In the grand scheme, does anyone really expect software to be bug-free or even fully functional on launch day? All customers know that the initial release is more of a prototype than a polished product, and unless they want to get a feel for the application they really should wait until the first or second patch is sent out. With that in mind, who is going to care about the testing efforts done pre-release? Heck, we can even save money by letting the early adopters report any real issues found. They are going to be better suited to the task anyway since they are actually using the product rather than making up hypothetical uses. They are also more likely to follow the instructions for using the product and stay on the “happy path” rather than performing “Exploratory Testing”.

All in all, testing is completely overrated and unnecessary beyond the spot checks performed by the developers when they write the code. If anyone tells you otherwise, they are probably just whiny testers trying to feel like they matter, just ignore them.


The post above was written for April Fool’s Day and does not represent the actual opinion of the author or Wolvesbane Academy. If this article sounds like an opinion expressed by teammates or managers in your organization, I am so, so sorry for you.

Preventing Issues or Helping to Mitigate Risk

I recently read an article by Cassandra H. Leung claiming that promoting the idea that testers prevent issues is harming the “tester brand” by establishing unrealistic expectations for testers. My initial reaction was to think that she was crazy because testers do prevent issues if we are allowed to join the project early and influence development. After reading a bit farther, I realized that the latter part of that thought was what she was talking about. Testers can’t prevent issues if they are not allowed to be part of planning or their warnings go unheeded. Unfortunately, both of these scenarios are more common than teams would like to admit.

This revelation leads us to an identity crisis as we are forced to ask, “What should we be doing as testers?”

Over the years, testers have been charged with many formidable tasks: assuring quality, catching bugs, policing issues, preventing issues, enforcing acceptance criteria, and so on. As a whole, we’ve done a fair job of achieving success despite having the odds stacked against us by these vague definitions. The reason we cannot be completely successful is that we rarely have the level of control needed to perform these tasks.

As thought-leaders continue to move quality considerations “left” and raise awareness of the need to have testers involved early in the process, the need for testers gets questioned. If developers are testing and product owners are writing tests in the form of acceptance criteria, testers need to properly define their role on the team. Since quality is a team responsibility, we cannot claim it as our goal. My recommendation is to define the tester role as mitigating risk through observation and review of project quality considerations and conversations with the team.

This means that testers are not gatekeepers, defenders of quality, or even responsible for catching all of the bugs. It is our job to bring our unique perspective and abilities to the team and use them to minimize the risk of issues within an implementation.

I admit that this definition is still a bit vague but it is something that we can accomplish. Testers cannot control the quality developed into the product. Testers cannot prevent decisions that could result in issues down the road. Testers can raise concerns to the team and make sure that quality conversations occur so that everyone involved has the best information available to them before making a risky decision.