Top 5 mistakes companies make when building test automation

Test automation has become popular among companies looking to improve the quality and efficiency of their product testing, but it also comes with significant cost. Automating tests increases a team's productivity and helps companies achieve good outcomes in a shorter time span. However, many companies commit some common mistakes while implementing test automation.

What mistakes do companies make when building automation?

  1. Not having a good test strategy

First, let’s understand: what is test strategy?

A test strategy is a document that describes how the company will approach testing throughout the project life cycle. It sets the guidelines to be followed to achieve the test objectives and enables the team to perform the types of test execution described in the test plans. It covers scope, testing approaches, defect-tracking approaches, and automation.

Reasons behind a bad test strategy

# Falling in love with a single form of testing

This comes from my personal experience. Companies start with manual testing and then switch to automation. After discovering the power of automation, they try to achieve 100% automation for the product: they want to automate everything, slowly forget about manual testing, and end up following a single form of testing. Sometimes the opposite happens, and companies rely only on manual or exploratory testing for the project.

# Relying on QA alone for testing

Sometimes, development teams depend on the QA team to find bugs they could easily catch themselves during unit testing. I have seen development and QA teams work in the same office but never interact with each other; they don't even talk face-to-face, communicating only through the bug tracker and in meetings. Testing is not just the QA team's job; it is the responsibility of the whole organization to deliver a good and secure product to the client.

# Division between the QA and development teams

This is what I have seen in product-based companies: they keep separate schedules for development and testing, there is no collaboration between the two teams, and no one bothers to encourage it. Here is a typical example: suppose there is a 30-day sprint in which the team has decided to ship a new feature. They spend three weeks building the feature and only one week testing and fixing bugs. If QA then finds a showstopper or high-priority bug, there is heavy pressure on developers to fix it quickly, and rushed fixes often introduce more bugs.

  2. Wrong tool selection

It is very important to select a suitable tool for your project. If the tool is good and provides the required features, automation becomes easier and more effective. During a proof of concept (POC), we have to evaluate open-source, commercial, and custom tools, and POC/Test Managers have to check many factors before finalizing any of them. To select the most suitable testing tool for the project, the Test Manager should follow the tool selection process below.

  • Identification of tool requirements

  • Tool and vendor evaluation

  • Cost estimation of the tool

  • Benefits of the tool and the final decision

# Identification of tool requirements

How can you select a testing tool if you do not know what you are looking for?

The project team has to identify the test tool requirements precisely, and all of them should be documented and reviewed by the team and the management board.

# Tool and vendor evaluation

After finalizing the tool requirements, the Test Manager should:

  • Review the commercial, open-source, and custom tools available in the market against the project requirements.

  • Create a list of shortlisted tools that best meet your criteria.

  • A very important factor to consider while making your decision is the vendor. You should consider the vendor’s reputation, after-sale support, tool update frequency, etc.

  • Check the quality of the tool by running a trial and launching a pilot.

# Cost estimation of the tool

This is one of the parts where companies most often make wrong decisions. The Test Manager must review the tool's estimate in terms of cost, value, and benefit. Suppose that after spending considerable time investigating testing tools, the project team finds one that suits the requirements, but discussions with the vendor reveal that its cost is too high compared with the value and benefits it would bring to the team. In such cases, the balance between cost and benefit may change the final decision.
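The cost-versus-benefit comparison above can be sketched as a simple first-year ROI calculation. This is an illustration only; the function name, the formula's scope (license plus setup cost against manual hours saved), and all the numbers are hypothetical, not taken from any real vendor or project.

```python
# Illustrative only: a simple first-year ROI comparison for a candidate tool.
def automation_roi(license_cost, setup_hours, hourly_rate,
                   manual_hours_saved_per_cycle, cycles_per_year):
    """Return first-year ROI as a ratio: (savings - investment) / investment."""
    investment = license_cost + setup_hours * hourly_rate
    savings = manual_hours_saved_per_cycle * hourly_rate * cycles_per_year
    return (savings - investment) / investment

# Example: a $5,000 license, 80 setup hours at $50/h, saving 40 manual
# testing hours across 12 regression cycles per year.
roi = automation_roi(5000, 80, 50, 40, 12)
print(f"First-year ROI: {roi:.2f}")  # positive means the tool pays for itself
```

A real estimate would also include training, maintenance, and infrastructure costs, but even this rough ratio makes the cost/benefit trade-off concrete enough to compare tools.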

# Make the final decision

To make the final decision, the Test Manager must understand the tool's strengths and weaknesses and balance cost against benefit. He has to check whether the tool has proper community support, open libraries, good reporting, language support, integration with other tools, and so on. Missing these key points leads to selecting the wrong tool for automation, which wastes a lot of time and money.

  3. Always choosing open-source tools

Defaulting to open-source tools for every project is not good practice, especially when the project has strict security requirements; banking and trading projects are the best examples. In such domains, I personally believe we should not go for open-source tools but instead consider commercial tools that come with built-in security features and vendor support.

  4. Selecting inappropriate test cases for automation

This is a common mistake in new projects: the testing team selects incorrect or inappropriate test cases for automation. Below are the kinds of test cases that make good automation candidates:

  • Repetitive test cases

  • Complex test cases

  • Lengthy test cases

  • Critical test cases that cover the main functionality of the application

  • Frequently executed test cases

  • Test cases that cover multiple environments
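The criteria above can be turned into a rough screening step. The sketch below is a toy example; the scoring function, the field names, and the thresholds are all hypothetical, and a real team would tune them to its own backlog.

```python
# A toy sketch of scoring test cases as automation candidates,
# based on the selection criteria listed above. Higher is better.
def automation_score(case):
    """Score a test-case dict; each matched criterion adds points."""
    score = 0
    if case.get("runs_per_release", 0) >= 5:   # repetitive / frequently executed
        score += 2
    if case.get("steps", 0) >= 20:             # lengthy or complex
        score += 1
    if case.get("critical", False):            # covers main functionality
        score += 2
    if len(case.get("environments", [])) > 1:  # multi-environment coverage
        score += 1
    return score

login = {"runs_per_release": 12, "steps": 8, "critical": True,
         "environments": ["chrome", "firefox"]}
one_off = {"runs_per_release": 1, "steps": 5}
print(automation_score(login), automation_score(one_off))  # prints: 5 0
```

Even a crude score like this forces the team to justify why a test case is worth automating instead of automating whatever was written most recently.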

  5. Not writing clean code, which creates lots of maintenance

This is a very common mistake I have personally seen in many new projects: teams don't follow any clean-code principles, which creates a lot of maintenance work later. Before we jump to the list of rules for clean code, let's understand what clean code is.

Clean code is a set of principles for writing code that is easy to understand and modify. For example, if a senior tester writes some code and then leaves the company, the code should be written in such a way that any new joiner can understand what it does and modify it safely.

Below is a list of principles for clean code:

  • Write code as simple as possible.

  • Don’t repeat yourself.

  • Delete unnecessary code.

  • Add detailed comments to the code.

  • Follow the naming convention.

  • Keep code readable and understandable.
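A small before/after sketch of the principles above. The domain (a login-validation helper in a test suite) and all names are hypothetical; the point is the contrast between cryptic, duplicated code and simple, well-named code.

```python
# Before: cryptic names, magic number, no comments.
def t1(d):
    return d["u"] != "" and d["p"] != "" and len(d["p"]) >= 8

# After: descriptive names, one small reusable check, a named constant (DRY).
MIN_PASSWORD_LENGTH = 8

def is_filled(value):
    """A field is filled when it is a non-empty string."""
    return isinstance(value, str) and value != ""

def is_valid_login(credentials):
    """Valid when username and password are filled and the password
    meets the minimum length shared across the test suite."""
    return (is_filled(credentials.get("username"))
            and is_filled(credentials.get("password"))
            and len(credentials["password"]) >= MIN_PASSWORD_LENGTH)

print(is_valid_login({"username": "alice", "password": "s3cretpass"}))  # True
print(is_valid_login({"username": "bob", "password": "short"}))         # False
```

Both versions behave the same, but the second one is the version a new joiner can read, extend, and reuse without reverse-engineering it first.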