Empowering Automation Testing: Automation Testing Strategies — Part 1

Venkatesh Subramanian
Mar 24, 2024

Manual testing is time-consuming, repetitive, and prone to error, leading to dissatisfaction among developers.

Automating repetitive tests revolutionizes software development, freeing developers from tedious manual tasks. With automated tests, developers can confidently refactor code and implement large-scale changes, knowing any potential issues will be quickly identified. This enhances productivity and enjoyment in software development. Developers and testers can use the time saved to concentrate on creative tasks, such as exploratory testing.

Let's look at the properties of automation tests that help us devise the right strategy.


Speed

The speed of automation tests refers to how quickly they can be executed. Faster tests contribute to shorter feedback cycles, allowing for rapid iteration and quicker issue detection. This is especially important in agile, which mandates delivering a value increment every iteration.


Isolation

Isolation in automation tests denotes the degree to which tests are independent of each other and of external dependencies. Well-isolated tests are less likely to fail due to changes in unrelated parts of the system, enhancing test reliability and maintainability.


Scope

The scope of automation tests defines the breadth of functionalities and scenarios the tests cover. Comprehensive test coverage ensures that critical aspects of the software are thoroughly validated, reducing the risk of undetected defects in production.


Confidence

Confidence represents the level of trust and assurance provided by automation tests. Generally, the broader a test's scope, the greater the confidence it provides that the system works as a whole.

Figure: Mike Cohn’s test pyramid (Mike Cohn, Succeeding with Agile).

Note 1: In this post, we treat UI (user interface) testing as end-to-end testing.

Unit Tests:

  • Purpose: Unit tests focus on testing individual components or functions in isolation.
  • Scope: They cover small atomic code units (e.g., functions, methods, classes).
  • Speed: Unit tests run quickly because they don’t involve external dependencies.
  • Isolation: Each test case should be independent and not rely on external services or databases.
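To make these properties concrete, here is a minimal, hedged sketch of a unit test in Python's built-in unittest framework. The function and test names are illustrative; the point is that the test exercises a single pure function with no network, database, or filesystem involved, so it is fast and fully isolated.

```python
import unittest

# A pure function (no external dependencies) and its unit test.
# Names are illustrative.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```

Because the test touches nothing outside the function under test, it can only fail when the function itself regresses, which is exactly the isolation property described above.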

The scope and confidence might be lower for individual unit tests, but collectively, they provide a solid foundation for ensuring the quality of the codebase. By catching issues early, they contribute significantly to the overall reliability of the software.

Unit tests form the base of the Test Pyramid and occupy the largest area, suggesting the most granular and numerous tests within a test suite.

Note 2: Unit testing includes testing at all the layers, including Presentation (UI), Business Logic, and Data Layer.

Example 1: UI React Application unit tests — React Testing Library, Jest, Enzyme

Example 2: DBT unit tests

Unit tests in the CI pipeline

Given the speed of unit tests, they should be integrated into the service build pipeline. Any code alterations that cause unit test failures must be promptly flagged. Moreover, each pull request (PR) can incorporate its dedicated CI pipeline, ensuring the execution of unit tests before merging into the main branch.
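As a sketch of what a per-PR pipeline might look like, here is a hypothetical GitHub Actions workflow; the workflow name, paths, and test command are all assumptions for illustration, not a prescribed setup.

```yaml
# Hypothetical GitHub Actions workflow: run unit tests on every PR
# before it can merge into main. Names and paths are illustrative.
name: unit-tests
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python -m pytest tests/unit --maxfail=1
```

A failing unit test here blocks the merge, which is the prompt flagging the article calls for.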

Solitary Unit Tests

These tests focus on a single “unit” of code, such as a function or class, and stub or mock all its dependencies. The goal is to isolate the unit being tested to ensure that the test only fails due to issues within the unit itself, not its collaborators.

Sociable Unit Tests

Sociable unit tests allow the unit under test to interact with its real collaborators. This approach can lead to more integrative tests and may cover interactions between units, potentially catching issues that solitary tests might miss.

Each approach has its trade-offs. Solitary tests are faster and more focused, making failures easier to diagnose, but they might miss integration issues. Sociable tests can catch more complex interactions but can be slower and more brittle if collaborators change frequently. Solitary unit tests are generally the preferred approach.
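The two styles can be sketched side by side. In this hedged Python example (class and method names are hypothetical), the same unit is tested once with a mocked collaborator (solitary) and once with the real one (sociable):

```python
from unittest.mock import Mock

# A unit (OrderService) and its collaborator (TaxCalculator);
# names are illustrative.
class TaxCalculator:
    def tax_for(self, amount: float) -> float:
        return round(amount * 0.1, 2)

class OrderService:
    def __init__(self, tax_calculator):
        self.tax_calculator = tax_calculator

    def total(self, amount: float) -> float:
        return amount + self.tax_calculator.tax_for(amount)

# Solitary: the collaborator is replaced with a mock, so the test
# can only fail because of OrderService itself.
def test_total_solitary():
    calc = Mock()
    calc.tax_for.return_value = 5.0
    assert OrderService(calc).total(50.0) == 55.0

# Sociable: the real TaxCalculator is used, so the test also covers
# the interaction between the two units.
def test_total_sociable():
    assert OrderService(TaxCalculator()).total(50.0) == 55.0

test_total_solitary()
test_total_sociable()
```

Note that the sociable test would start failing if TaxCalculator changed its rate, while the solitary test would not; that is exactly the trade-off described above.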

Service Tests

  • Purpose: Service tests bypass the user interface and test the services directly.
  • Scope: A single service.
  • Speed: These tests can be as fast as small-scoped unit tests, but if you decide to test against a real database or to go over the network to stubbed downstream collaborators, test times increase.
  • Isolation: Isolation at the service level.

Conducting tests against a single service in this manner enhances our confidence in the service’s expected behaviour while maintaining test isolation. This approach limits the scope of potential failures to only the service being tested. To achieve better isolation, it’s crucial to stub out or mock all external collaborators.


Mocking

Mocking is a technique that replaces real objects with fake ones during testing. In backend service testing, we use mocks to replace external dependencies (such as databases or APIs) with mock objects. Mock objects mimic the behaviour of real objects but don't perform actual operations.

Example: Suppose you’re testing a service that communicates with an external payment gateway. Instead of making actual payment requests, you create a mock payment gateway and verify that it is invoked with the expected parameters, exactly once and not twice.
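The payment-gateway scenario can be sketched with Python's unittest.mock; the service and gateway names here are hypothetical stand-ins:

```python
from unittest.mock import Mock

# Hypothetical checkout service that charges via an external
# payment gateway.
class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, order_id: str, amount: float) -> str:
        self.gateway.charge(order_id, amount)
        return "confirmed"

def test_checkout_charges_gateway_exactly_once():
    gateway = Mock()
    service = CheckoutService(gateway)

    assert service.checkout("order-42", 99.99) == "confirmed"
    # Behaviour verification: the mock was invoked with the expected
    # parameters, exactly once and not twice.
    gateway.charge.assert_called_once_with("order-42", 99.99)

test_checkout_charges_gateway_exactly_once()
```

The assertion at the end is what makes this a mock rather than a stub: the test verifies how the dependency was used, not just what it returned.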


Stubbing

Stubbing is another technique for replacing real objects during testing. In backend service testing, stubs replace external dependencies with pre-programmed objects that return specific sets of values in response to specific input parameters. Unlike mocks, stubs don't focus on interactions; they focus on returning expected results.

Example: If your service interacts with a database, you can create a stub database connection that returns predefined data without actually querying the database.
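A minimal sketch of that stub, with hypothetical class names: the stub database simply returns canned data and never verifies how it was called.

```python
# A stub database that returns predefined data for specific inputs;
# it performs no interaction verification. Names are illustrative.
class StubUserDatabase:
    _users = {1: {"id": 1, "name": "Alice"}}

    def find_user(self, user_id: int):
        return self._users.get(user_id)

class UserService:
    def __init__(self, db):
        self.db = db

    def display_name(self, user_id: int) -> str:
        user = self.db.find_user(user_id)
        return user["name"] if user else "unknown"

def test_display_name_uses_stubbed_data():
    service = UserService(StubUserDatabase())
    assert service.display_name(1) == "Alice"
    assert service.display_name(99) == "unknown"

test_display_name_uses_stubbed_data()
```

Contrast with the mock example: here the test only asserts on the returned result, never on how many times or with which arguments the stub was invoked.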

Mocks focus on interactions and behaviour verification, while stubs provide predefined responses. Based on your testing needs, choose the appropriate technique!

Mountebank is an open-source tool for creating test doubles (mocks, stubs, and proxies) over HTTP, HTTPS, TCP, and SMTP protocols.

One limitation of Mountebank is that it doesn’t support stubbing for messaging. For example, if you want to ensure an event was properly sent (and maybe received) via a broker, you’ll have to look elsewhere for a solution. This is one area where Pact might be able to help.
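As a sketch, a Mountebank imposter definition like the following (port, path, and payload are illustrative) can be POSTed to Mountebank's admin endpoint at http://localhost:2525/imposters to stand in for a downstream HTTP service:

```json
{
  "port": 4545,
  "protocol": "http",
  "stubs": [
    {
      "predicates": [
        { "equals": { "method": "GET", "path": "/payments/123" } }
      ],
      "responses": [
        {
          "is": {
            "statusCode": 200,
            "headers": { "Content-Type": "application/json" },
            "body": { "id": "123", "status": "SETTLED" }
          }
        }
      ]
    }
  ]
}
```

The service under test is then pointed at http://localhost:4545 instead of the real collaborator, giving the service-level isolation described above.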

Service Integration test suite

If you decide to contact external services, attempt to write narrow integration tests and run your external dependencies locally. For example, spin up a local MySQL database, or test against a local ext4 filesystem. If you’re integrating with a separate service, run an instance of that service locally.

If there’s no way to run a third-party service locally, you should opt to run a dedicated test instance and point to this test instance when running your integration tests.
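A narrow integration test exercises one real dependency with a small scope. As a hedged sketch, the example below uses Python's in-memory sqlite3 as a stand-in for the "spin up a local database" approach; the schema and query are illustrative, and the same idea applies to a local MySQL instance.

```python
import sqlite3

# Narrow integration test: one real (local, in-memory) database,
# one query under test. Schema and data are illustrative.
def fetch_order_total(conn: sqlite3.Connection, order_id: int) -> float:
    row = conn.execute(
        "SELECT SUM(quantity * unit_price) FROM order_items "
        "WHERE order_id = ?",
        (order_id,),
    ).fetchone()
    return row[0] or 0.0

def test_fetch_order_total():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE order_items "
        "(order_id INT, quantity INT, unit_price REAL)"
    )
    conn.executemany(
        "INSERT INTO order_items VALUES (?, ?, ?)",
        [(1, 2, 10.0), (1, 1, 5.0), (2, 3, 1.0)],
    )
    assert fetch_order_total(conn, 1) == 25.0
    assert fetch_order_total(conn, 999) == 0.0

test_fetch_order_total()
```

Because the database is real (the SQL actually executes) but local and disposable, the test stays fast and reproducible while still catching query-level mistakes a pure stub would miss.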

Since service tests have a slightly broader scope than unit tests, albeit with lower speed, the number of service tests should ideally be fewer than that of unit tests.

Service tests in the CI pipeline

Considering that the isolation and scope are at the service level, it’s viable to include the service tests in the service's CI pipeline as well.

End-to-End Tests

  • Purpose: End-to-end (UI) tests validate the entire application from the user’s perspective.
  • Scope: They cover end-to-end scenarios, including user interactions.
  • Speed: UI tests involve real browsers or emulators and exercise multiple components and services. They mimic user behaviour and catch integration issues, but E2E tests are slower due to their end-to-end nature.

E2E tests ownership

“Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” (Melvin Conway)

The architecture on the left exhibits high cohesion within related technologies but low cohesion concerning business functionality. To facilitate ease of change, it’s imperative to reevaluate how we organize code, prioritizing cohesion of business functionality over technology, as depicted on the right.

The example provided is just a demonstration. You can create micro frontends and service groups tailored to your business requirements and priorities.

In the illustrated scenario, each business functionality team is responsible for their respective end-to-end (E2E) tests. Specifically, the Order Management team oversees the Order Management E2E tests, while the Customer Management team takes charge of the Customer Management E2E tests.

E2E tests in the CI pipeline

Given the complexity and time required to execute end-to-end (E2E) tests, the E2E stage of the CI pipeline should run only after all individual service pipelines have successfully passed their unit and service tests.

As the number of services increases, managing end-to-end (E2E) tests becomes more challenging. While E2E tests may remain manageable with a few services, their feasibility diminishes as the service count grows.

What we want to ensure is that no change breaks the consumer, i.e., that there are no structural or semantic breakages:

  • Structural: Changes in data structures (e.g., API contract changes).
  • Semantic: Changes in behaviour (e.g., backwards-incompatible changes).

Consumer-driven contracts (CDC) and consumer-driven contract testing with Pact help here.

These contracts typically cover:

  • Request Format: Expected data format (headers, body, query parameters).
  • Response Format: Expected response structure (headers, body).
  • Fields and Values: Specific fields and their values.

Contracts capture the what (structure) and the how (behaviour). For example, a consumer expects a specific response to a given request. If the provider changes its behaviour (e.g., different validation rules, additional side effects), it’s a semantic breakage.
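Pact generates and verifies such contracts for you; as a minimal illustration of the underlying idea only (this is not Pact's API), a consumer-side contract can be expressed as the fields and types the consumer relies on, and a provider response checked against it:

```python
# Minimal illustration of a consumer-driven contract check
# (NOT Pact's API). The consumer declares the fields and types it
# depends on; the provider's response is verified against them.
CONSUMER_CONTRACT = {
    "id": str,
    "status": str,
    "amount": float,
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """A structural breakage is any missing field or changed type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# Honours the contract (extra fields are allowed).
ok = {"id": "p-1", "status": "SETTLED", "amount": 10.5, "currency": "USD"}
# Structural breakage: "amount" changed from a number to a string.
broken = {"id": "p-1", "status": "SETTLED", "amount": "10.5"}

assert satisfies_contract(ok, CONSUMER_CONTRACT)
assert not satisfies_contract(broken, CONSUMER_CONTRACT)
```

Note this sketch only detects structural breakages; catching semantic breakages requires verifying provider behaviour against recorded interactions, which is what Pact's provider verification step does.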

Please read more about Pact in its official documentation.

Finally, let us acknowledge that imperfections in any testing approach are inevitable. The following practices help compensate and also augment the stability of the production system:

  • Low MTTR: Mean Time to Repair (MTTR) represents the average time to recover from a failure or incident. A low MTTR means minimizing downtime and restoring service quickly, which includes having good rollback strategies.
  • High MTBF: Mean Time Between Failures (MTBF) represents the average time between two consecutive failures. A high MTBF indicates less frequent failures.
  • Techniques like blue-green deployment and canary releasing allow testing closer to production. These approaches acknowledge that it’s impossible to catch all problems before releasing software. Instead of striving for perfection, we focus on managing failures effectively
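As a quick sketch of how the two metrics above are computed from incident history (the numbers are illustrative):

```python
# MTBF and MTTR from incident history; all times in hours.
# Data is illustrative.
uptimes_between_failures = [120.0, 200.0, 160.0]  # healthy operation
repair_times = [0.5, 1.0, 1.5]                    # time to restore service

mtbf = sum(uptimes_between_failures) / len(uptimes_between_failures)
mttr = sum(repair_times) / len(repair_times)

print(f"MTBF: {mtbf:.1f} h")  # higher is better: less frequent failures
print(f"MTTR: {mttr:.1f} h")  # lower is better: faster recovery
```

With this data, MTBF is 160 hours and MTTR is 1 hour; deployment techniques like blue-green and canary releases mainly improve MTTR by making rollback fast.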


Prioritize fast feedback by categorizing tests appropriately.

Whenever a larger scope test fails, seize the opportunity to decompose it into smaller, more manageable tests. This allows smaller-scope tests to collectively address the functionalities and ensure comprehensive test coverage.

Minimize reliance on end-to-end tests spanning multiple teams by leveraging consumer-driven contracts.

I will address other cross-functional testing aspects, such as Web Page Latency, User Capacity, Accessibility, and Data Security, in a separate post.

Thanks for reading! If you’ve got ideas to contribute to this conversation please comment. If you like what you read and want to see more, clap me some love!



Venkatesh Subramanian

Product Development & Engineering Leader | Software Architect | AI/ML | Cloud Computing