5 Must-Have DevOps Tools for Test Automation in CI/CD  

DevOps tools for test automation – If you’re working in a real product team, you already know this uncomfortable truth: having automated tests is not the same as having a reliable release process. Many teams do everything “right” on paper—unit tests, API tests, even some end-to-end coverage—yet production releases still feel stressful. The pipeline goes green, but the deployment still breaks. Or the tests pass today and fail tomorrow for no clear reason. Over time, people stop trusting the automation, and the team quietly goes back to manual checking before every release.  

I’ve seen this happen more times than I’d like to admit, and the pattern is usually the same. The problem is not that teams aren’t writing tests. The real problem is that the system around the tests is weak: inconsistent environments, unstable dependencies, slow pipelines, poor reporting, and shared QA setups where multiple deployments collide. When those foundations are missing, test automation becomes “best effort” instead of a true safety net. 

That’s why DevOps tools for test automation matter so much. In a good CI/CD setup, tools don’t just run builds and deployments—they create a repeatable process where every code change is validated the same way, in controlled environments, with clear evidence of what happened. This is what makes automation trustworthy. And once engineers trust the pipeline, quality starts scaling naturally because testing becomes part of the workflow, not an extra task. 

In this blog, I’m focusing on five DevOps tools for test automation that consistently show up in strong test automation pipelines—not because they’re trending, but because each one solves a practical automation problem teams face at scale: 

  • Git (GitHub/GitLab/Bitbucket) for triggering automation and enforcing merge quality gates 
  • Jenkins for orchestrating pipelines, parallel execution, and test reporting 
  • Docker for eliminating environment drift and making test runs consistent everywhere 
  • Kubernetes for isolated, disposable environments and scalable test execution 
  • Terraform (Infrastructure as Code) for reproducible infrastructure and automation-ready environments 

I’ll keep this guide practical and implementation-focused. You’ll see what each tool contributes to automation, why it matters, and how teams use them together in real CI/CD workflows.

Now, before we go tool-by-tool, let’s define what “good” test automation actually looks like in a CI/CD pipeline. 

What “Test Automation” Really Means in CI/CD 

Before we jump into DevOps tools, it helps to define what “good” looks like. 

A solid test automation system in CI/CD typically has these characteristics: 

  • Every code change triggers tests automatically 
  • Tests run in consistent environments (same runtime, same dependencies, same configuration) 
  • Feedback is fast enough to influence decisions (engineers shouldn’t wait forever) 
  • Failures are actionable (clear reports, logs, and artifacts) 
  • Environments are isolated (no conflicts between branches or teams) 
  • The process is repeatable (you can rerun the same pipeline and get predictable behaviour) 

Most teams struggle not because they can’t write tests, but because they can’t keep test execution stable at scale. The five DevOps tools for test automation in CI/CD below solve that problem from different angles. 


Tool 1: Git (GitHub/GitLab/Bitbucket) – The Control Centre for Automation 

Git is usually introduced as version control, but in CI/CD it becomes something much bigger: it becomes the system that governs automation. 

In a mature setup, Git is where automation is triggered, enforced, and audited. 

Why Git is essential for test automation 

  • Git turns changes into events (and events trigger automation) 
    A strong pipeline isn’t dependent on someone remembering to run tests. Git events automatically drive the workflow: a push to a feature branch triggers lint and unit tests, opening a pull request triggers deeper automated checks, merging to main triggers deployment to staging and post-deploy tests, and tagging a release triggers production deployment and smoke tests. That event-driven model is the heart of CI/CD test automation. 
  • Git enforces quality gates through branch protections 
    This is one of the most overlooked “automation” features because it doesn’t look like testing at first. When branch protection rules require specific checks to pass, test automation becomes non-negotiable: required CI checks (unit tests, build, API smoke), required reviews, and blocked merge when pipeline fails.
    Without those rules, automation becomes optional. Optional automation gets skipped under pressure. Skipped automation eventually becomes unused automation. 
  • Git version-controls everything that affects test reliability
    Stable automation means versioning more than application code: the automated tests themselves, pipeline definitions (Jenkinsfile), Dockerfiles and container configs, Kubernetes manifests / Helm charts, Terraform infrastructure code, and test data and seeding scripts (where applicable). When all of this lives in Git, you can reproduce outcomes. That reproducibility is one of the biggest drivers of trust in automation. 

Practical example: A pull request workflow that makes automation enforceable 

Here’s a pattern that works well in real teams: 

Branch structure: main – protected, always deployable; feature/* – developer work branches; optional: release/* – release candidates. 

Pull request checks: linting, unit tests, build (to ensure code compiles / packages), API tests (fast integration validation), and E2E smoke tests (small, targeted, high signal). 

Protection rules: PR cannot merge unless required checks pass, disallow direct pushes to main, and require at least one reviewer. This turns automation into a daily habit. It also forces early failure detection: bugs are caught at PR time, not after a merge. 
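
As a rough sketch of how those branch rules map onto pipeline behaviour, a declarative Jenkins pipeline can gate stages on the Git context that triggered the build (the stage names and deploy script below are illustrative, not part of the original setup): 

// Sketch: stages inside a declarative pipeline's stages { } block,
// gated on the Git event that triggered the build.
stage('E2E Smoke') {
    when { changeRequest() }              // run only for pull request builds
    steps { sh 'npm run test:e2e:smoke' }
}
stage('Deploy to Staging') {
    when { branch 'main' }                // run only after a merge to main
    steps { sh './scripts/deploy.sh staging' }   // hypothetical deploy script
}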

Practical example: Using Git to control test scope (a realistic performance win) 

Not every test should run on every change. Git can help you control test selection in a clean, auditable way. Common approaches: run full unit tests on every PR, run a small set of E2E smoke tests on every PR, and run full regression E2E nightly or on demand. A practical technique is to use PR labels or commit tags to control pipeline behavior: 

label: run-e2e-full triggers full E2E suite, default PR triggers only E2E smoke, and nightly pipeline triggers full regression. 
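
One way to wire this up (a sketch, assuming GitHub, a multibranch Jenkins job that exposes CHANGE_ID, a GITHUB_TOKEN credential, and jq on the agent) is to read the PR labels in the pipeline and pick the test scope from them: 

# Decide E2E scope from PR labels (repo path and token are hypothetical).
LABELS=$(curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/repos/example-org/demo-app/issues/${CHANGE_ID}/labels" \
  | jq -r '.[].name')

if echo "$LABELS" | grep -q "run-e2e-full"; then
  npm run test:e2e          # full suite
else
  npm run test:e2e:smoke    # default: smoke only
fi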

This keeps pipelines fast while still maintaining coverage. 

Tool 2: Jenkins – The Orchestrator That Makes Tests Repeatable 

Once Git triggers automation, you need something to orchestrate the steps, manage dependencies, and publish results. Jenkins is still widely used for this because it’s flexible, integrates with almost everything, and supports “pipeline as code.” 
For test automation, Jenkins is important because it transforms a collection of scripts into a controlled, repeatable process. 

Why Jenkins is essential for test automation 

  • Jenkins makes test execution consistent and repeatable 
    A Jenkins pipeline defines what runs, in what order, with what environment variables, on what agents, and with what reports and artifacts. That consistency is the difference between “tests exist” and “tests protect releases.”
  • Jenkins supports staged testing (fast checks first, deeper checks later) 
    A well-designed CI/CD pipeline is layered:
    Stage 1: lint + unit tests (fast feedback), Stage 2: build artifact / image, Stage 3: integration/API tests, Stage 4: E2E smoke tests, and Stage 5: optional full regression (nightly or on-demand).
    Jenkins makes it easy to encode this strategy so it runs the same way every time.
  • Jenkins enables parallel execution 
    As test suites grow, total runtime becomes the biggest pipeline bottleneck. Jenkins can parallelize lint and unit tests, API tests and UI tests, and sharded E2E jobs (multiple runners). Parallelization is a major reason DevOps tooling is critical for automation: without it, automation becomes too slow to be practical. 
  • Jenkins publishes actionable test outputs 
    Good automation isn’t just “pass/fail.” Jenkins can publish JUnit reports, HTML reports (Allure / Playwright / Cypress), screenshots and videos from failed UI tests, logs and artifacts, and build metadata (commit SHA, image tag, environment). This visibility reduces debugging time and increases trust in the pipeline. 

Practical Jenkins example: A pipeline structure used in real CI/CD automation 

Below is a Jenkinsfile that demonstrates a practical structure: 

  • Fast checks first 
  • Build Docker image 
  • Deploy to Kubernetes namespace (ephemeral environment) 
  • Run API and E2E tests in parallel 
  • Archive reports 
  • Cleanup 

You can adapt the commands to your stack (Maven/Gradle, pytest, npm, etc.). 

pipeline { 
    agent any 
    environment { 
        APP_NAME        = "demo-app" 
        DOCKER_REGISTRY = "registry.example.com" 
        IMAGE_TAG       = "${env.BUILD_NUMBER}" 
        NAMESPACE       = "pr-${env.CHANGE_ID ?: 'local'}" 
    } 
    options { 
        timestamps() 
    } 
    stages { 
        stage("Checkout") { 
            steps { checkout scm } 
        } 
        stage("Install & Build") { 
            steps { 
                sh "npm ci" 
                sh "npm run build" 
            } 
        } 
        stage("Fast Feedback") { 
            parallel { 
                stage("Lint") { 
                    steps { sh "npm run lint" } 
                } 
                stage("Unit Tests") { 
                    steps { sh "npm test -- --ci --reporters=jest-junit" } 
                    post { always { junit "test-results/unit/*.xml" } } 
                } 
            } 
        } 
        stage("Build & Push Docker Image") { 
            steps { 
                sh """ 
      docker build -t ${DOCKER_REGISTRY}/${APP_NAME}:${IMAGE_TAG} . 
      docker push ${DOCKER_REGISTRY}/${APP_NAME}:${IMAGE_TAG} 
    """ 
            } 
        } 
        stage("Deploy to Kubernetes (Ephemeral)") { 
            steps { 
                sh """ 
      kubectl create namespace ${NAMESPACE} || true 
      kubectl -n ${NAMESPACE} apply -f k8s/ 
      kubectl -n ${NAMESPACE} set image deployment/${APP_NAME} ${APP_NAME}=${DOCKER_REGISTRY}/${APP_NAME}:${IMAGE_TAG} 
      kubectl -n ${NAMESPACE} rollout status deployment/${APP_NAME} --timeout=180s 
    """ 
            } 
        }
        stage("Automation Tests") { 
            parallel { 
                stage("API Tests") { 
                    steps { 
                        sh """ 
          export BASE_URL=http://${APP_NAME}.${NAMESPACE}.svc.cluster.local:8080 
          npm run test:api 
        """ 
                    } 
                    post { always { junit "test-results/api/*.xml" } } 
                } 
                stage("E2E Smoke") { 
                    steps { 
                        sh """ 
          export BASE_URL=https://${APP_NAME}.${NAMESPACE}.example.com 
          npm run test:e2e:smoke 
        """ 
                    } 
                    post { 
                        always { 
                            archiveArtifacts artifacts: "e2e-report/**", allowEmptyArchive: true 
                        } 
                    } 
                } 
            } 
        } 
    } 
    post { 
        always { 
            sh "kubectl delete namespace ${NAMESPACE} --ignore-not-found=true" 
        } 
    } 
} 

This pipeline basically handles everything that should happen when someone opens or updates a pull request. First, it pulls the latest code, installs the dependencies, and builds the application. Then it quickly runs lint checks and unit tests in parallel so small mistakes are caught early instead of later in the process. 

If those basic checks pass, the pipeline creates a Docker image of the app and pushes it to the registry. That same image is then deployed into a temporary Kubernetes namespace created just for that PR. This keeps every pull request isolated from others and avoids environment conflicts. 

Once the app is running in that temporary environment, the pipeline runs API tests and E2E smoke tests against it. The results, reports, and any failure artifacts are saved so the team can easily understand what went wrong. In the end, whether tests pass or fail, the temporary namespace is deleted to keep the cluster clean and disposable.

Why this Jenkins setup improves automation 

This pipeline is automation-friendly because it fails fast on lint and unit issues, builds a deployable artifact before running environment-dependent tests, isolates test environments per PR (namespace isolation), runs API and UI tests in parallel (better pipeline time), stores test reports and artifacts for debugging, and cleans up environments automatically (important for cost and cluster hygiene). 

Tool 3: Docker – The Foundation for Consistent, Portable Test Environments 

If Jenkins is the orchestrator, Docker is the stabilizer. Docker solves a major cause of unreliable automation: environment differences. A large percentage of pipeline failures happen because of different runtime versions (Node/Java/Python), different OS packages, missing dependencies, browser/driver mismatches for UI automation, and inconsistent configuration between local and CI. 

Docker reduces that variability by packaging the environment with the app or tests. 

Why Docker is essential for automation 

  • Docker eliminates “works on my machine” failures 
    When tests run inside a container, they run with consistent runtime versions, pinned dependencies, and predictable OS environment. This makes results repeatable across laptops, CI agents, and cloud runners. 
  • Docker makes test runners portable
    Instead of preparing every Jenkins agent with test dependencies, you run a container that already contains them. This reduces setup time and avoids agent drift over months. 
  • Docker enables clean integration test stacks 
    Integration tests often need services: database (PostgreSQL/MySQL), cache (Redis), message broker (RabbitMQ/Kafka), and local dependencies or mock services. Docker Compose can spin these up consistently, making integration tests practical and reproducible. 
  • Docker supports parallel and isolated execution 
    Containers isolate processes. That isolation helps when running multiple test jobs simultaneously without cross-interference. 

Practical Docker example A: Running UI tests in a container (Playwright)

UI test reliability often depends on browser versions and system libraries. A container gives you control. 

Dockerfile for Playwright tests written in JS/TS 

FROM mcr.microsoft.com/playwright:v1.46.0-jammy 
 
WORKDIR /tests 
COPY package.json package-lock.json ./ 
RUN npm ci 
 
COPY . . 
CMD ["npm", "run", "test:e2e"] 

This Dockerfile is basically packaging our entire E2E test setup into a container. Instead of installing browsers and fixing environment issues every time, we simply start from Playwright’s official image, which already has everything preconfigured. 

We set a working directory inside the container, install the project dependencies using npm ci (so it’s always a clean install), and then copy our test code into it. 

When the container runs, it directly starts the E2E tests. 

What this really means is that our tests don’t depend on someone’s local setup anymore. Whether they run on a laptop or in CI, the environment stays the same — and that removes a lot of random, environment-related failures. 

Run in CI

docker build -t e2e-tests:ci . 
docker run --rm -e BASE_URL="https://staging.example.com" e2e-tests:ci 

The first command builds a Docker image named e2e-tests:ci from the Dockerfile in the current directory. That image now contains the Playwright setup, the test code, and all required dependencies bundled together. 

The second command actually runs the tests inside that container. We pass the BASE_URL so the tests know which deployed environment they should hit — in this case, staging. The --rm flag simply cleans up the container after the run so nothing is left behind. 

Basically, we’re packaging our test setup once and then using it to test any environment we want, without reinstalling or reconfiguring things every time. 

In a real pipeline, you typically add an output folder mounted as a volume (to extract reports), retry logic only for known transient conditions, and trace/video capture on failure. 
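
For example, mounting a report directory out of the container might look like this (a sketch; the host path and report folder are assumptions based on the Dockerfile above): 

# Mount a host folder so HTML reports survive after the container exits
docker run --rm \
  -e BASE_URL="https://staging.example.com" \
  -v "$(pwd)/e2e-report:/tests/e2e-report" \
  e2e-tests:ci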

Practical Docker example B: Integration tests with Docker Compose (app + database + tests) 

This is a pattern I’ve used often because it gives developers a “CI-like” environment locally. 

docker-compose.yml 

version: "3.8" 
services: 
  app: 
    build: . 
    ports: 
      - "8080:8080" 
    environment: 
      DB_HOST: db 
      DB_NAME: demo 
      DB_USER: postgres 
      DB_PASS: password 
    depends_on: 
      - db 
 
  db: 
    image: postgres:16 
    environment: 
      POSTGRES_PASSWORD: password 
      POSTGRES_DB: demo 
    ports: 
      - "5432:5432" 
 
  tests: 
    build: ./tests 
    depends_on: 
      - app 
    environment: 
      BASE_URL: http://app:8080 
    command: ["npm", "run", "test:integration"] 

This docker-compose file brings up three things together: the app, a PostgreSQL database, and the integration tests. Instead of relying on some shared QA environment, everything runs locally inside containers. 

The db service starts a Postgres container with a demo database. The app service builds your application and connects to that database using db as the hostname (Docker handles the networking automatically). 

Then the tests service builds the test container and runs the integration test command against http://app:8080. The depends_on setting ensures things start in the right order: database first, then app, then tests. 

What this really gives you is a repeatable setup. Every time you run it, the app and database start from scratch, the tests execute, and you’re not depending on some shared environment that might already be in a weird state. 

Run

docker compose up --build --exit-code-from tests 

Why this matters for automation: Every run starts from a clean stack, test dependencies are explicit and versioned, failures are reproducible both locally and in CI, and integration tests stop depending on shared environments. 

Practical Docker example C: Using multi-stage builds for cleaner deployment and more reliable tests 

A multi-stage Dockerfile helps keep runtime images minimal and ensures builds are reproducible. 

# Build stage 
FROM node:20-alpine AS builder 
WORKDIR /app 
COPY package*.json ./ 
RUN npm ci 
COPY . . 
RUN npm run build 
 
# Runtime stage 
FROM node:20-alpine 
WORKDIR /app 
COPY --from=builder /app/dist ./dist 
COPY package*.json ./ 
RUN npm ci --omit=dev 
CMD ["node", "dist/server.js"] 

This is a multi-stage Docker build, which basically means we use one container to build the app and another, smaller one to run it. 

In the first stage (builder), we install all dependencies and run the build command to generate the production-ready files. This stage includes development dependencies because they’re needed to compile the application. 

In the second stage, we start fresh with a clean Node image and copy only the built output (dist) from the first stage. Then we install only production dependencies using npm ci --omit=dev. Finally, the container starts the app with node dist/server.js. 

The main benefit of this approach is that the final image is smaller, cleaner, and more secure since it doesn’t include unnecessary build tools or dev dependencies. 

This reduces surprises in automation by keeping build and runtime steps consistent and predictable. 

Tool 4: Kubernetes – Isolated, Disposable Environments for Real Integration and E2E Testing 

Docker stabilizes execution. Kubernetes stabilizes environments at scale. 

Kubernetes becomes essential when multiple teams deploy frequently, you have microservices, integration environments are shared and constantly overwritten, you need preview environments per PR, and you want parallel E2E execution without resource conflicts.  For test automation, Kubernetes matters because it provides isolation and repeatability for environment-dependent tests. 

Why Kubernetes is important for automation 

  • Namespace isolation prevents test collisions 
    A common problem: one QA environment, multiple branches, constant overwrites.  With Kubernetes, each PR can get its own namespace: Deploy the app stack into pr-245, run tests against pr-245, and delete the namespace afterward.  This prevents one PR deployment from breaking another PR’s test run. 
  • Kubernetes enables realistic tests against real deployments 
    E2E tests are most valuable when they run against something that looks like production: deployed services, real networking, real service discovery, and real configuration and secrets injection. Kubernetes makes it practical to run those tests automatically without manually maintaining long-lived environments. 
  • Parallel test execution becomes infrastructure-driven 
    Instead of running all E2E tests on one runner, Kubernetes can run multiple test pods at once. This matters because: E2E tests are usually slower, pipelines must remain fast enough for engineers, and scaling test runs is often the only sustainable solution. 
  • Failures become easier to debug 
    When a test fails, you can: Collect logs from the specific namespace, inspect the deployed resources, re-run the pipeline with the same manifest versions, and avoid “someone changed the shared environment” confusion. 
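
In practice, debugging a failed run in an isolated namespace usually comes down to a few commands (the namespace and deployment names here follow the examples used throughout this post): 

kubectl -n pr-245 get pods                          # which pods are failing or pending?
kubectl -n pr-245 describe deployment/demo-app      # events, image tag, probe status
kubectl -n pr-245 logs deployment/demo-app --tail=200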

Practical Kubernetes example A: Running E2E tests as a Kubernetes Job 

A clean pattern: 

  1. Deploy app 
  2. Run tests as a Job 
  3. Read logs and reports 
  4. Clean up namespace 

e2e-job.yaml 

apiVersion: batch/v1 
kind: Job 
metadata: 
  name: e2e-tests 
spec: 
  backoffLimit: 0 
  template: 
    spec: 
      restartPolicy: Never 
      containers: 
        - name: e2e 
          image: registry.example.com/e2e-tests:ci 
          env: 
            - name: BASE_URL 
              value: "https://demo-app.pr-245.example.com" 

This Kubernetes manifest defines a one-time Job that runs our E2E tests inside the cluster. Instead of running tests from outside, we execute them as a container directly in Kubernetes. 

The Job uses the e2e-tests:ci image that we previously built and pushed to the registry. It passes a BASE_URL so the tests know which deployed environment they should target — in this case, the PR-specific URL. 

restartPolicy: Never and backoffLimit: 0 mean that if the tests fail, Kubernetes won’t keep retrying them automatically. It runs once and reports the result. 

In simple terms, this lets us trigger automated tests inside the same environment where the application is deployed, making the test run closer to real production behaviour. 

CI commands 

kubectl -n pr-245 apply -f e2e-job.yaml 
kubectl -n pr-245 wait --for=condition=complete job/e2e-tests --timeout=15m 
kubectl -n pr-245 logs job/e2e-tests 

These commands are used to run and monitor the E2E test job inside a specific Kubernetes namespace (pr-245). 

The first command applies the e2e-job.yaml file, which creates the Job and starts the test container. The second command waits until the job finishes (or until 15 minutes pass), so the pipeline doesn’t move forward while tests are still running. 

The last command fetches the logs from the test job, so the test output appears directly in the CI logs and the pipeline can display the results. 

This pattern keeps test execution close to the environment where the app runs, which often improves reliability and debugging. 

Practical Kubernetes example B: Readiness checks that reduce false E2E failures 

A common cause of flaky E2E runs is that tests start before services are ready. Kubernetes readiness probes help. 

Example snippet in a Deployment: 

readinessProbe: 
  httpGet: 
    path: /health 
    port: 8080 
  initialDelaySeconds: 10 
  periodSeconds: 5 

This configuration adds a readiness probe to the application container in Kubernetes. It tells Kubernetes how to check whether the application is actually ready to receive traffic. 

Kubernetes will call the /health endpoint on port 8080. After waiting 10 seconds (initialDelaySeconds), it checks every 5 seconds (periodSeconds). If the health check passes, the pod is marked as “ready” and can start receiving requests. 

When your pipeline waits for rollout status, it becomes far less likely that E2E tests fail due to startup timing issues. 

Practical Kubernetes example C: Sharding E2E tests across multiple Jobs 

If you have 300 E2E tests, running them on one pod may take too long. Sharding splits the suite across multiple pods. 

Concept: 

  • Total shards: 6 
  • Each shard runs in its own Job with environment variables 

Example environment variables: 

  • SHARD_INDEX=1..6 
  • SHARD_TOTAL=6 

Each job runs only a subset of tests. Your test runner must support sharding (many do, directly or via custom logic), but Kubernetes provides the execution layer. 
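
A sketch of what one shard’s Job could look like (the image and environment variable names follow the earlier examples; how the runner consumes SHARD_INDEX depends on your framework — Playwright, for instance, accepts --shard=1/6): 

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-tests-shard-1       # CI generates one Job per shard (1..6)
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: e2e
          image: registry.example.com/e2e-tests:ci
          env:
            - name: BASE_URL
              value: "https://demo-app.pr-245.example.com"
            - name: SHARD_INDEX
              value: "1"
            - name: SHARD_TOTAL
              value: "6"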

This is one of the biggest performance wins for automation at scale. 

Tool 5: Terraform (Infrastructure as Code) – Reproducible Test Infrastructure Without Manual Work 

If Kubernetes is where the application lives during testing, Terraform is often what creates the infrastructure that testing depends on. 

Terraform matters because real automation needs reproducible infrastructure. Manual environments drift. Drift breaks tests. Terraform allows you to define and version infrastructure such as networking (VPCs, subnets, security groups), databases and caches, Kubernetes clusters, IAM roles and permissions, and load balancers and storage. 

Why Terraform is essential for automation 

  • Terraform makes environments reproducible 
    When infrastructure is code, your environment isn’t tribal knowledge. It’s documented, versioned, and repeatable. That repeatability improves test reliability, because your tests stop depending on “whatever state the environment is in today.” 
  • Terraform enables ephemeral environments (and reduces long-term drift) 
    Permanent shared environments slowly accumulate manual changes: ad-hoc configuration updates, quick fixes, outdated dependencies, and unknown drift over time. Ephemeral environments built via Terraform start clean, run tests, and get destroyed. That model dramatically reduces environment-related flakiness. 
  • Terraform makes environment parity achievable 
    A test environment that resembles production catches issues earlier. Terraform supports consistent provisioning across dev, staging, and prod—often using the same modules with different variables. 
  • Terraform integrates cleanly with pipelines 
    Terraform outputs can feed directly into automation: database endpoint, service URL, credentials location (not the secret itself, but the reference), and resource identifiers. 
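
For context, the output examples below assume a database resource roughly like this (a simplified sketch of an AWS RDS instance, not a production-ready configuration): 

variable "env_name" {
  type = string             # e.g. "pr-245"
}

variable "db_password" {
  type      = string
  sensitive = true          # injected from a secret store, never hardcoded
}

resource "aws_db_instance" "demo" {
  identifier          = "demo-${var.env_name}"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  db_name             = "demo"
  username            = "app"
  password            = var.db_password
  skip_final_snapshot = true   # acceptable for disposable test infrastructure
}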

Practical Terraform example A: Outputs feeding automated tests 

outputs.tf 

output "db_endpoint" { 
    value = aws_db_instance.demo.address 
} 
 
output "db_port" { 
    value = aws_db_instance.demo.port 
} 

These are Terraform output values. After Terraform creates the database, it exposes the database endpoint (address) and port as outputs. 

This makes it easy for the CI pipeline to read those values and pass them to the application or test scripts as environment variables. Instead of manually copying connection details, the pipeline can automatically fetch them using terraform output. 

CI usage

terraform init 
terraform apply -auto-approve 
 
DB_ENDPOINT=$(terraform output -raw db_endpoint) 
DB_PORT=$(terraform output -raw db_port) 
 
export DB_ENDPOINT DB_PORT 
npm run test:integration 

These commands show how infrastructure provisioning and test execution are connected in the pipeline. 

First, terraform init initializes Terraform, and terraform apply -auto-approve creates the required infrastructure (like the database) without waiting for manual approval. 

After the infrastructure is created, the script reads the database endpoint and port using terraform output -raw and stores them in environment variables. Those variables are then exported so the integration tests can use them to connect to the newly created database. 

This way, the tests automatically run against fresh infrastructure created during the same pipeline run. 
This bridges infrastructure provisioning and test execution in an automated, repeatable way. 

Practical Terraform example B: Using workspaces (or unique naming) for PR environments 

A common approach is: One workspace per PR (or unique naming per PR), apply infrastructure for that PR, and destroy when pipeline completes. 

Example commands: 

terraform workspace new pr-245 || terraform workspace select pr-245 
terraform apply -auto-approve 
# run tests 
terraform destroy -auto-approve 

These commands create an isolated Terraform workspace for a specific pull request (in this case, pr-245). If the workspace doesn’t exist, it’s created; if it already exists, it’s selected. 

Then terraform apply provisions the infrastructure just for that workspace — meaning this PR gets its own separate resources. After the tests are executed, terraform destroy removes everything that was created. 

This approach ensures that each PR gets its own temporary infrastructure, prevents resource collisions between PRs, and leaves nothing behind once testing is complete, which makes the automation more scalable. 

Practical Terraform example C: Cleanup as a first-class pipeline requirement 

One of the most important operational rules: cleanup must run even when tests fail. 

In Jenkins, cleanup usually belongs in post { always { … } }. The same principle applies to Terraform: do not destroy only on success, or you will accumulate environments, costs, and complexity. 
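
A hedged sketch of what that looks like in a Jenkinsfile (the directory layout follows the reference repository structure shown later; adjust paths to your own project): 

post {
    always {
        // Tear down Terraform-managed resources even if earlier stages failed
        dir('infra/terraform') {
            sh 'terraform destroy -auto-approve || true'
        }
        // Remove the PR namespace as well
        sh "kubectl delete namespace ${NAMESPACE} --ignore-not-found=true"
    }
}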

Putting All 5 DevOps Tools for Test Automation Together: A Realistic “PR to Verified” Pipeline Flow 

When these DevOps tools for test automation work together, test automation becomes a system, not a set of scripts. 
Here’s a practical flow that I’ve used (with minor variations) across multiple projects. 

Reference repository structure (simple but scalable) 

├─ app/                     # application code 
├─ tests/ 
│  ├─ unit/ 
│  ├─ api/ 
│  └─ e2e/ 
├─ k8s/                     # manifests (or Helm charts) 
├─ infra/ 
│  └─ terraform/            # IaC 
└─ Jenkinsfile 

Pipeline flow (PR) 

  1. Developer opens a PR (Git event) 
  2. Jenkins triggers automatically 
  3. Jenkins runs fast checks: 
    • lint 
    • unit tests 
  4. Jenkins builds Docker images: 
    • app image 
    • e2e test runner image 
  5. Terraform provisions required infrastructure (if needed): 
    • database for the PR environment 
    • any required cloud dependencies 
  6. Kubernetes creates an isolated namespace for the PR 
  7. Jenkins deploys the app to that namespace 
  8. Jenkins runs automated tests against that environment: 
    • API tests 
    • E2E smoke tests 
    • optional: full E2E sharded (nightly or on-demand) 
  9. Jenkins publishes reports and artifacts 
  10. Jenkins cleans up: 
    • deletes namespace 
    • destroys Terraform resources 

Why this combination is so effective for automation 

Each DevOps tool for test automation contributes something specific to reliability: Git ensures automation is part of the workflow and enforceable via checks, Jenkins makes execution repeatable and visible with staged pipelines and reporting, Docker keeps test execution consistent everywhere, Kubernetes isolates environments and supports scaling and sharding, and Terraform makes infrastructure reproducible and disposable. 

This is exactly why DevOps tools are not “nice to have” for automation. They solve the problems that make automation fail in real life. 

Operational Practices That Make This Setup “Production Grade” 

DevOps tools alone won’t give you great automation. The practices around them matter just as much.

1) Layer your tests to keep PR feedback fast 

A practical strategy: 

  • On every PR: 
    • lint 
    • unit tests 
    • API smoke tests 
    • E2E smoke tests (limited, high signal) 
  • Nightly: 
    • full E2E regression 
    • broader integration suite 
  • Before release: 
    • full regression 
    • performance checks (if applicable) 
    • security scans (if required by policy) 

This keeps day-to-day work fast while still maintaining strong coverage. 

2) Treat flaky tests as defects, not background noise 

Flaky tests destroy pipeline trust. 
Common fixes include: Stabilizing test data and teardown, waiting on readiness properly (not fixed sleeps), using stable selectors for UI tests, isolating environments (namespaces / disposable DBs), and limiting shared state across tests.  A good pipeline is one engineers rely on. Flaky pipelines get ignored. 
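
For example, “waiting on readiness properly” can be as simple as polling the health endpoint before starting tests instead of sleeping a fixed amount of time (a sketch; the endpoint path is an assumption): 

# Poll the app until it responds, instead of sleeping a fixed 60 seconds
for i in $(seq 1 30); do
  if curl -fsS "$BASE_URL/health" > /dev/null; then
    echo "Application is ready"
    break
  fi
  sleep 5   # still not ready; try again shortly
done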

3) Make test results actionable 

At minimum, your pipeline should provide: which test failed, logs from the failing step, screenshots/videos for UI failures, a link to a report artifact, and build metadata (commit, image tag, environment/namespace). The goal is to reduce “time to understand failure,” not just detect it. 

4) Keep secrets out of code and images 

Avoid hardcoding secrets in Jenkinsfile, Docker images, Git repositories, and Kubernetes manifests. 
Use a proper secret strategy (Kubernetes secrets, cloud secret manager, Vault). Inject secrets at runtime. 
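
For example, injecting a database password from a Kubernetes Secret at runtime looks roughly like this (secret and key names are illustrative; the snippet belongs in a Deployment’s container spec): 

env:
  - name: DB_PASS
    valueFrom:
      secretKeyRef:
        name: demo-app-secrets   # created separately, never committed to Git
        key: db-password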

5) Use consistent naming conventions across tools 

This sounds small, but it helps with debugging a lot. 
Example: Namespace: pr-245, Docker tag: build-9812, and Terraform workspace: pr-245. 
When names align, it’s easier to trace failures across Jenkins logs, Kubernetes resources, and cloud infrastructure. 

Conclusion: The Five Tools That Make Test Automation Trustworthy 

Reliable test automation is not about having the largest test suite. It’s about having a system that runs tests consistently, quickly, and automatically—without manual intervention and without environment chaos. 

These five DevOps tools for test automation are essential because each one solves a practical automation problem: 

  • Git makes automation enforceable through triggers and quality gates 
  • Jenkins makes automation repeatable, staged, parallelizable, and reportable 
  • Docker makes test execution consistent across machines and environments 
  • Kubernetes enables isolated environments and scalable parallel test execution 
  • Terraform makes infrastructure reproducible, reviewable, and automatable 

When you combine them, you don’t just run tests—you operate a quality pipeline that protects every merge and every release. 

GitHub – https://github.com/spurqlabs/5-Must-Have-DevOps-Tools-for-Test-Automation/


Building a Complete API Automation Testing Framework with Java, Rest Assured, Cucumber, and Playwright 

API Automation Testing Framework – In today’s fast-paced digital ecosystem, almost every modern application relies on APIs (Application Programming Interfaces) to function seamlessly. Whether it’s a social media integration pulling live updates, a payment gateway processing transactions, or a data service exchanging real-time information, APIs act as the invisible backbone that connects various systems together. 

Because APIs serve as the foundation of all interconnected software, ensuring that they are reliable, secure, and high-performing is absolutely critical. Even a minor API failure can impact multiple dependent systems; consequently, it may cause application downtime, data mismatches, or even financial loss.

That’s where an API automation testing framework comes in. Unlike traditional UI testing, API testing validates the core business logic directly at the backend layer, which makes it faster, more stable, and capable of detecting issues early in the development cycle — even before the frontend is ready. 

In this blog, we’ll walk through the process of building a complete API Automation Testing Framework using a combination of: 

  • Java – as the main programming language 
  • Maven – for project and dependency management 
  • Cucumber – to implement Behavior Driven Development (BDD) 
  • RestAssured – for simplifying RESTful API automation 
  • Playwright – to handle browser-based token generation 

The framework you’ll learn to build will follow a BDD (Behavior-Driven Development) approach, enabling test scenarios to be written in simple, human-readable language. This not only improves collaboration between developers, testers, and business analysts but also makes test cases easier to understand, maintain, and extend.

Additionally, the API automation testing framework will be CI/CD-friendly, meaning it can be seamlessly integrated into automated build pipelines for continuous testing and faster feedback. 

By the end of this guide, you’ll have a scalable, reusable, and maintainable API testing framework that brings together the best of automation, reporting, and real-time token management — a complete solution for modern QA teams. 

What is an API?

An API (Application Programming Interface) acts as a communication bridge between two software systems, allowing them to exchange information in a standardized way. In simpler terms, it defines how different software components should interact — through a set of rules, protocols, and endpoints.

Think of an API as a messenger that takes a request from one system, delivers it to another system, and then brings back the response. This interaction, therefore, allows applications to share data and functionality without exposing their internal logic or database structure.

Let’s take a simple example: 
When you open a weather application on your phone, it doesn’t store weather data itself. Instead, it sends a request to a weather server API, which processes the request and sends back a response — such as the current temperature, humidity, or forecast. 
This request-response cycle is what makes APIs so powerful and integral to almost every digital experience we use today. 
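
A response like that is usually a small JSON document, for example (purely illustrative values): 

{
  "city": "Pune",
  "temperature_c": 28.5,
  "humidity": 64,
  "condition": "Partly cloudy"
}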

Most modern APIs follow the REST (Representational State Transfer) architectural style. REST APIs use the HTTP protocol and are designed around a set of standardized operations, including: 

HTTP Method | Description                   | Example Use
GET         | Retrieve data from the server | Fetch a list of users
POST        | Create new data on the server | Add a new product
PUT         | Update existing data          | Edit user details
DELETE      | Remove data                   | Delete a record

The responses returned by APIs are typically in JSON (JavaScript Object Notation) format – a lightweight, human-readable, and machine-friendly data format that’s easy to parse and validate.

In essence, APIs are the digital glue that holds modern applications together — enabling smooth communication, faster integrations, and a consistent flow of information across systems. 

What is API Testing?

API Testing is the process of verifying that an API functions correctly and performs as expected — ensuring that all its endpoints, parameters, and data exchanges behave according to defined business rules. 

In simple terms, it’s about checking whether the backend logic of an application works properly — without needing a graphical user interface (UI). Since APIs act as the communication layer between different software components, testing them helps ensure that the entire system remains reliable, secure, and efficient. 

API testing typically focuses on four main aspects: 

  1. Functionality – Does the API perform the intended operation and return the correct response for valid requests? 
  2. Reliability – Does it deliver consistent results every time, even under different inputs and conditions? 
  3. Security – Is the API protected from unauthorized access, data leaks, or token misuse? 
  4. Performance – Does it respond quickly and remain stable under heavy load or high traffic? 

Unlike traditional UI testing, which validates the visual and interactive parts of an application, API testing operates directly at the business logic layer. This makes it: 

  • Faster – Since it bypasses the UI, execution times are much shorter. 
  • More Stable – UI changes (like a button name or layout) don’t affect API tests. 
  • Proactive – Tests can be created and run even before the front-end is developed. 

In essence, API testing ensures the heart of your application is healthy. By validating responses, performance, and security at the API level, teams can detect defects early, reduce costs, and deliver more reliable software to users. 

Why is API Testing Important?

API Testing plays a vital role in modern software development because APIs form the backbone of most applications. A failure in an API can affect multiple systems and impact overall functionality. 

Here’s why API testing is important: 

  1. Ensures Functionality: Verifies that endpoints return correct responses and handle errors properly. 
  2. Enhances Security: Detects vulnerabilities like unauthorized access or token misuse. 
  3. Validates Data Integrity: Confirms that data remains consistent across APIs and databases. 
  4. Improves Performance: Checks response time, stability, and behavior under load. 
  5. Detects Defects Early: Allows early testing right after backend development, saving time and cost. 
  6. Supports Continuous Integration: Easily integrates with CI/CD pipelines for automated validation. 

In short, API testing ensures your system’s core logic is reliable, secure, and ready for real-world use. 

Tools for Manual API Testing

Before jumping into automation, it’s essential to explore and understand APIs manually. Manual testing helps you validate endpoints, check responses, and get familiar with request structures. 

Here are some popular tools used for manual API testing: 

  • Postman: The most widely used tool for sending API requests, validating responses, and organizing test collections (see https://www.postman.com/). 
  • SoapUI: Best suited for testing both SOAP and REST APIs with advanced features like assertions and mock services. 
  • Insomnia: A lightweight and user-friendly alternative to Postman, ideal for quick API exploration. 
  • cURL: A command-line tool perfect for making fast API calls or testing from scripts. 
  • Fiddler: Excellent for capturing and debugging HTTP/HTTPS traffic between client and server. 

Using these tools helps testers understand API behavior, request/response formats, and possible edge cases — forming a strong foundation before moving to API automation.
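
As a quick illustration, a manual check with cURL might look like this (the endpoint is hypothetical): 

# Print the HTTP status code and save the JSON body for inspection
curl -s -o response.json -w "HTTP status: %{http_code}\n" \
  -H "Accept: application/json" \
  "https://api.example.com/api/v1/users/123"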

Tools for API Automation Testing 

After verifying APIs manually, the next step is to automate them using reliable tools and libraries. Automation helps improve test coverage, consistency, and execution speed. 

Here are some popular tools used for API automation testing: 

  • RestAssured: A powerful Java library designed specifically for testing and validating RESTful APIs. 
  • Cucumber: Enables writing test cases in Gherkin syntax (plain English), making them easy to read and maintain. 
  • Playwright: Automates browser interactions; in our framework, it will be used for token generation or authentication flows. 
  • Postman + Newman: Allows you to run Postman collections directly from the command line — ideal for CI/CD integration. 
  • JMeter: A robust tool for performance and load testing of APIs under different conditions. 

In this blog, our focus will be on building a framework using RestAssured, Cucumber, and Playwright — combining functional, BDD, and authentication automation into one cohesive setup. 

Framework Overview 

We’ll build a Behavior-Driven API Automation Testing Framework that combines multiple tools for a complete testing solution. Here’s how each component fits in: 

  • Cucumber – Manages the BDD layer, allowing test scenarios to be written in simple, readable feature files. 
  • RestAssured – Handles HTTP requests and responses for validating RESTful APIs. 
  • Playwright – Automates browser-based actions like token generation or authentication. 
  • Maven – Manages project dependencies, builds, and plugins efficiently. 
  • Cucumber HTML Reports – Automatically generates detailed execution reports after each run. 

The framework follows a modular structure, with separate packages for step definitions, utilities, configurations, and feature files — ensuring clean organization, easy maintenance, and scalability. 

Step 1: Prerequisites

Before starting, ensure you have Java 11 (or later) and Maven installed, along with an IDE such as IntelliJ IDEA. 

Add the required dependencies to your pom.xml file: 

<?xml version="1.0" encoding="UTF-8"?> 
<project xmlns="http://maven.apache.org/POM/4.0.0" 
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 
    <modelVersion>4.0.0</modelVersion> 
 
    <groupId>org.Spurqlabs</groupId> 
    <artifactId>SpurQLabs-Test-Automation</artifactId> 
    <version>1.0-SNAPSHOT</version> 
    <properties> 
        <maven.compiler.source>11</maven.compiler.source> 
        <maven.compiler.target>11</maven.compiler.target> 
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> 
    </properties> 
    <dependencies> 
        <!-- Playwright for UI automation --> 
        <dependency> 
            <groupId>com.microsoft.playwright</groupId> 
            <artifactId>playwright</artifactId> 
            <version>1.50.0</version> 
        </dependency> 
        <!-- Cucumber for BDD --> 
        <dependency> 
            <groupId>io.cucumber</groupId> 
            <artifactId>cucumber-java</artifactId> 
            <version>7.23.0</version> 
        </dependency> 
        <dependency> 
            <groupId>io.cucumber</groupId> 
            <artifactId>cucumber-testng</artifactId> 
            <version>7.23.0</version> 
        </dependency> 
        <!-- TestNG for test execution --> 
        <dependency> 
            <groupId>org.testng</groupId> 
            <artifactId>testng</artifactId> 
            <version>7.11.0</version> 
            <scope>test</scope> 
        </dependency> 
        <!-- Rest-Assured for API testing --> 
        <dependency> 
            <groupId>io.rest-assured</groupId> 
            <artifactId>rest-assured</artifactId> 
            <version>5.5.5</version> 
        </dependency> 
        <!-- Apache POI for Excel support --> 
        <dependency> 
            <groupId>org.apache.poi</groupId> 
            <artifactId>poi-ooxml</artifactId> 
            <version>5.4.1</version> 
        </dependency> 
        <!-- org.json for JSON parsing --> 
        <dependency> 
            <groupId>org.json</groupId> 
            <artifactId>json</artifactId> 
            <version>20250517</version> 
        </dependency> 
        <dependency> 
            <groupId>org.seleniumhq.selenium</groupId> 
            <artifactId>selenium-devtools-v130</artifactId> 
            <version>4.26.0</version> 
            <scope>test</scope> 
        </dependency> 
        <dependency> 
            <groupId>com.sun.mail</groupId> 
            <artifactId>jakarta.mail</artifactId> 
            <version>2.0.1</version> 
        </dependency> 
        <dependency> 
            <groupId>com.sun.activation</groupId> 
            <artifactId>jakarta.activation</artifactId> 
            <version>2.0.1</version> 
        </dependency> 
    </dependencies> 
    <build> 
        <plugins> 
            <plugin> 
                <groupId>org.apache.maven.plugins</groupId> 
                <artifactId>maven-compiler-plugin</artifactId> 
                <version>3.14.0</version> 
                <configuration> 
                    <source>11</source> 
                    <target>11</target> 
                </configuration> 
            </plugin> 
        </plugins> 
    </build> 
</project> 

Step 2: Creating Project

Create a Maven project with the following folder structure:

loanbook-api-automation 

│ 
├── .idea 
│ 
├── src 
│   └── test 
│       └── java 
│           └── org 
│               └── Spurqlabs 
│                   ├── Core 
│                   │   ├── Hooks.java 
│                   │   ├── Main.java 
│                   │   ├── TestContext.java 
│                   │   └── TestRunner.java 
│                   │ 
│                   ├── Steps 
│                   │   └── CommonSteps.java 
│                   │ 
│                   └── Utils 
│                       ├── APIUtility.java 
│                       ├── FrameworkConfigReader.java 
│                       └── TokenManager.java 
│ 
├── resources 
│   ├── Features 
│   ├── headers 
│   ├── Query_Parameters 
│   ├── Request_Bodies 
│   ├── Schema 
│   └── cucumber.properties 
│ 
├── target 
│ 
├── test-output 
│ 
├── .gitignore 
├── bitbucket-pipelines.yml 
├── DealDetails.json 
├── FrameworkConfig.json 
├── pom.xml 
├── README.md 
└── token.json 

Step 3: Creating a Feature File

In this step, we will create a feature file for the API Automation Testing Framework. A feature file consists of steps written in the Gherkin language. Because Gherkin reads like plain English, even a non-technical person can understand the flow of a test scenario. In this framework we will automate the four basic API request methods, i.e. POST, PUT, GET, and DELETE. 
 
We can assign tags to scenarios in the feature file to run particular test scenarios based on the requirement. The key point to notice here is that the feature file must end with the .feature extension. We will create four different scenarios, one for each of the four API methods.  

Feature: All Notes API Validation 
 
  @api 
  Scenario Outline: Validate POST Create Notes API Response for "<scenarioName>" Scenario 
    When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>" 
    Then User verifies the response status code is <statusCode> 
    And User verifies the response body matches JSON schema "<schemaFile>" 
    Then User verifies fields in response: "<contentType>" with content type "<fields>" 
    Examples: 
      | scenarioName       | method | url                                                             | headers | queryFile | bodyFile             | statusCode | schemaFile | contentType | fields | 
      | Valid create Notes | POST   | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes | NA      | NA        | Create_Notes_Request | 200        | NA         | NA          | NA     | 
 
  Scenario Outline: Validate GET Notes API Response for "<scenarioName>" Scenario 
    When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>" 
    Then User verifies the response status code is <statusCode> 
    And User verifies the response body matches JSON schema "<schemaFile>" 
    Then User verifies fields in response: "<contentType>" with content type "<fields>" 
    Examples: 
      | scenarioName    | method | url                                                             | headers | queryFile | bodyFile | statusCode | schemaFile       | contentType | fields              | 
      | Valid Get Notes | GET    | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes | NA      | NA        | NA       | 200        | Notes_Schema_200 | json        | note=This is Note 1 | 
 
  Scenario Outline: Validate Update Notes API Response for "<scenarioName>" Scenario 
    When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>" 
    Then User verifies the response status code is <statusCode> 
    And User verifies the response body matches JSON schema "<schemaFile>" 
    Then User verifies fields in response: "<contentType>" with content type "<fields>" 
    Examples: 
      | scenarioName       | method | url                                                                                   | headers | queryFile | bodyFile             | statusCode | schemaFile | contentType | fields | 
      | Valid update Notes | PUT    | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes/{noteId}/update-notes | NA      | NA        | Update_Notes_Request | 200        | NA         | NA          | NA     | 
 
  Scenario Outline: Validate DELETE Create Notes API Response for "<scenarioName>" Scenario 
    When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>" 
    Then User verifies the response status code is <statusCode> 
    And User verifies the response body matches JSON schema "<schemaFile>" 
    Then User verifies fields in response: "<contentType>" with content type "<fields>" 
    Examples: 
      | scenarioName | method | url                                                                      | headers | queryFile | bodyFile | statusCode | schemaFile | contentType | fields | 
      | Valid delete | DELETE | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes/{noteId} | NA      | NA        | NA       | 200        | NA         | NA          | NA     | 

Step 4: Creating a Step Definition File

Unlike the automation framework we built in the previous blog, we will create a single step definition file for all the feature files. In a BDD framework, step definition files map and implement the steps described in the feature files. Cucumber matches each step in a feature file to its step definition by the annotation text, so the step definitions must use the same wording as the feature file. Rest Assured is then used inside those step definitions to send the API requests and validate the responses.  

package org.Spurqlabs.Steps; 
 
import io.cucumber.java.en.Then; 
import io.cucumber.java.en.When; 
import io.restassured.response.Response; 
import org.Spurqlabs.Core.TestContext; 
import org.Spurqlabs.Utils.*; 
import org.json.JSONArray; 
import org.json.JSONObject; 
 
import java.io.File; 
import java.io.IOException; 
import java.nio.charset.StandardCharsets; 
import java.nio.file.Files; 
import java.nio.file.Paths; 
import java.util.HashMap; 
import java.util.Map; 
 
import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath; 
import static org.Spurqlabs.Utils.DealDetailsManager.replacePlaceholders; 
import static org.hamcrest.Matchers.equalTo; 
public class CommonSteps extends TestContext { 
    private Response response; 
 
    @When("User sends {string} request to {string} with headers {string} and query file {string} and requestDataFile {string}") 
    public void user_sends_request_to_with_query_file_and_requestDataFile (String method, String url, String headers, String queryFile, String bodyFile) throws IOException { 
        String jsonString = Files.readString(Paths.get(FrameworkConfigReader.getFrameworkConfig("DealDetails")), StandardCharsets.UTF_8); 
        JSONObject storedValues = new JSONObject(jsonString); 
 
        String fullUrl = FrameworkConfigReader.getFrameworkConfig("BaseUrl") + replacePlaceholders(url); 
 
        Map<String, String> header = new HashMap<>(); 
        if (!"NA".equalsIgnoreCase(headers)) { 
            header = JsonFileReader.getHeadersFromJson(FrameworkConfigReader.getFrameworkConfig("headers") + headers + ".json"); 
        } else { 
            header.put("cookie", TokenManager.getToken()); 
        } 
        Map<String, String> queryParams = new HashMap<>(); 
        if (!"NA".equalsIgnoreCase(queryFile)) { 
            queryParams = JsonFileReader.getQueryParamsFromJson(FrameworkConfigReader.getFrameworkConfig("Query_Parameters") + queryFile + ".json"); 
            for (String key : queryParams.keySet()) { 
                String value = queryParams.get(key); 
                for (String storedKey : storedValues.keySet()) { 
                    value = value.replace("{" + storedKey + "}", storedValues.getString(storedKey)); 
                } 
                queryParams.put(key, value); 
            } 
        } 
 
        Object requestBody = null; 
        if (!"NA".equalsIgnoreCase(bodyFile)) { 
            String bodyTemplate = JsonFileReader.getJsonAsString( 
                    FrameworkConfigReader.getFrameworkConfig("Request_Bodies") + bodyFile + ".json"); 
 
            for (String key : storedValues.keySet()) { 
                String placeholder = "{" + key + "}"; 
                if (bodyTemplate.contains(placeholder)) { 
                    bodyTemplate = bodyTemplate.replace(placeholder, storedValues.getString(key)); 
                } 
            } 
 
            requestBody = bodyTemplate; 
        } 

        response = APIUtility.sendRequest(method, fullUrl, header, queryParams, requestBody); 
        response.prettyPrint(); 
        TestContextLogger.scenarioLog("API", "Request sent: " + method + " " + fullUrl); 
 
        if (scenarioName.contains("GET Notes") && response.getStatusCode() == 200) { 
            DealDetailsManager.put("noteId", response.path("[0].id")); 
        } 
         
    } 
 
    @Then("User verifies the response status code is {int}") 
    public void userVerifiesTheResponseStatusCodeIsStatusCode(int statusCode) { 
        response.then().statusCode(statusCode); 
        TestContextLogger.scenarioLog("API", "Response status code: " + statusCode); 
    } 
 
    @Then("User verifies the response body matches JSON schema {string}") 
    public void userVerifiesTheResponseBodyMatchesJSONSchema(String schemaFile) { 
        if (!"NA".equalsIgnoreCase(schemaFile)) { 
            String schemaPath = "Schema/" + schemaFile + ".json"; 
            response.then().assertThat().body(matchesJsonSchemaInClasspath(schemaPath)); 
            TestContextLogger.scenarioLog("API", "Response body matches schema"); 
        } else { 
            TestContextLogger.scenarioLog("API", "Response body does not have schema to validate"); 
        } 
    } 
 
    @Then("User verifies field {string} has value {string}") 
    public void userVerifiesFieldHasValue(String jsonPath, String expectedValue) { 
        response.then().body(jsonPath, equalTo(expectedValue)); 
        TestContextLogger.scenarioLog("API", "Field " + jsonPath + " has value: " + expectedValue); 
    } 
 
    @Then("User verifies fields in response: {string} with content type {string}") 
    public void userVerifiesFieldsInResponseWithContentType(String contentType, String fields) throws IOException { 
        // If NA, skip verification 
        if ("NA".equalsIgnoreCase(contentType) || "NA".equalsIgnoreCase(fields)) { 
            return; 
        } 
        String responseStr = response.getBody().asString().trim(); 
 
        try { 
            if ("text".equalsIgnoreCase(contentType)) { 
                // For text, verify each expected value is present in response 
                for (String expected : fields.split(";")) { 
                    expected = replacePlaceholders(expected.trim()); 
                    if (!responseStr.contains(expected)) { 
                        throw new AssertionError("Expected text not found: " + expected); 
                    } 
                    TestContextLogger.scenarioLog("API", "Text found: " + expected); 
                } 
            } else if ("json".equalsIgnoreCase(contentType)) { 
                // For json, verify key=value pairs 
                JSONObject jsonResponse; 
                if (responseStr.startsWith("[")) { 
                    JSONArray arr = new JSONArray(responseStr); 
                    jsonResponse = !arr.isEmpty() ? arr.getJSONObject(0) : new JSONObject(); 
                } else { 
                    jsonResponse = new JSONObject(responseStr); 
                } 
                for (String pair : fields.split(";")) { 
                    if (pair.trim().isEmpty()) continue; 
                    String[] kv = pair.split("=", 2); 
                    if (kv.length < 2) continue; 
                    String keyPath = kv[0].trim(); 
                    String expected = replacePlaceholders(kv[1].trim()); 
                    Object actual = JsonFileReader.getJsonValueByPath(jsonResponse, keyPath); 
                    if (actual == null) { 
                        throw new AssertionError("Key not found in JSON: " + keyPath); 
                    } 
                    if (!String.valueOf(actual).equals(String.valueOf(expected))) { 
                        throw new AssertionError("Mismatch for " + keyPath + ": expected '" + expected + "', got '" + actual + "'"); 
                    } 
                    TestContextLogger.scenarioLog("API", "Validated: " + keyPath + " = " + expected); 
                } 
            } else { 
                throw new AssertionError("Unsupported content type: " + contentType); 
            } 
        } catch (AssertionError | Exception e) { 
            TestContextLogger.scenarioLog("API", "Validation failed: " + e.getMessage()); 
            throw e; 
        } 
    } 
} 
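
To make the placeholder logic above concrete: before sending the request, the step reads previously stored values from DealDetails.json and substitutes them into the query parameters and the request body template. The file names and values below are purely illustrative (they are not taken from the reference project). A request body template under Request_Bodies might look like this: 

{ 
  "dealId": "{dealId}", 
  "investorId": "{investorId}", 
  "note": "Initial allocation note" 
} 

and a matching DealDetails.json, populated by earlier scenarios, might contain: 

{ 
  "dealId": "10245", 
  "investorId": "88731", 
  "noteId": "55901" 
} 

With these files in place, {dealId} and {investorId} in the template are replaced by the stored values before APIUtility.sendRequest is called, which is what lets chained scenarios (create, get, update, delete) reuse data created earlier in the run. 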

Step 5: Creating an API Utility File

So far we have created a feature file and a step file; in this step we will create a utility file. In web automation we usually have page files that hold the locators and the actions to perform on web elements, but in this framework we create a single utility file, just like the single step file. The utility file contains the methods that build and send API requests such as POST, PUT, GET, and DELETE, and it is where the request payload is attached and the response is captured. Keeping these methods in one utility file means we can reuse them across scenarios instead of writing the same request-sending code over and over again.

package org.Spurqlabs.Utils; 
 
import io.restassured.RestAssured; 
import io.restassured.http.ContentType; 
import io.restassured.response.Response; 
import io.restassured.specification.RequestSpecification; 
 
import java.io.File; 
import java.util.Map; 
 
public class APIUtility { 
    public static Response sendRequest(String method, String url, Map<String, String> headers, Map<String, String> queryParams, Object body) { 
        RequestSpecification request = RestAssured.given(); 
        if (headers != null && !headers.isEmpty()) { 
            request.headers(headers); 
        } 
        if (queryParams != null && !queryParams.isEmpty()) { 
            request.queryParams(queryParams); 
        } 
        if (body != null && !method.equalsIgnoreCase("GET")) { 
            if (headers == null || !headers.containsKey("Content-Type")) { 
                request.header("Content-Type", "application/json"); 
            } 
            request.body(body); 
        } 
        switch (method.trim().toUpperCase()) { 
            case "GET": 
                return request.get(url); 
            case "POST": 
                return request.post(url); 
            case "PUT": 
                return request.put(url); 
            case "PATCH": 
                return request.patch(url); 
            case "DELETE": 
                return request.delete(url); 
            default: 
                throw new IllegalArgumentException("Unsupported HTTP method: " + method); 
        } 
    } 
} 

Step 6: Generate Authentication Tokens Using Playwright

In this step, we automate the process of generating authentication tokens using Playwright. Many APIs require login-based tokens (like cookies or bearer tokens), and managing them manually can be difficult — especially when they expire frequently. 

The TokenManager class handles this by: 

  • Logging into the application automatically using Playwright. 
  • Extracting authentication cookies (OauthHMAC, OauthExpires, BearerToken). 
  • Storing the token in a local JSON file for reuse. 
  • Refreshing the token automatically when it expires. 

This ensures that your API tests always use a valid token without manual updates, making the framework fully automated and CI/CD ready. 

package org.Spurqlabs.Utils; 
 
import java.io.*; 
import java.nio.file.*; 
import java.time.Instant; 
import java.util.HashMap; 
import java.util.Map; 
import com.google.gson.Gson; 
import com.google.gson.reflect.TypeToken; 
import com.microsoft.playwright.*; 
import com.microsoft.playwright.options.Cookie; 
 
public class TokenManager { 
    private static final ThreadLocal<String> tokenThreadLocal = new ThreadLocal<>(); 
    private static final ThreadLocal<Long> expiryThreadLocal = new ThreadLocal<>(); 
    private static final String TOKEN_FILE = "token.json"; 
    private static final long TOKEN_VALIDITY_SECONDS = 30 * 60; // 30 minutes 
 
    public static String getToken() { 
        String token = tokenThreadLocal.get(); 
        Long expiry = expiryThreadLocal.get(); 
        if (token == null || expiry == null || Instant.now().getEpochSecond() >= expiry) { 
            // Try to read from a file (for multi-JVM/CI) 
            Map<String, Object> fileToken = readTokenFromFile(); 
            if (fileToken != null) { 
                token = (String) fileToken.get("token"); 
                expiry = ((Number) fileToken.get("expiry")).longValue(); 
            } 
            // If still null or expired, fetch new 
            if (token == null || expiry == null || Instant.now().getEpochSecond() >= expiry) { 
                Map<String, Object> newToken = generateAuthTokenViaBrowser(); 
                token = (String) newToken.get("token"); 
                expiry = (Long) newToken.get("expiry"); 
                writeTokenToFile(token, expiry); 
            } 
            tokenThreadLocal.set(token); 
            expiryThreadLocal.set(expiry); 
        } 
        return token; 
    } 
 
    private static Map<String, Object> generateAuthTokenViaBrowser() { 
        String bearerToken; 
        long expiry = Instant.now().getEpochSecond() + TOKEN_VALIDITY_SECONDS; 
        int maxRetries = 2; 
        int attempt = 0; 
        Exception lastException = null; 
        while (attempt < maxRetries) { 
            try (Playwright playwright = Playwright.create()) { 
                Browser browser = playwright.chromium().launch(new BrowserType.LaunchOptions().setHeadless(true)); 
                BrowserContext context = browser.newContext(); 
                Page page = context.newPage(); 
 
                // Robust wait for login page to load 
                page.navigate(FrameworkConfigReader.getFrameworkConfig("BaseUrl"), new Page.NavigateOptions().setTimeout(60000)); 
                page.waitForSelector("#email", new Page.WaitForSelectorOptions().setTimeout(20000)); 
                page.waitForSelector("#password", new Page.WaitForSelectorOptions().setTimeout(20000)); 
                page.waitForSelector("button[type='submit']", new Page.WaitForSelectorOptions().setTimeout(20000)); 
 
                // Fill a login form 
                page.fill("#email", FrameworkConfigReader.getFrameworkConfig("UserEmail")); 
                page.fill("#password", FrameworkConfigReader.getFrameworkConfig("UserPassword")); 
                page.waitForSelector("button[type='submit']:not([disabled])", new Page.WaitForSelectorOptions().setTimeout(10000)); 
                page.click("button[type='submit']"); 
 
                // Wait for either dashboard element or flexible URL match 
                boolean loggedIn; 
                try { 
                    page.waitForSelector(".dashboard, .main-content, .navbar, .sidebar", new Page.WaitForSelectorOptions().setTimeout(20000)); 
                    loggedIn = true; 
                } catch (Exception e) { 
                    // fallback to URL check 
                    try { 
                        page.waitForURL(url -> url.startsWith(FrameworkConfigReader.getFrameworkConfig("BaseUrl")), new Page.WaitForURLOptions().setTimeout(30000)); 
                        loggedIn = true; 
                    } catch (Exception ex) { 
                        // Both checks failed 
                        loggedIn = false; 
                    } 
                } 
                if (!loggedIn) { 
                    throw new RuntimeException("Login did not complete successfully: dashboard element or expected URL not found"); 
                } 
 
                // Extract cookies 
                String oauthHMAC = null; 
                String oauthExpires = null; 
                String token = null; 
                for (Cookie cookie : context.cookies()) { 
                    switch (cookie.name) { 
                        case "OauthHMAC": 
                            oauthHMAC = cookie.name + "=" + cookie.value; 
                            break; 
                        case "OauthExpires": 
                            oauthExpires = cookie.name + "=" + cookie.value; 
                            if (cookie.expires != null && cookie.expires > 0) { 
                                expiry = cookie.expires.longValue(); 
                            } 
                            break; 
                        case "BearerToken": 
                            token = cookie.name + "=" + cookie.value; 
                            break; 
                    } 
                } 
                if (oauthHMAC != null && oauthExpires != null && token != null) { 
                    bearerToken = oauthHMAC + ";" + oauthExpires + ";" + token + ";"; 
                } else { 
                    throw new RuntimeException("❗ One or more cookies are missing: OauthHMAC, OauthExpires, BearerToken"); 
                } 
                browser.close(); 
                Map<String, Object> map = new HashMap<>(); 
                map.put("token", bearerToken); 
                map.put("expiry", expiry); 
                return map; 
            } catch (Exception e) { 
                lastException = e; 
                System.err.println("[TokenManager] Login attempt " + (attempt + 1) + " failed: " + e.getMessage()); 
                attempt++; 
                try { Thread.sleep(2000); } catch (InterruptedException ignored) {} 
            } 
        } 
        throw new RuntimeException("Failed to generate auth token after " + maxRetries + " attempts", lastException); 
    } 
 
    private static void writeTokenToFile(String token, long expiry) { 
        try { 
            Map<String, Object> map = new HashMap<>(); 
            map.put("token", token); 
            map.put("expiry", expiry); 
            String json = new Gson().toJson(map); 
            Files.write(Paths.get(TOKEN_FILE), json.getBytes()); 
        } catch (IOException e) { 
            e.printStackTrace(); 
        } 
    } 
 
    private static Map<String, Object> readTokenFromFile() { 
        try { 
            Path path = Paths.get(TOKEN_FILE); 
            if (!Files.exists(path)) return null; 
            String json = new String(Files.readAllBytes(path)); 
            return new Gson().fromJson(json, new TypeToken<Map<String, Object>>() {}.getType()); 
        } catch (IOException e) { 
            return null; 
        } 
    } 
} 
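
For reference, writeTokenToFile serializes the token string and its expiry (as an epoch second) with Gson, so the generated token.json looks roughly like this, with illustrative values: 

{"token":"OauthHMAC=abc123;OauthExpires=1735689600;BearerToken=eyJhbGciOi...;","expiry":1735689600} 

Because getToken checks this file before launching a browser, parallel or repeated runs on the same machine reuse one valid token instead of logging in again for every scenario. 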

Step 7: Create Framework Config File

A good tester knows the use and importance of config files, and this framework uses one as well. Here we store the base URL, test credentials, and the resource paths used by the step and utility classes, and we read these values wherever they are needed instead of hard-coding them. As you explore the framework and automate new endpoints, you will keep finding values that belong in the config file rather than in the tests themselves. 

The purpose of a config file is to make tests more maintainable and reusable. Keeping all configuration settings in one separate file also makes the code more modular and easier to understand, and lets you update a setting for every test in one place. 

{ 
  "BaseUrl": "https://app.sample.com", 
  "UserEmail": "************.com", 
  "UserPassword": "#############", 
  "ExecutionBrowser": "chromium", 
  "Resources": "/src/test/resources/", 
  "Query_Parameters": "src/test/resources/Query_Parameters/", 
  "Request_Bodies": "src/test/resources/Request_Bodies/", 
  "Schema": "src/test/resources/Schema/", 
  "TestResultsDir": "test-output/", 
  "headers": "src/test/resources/headers/", 
  "DealDetails": "DealDetails.json", 
  "UploadDocUrl": "/api/v1/documents" 
} 
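
The step definitions and utilities read these values through FrameworkConfigReader.getFrameworkConfig(), which is not shown in this post. The sketch below is a minimal, assumed version of such a reader (not the exact class from the reference repository), and it assumes the JSON above lives at src/test/resources/FrameworkConfig.json: 

package org.Spurqlabs.Utils; 

import com.google.gson.Gson; 
import com.google.gson.reflect.TypeToken; 

import java.io.IOException; 
import java.nio.charset.StandardCharsets; 
import java.nio.file.Files; 
import java.nio.file.Paths; 
import java.util.Map; 

public class FrameworkConfigReader { 

    private static Map<String, String> config; 

    // Lazily load the config JSON once and cache it for later lookups 
    private static synchronized Map<String, String> load() { 
        if (config == null) { 
            try { 
                String json = Files.readString( 
                        Paths.get("src/test/resources/FrameworkConfig.json"), StandardCharsets.UTF_8); 
                config = new Gson().fromJson(json, new TypeToken<Map<String, String>>() {}.getType()); 
            } catch (IOException e) { 
                throw new RuntimeException("Unable to read FrameworkConfig.json", e); 
            } 
        } 
        return config; 
    } 

    // Returns a value such as "BaseUrl" or "Request_Bodies"; fails fast if the key is missing 
    public static String getFrameworkConfig(String key) { 
        String value = load().get(key); 
        if (value == null) { 
            throw new IllegalArgumentException("Missing config key: " + key); 
        } 
        return value; 
    } 
} 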

Step 8: Execute and Generate Cucumber Report

At this stage, we create the TestRunner class, which serves as the entry point to execute all Cucumber feature files. It uses TestNG as the test executor and integrates Cucumber for running BDD-style test scenarios. 

The @CucumberOptions annotation defines: 

  • features → Location of all .feature files. 
  • glue → Packages containing step definitions and hooks. 
  • plugin → Reporting options like JSON and HTML reports. 

After execution, Cucumber automatically generates: 

  • Cucumber.json → For CI/CD and detailed reporting. 
  • Cucumber.html → A user-friendly HTML report showing test results. 

This setup makes it easy to run all API tests and view clean, structured reports for quick analysis. 

package org.Spurqlabs.Core; 
import io.cucumber.testng.AbstractTestNGCucumberTests; 
import io.cucumber.testng.CucumberOptions; 
import org.testng.annotations.AfterSuite; 
import org.testng.annotations.BeforeSuite; 
import org.testng.annotations.DataProvider; 
import org.Spurqlabs.Utils.CustomHtmlReport; 
import org.Spurqlabs.Utils.ScenarioResultCollector; 
 
@CucumberOptions( 
        features = {"src/test/resources/Features"}, 
        glue = {"org.Spurqlabs.Steps", "org.Spurqlabs.Core"}, 
        plugin = {"pretty", "json:test-output/Cucumber.json","html:test-output/Cucumber.html"} 
) 
 
public class TestRunner extends AbstractTestNGCucumberTests {} 
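
If you later want scenarios to run in parallel, AbstractTestNGCucumberTests exposes a scenarios() data provider that you can override and mark as parallel. The variant below is an optional sketch, not part of the reference setup above: 

package org.Spurqlabs.Core; 

import io.cucumber.testng.AbstractTestNGCucumberTests; 
import io.cucumber.testng.CucumberOptions; 
import org.testng.annotations.DataProvider; 

@CucumberOptions( 
        features = {"src/test/resources/Features"}, 
        glue = {"org.Spurqlabs.Steps", "org.Spurqlabs.Core"}, 
        plugin = {"pretty", "json:test-output/Cucumber.json", "html:test-output/Cucumber.html"} 
) 
public class ParallelTestRunner extends AbstractTestNGCucumberTests { 

    // Marking the inherited scenarios data provider as parallel lets TestNG run 
    // Cucumber scenarios on multiple threads (the thread count comes from the 
    // Surefire/TestNG configuration). 
    @Override 
    @DataProvider(parallel = true) 
    public Object[][] scenarios() { 
        return super.scenarios(); 
    } 
} 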

Running your test

Once the framework is set up, you can execute your API automation suite directly from the command line using Maven. Maven handles compiling, running tests, and generating reports automatically. 

Run All Tests – 

To run all Cucumber feature files: 

mvn clean test 
  • clean → Deletes old compiled files and previous reports for a fresh run. 
  • test → Executes all test scenarios defined in your project. 

After running this command, Maven will trigger the Cucumber TestRunner, execute all scenarios, and generate reports in the test-output folder. 

Run Tests by Tag – 

Tags allow you to selectively run specific test scenarios or features. 
You can add tags like @api1, @smoke, or @regression in your .feature files to categorize tests. 

Example: 

@api1 
Scenario: Verify POST API creates a record successfully 
  Given User sends "POST" request to "/api/v1/create" ... 
  Then User verifies the response status code is 201 

To execute only scenarios with a specific tag, use: 

mvn clean test -Dcucumber.filter.tags="@api1" 
  • The framework will run only those tests that have the tag @api1. 
  • You can also combine tags for more flexibility (see the example commands below): 
    • @api1 or @api2 → Runs tests with either tag. 
    • @smoke and not @wip → Runs smoke tests excluding work-in-progress scenarios. 
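
For example, those combined expressions translate into commands like: 

mvn clean test -Dcucumber.filter.tags="@api1 or @api2" 
mvn clean test -Dcucumber.filter.tags="@smoke and not @wip" 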

This is especially useful when running specific test groups in CI/CD pipelines. 

View Test Reports 

API Automation Testing Framework Report – After the execution, Cucumber automatically generates detailed reports in the test-output directory: 

  • Cucumber.html → User-friendly HTML report showing scenario results and logs. 
  • Cucumber.json → JSON format report for CI/CD integrations or analytics tools. 

You can open the report in your browser: 

project-root/test-output/Cucumber.html 
 

This section gives testers a clear understanding of how to: 

  • Run all or specific tests using tags, 
  • Filter executions during CI/CD, and 
  • Locate and view the generated reports. 
API Automation Testing Framework Report

Reference Framework GitHub Link – https://github.com/spurqlabs/APIAutomation_RestAssured_Cucumber_Playwright

Conclusion

An API automation testing framework ensures that backend services are functioning properly before the application reaches the end user. By integrating Cucumber, RestAssured, and Playwright, we have built a flexible and maintainable test framework that: 

  • Supports BDD style scenarios. 
  • Handles token-based authentication automatically. 
  • Provides reusable utilities for API calls. 
  • Generates rich HTML reports for easy analysis. 

This hybrid setup helps QA engineers achieve faster feedback, maintain cleaner code, and enhance the overall quality of the software.