
GitHub Actions Parallel Test Automation Jobs

By Dennis Gurock
15 min read

Summary: There are few things more frustrating for a developer than waiting a long time for all tests to complete after pushing new code. In this guide we show, step by step, how to drastically reduce testing times with parallel testing.

Running your automated tests in parallel as part of your GitHub Actions CI pipeline is a great way to improve build times and provide faster feedback to your dev & QA teams. Not only does running your tests in parallel reduce wait times, it also allows you to release bug fixes and updates to production faster without having to compromise on the number of tests you run.

If you are new to running your automated tests with GitHub, we also recommend taking a look at our introduction guide on GitHub Actions test automation.

Parallel test automation execution with GitHub Actions is relatively straightforward, but there are a few configuration settings you need to understand to implement this. In this guide we will go through all the details to set up parallel test automation to speed up your builds, as well as report the results to test management. So let's get started!

GitHub Actions Parallel Test Workflow

To run our tests in parallel in GitHub Actions and improve test times, we need to run multiple testing jobs at the same time. Each test job instance then executes a different subset of our tests.

For our example project we will start by defining separate build, test and deploy jobs. Strictly speaking we wouldn't need the build and deploy jobs just for our example project, but we want to create a full example workflow so it's easier to use this for a real project. The build and deploy jobs will be empty in our example, but you can add any build preparation or deployment code to your workflow as needed.

Our initial workflow will run the build job, followed by multiple parallel test jobs, and finally run deploy if all tests passed.

To make sure that GitHub Actions only runs our test jobs after build, and to make sure that deploy is run at the end of the workflow only if all tests passed, we need to tell GitHub the order and dependencies of the jobs. We start by creating our initial workflow configuration file .github/workflows/build.yml.

You configure the order and dependencies of jobs in GitHub Actions by using the needs setting for jobs. In our example we tell GitHub that the test job(s) depend on a successful build, and that it should only run our deploy job if all tests pass. When dependencies are configured with the needs setting, GitHub Actions by default automatically skips dependent jobs if a job they need fails:

# .github/workflows/build.yml
name: Build

jobs:
  build:
    # [..]

  test:
    needs: build
    # [..]

  deploy:
    needs: test
    # [..]


Test Automation Suite Example Project

For this article we are building on top of our previous example and extending the test suite to use multiple separate test files. This makes it easier to run the tests in parallel as it allows us to run just a subset of our tests in each parallel job instance. Here's the basic structure of this project's main directory:

.github/workflows/build.yml

package-lock.json
package.json

tests/test1.js
tests/test2.js
tests/test3.js
tests/test4.js
tests/test5.js
tests/test6.js
tests/test7.js
tests/test8.js

We create a new repository for this project in GitHub called example-github-parallel and push our initial files to it. Whenever we push new code to GitHub, GitHub Actions automatically picks up our configuration and runs our workflow. You can look at the article's full repository on GitHub to review all project files.

We are using JavaScript Mocha/Chai tests for this example project, as this testing framework is very easy to configure and use. But you can also use any other test automation tool, platform or framework with the same approach. Each of our test files looks similar to the following code example, consisting of a list of test cases that pass:

// tests/test1.js
const chai = require('chai');
const assert = chai.assert;

describe('files', function () {
    describe('export', function () {
        it('should export pdf', function () {
            assert.isTrue(true);
        });

        it('should export html', function () {
            assert.isTrue(true);
        });

        it('should export yml', function () {
            assert.isTrue(true);
        });

        it('should export text', function () {
            assert.isTrue(true);
        });
    });
});

Parallel Test Execution With GitHub Actions

Let's look at configuring the full parallel test workflow next. First we need to tell GitHub Actions to run multiple instances of our test job in parallel. We use GitHub's strategy setting for jobs to implement this. As part of the strategy we can define a matrix of configurations; for each combination, GitHub runs a separate instance of the job. For example, you could use a test matrix to run tests against combinations of different browser and operating system versions.

In our scenario we don't want to test against different versions though; we just want to run multiple job instances. We can still use a matrix for this: we define a list of indexes we call ci_index, together with the total job count ci_total, and GitHub Actions then starts four parallel jobs, one per index. We also make these values available as environment variables in our test jobs so we know in which instance (0, 1, 2 or 3) our code runs (this will be important in a moment).

Last but not least, we disable the default fail-fast behavior. By default, GitHub Actions cancels all remaining matrix job instances as soon as one instance fails. But we want to run all our tests, regardless of whether another test already failed. Here's our full workflow config now:

# .github/workflows/build.yml
name: Build

on: [push, workflow_dispatch]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - run: echo "Building .."

  test:
    needs: build
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        ci_index: [0, 1, 2, 3]
        ci_total: [4]

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 19
          cache: npm
      - run: npm ci
      - run: node split.js | xargs npm run mocha
        env:
          CI_TOTAL: ${{ matrix.ci_total }}
          CI_INDEX: ${{ matrix.ci_index }}

  deploy:
    needs: test
    runs-on: ubuntu-latest

    steps:
      - run: echo "Deploying .."

We just reviewed how to tell GitHub Actions to run multiple instances of our test job. But if we simply executed our Mocha test suites as in our previous GitHub automation example, the full test suite would run in each parallel job instance. This is not what we want: instead, each instance should run a different subset of tests, while all tests are still executed across the instances.

So how do we know which subset to run inside each test instance? Remember the index and total settings above? We make these available as CI_TOTAL and CI_INDEX environment variables inside our test step. We then use a small split.js script to find all test files and select only a subset of these files based on the current job's index.

Our split.js script uses a pretty simple approach, as it just assigns the same number of files to each instance. We could also adjust the script to select files based on file size, the number of test cases, or even previous testing times if we store these. But for our example project this simple approach works well. Once we push our new configuration, GitHub Actions will run four parallel test instances.
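The full split.js script is included in the article's repository. As a minimal sketch (our own illustration of the approach, not necessarily the repository's exact script, assuming the test files live in tests/ as shown above), it could look like this:

// split.js - minimal sketch: select an equal share of the test files
// for this job instance, based on the CI_INDEX and CI_TOTAL variables
const fs = require('fs');
const path = require('path');

const index = parseInt(process.env.CI_INDEX || '0', 10);
const total = parseInt(process.env.CI_TOTAL || '1', 10);

// Find all test files and sort them so every instance sees the same order
const files = fs.readdirSync(path.join(__dirname, 'tests'))
    .filter((file) => file.endsWith('.js'))
    .sort()
    .map((file) => path.join('tests', file));

// Keep every file whose position matches this instance's index
const subset = files.filter((_, i) => i % total === index);

// Print the selected files so they can be piped to `xargs npm run mocha`
console.log(subset.join(' '));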

Reporting Results to Test Management

Running our tests inside GitHub Actions is the first step, but we also want to be able to collect, review and report our test results. For this we are going to submit all test results to a test management tool; in our example we are using Testmo. Reporting our test results makes it easy to track and review our tests, submit new issues and bug reports to a linked issue tracker such as GitHub Issues or Atlassian Jira, and make the test results easily accessible to the entire team.

As we are running our tests in parallel in GitHub, it would be good to capture the tests and results of each test instance separately as well. Fortunately Testmo fully supports this, as you can submit each test job as a separate thread. This way you can either see the results of each thread separately, or review the entire test run at once, e.g. to focus on just the failed tests. It also allows us to capture the run times, console output and exit code of each job in its own thread for easy review.

The following screenshot shows what a test run with multiple separate jobs looks like in Testmo (note the Execution section with the separate threads).

Reporting test results from GitHub Actions to Testmo is very easy, as we can just use the testmo command line tool for this (we already covered this in our previous GitHub Actions test automation guide). The command line tool is distributed via NPM and just requires a single command to install.

So we could just install the tool at the beginning of all our jobs. But because we want to use the command line tool in multiple jobs, as well as in our parallel test instances, it's better to add @testmo/testmo-cli to our package.json so it's automatically installed with our npm ci calls. Just add the package to your configuration by running this inside your local development container and push the new package files to your repository:

# Run this inside our dev container and then 
# commit package.json & package-lock.json to GitHub
$ npm install --save-dev @testmo/testmo-cli

In our previous article we just needed a single call to the testmo automation:run:submit command to create a new test run, submit all test results and then mark the test run as completed, all in one step. This time though we want to submit multiple parallel test runs and their results, and then mark the run as completed after all tests were executed and reported.

So we are going to extend our GitHub Actions workflow by adding additional test-setup and test-complete jobs. In the setup job we create a new Testmo run, we then pass the new test run ID to each parallel test instance to submit its results, and we finally mark the run as completed in test-complete.

When we create the new Testmo test run in the test-setup step (and pass all basic information to Testmo such as the project, run name, source etc.), we receive the newly created test run ID. We need this ID in the following jobs to submit results and to mark the run as completed. We use GitHub Actions' output variables to implement this, by using the special echo "testmo-run-id=$ID" >> $GITHUB_OUTPUT call.

When GitHub Actions sees this special format in the job's output, and if you have configured the outputs setting in your config correctly, you can access the ID in any following jobs that depend on this job (this is important: the output is only available in jobs that list this job in their needs setting).

Also remember our call to the split.js script, followed by the call to our Mocha test suite? We are moving this line to a script alias in our package.json called mocha-junit-parallel, so it's easier to call. This alias also generates and outputs the test results to JUnit XML files, which we can use to submit our results to Testmo. You can see the full package.json config in this article's GitHub repository; a sketch of the alias follows below.
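As a sketch, the aliases could look like the following excerpt (the mocha-junit helper name, the mocha-junit-reporter package and the results/ output path are our own assumptions for illustration; the repository's actual config may differ):

// package.json (excerpt, illustrative sketch only)
{
  "scripts": {
    "mocha-junit": "mocha --reporter mocha-junit-reporter --reporter-options mochaFile=results/test-results.xml",
    "mocha-junit-parallel": "node split.js | xargs npm run mocha-junit --"
  }
}

Writing the XML files to results/ matches the --results results/*.xml parameter we pass to testmo below.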

So all we need to do is to change our test command to call testmo automation:run:submit-thread and pass our new mocha-junit-parallel command to it as the last parameter. Testmo will then run our tests, capture any console output and measure the test time, and finally submit the tests and results from the JUnit XML files.

# .github/workflows/build.yml 
name: Build

# [..]

jobs: # [..]
  test-setup: # [..]
    needs: build
    outputs:
      testmo-run-id: ${{ steps.run-tests.outputs.testmo-run-id }}
    steps:
      # [..]
      - run: |
          npx testmo automation:run:create \
            --instance "$TESTMO_URL" \
            --project-id 1 \
            --name "Parallel mocha test run" \
            --source "unit-tests" > testmo-run-id.txt
          ID=$(cat testmo-run-id.txt)
          echo "testmo-run-id=$ID" >> $GITHUB_OUTPUT
        env:
          TESTMO_URL: ${{ secrets.TESTMO_URL }}
          TESTMO_TOKEN: ${{ secrets.TESTMO_TOKEN }}
        id: run-tests

  test: # [..]
    needs: test-setup
    steps:
      # [..]
      - run: |
          npx testmo automation:run:submit-thread \
          --instance "$TESTMO_URL" \
          --run-id "${{ needs.test-setup.outputs.testmo-run-id }}" \
          --results results/*.xml \
          -- npm run mocha-junit-parallel # Note space after --

Finally we will mark the test run as completed in our test-complete step. We again use the previously generated test run ID here to tell Testmo that the run has been completed. Testmo will then mark the entire run as successful or failed based on the submitted test results.

It would also not be critical if marking the run as completed failed here for some reason: by default, Testmo automatically completes such runs after some time to keep everything consistent.

# .github/workflows/build.yml
jobs: # [..]
  test-complete: # [..]
    needs: [test-setup, test]
    if: always()
    runs-on: ubuntu-latest
    steps:
      # [..]
      - run: |
          npx testmo automation:run:complete \
          --instance "$TESTMO_URL" \
          --run-id "${{ needs.test-setup.outputs.testmo-run-id }}" \
        env:
          TESTMO_URL: ${{ secrets.TESTMO_URL }}
          TESTMO_TOKEN: ${{ secrets.TESTMO_TOKEN }}

What about failing tests?

Until now, all our tests just passed, so we didn't have to worry about failing tests. But what happens if a test fails? Will it still get reported to Testmo? And what about our deploy job? You can easily try this by failing one of our tests, e.g. by throwing an error with throw new Error('This test failed');.
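For example, a hypothetical change to one of the test files from above:

// tests/test1.js - make one test fail on purpose (hypothetical change)
it('should export pdf', function () {
    throw new Error('This test failed');
});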

The way we configured our workflow, everything works as expected. The testmo call that runs our tests passes through the exit code of our Mocha run. So if the Mocha run fails, we pass this through to GitHub and fail this test job instance.

We also tell GitHub Actions to always run our test-complete job after our tests, even if one of the jobs failed. We do this by specifying the if: always() option for our test-complete job. So we can also mark the test run as completed, even after test failures.

For our deploy job, we changed its configuration to depend on both the test and test-complete jobs, but without specifying the always condition (see the config sketch below). It will therefore run after these jobs completed, but only if they were successful. So if a test failed, or if we couldn't mark the run as completed for some reason, the deployment job is skipped, just as it should be. Here's what the test results of a test run look like after they are submitted to Testmo.
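In the workflow config, this dependency change only affects the deploy job's needs list (a sketch using the job names from above):

# .github/workflows/build.yml
jobs: # [..]
  deploy:
    needs: [test, test-complete]
    runs-on: ubuntu-latest
    # [..]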

Tracking Test Suites, Threads & Runs

Now that we submit our tests and results, we can easily track the tests over time, review failures, link issues, identify slow and flaky tests and use this information to improve our test suites. Having the test results and test runs in our test management tool also makes the results more easily accessible to our entire team and allows us to link and archive test runs with project milestones and releases.

And when our test suites grow over time, we can also easily increase the number of parallel test jobs by adjusting the strategy setting in our GitHub Actions workflow config, as shown in the sketch below. This makes it easy to scale our automated tests, and we don't have to change anything else to increase the number of concurrent test execution jobs in the future.
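For example, scaling from four to eight parallel test jobs only requires extending the matrix (a sketch based on the config above):

# .github/workflows/build.yml
jobs: # [..]
  test: # [..]
    strategy:
      fail-fast: false
      matrix:
        ci_index: [0, 1, 2, 3, 4, 5, 6, 7]
        ci_total: [8]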

Another advantage of tracking test results with a test management tool is that we can also manage our automated tests together with manual tests as part of test case management and exploratory testing for unified QA management. It also helps increase awareness of build performance and testing times, thus giving the test and dev teams a way to improve these important metrics over time.

Using the approach explained in this article to implement parallel test automation with GitHub Actions is a great step towards a scalable test automation suite that performs well over the long run, so try it for your projects!


PS: We regularly publish original software testing & QA research, including free guides, reports and news. To receive our next postings, you can subscribe to updates. You can also follow us on Twitter and LinkedIn.

