GitLab CI/CD Parallel Test Automation Jobs

By Dennis Gurock
15 min read

Configuring your GitLab CI/CD pipeline to run your automated tests in parallel jobs is a great way to improve your build times and provide faster feedback to your dev & QA teams. It can be frustrating to wait a long time for your tests to complete when new code is committed to your project repository. Not only does running your test jobs in parallel reduce wait times, it also enables you to release new updates to production faster without reducing the number of tests you can run.

If you haven't set up GitLab to run your automated tests yet, you can also read our introduction article on GitLab CI/CD test automation first.

Fortunately GitLab makes it very easy to set up parallel execution of your automated tests. You just need to understand a few basic concepts and learn a couple of configuration options to implement this. In this guide we will go through all these details so you can speed up your test automation runs, as well as report your test results to test management. Let's get started!

GitLab CI/CD Parallel Testing Workflow

To run our tests in parallel as part of our GitLab CI/CD pipeline, we need to tell GitLab to start multiple test jobs at the same time. In each of the test jobs we then run a different subset of our test suite.

For our example project we start by adding separate build, test and deploy jobs. For our purposes we don't really need the build and deploy steps, as we focus on the testing tasks here; but we want to provide a complete example so it's easier for you to implement a full workflow based on this article.

Our initial pipeline will start by executing the build job, followed by multiple parallel test jobs, and finally run deploy if all tests pass (GitLab will stop the pipeline if any of the test jobs fail):

To configure the execution order and dependencies of our jobs, we start by defining the stages in our GitLab pipeline configuration, namely build, test and deploy. We also define jobs with the same names and link them to the relevant stages.

This tells GitLab to execute all the jobs in our build stage first, then follow with our test stage jobs and end with any jobs in the deploy stage. If any of the jobs fail, GitLab will stop executing the pipeline and will not run any subsequent jobs.

GitLab also makes it easy to run multiple instances of the same job in parallel. We can simply specify the parallel option for our test job and GitLab will start four instances of this job at the same time.

# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building .."

test:
  stage: test
  parallel: 4
  script:
    - echo "Testing .."

deploy:
  stage: deploy
  script:
    - echo "Deploying .."


Test Automation Suite Example Project

The test automation suite we use as an example for this article is based on our previous example. We extend this example to use multiple separate test files, as this makes it easier to run just a subset of our tests in each parallel job instance. Here is the basic structure of this project's main directory:

.gitlab-ci.yml

package-lock.json
package.json

tests/test1.js
tests/test2.js
tests/test3.js
tests/test4.js
tests/test5.js
tests/test6.js
tests/test7.js
tests/test8.js

In GitLab we create a new project called example-gitlab-parallel and commit our initial files. Whenever we push new code to the repository, GitLab CI/CD automatically picks up the new code and CI config and runs our workflow. You can also visit the article's full repository in GitLab to review all project files.

We are using the JavaScript Mocha/Chai testing framework for this example project, as it's easy to set up and use. Our pipeline will execute with the official node (Node.js JavaScript runtime) container. But any other platform or framework will also work with the examples we provide in this article. You would just use the relevant Docker container for your platform and use your favorite testing framework in your pipeline.

For our example project, each of our test files looks similar to the following code example, consisting of a list of test cases that all pass:

// tests/test1.js
const chai = require('chai');
const assert = chai.assert;

describe('files', function () {
    describe('export', function () {
        it('should export pdf', function () {
            assert.isTrue(true);
        });

        it('should export html', function () {
            assert.isTrue(true);
        });

        it('should export yml', function () {
            assert.isTrue(true);
        });

        it('should export text', function () {
            assert.isTrue(true);
        });
    });
});

Parallel Test Execution With GitLab CI/CD

Let's now look at a complete example of how to run our tests in parallel with GitLab. We already told GitLab to start multiple parallel instances of our test job by specifying the parallel option.

But if we now just executed our Mocha test suite as in our previous GitLab test automation article, it would always run the full test suite in each parallel job instance. Instead we need to run a different subset of our tests in each instance and also make sure that all tests are executed across the instances.

So how do we know which subset to run inside each test job? GitLab makes two environment variables available during our job runs to help with this, namely CI_NODE_TOTAL (the total number of parallel jobs) and CI_NODE_INDEX (the index of the current job). We can use these variables to find all test files and then select just a subset of the tests to run in the current job (based on the current index). We just need to make sure that we always run all tests across the jobs, regardless of the number of jobs we spawn.

We wrote a little script (split.js) for this example project to implement this. The script uses a simple approach: it selects the same number of files (but different files) in each instance. We could also adjust the script to select files based on file size, the number of test cases, or even previous run times if we store them. But for our example project this simple approach works well. Here's the full pipeline configuration to run our test suite in parallel:

# .gitlab-ci.yml
default:
  image: node:19
  cache:
    - key:
        files:
          - package-lock.json
      paths:
        - .npm/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building .."

test:
  stage: test
  parallel: 4
  script:
    - npm ci --cache .npm --prefer-offline
    - node split.js | xargs npm run mocha

deploy:
  stage: deploy
  script:
    - echo "Deploying .."

Once we commit our new configuration to GitLab, it will start a new pipeline run and execute our test job four times in parallel, each executing a subset of our test suite with our split.js script:

Reporting Results to Test Management

Running our automated tests with GitLab is a great start, but we also want to make it easy to collect, review, report and share our test results. For this we are going to submit our test results to a test management tool, Testmo in our case. Submitting and reporting our results enables us to track and manage our tests, link new issues found during testing (e.g. to GitLab Issues or Jira) and make all results accessible to the entire team.

As we are running our tests in parallel with separate jobs in GitLab, it would be good to capture the tests in our testing tool separately as well. Fortunately Testmo has full support for this, as we can submit the results for each job as a separate thread inside a test run. This way we can either see the entire run results at once (e.g. to just focus on the failed tests across all jobs), or we can review a single test job & thread. This also allows us to capture the console output, exit code and measure test times for each job separately, as they are stored as separate threads.

This screenshot shows what a test automation run with multiple testing jobs & threads looks like in Testmo (see the section with the separate threads at the bottom of the screenshot):

To submit our test results to Testmo from our CI pipeline, we can use the testmo command line tool. We also looked at a more basic example (without multiple threads) in our previous GitLab CI/CD test automation article. The testmo command line tool is distributed as an NPM package, so it only takes a single command to install (even if you don't use a Node-based Docker container, NPM is usually already installed or very easy to add).

We could simply install the testmo tool at the beginning of each job. But because we use it in multiple jobs, including our parallel test jobs, it's cleaner to add the package to our package.json config so it's automatically installed with our npm ci calls. We add the package by running the following command inside the local development container and committing the resulting files to the repository:

# Run this inside your dev container and then 
# commit package.json & package-lock.json to GitLab
$ npm install --save-dev @testmo/testmo-cli

In our previous article (with just a single testing job/thread) we had a single call to testmo automation:run:submit. This created the test run in Testmo, added a single thread to it, submitted the test results and marked the test run as completed, all in one step. This time we want to start a new test run at the beginning of our pipeline, then create a separate thread for each testing job (and submit its results), and finally complete the run at the end of the pipeline.
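For comparison, the single-job approach from the previous article looked roughly like this sketch (the run name, project ID and script alias here are assumptions, not copied from that article):

```yaml
# Single-job variant (sketch): create the run, add one thread,
# submit the results and complete the run, all in one command.
test:
  stage: test
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:submit
        --instance "$TESTMO_URL"
        --project-id 1
        --name "Mocha test run"
        --source "unit-tests"
        --results results/*.xml
        -- npm run mocha-junit # Note space after --
```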

So we are going to extend our GitLab pipeline by adding additional test-setup and test-complete jobs (and corresponding stages to tell GitLab the order of our jobs). In the setup job we are creating the test run in Testmo, then pass the created test run ID to all our parallel testing jobs, and finally mark the run as completed in our test-complete job at the end:

When we create the new test automation run in Testmo in our test-setup job (submitting basic information about the run such as its name and source), we receive the ID of the newly created run. We need to pass this ID to the following jobs, as we reference it when creating the threads and submitting the results. GitLab CI/CD supports this by writing variables to an .env file and passing it to subsequent jobs via the artifacts & dotenv settings. It's just important to reference the test-setup job as a dependency in any subsequent job that needs access to the test run ID variable.

Also remember how we called our split.js script above, followed by the call to our Mocha test suite? To make things a bit simpler and cleaner, we move this line to a script alias in our package.json file. This alias also generates and outputs our test results as JUnit XML files, which we need in order to report the results to Testmo. You can see the full package.json config in this article's GitLab repository.
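To illustrate, the relevant part of our package.json might look roughly like this. This is a hypothetical sketch: the reporter choice, result file naming and version numbers are assumptions rather than values taken from the article's repository (mocha-junit-reporter's [hash] placeholder generates a unique file name per run, which avoids overwriting results):

```json
{
  "scripts": {
    "mocha": "mocha",
    "mocha-junit-parallel": "node split.js | xargs mocha --reporter mocha-junit-reporter --reporter-options mochaFile=results/results-[hash].xml"
  },
  "devDependencies": {
    "@testmo/testmo-cli": "^1.0.0",
    "chai": "^4.0.0",
    "mocha": "^10.0.0",
    "mocha-junit-reporter": "^2.0.0"
  }
}
```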

So all we need to do now is change our test job to call testmo automation:run:submit-thread and pass our new mocha-junit-parallel command to it as the last parameter. The testmo command then launches our (split) Mocha tests, capturing the console output, measuring the test times and recording the exit code. All of this is automatically submitted to the thread in Testmo together with our test results.

# .gitlab-ci.yml

# [..]

test-setup:
  stage: test-setup
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:create
        --instance "$TESTMO_URL"
        --project-id 1
        --name "Parallel mocha test run"
        --source "unit-tests" > testmo-run-id.txt
    - echo "TESTMO_RUN_ID=$(cat testmo-run-id.txt)" > testmo.env
  artifacts:
    reports:
      dotenv: testmo.env

test:
  parallel: 4
  stage: test
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:submit-thread
        --instance "$TESTMO_URL"
        --run-id "$TESTMO_RUN_ID"
        --results results/*.xml
        -- npm run mocha-junit-parallel # Note space after --
  dependencies:
    - test-setup

Finally, we mark the test run as completed in our test-complete job after all our testing jobs have finished. We again use the previously created ID to reference the run in Testmo. Testmo then marks the run as completed and flags it as successful or failed based on the test results.

If there's a problem with the pipeline execution and we never mark the run as completed, Testmo will eventually mark the run as completed by itself.

test-complete:
  stage: test-complete
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:complete
        --instance "$TESTMO_URL"
        --run-id "$TESTMO_RUN_ID"
  dependencies:
    - test-setup
  when: always

What about failing tests?

Until now, all our tests always passed, so we didn't have to worry about any of our tests failing. But what happens if any of our tests fail? Will this still get reported to Testmo? And what happens to our deploy job? You can just test this by adding an error to any of the tests, e.g. by throwing an exception with throw new Error('This test failed');.

The way we've set up our CI workflow, everything works as expected. By default, the testmo command will pass through any exit codes from the testing tool. So if Mocha returns an error because a test failed, this is passed through to GitLab so it marks the test job as failed.

We tell GitLab to always run our test-complete job though, because we still want to mark the run as completed even if any tests failed. We use the when: always option for this. If we didn't specify this option, GitLab would not run any subsequent jobs after a job failure.

For the deploy job we don't specify this option. So if our tests fail, GitLab will skip running our deployment, just as we want. Here's what test results look like in Testmo after we submit a test run:

Tracking Test Suites, Threads & Runs

Now that we submit the test run and its results, we can easily track the tests, review failures, document issues, identify flaky and slow tests and use these details to improve our test suites over time. This also allows us to make the test results available to the entire team and link the runs to our projects and milestones.

The way we have configured and set up our CI pipeline, we can also scale the test execution as our test suite grows. In the future we can simply increase the number of parallel testing jobs by adjusting the parallel setting in our pipeline configuration, without having to change anything else.

Another advantage of submitting and storing our test automation results in our testing tool is that we can link and manage our automated tests together with other testing efforts, such as manual test case management and exploratory testing. In our experience, making the test automation results available to the entire team also helps increase awareness of test performance and build times, giving the dev team a chance to improve these metrics.

By following the approach outlined in this article to implement parallel testing jobs with GitLab, you can build a CI pipeline that is easy to maintain and easy to scale, so make sure to try it for your projects!

PS: We regularly publish original software testing & QA research, including free guides, reports and news. To receive our next postings, you can subscribe to updates. You can also follow us on Twitter and LinkedIn.
