
GitHub Actions & Selenium Guide (with Parallel Browser Testing)

By Dennis Gurock · 18 min read

In this guide we will go through all the steps to set up Selenium browser test automation with GitHub Actions, including parallel testing against multiple browsers and reporting our results to test management.

GitHub Actions makes it very easy to run Selenium browser tests. If you are not yet familiar with basic test automation with GitHub Actions or with running automated tests with Selenium, we have separate detailed guides on these topics. We recommend reading those articles first if the concepts are completely new to you, as we will not cover the basics in this article:

Once you are familiar with the basics of GitHub Actions test automation and Selenium browser automation, following this article will be very straightforward. So let's get started!

Initial Repository & Project Setup

For this project we will create multiple GitHub Actions workflows that build on each other to introduce more advanced concepts one by one. Specifically we will set up the following workflows:

  • test-single: Our initial basic workflow will run our tests against a single web browser with Selenium. We will configure the workflow so that you can select which browser to run the tests against (Chrome, Firefox, or Edge) through GitHub's interface, which is pretty useful!
  • test-parallel: Next we will extend our workflow to run our Selenium test suite against all browsers at the same time in parallel. This way we can always make sure that our tests pass with all our browsers.
  • test-testmo: Finally we will also report our test results to our test management tool Testmo so we can see and track all results in our testing tool, including the full console output, execution times, test failures etc.
With our first GitHub Actions workflow (see below) we can run our test suite against different browsers

We start by creating a new repository on GitHub named example-github-actions-selenium. You can always find the full repository and all files there. Here is the initial file structure of our project:

# GitHub Actions workflows
.github/workflows/test-single.yml
.github/workflows/test-parallel.yml
.github/workflows/test-testmo.yml

# Local Docker dev environment
dev/docker-compose.yml

# Package dependencies & scripts
package.json
package-lock.json

# Our Selenium test suite
test.mjs
At the top you can find our three GitHub Actions workflow configuration files (for the three workflows mentioned above). For our GitHub Actions pipelines and our local development environment we will be using Docker containers, as this makes it easy to reuse preconfigured images. We can also use the same images for GitHub Actions and for local development. The Docker Compose config file in our repository contains the details to run our local development environment (see below).

For our example test automation suite we will use a simple Node.js (JavaScript) setup based on the Mocha test framework and the Chai assertion library. You can also use any other programming language and testing framework you are more familiar with. All concepts explained in this article will still work the same; you would just need to adjust the GitHub Actions workflow and Docker configs to use different container images for your preferred tools.

The package.json and package-lock.json files contain our project dependencies and a few useful script aliases that we will use in our project to run our tests, generate result files for reporting and submit these reports to Testmo. You can find the full package.json config file in the repository.
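
To give you an idea of these aliases, here is a sketch of the relevant scripts section (the exact file is in the repository; elided options are marked with [..], and the testmo arguments are covered later in this article):

// package.json (excerpt, simplified)
"scripts": {
  "test": "npx mocha test.mjs",
  "test-junit": "npx mocha --reporter node_modules/mocha-junit-reporter [..] test.mjs",
  "test-ci": "npx testmo automation:run:submit [..] -- npm run test-junit"
}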

Example Selenium Test Automation Suite

Our example test suite with our Selenium tests is stored in the project's test.mjs file (note the .mjs extension, which tells Node.js to treat the file as an ES module so we can use the modern import syntax). The test suite consists of a couple of tests that search the DuckDuckGo search engine and verify that the search results contain the links we are looking for. For each test we start a new browser session and close the browser at the end of the test. This way we ensure that we start with a clean browser state without any previous cookies or session history.

We've written the test suite so that we can pass the name of the browser we want to test as an environment variable. We can then test different browsers if our environment hosts different Selenium services (see below). Last but not least, we also take a screenshot of the browser at the end of each test. This is useful so we can see what the page looked like, especially if there was a problem with the test. Here's a shortened version of our test suite (you can find the full file in the project repository).
// test.mjs
import { Builder, By, Key, until } from 'selenium-webdriver';
import { assert } from 'chai';
import * as fs from 'fs';

describe('search', async function () {
    let driver;
    // [..]

    // A helper function to start a web search
    const search = async (term) => {
        // Automate DuckDuckGo search
        await driver.get('https://duckduckgo.com/');
        const searchBox = await driver.findElement(
            By.id('search_form_input_homepage'));
        await searchBox.sendKeys(term, Key.ENTER);

        // Wait until the result page is loaded
        await driver.wait(until.elementLocated(By.css('#links .result')));

        // Return page content
        const body = await driver.findElement(By.tagName('body'));
        return await body.getText();
    };

    // [..]

    // Before each test, initialize Selenium and launch the browser
    beforeEach(async function() {
        // Microsoft uses a longer name for Edge
        let browser = process.env.BROWSER;
        if (browser == 'edge') {
            browser = 'MicrosoftEdge';
        }

        // Connect to service specified in env variable or default to 'selenium'
        const host = process.env.SELENIUM || 'selenium';
        const server = `http://${host}:4444`;
        driver = await new Builder()
            .usingServer(server)
            .forBrowser(browser)
            .build();
    });

    // After each test, take a screenshot and close the browser
    afterEach(async function () {
        if (driver) {
            // Take a screenshot of the result page
            // [..]

            // Close the browser
            await driver.quit();
        }
    });

    // Our test definitions
    it('should search for "Selenium dev"', async function () {
        const content = await search('Selenium dev');
        assert.isTrue(content.includes('www.selenium.dev'));
    });

    // [..]
});
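
By the way, the screenshot step elided in the afterEach hook above can be as small as the following sketch (the screenshots/ directory and the file naming are our assumptions for illustration; the actual code is in the project repository):

// Hypothetical sketch of the elided screenshot step in afterEach:
// capture the page as a base64-encoded PNG and write it to disk
const image = await driver.takeScreenshot();
fs.mkdirSync('screenshots', { recursive: true });
fs.writeFileSync(
    `screenshots/${this.currentTest.title}.png`, image, 'base64');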

Running Our Tests Locally with Docker

We've configured our Docker Compose file to provide a local development environment and to run multiple Selenium services so we can run our tests against different browsers locally. This way we can develop and test our Selenium suite without having to commit each change to GitHub and wait for the tests to complete.

Specifically our Docker Compose config launches the three official Selenium Docker images to run Chrome, Firefox and Microsoft Edge browsers. We also map various ports so we can connect and debug our tests from our host machine (more on this below).

Besides the three Selenium browser services, we configure a container called node that uses the official Node.js (JavaScript) image of the same name. We will be using this container as our interactive shell container to run our test suite and develop our tests. It comes preconfigured with everything we need to run our Node.js (JavaScript) tests.

# dev/docker-compose.yml
version: '3'
services:
  chrome:
    image: selenium/standalone-chrome
    shm_size: '2gb'
    ports:
      - 4444:4444 # Selenium service
      - 5900:5900 # VNC server
      - 7900:7900 # VNC browser client
  firefox:
    image: selenium/standalone-firefox
    shm_size: '2gb'
    ports:
      - 4445:4444 # Selenium service
      - 5901:5900 # VNC server
      - 7901:7900 # VNC browser client
  edge:
    image: selenium/standalone-edge
    shm_size: '2gb'
    ports:
      - 4446:4444 # Selenium service
      - 5902:5900 # VNC server
      - 7902:7900 # VNC browser client
  node:
    image: node:19
    volumes:
      - ./../:/project
    working_dir: /project
    tty: true

We will look at starting and using our Docker environment next. You can find all the commands needed to start, stop and enter the containers in the following code snippet. If you are new to Docker, make sure to install it and get familiar with the basics, as it's a very useful tool for getting various local development environments up and running quickly (and more).

To start all four containers, simply change to the dev directory in our project and launch the containers with the docker compose up -d command. Once all containers are running, enter our node container by launching a shell with docker compose exec node bash. All other commands in this article should be run inside this container. When you are done using the containers, you can also shut them down again to save resources (see below).

# Start all Docker containers (from project dev/ directory)
$ docker compose up -d
[+] Running 5/5
 ⠿ Network dev_default      Created
 ⠿ Container dev_edge_1     Started
 ⠿ Container dev_firefox_1  Started
 ⠿ Container dev_chrome_1   Started
 ⠿ Container dev_node_1     Started

# Enter the 'node' container and start shell
$ docker compose exec node bash
root@8bb27574eb3b:/project$ # We are now inside the container

# When finished, leave the shell & container by pressing Ctrl+D

# Then from outside the container, you can shut everything down
$ docker compose down 
[+] Running 5/5
 ⠿ Container dev_firefox_1  Removed
 ⠿ Container dev_chrome_1   Removed
 ⠿ Container dev_node_1     Removed
 ⠿ Container dev_edge_1     Removed
 ⠿ Network dev_default      Removed

Running our tests inside the container is very simple now. The first time we start our container we need to install all project dependencies from our package.json file. We do this by running the npm install command.

We can then use one of our script aliases to run our tests. Our simple npm run test alias runs our tests and outputs the test results to the console. Because we wrote our test suite so it can connect to different services and launch different browsers, we pass the browser name and the Selenium host name as variables to our script.

We also defined a few additional script aliases in our package.json file. For example, via the npm run test-junit command we are not outputting the results to the console, but writing them to a JUnit XML file. This format is universally used by testing tools to exchange test results (basically every testing tool and framework supports it directly or indirectly). This result file will come in handy later in this article when we want to report our test results.
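
To give you a rough idea of the format, the generated JUnit XML report looks something like this (a simplified sketch; the exact attributes depend on the reporter configuration):

<!-- results/*.xml (simplified sketch) -->
<testsuites name="Mocha Tests" tests="5" failures="0">
  <testsuite name="search" tests="5" failures="0" time="28.0">
    <testcase name="should search for &quot;Selenium dev&quot;" classname="search" time="2.22"/>
    <!-- [..] one testcase element per test -->
  </testsuite>
</testsuites>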

# First install any required packages from our package.json file
# Make sure to run this inside the dev container
$ npm install

Added 146 packages, and audited 147 packages in 9s
found 0 vulnerabilities

# Running the Selenium tests inside the dev container
$ BROWSER=chrome SELENIUM=chrome npm run test

> test
> npx mocha test.mjs

  search
    ✔ should search for "Selenium dev" (2220ms)
    ✔ should search for "Appium" (2086ms)
    ✔ should search for "Mozilla" (2262ms)
    ✔ should search for "GitHub" (2314ms)
    ✔ should search for "GitLab" (2145ms)

  5 passing (28s)

# We can also generate an XML result report file instead of 
# printing the results to the console
$ npm run test-junit

> test-junit
> npx mocha --reporter node_modules/mocha-junit-reporter [..]
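
Because the browser name and the Selenium host are just environment variables, running the same suite against another browser only requires pointing both variables at the matching service:

# Run the same tests against the Firefox or Edge containers
$ BROWSER=firefox SELENIUM=firefox npm run test
$ BROWSER=edge SELENIUM=edge npm run test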

Live Debugging Our Browser Tests via VNC

If everything worked correctly, our test suite connects to one of our Selenium services, starts browser sessions, runs our DuckDuckGo searches and verifies the result pages. But wouldn't it be nice if we could see the browser and our script's interactions live? We actually can, by using the debugging tools built into the Selenium containers.

Remember the various ports we made available to our host in the above Docker Compose configuration? These are the ports of the built-in VNC services and the web interfaces to access them. You can either use a third-party VNC client to connect to the services, or you can point your browser to one of the built-in VNC client pages to see the browser live:

VNC services (connect with a VNC client of your choice):

  • localhost:5900 (Chrome)
  • localhost:5901 (Firefox)
  • localhost:5902 (Edge)

Or point your web browser to:

  • http://localhost:7900 (Chrome)
  • http://localhost:7901 (Firefox)
  • http://localhost:7902 (Edge)

Selenium uses a default password for VNC: secret

So for example, if you run your tests against the Chrome service (as we did in our example above), you can access the container's VNC web client by pointing your web browser to http://localhost:7900 and entering the default password secret. You can then watch the browser window with all the live interactions of our script while the tests run:

Debugging our tests by connecting to the container's VNC service to see the live browser

Selenium Browser Tests with GitHub Actions

Next we are going to configure our first GitHub Actions pipeline to run our automated tests with GitHub. We will start with our most simple workflow, our test-single.yml config, to run our tests against a single web browser. You can find the full workflow configuration below:

# .github/workflows/test-single.yml
name: Test (single)

on:
  workflow_dispatch:
    inputs:
      browser:
        type: choice
        description: Which browser to test
        required: true
        options:
          - chrome
          - firefox
          - edge

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest

    container:
      image: node:19

    services:
      selenium:
        image: selenium/standalone-${{ github.event.inputs.browser }}
        options: --shm-size=2gb

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '19'
          cache: 'npm'
      - run: npm ci
      - run: npm run test
        env:
          BROWSER: ${{ github.event.inputs.browser }}
      - uses: actions/upload-artifact@v3
        if: always()
        with:
          name: screenshots
          path: screenshots/

Let's review our first GitHub Actions workflow configuration step by step:

  • on: workflow_dispatch: This tells GitHub that we want to run our workflow manually via GitHub Actions' website instead of on every commit (we could also do that, but we will have multiple example workflows in this project, so it's better to launch them manually via the website). One useful option we configure here is an input variable so we can select the browser we want to test against (see the screenshot at the beginning of this article). GitHub Actions will put the selected value ('chrome', 'firefox', or 'edge') into a variable called browser, which we can reference in other sections of this file.
  • runs-on: ubuntu-latest: This specifies that we want to use GitHub's Ubuntu Linux base machine for our CI workflow. Our code will not run in this virtual machine directly though, because we will be using Docker containers running inside this virtual machine instead (see the next item).
  • container: node:19: We will run all our pipeline steps in a Docker container using the official node image (version 19). This is the same Docker image we use for our local development environment, so everything will work the same.
  • services: selenium: We then define an additional Docker service container that will be launched by GitHub Actions next to our main node container. This will run our Selenium browser service, just as we do locally. Note how we specify the container image with a reference to our above defined browser variable, namely via selenium/standalone-${{ github.event.inputs.browser }}. This way, GitHub Actions automatically downloads and starts the correct Docker image based on the browser we've selected.
  • steps: In our steps we start by checking out the project repository files, then set up NPM package caching (for faster performance), and finally install all our project dependencies (npm ci) before running our tests with npm run test. Note how we also make the browser name available as an environment variable here, so our script knows which browser session to request.

    We end our steps by uploading any screenshots taken by our tests to GitHub Actions as artifacts, so we can download and review them later (this step is configured to always run, regardless of any previous errors, because we want to see the screenshots in case of failures as well).

When you commit this file to GitHub and go to the Actions tab of your repository, you can start a new workflow run through GitHub's web interface. You will be asked to select one of the browsers we've specified. GitHub Actions will then schedule a workflow run and will launch our containers and run our tests as soon as possible. You can then see all tests and their results as executed by GitHub Actions:
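
If you prefer the terminal over the web interface, you can also trigger the same workflow_dispatch run with GitHub's gh CLI (assuming gh is installed and authenticated for your repository):

# Trigger the workflow and pass the browser input
$ gh workflow run test-single.yml -f browser=chrome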

Running our Selenium test suite against a single browser with GitHub Actions

Parallel Selenium Browser Testing

We can also extend our workflow config to run our tests against all browsers during a workflow run, and not just against a single browser. We will also run these tests in parallel with GitHub Actions, so the tests will complete faster overall. You can find the configuration for our extended workflow test-parallel.yml below:

# .github/workflows/test-parallel.yml
name: Test (parallel)

on: [workflow_dispatch]

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest

    container:
      image: node:19

    strategy:
      fail-fast: false
      matrix:
        browser: ['chrome', 'firefox', 'edge']

    services:
      selenium:
        image: selenium/standalone-${{ matrix.browser }}
        options: --shm-size=2gb

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '19'
          cache: 'npm'
      - run: npm ci
      - run: npm run test
        env:
          BROWSER: ${{ matrix.browser }}
      - uses: actions/upload-artifact@v3
        if: always()
        with:
          name: ${{ matrix.browser }}
          path: screenshots/

Let's look at the details of this workflow configuration again. First, we removed our browser variable here. We will be running the tests against all our browsers, so there's no need to select a browser for a run. Here are the interesting bits:

  • strategy: We tell GitHub Actions to run our test job with a matrix. This means that GitHub will start multiple parallel jobs based on the values we define here. We define a matrix of a single config (browser) with the three values of our browsers ('chrome', 'firefox', 'edge'). So GitHub Actions will start three separate parallel jobs for our test job, one for each browser. We also set fail-fast to false, so a failing job for one browser does not cancel the remaining browser jobs.
  • services: selenium: We again just start a single browser service container for each separate job here. We are referencing the browser name via the matrix.browser variable. So when GitHub Actions starts our three separate jobs, it will launch the Selenium Chrome service container for the first job, the Firefox container for the second job, and the Edge container for the third job.
  • steps: Most of our steps will be unchanged, such as getting the code and installing our NPM packages. We changed the way we start our tests here though. This time we are referencing the variable matrix.browser and pass it to the test, so we know which browser to test against in each separate test job. We also slightly change our screenshot upload step to upload the files with the relevant browser name.

When you start our new parallel workflow through the GitHub Actions interface, you can see that three separate test jobs for our different browsers are executed. Below the test jobs you can also see the resulting screenshot artifacts and download them.

Parallel Selenium tests against multiple browsers with resulting screenshot artifacts

Reporting Results to Test Management

Now that we can run our tests against different web browsers in parallel with our extended workflow config, we can also submit and report our test results to test management, in our case Testmo. Reporting our results makes it easy to track our runs over time, share test results with the entire team, identify failing and slow or flaky tests, and compare test results across multiple runs or configurations (such as different browsers).

We've already prepared our package.json file to install the testmo command line tool as a dependency in our project. So when we run npm install or npm ci, this tool is already available. If you are using a different platform and programming language, or if you are starting with an empty project, you can simply install the required package via NPM. This works even if you don't use Node.js/JavaScript for your projects, as NPM is usually already installed (and if not, you can easily install it; many non-JavaScript projects use it for various build tools anyway).

# We already have this in our `package.json`, but for new projects
# you can simply install the `testmo` CLI tool like this:
$ npm install --save-dev @testmo/testmo-cli

Submitting our test results via the testmo command line tool is very easy: you just call its automation:run:submit command and pass a few parameters such as the project ID, the name of the new test run, and the location of our test result report files. In our example we are also referencing the $BROWSER variable again so we can add it to the run name and specify a configuration for the run in Testmo (you can add new configurations in Testmo under Admin > Configurations). Make sure to add configurations with the names Chrome, Firefox and Edge in Testmo, or remove the --config parameter in the package.json file.

Our testmo command line call looks like the following snippet. We don't need to add this directly to our workflow config file though; we already defined it as a script alias in our package.json file, so we can just run npm run test-ci. You might have noticed that we also pass the npm run test-junit call at the end of the command line. This way our testmo tool launches our tests itself, which enables it to capture the full console output, measure the test times and record the exit code.

# This is the command we call with the 'npm run test-ci' script alias
# Make sure to define: $BROWSER, $SELENIUM, $TESTMO_URL & $TESTMO_TOKEN
npx testmo automation:run:submit \
    --instance "$TESTMO_URL" \
    --project-id 1 \
    --name "Selenium test run for $BROWSER" \
    --config "$BROWSER" \
    --source "frontend" \
    --results results/*.xml \
    -- npm run test-junit # Note space after --

When we call the above command inside our local development container (e.g. with npm run test-ci), the Testmo CLI tool will start the Mocha test suite, capture its console output and test times, and will then submit all tests after the test suite finished. You can then view the test run and all its results in Testmo:

Results of an automated test run in our test management tool

To add our Testmo Selenium reporting to our GitHub Actions workflow, we basically just need to change our test command. Instead of calling npm run test, we change this step to call npm run test-ci to generate our result file and submit it with the above mentioned call to Testmo.

Note that we are now also passing the TESTMO_URL and TESTMO_TOKEN secrets to our command call, as the testmo tool expects these variables to be available. You would simply configure these secrets in the GitHub repository settings under the Secrets page with your Testmo address and your Testmo API key. You can also learn more about this in our GitHub Actions Test Automation CI Pipeline article (scroll down to Automation Reporting to Test Management).
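
For example, you can add both secrets from the terminal with the gh CLI (you will be prompted for the values), which is equivalent to adding them on the Secrets page:

# Add the Testmo address and API key as repository secrets
$ gh secret set TESTMO_URL
$ gh secret set TESTMO_TOKEN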

# .github/workflows/test-testmo.yml
name: Test (testmo)

on: [workflow_dispatch]

jobs:
  test:
    # [..]

    steps:
      # [..]
      - run: npm ci
      - run: npm run test-ci
        env:
          BROWSER: ${{ matrix.browser }}
          TESTMO_URL: ${{ secrets.TESTMO_URL }}
          TESTMO_TOKEN: ${{ secrets.TESTMO_TOKEN }}
      # [..]

When you start the new test-testmo.yml workflow in GitHub Actions, GitHub will again start multiple parallel test jobs (one for each browser) and then submit the test runs to Testmo. There will be three separate test runs (one for each browser configuration) with the test results. They will also be linked to the same source in Testmo, so it's easy to compare the results of a test over multiple runs.

Reporting automated test runs to Testmo

That's it! We've now successfully set up our repository workflows to run our Selenium tests, either against a single web browser or against multiple browsers, and then submit and report the results to our testing tool. You can also combine this approach with additional workflow steps for non-Selenium test suites (e.g. backend, API, mobile tests) and run all your different test suites in the same workflow. You can learn more about implementing non-browser test suites with GitHub Actions in our additional articles:

Also make sure to subscribe to notifications about our upcoming articles if you are interested in these kinds of topics. We will also publish more articles about integrating various Selenium cloud providers, other (browser) testing tools and more CI services.

PS: We regularly publish original software testing & QA research, including free guides, reports and news. To receive our next postings, you can subscribe to updates. You can also follow us on Twitter and LinkedIn.
