Product News

Announcing the New Runs Results API in Testmo

By Simon Knight · Feb 26, 2025 · 5 min read

We’re excited to announce the release of a new runs/results API namespace in Testmo. 

The new endpoint lets you extract and filter individual test results from manual test runs directly via the Testmo API. This addition complements and expands upon our existing APIs for retrieving high-level statistics from test runs, sessions and automation runs.

What’s New?

The runs/results API endpoint provides a new method to retrieve detailed test results for a specific test run. Testmo API users can now utilise the endpoint GET /runs/{run_id}/results to return all test results for a specified run ID.
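
For example, assuming your Testmo instance lives at a URL like https://your-company.testmo.net (a placeholder here) and you’ve generated an API token, a minimal request could look like this:

curl -H "Authorization: Bearer $TESTMO_API_TOKEN" \
  "https://your-company.testmo.net/api/v1/runs/1/results"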

Key features of the new endpoint include:

  • Filtering Options:
    • Use created_after and created_before to limit results to a specific time frame.
    • Use created_by to filter by users who created the test results.
    • Use status_id to filter by status such as Passed, Failed, or Blocked.
  • Pagination: The API uses pagination to manage large result sets. You can retrieve results in chunks using the page and per_page parameters.
  • Expands: Return additional information with your results using expands, such as the users or issues associated with the run results: expands=users,issues
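
These options can be combined in a single request. For instance, the following (illustrative) URL filters by status, limits results to a time window, paginates, and expands users and issues; note that the numeric status_id value for a status such as Failed depends on your Testmo configuration:

/api/v1/runs/1/results?status_id=2&created_after=2025-02-01T00:00:00Z&page=1&per_page=50&expands=users,issues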

How Does It Complement Existing APIs?

Testmo’s REST API is designed to allow you to access your Testmo data programmatically, and we plan to continue expanding available endpoints to provide additional functionality and support even more use cases around automation, reporting, and integrations. 

The new runs/results API complements the existing runs namespace by returning detailed information about each test in a specific run. If all you need is a high-level report of the number of untested/passed/failed/skipped tests in a given run, the existing runs namespace will most likely provide the information you are looking for. However, if you need more detailed information about the manual test results your team has logged — for example, the IDs of any issues they might have reported and linked during testing, the detailed notes for any tests marked as ‘Failed’, or information about who logged the result — the new runs/results endpoint is your go-to! 
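
In other words, the two namespaces answer different questions. Using endpoints that appear later in this post:

// High-level: list the runs in a project, with aggregate per-run statistics
GET /api/v1/projects/3/runs

// Detailed: every individual result logged within a single run
GET /api/v1/runs/1/results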

Example Usage

Here are a few examples to get you started:

  • To get all the results for run ID 1, use the new endpoint and pass in the run_id: /api/v1/runs/1/results  
  • Note that if there are more than 100 results, you’ll need to paginate through them by specifying the desired page number: /api/v1/runs/1/results?page=2, for example.
  • To limit results to page 1, with 25 results per page, and include issues and users expands, use: /api/v1/runs/1/results?page=1&per_page=25&expands=issues,users
  • If you’re interested in results for a specific user and day, you can apply a filter: /api/v1/runs/12/results?created_after=2025-02-17T00:00:00Z&created_by=1&expands=issues,users
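
For reference, each entry in the returned result array contains fields like those used in the export script later in this post. The sketch below is illustrative only (the values are made up, and the exact shape may differ), so check a live response for the full structure:

{
  "result": [
    {
      "id": 101,
      "project_id": 3,
      "run_id": 1,
      "test_id": 42,
      "status_id": 5,
      "note": "Login button misaligned on mobile",
      "elapsed": 120,
      "created_by": 1,
      "created_at": "2025-02-17T09:30:00Z"
    }
  ],
  "next_page": 2
}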

Note that the runs/results endpoint returns actual test results that your team has logged for tests within runs in the Management area of your Testmo instance. It will not return any information about tests where no results have been logged yet (e.g. tests with status “untested”), or details for results your team may have logged in sessions or automation runs (more to come on those objects in the future!).

Example: Exporting Test Run Results to a CSV

One of the reasons many of our Testmo users have requested this endpoint is so that results can be extracted and transformed into a report. The script below demonstrates how this can be accomplished, and how the results can be transformed to CSV format for easier consumption, either by opening the file directly or by importing it into a database or business intelligence tool:

// Import required modules
const axios = require('axios');
const { Parser } = require('json2csv');
const fs = require('fs');
require('dotenv').config(); // Load TESTMO_BASE_URL and TESTMO_API_TOKEN from a .env file

// Function to fetch data from Testmo API
async function getTestmoData(endpoint, params = {}) {
    // Replace with your Testmo URL and API token
    const baseUrl = process.env.TESTMO_BASE_URL;
    const apiToken = process.env.TESTMO_API_TOKEN;
    const headers = { "Authorization": `Bearer ${apiToken}` };

    try {
        console.log(`Fetching data from ${baseUrl}${endpoint} with params:`, params);
        const response = await axios.get(baseUrl + endpoint, { headers, params });
        return response.data.result; // Extract the 'result' array
    } catch (error) {
        console.error("Error fetching data:", error);
        throw error;
    }
}

// Function to write data to CSV
function writeToCsv(data, filename, fields) {
    try {
        const json2csv = new Parser({ fields });
        const csv = json2csv.parse(data);
        fs.writeFileSync(filename, csv);
        console.log(`CSV saved to ${filename}`);
    } catch (error) {
        console.error("Error writing CSV:", error);
    }
}

// Main function to fetch run results and export to CSV
async function main() {
    const runId = 1; // Replace with your actual Run ID
    const runResultsEndpoint = `/api/v1/runs/${runId}/results`;
    const runResultsFields = [
        'id', 
        'project_id', 
        'run_id', 
        'test_id', 
        'status_id', 
        'note', 
        'elapsed', 
        'created_at'
    ]; // Adjust based on actual data structure

    try {
        const runResultsData = await getTestmoData(runResultsEndpoint); // Note: this fetches the first page only; see the pagination loop in the next example
        console.log("Run Results data:", runResultsData);
        
        if (runResultsData) {
            writeToCsv(runResultsData, 'run_results.csv', runResultsFields);
        }
    } catch (error) {
        console.error("Failed to retrieve or write run results:", error);
    }

    // Add similar code for Sessions and Automated Test Results
}

main();

To use the script above as-is, you’ll need to carry out the following steps:

  • Make sure you have Node.js and NPM installed. You can verify this by running node -v and npm -v in your terminal. 
  • Create a project, e.g:
    • mkdir testmo-api-csv
    • cd testmo-api-csv
    • npm init -y
  • You’ll need axios for making HTTP requests and json2csv for converting JSON data to CSV. You’ll also need the dotenv package to load environment variables from a .env file:
    • npm install axios json2csv dotenv
  • Once you’ve satisfied those requirements, add a .env file to your project containing the TESTMO_BASE_URL and TESTMO_API_TOKEN values (see the sample below), and execute the script using node your_filename.js.
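
A minimal .env file for the script might look like this (the hostname is a placeholder for your own Testmo instance URL, without a trailing slash, since the script concatenates it with the /api/v1/... endpoint paths):

# .env file (never commit this to version control)
TESTMO_BASE_URL=https://your-company.testmo.net
TESTMO_API_TOKEN=your-api-token-here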

Example: Retrieving Results from All Open Test Runs in a Milestone

Following on from the example above, you may wish to see the results for all of your runs under a project milestone. The script below will handle that for you, depositing the results into a CSV in the same way as before, with the same dependencies:

// Import required modules
const axios = require('axios');
const { Parser } = require('json2csv');
const fs = require('fs');
require('dotenv').config();

// Function to fetch data from Testmo API
async function getTestmoData(endpoint, params = {}) {
    const baseUrl = process.env.TESTMO_BASE_URL;
    const apiToken = process.env.TESTMO_API_TOKEN;
    const headers = { "Authorization": `Bearer ${apiToken}` };

    try {
        const response = await axios.get(baseUrl + endpoint, { headers, params });
        return response.data;
    } catch (error) {
        console.error("Error fetching data:", error);
        throw error;
    }
}

// Function to write data to CSV
function writeToCsv(data, filename, fields) {
    try {
        const json2csv = new Parser({ fields });
        const csv = json2csv.parse(data);
        fs.writeFileSync(filename, csv);
        console.log(`CSV saved to ${filename}`);
    } catch (error) {
        console.error("Error writing CSV:", error);
    }
}

// Function to fetch all results for a test run
async function getAllResultsForRun(runId) {
    let allResults = [];
    let page = 1;
    const perPage = 100; // Adjust as needed

    while (true) {
        const resultsEndpoint = `/api/v1/runs/${runId}/results`;
        const resultsParams = { per_page: perPage, page: page };

        try {
            const resultsData = await getTestmoData(resultsEndpoint, resultsParams);

            // If there are no results, stop
            if (!resultsData.result || resultsData.result.length === 0) {
                break;
            }

            allResults = allResults.concat(resultsData.result);

            // Check if there's a next page
            if (resultsData.next_page) {
                page++;
            } else {
                break;
            }

        } catch (error) {
            console.error(`Error fetching results for run ${runId} page ${page}:`, error);
            break;
        }
    }

    return allResults;
}

// Main function to get runs and export data
async function main() {
    const projectId = 3; // Replace with your project ID
    const milestoneId = 9; // Replace with your milestone ID
    const runsEndpoint = `/api/v1/projects/${projectId}/runs`;
    const runsParams = { milestone_id: milestoneId };

    try {
        // 1. Get runs with milestone_id
        const runsData = await getTestmoData(runsEndpoint, runsParams);
        const runIds = runsData.result.map(run => run.id); // Note: if your project has many runs, this endpoint may paginate too

        console.log("Run IDs with milestone:", runIds);

        // 2. Generate and process results for each run ID
        let allResults = [];
        for (const runId of runIds) {
            const runResults = await getAllResultsForRun(runId);
            allResults = allResults.concat(runResults);
        }

        // 3. Convert and save results to CSV
        if (allResults.length > 0) {
            const resultsFields = ['id', 'project_id', 'run_id', 'test_id', 'status_id']; // Adjust fields as needed
            writeToCsv(allResults, 'all_run_results.csv', resultsFields);
        }

    } catch (error) {
        console.error("An error occurred:", error);
    }
}

main();

Run the script using node your_filename.js, as before. 

Please note that we created the code snippets above as simple examples, just to get you started. Testmo’s API also supports more rigorous integrations with third-party analytics, business intelligence and custom dashboard reporting tools, as we’ve mentioned in a previous blog post.

Uploading test results for manual test runs

In Testmo, manual runs are used to track the scripted or steps-based tests that your team is running manually, and you can add results for those kinds of tests via the Runs page in the Testmo UI.

If you want to log results from automated tests, the easiest way to do so is using the Testmo CLI, which simplifies submitting test automation results to Testmo and integrating with your existing test automation framework(s) and/or CI/CD pipeline. You can also link automated test results to cases in your test repository if you want to, which can give you additional visibility into those results.
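
As a quick illustration, submitting automation results with the Testmo CLI typically looks something like the sketch below; the project ID, run name, source and results path are placeholders, so check the Testmo CLI documentation for the exact options that apply to your setup:

# Install the CLI (requires Node.js)
npm install --save-dev @testmo/testmo-cli

# Submit JUnit-style XML results as a new automation run
# (the CLI reads your API token from the TESTMO_TOKEN environment variable)
npx testmo automation:run:submit \
  --instance "$TESTMO_BASE_URL" \
  --project-id 1 \
  --name "Nightly regression" \
  --source "unit-tests" \
  --results results/*.xml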

We want your feedback!

The ‘runs/results’ endpoint is the first in a series of planned API releases for Testmo in the coming year. To ensure this and future endpoints hit the target, we welcome your feedback. If you have comments or suggestions about this endpoint or planned future ones (mainly covering CRUD use cases for the Testmo test case repository), please let us know via the linked form: Provide Your Feedback Here!
