package net.equipment.runners;
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import net.masterthought.cucumber.Configuration;
import net.masterthought.cucumber.ReportBuilder;
import org.apache.commons.io.FileUtils;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
public class _EquipmentRunnerDomainV1 {

    private static final String serviceName = "EquipmentService";
    // No trailing slash here; the directory fields below append their own
    // (the original "target/" + serviceName + "/" produced doubled slashes
    // like "target/EquipmentService//domain/v1")
    private static final String reportRoot = "target/" + serviceName;
    private static final Integer cores = Runtime.getRuntime().availableProcessors();

    public String _path = "classpath:net/equipment/test/api/domain/v1";
    public String[] _tags = { "@regression", "@hopper" };
    public String _karateEnv = "dev";
    public String _buildDir = reportRoot + "/";
    public String _reportDir = reportRoot + "/domain/v1";

    public _EquipmentRunnerDomainV1() {
    }

    public _EquipmentRunnerDomainV1 path(String _path) {
        this._path = _path;
        return this;
    }

    public _EquipmentRunnerDomainV1 tags(String[] _tags) {
        this._tags = _tags;
        return this;
    }

    public _EquipmentRunnerDomainV1 karateEnv(String _karateEnv) {
        this._karateEnv = _karateEnv;
        return this;
    }

    public _EquipmentRunnerDomainV1 buildDir(String _buildDir) {
        this._buildDir = _buildDir;
        return this;
    }

    public _EquipmentRunnerDomainV1 reportDir(String _reportDir) {
        this._reportDir = _reportDir;
        return this;
    }

    // Allows concatenated execution of runners
    @Test
    public void concatRunner() {
        Results results = Runner
                .path(_path)
                .tags(_tags)
                .karateEnv(_karateEnv)
                .outputCucumberJson(true)
                .outputHtmlReport(true)
                .outputJunitXml(true)
                .buildDir(_buildDir)
                .reportDir(_reportDir)
                .systemProperty("karate.dsRow", "1")
                .parallel(cores);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

    // Allows manual execution of the runner, with results
    @Test
    public void testParallel() {
        Results results = Runner
                .path("classpath:net/equipment/test/api/domain/v1")
                .tags("@regression", "@hopper")
                .karateEnv(_karateEnv)
                .outputCucumberJson(true)
                .outputHtmlReport(true)
                .outputJunitXml(true)
                .reportDir(_reportDir)
                .systemProperty("karate.dsRow", "1")
                .parallel(cores);
        generateReport(results.getReportDir());
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }

    // Recursively collects cucumber JSON files under the karate output dir
    // and builds a single consolidated cucumber report
    public static void generateReport(String karateOutputPath) {
        Collection<File> jsonFiles = FileUtils.listFiles(new File(karateOutputPath), new String[] { "json" }, true);
        List<String> jsonPaths = new ArrayList<>(jsonFiles.size());
        jsonFiles.forEach(file -> jsonPaths.add(file.getAbsolutePath()));
        Configuration config = new Configuration(new File("target"), serviceName);
        ReportBuilder reportBuilder = new ReportBuilder(jsonPaths, config);
        reportBuilder.generateReports();
    }
}
This works great for executing multiple runners with a single test or mvn command, and it consolidates all the tests into a single cucumber report; however, the karate reports remain tied to each individual runner. Before we start manipulating the source code or building a custom reporting methodology: would it be possible to separate the report-generation function(s) for the karate reports from the actual test runs, similar to the cucumber reports, or to provide a method that consolidates all the summary reports from subdirectories into a parent directory?
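As a starting point for decoupling report generation from the test runs, here is a minimal sketch of a standalone consolidation step. It only gathers the cucumber JSON files from all runner subdirectories under one parent; the `ReportConsolidator` class name and the `target/EquipmentService` default are assumptions for illustration, and the collected paths would be handed to the same masterthought `ReportBuilder` already used in `generateReport` above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical standalone step: consolidate per-runner report output found
// under a parent directory, independent of when the tests actually ran.
public class ReportConsolidator {

    // Recursively gather *.json report files under the given parent directory.
    static List<String> collectJsonReports(Path parent) throws IOException {
        try (Stream<Path> walk = Files.walk(parent)) {
            return walk
                    .filter(Files::isRegularFile)
                    .filter(p -> p.toString().endsWith(".json"))
                    .map(p -> p.toAbsolutePath().toString())
                    .sorted()
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // e.g. "target/EquipmentService" -- the parent holding all runner subdirs
        Path parent = Paths.get(args.length > 0 ? args[0] : "target/EquipmentService");
        if (Files.exists(parent)) {
            // These paths would then be fed to the masterthought ReportBuilder,
            // e.g.: new ReportBuilder(jsonPaths, new Configuration(new File("target"),
            // "EquipmentService")).generateReports();
            collectJsonReports(parent).forEach(System.out::println);
        }
    }
}
```

Because this runs after the fact against whatever is on disk, it can be invoked from a separate test, a `main`, or a maven exec goal, without touching the runners themselves.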
I agree that having a utility function to aggregate reports would be good to have. One challenge: if tests started at different times, how do we calculate the total time across all threads, and how do we visualize which tests ran on which thread?
This requires refactoring of the Suite class. With luck, the functions within the Results class may be sufficient, but this needs investigation.
Capturing the time per test at the start and completion of execution, plus filters to capture the smallest and largest times, are options that come to mind as a start.
Regards
Mike
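On the timing question above, the arithmetic for runs that start at different times can be sketched independently of Karate: wall-clock time for the whole suite is latest end minus earliest start, while summing per-run durations gives the total busy time across threads. The class and method names below are hypothetical, not part of any Karate API.

```java
import java.util.List;

// Sketch of aggregate timing across runs that started at different times.
public class SuiteTiming {

    // One run's start/end timestamps in epoch millis.
    static final class Run {
        final long startMillis;
        final long endMillis;

        Run(long startMillis, long endMillis) {
            this.startMillis = startMillis;
            this.endMillis = endMillis;
        }
    }

    // Wall-clock span covering all runs: earliest start to latest end.
    static long wallClockMillis(List<Run> runs) {
        long earliest = runs.stream().mapToLong(r -> r.startMillis).min().orElse(0);
        long latest = runs.stream().mapToLong(r -> r.endMillis).max().orElse(0);
        return latest - earliest;
    }

    // Sum of individual run durations: total busy time across all threads.
    static long totalThreadMillis(List<Run> runs) {
        return runs.stream().mapToLong(r -> r.endMillis - r.startMillis).sum();
    }

    public static void main(String[] args) {
        // Two overlapping runs: 0-100s and 50-200s
        List<Run> runs = List.of(new Run(0, 100_000), new Run(50_000, 200_000));
        System.out.println("wall clock ms:   " + wallClockMillis(runs));   // 200000
        System.out.println("thread total ms: " + totalThreadMillis(runs)); // 250000
    }
}
```

Note the two numbers diverge as soon as runs overlap, which is exactly why an aggregated report would need to record both rather than summing naively.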
See https://stackoverflow.com/posts/76416148