
Commit 618fde4

qinsoon and wks authored
Document the policy about performance testing environment and epochs (#1206)
Co-authored-by: Kunshan Wang <[email protected]>
1 parent f032697 commit 618fde4

File tree: 1 file changed, +17 −1 lines changed

docs/team/ci.md (+17 −1)
@@ -1,6 +1,6 @@
 # Continuous Integration
 
-## Testing
+## Correctness Testing
 
 MMTk core runs CI tests *before* a pull request is merged.
 
@@ -11,3 +11,19 @@ MMTk core sets up two sets of tests, the *minimal tests* and the *extended tests
 * Extended tests only run for a pull request if the pull request is tagged with the label `PR-extended-testing`. This set of tests
 may take hours, and usually includes integration tests with bindings which run the language implementation's standard test suite
 as much as possible.
+
+## Performance Testing
+
+We conduct performance testing for each MMTk core commit after it has been merged.
+
+### Testing Environment and Epochs
+
+We track the performance of MMTk over years. Naturally, changes in the testing environment (hardware or software) and methodology are sometimes necessary. Each time we make such a change, it marks the start of a new *epoch* in our performance evaluation.
+
+Since changes in the testing environment can significantly impact performance, we do not directly compare performance results across different epochs. Within an epoch, we ensure that **MMTk does not experience performance regressions**, and **we only update the testing environment when there is no performance regression in the current epoch**.
+
+### Regression Test Canary
+
+To monitor unnoticed performance changes and to measure the level of noise in the testing environment, we use a canary when doing performance regression tests with the OpenJDK binding. A "canary" is a chosen revision that is run along with every merged pull request. Since the same revision is run again and again, its performance should be relatively constant, within the range of noise. If we notice a change in the performance of the canary (especially something that resembles a [step function](https://en.wikipedia.org/wiki/Heaviside_step_function) in the line plot), we should inspect our testing environment for hardware or software changes.
+
+We keep running the same canary version until it is no longer possible, for example because the toolchain for compiling that version is no longer available. When that happens, we may choose a different canary version or switch to an automatic mechanism for choosing the canary.
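The canary idea above can be sketched as a simple step detector over the canary's run-time history. This is an illustrative sketch only — the function name, window size, and relative threshold are assumptions for the example, not part of MMTk's actual performance-testing tooling.

```python
# Hypothetical sketch of canary step detection (not MMTk's real tooling).
# Given a history of canary run times (one per merged pull request), flag the
# point where the mean run time shifts by more than a relative threshold
# between two adjacent windows -- the "step function" shape that suggests an
# environment change rather than ordinary noise.

def detect_step(times, window=5, threshold=0.05):
    """Return the index where a step change starts, or None if the
    canary's performance stays within the noise threshold."""
    for i in range(window, len(times) - window + 1):
        before = sum(times[i - window:i]) / window
        after = sum(times[i:i + window]) / window
        if abs(after - before) / before > threshold:
            return i
    return None

# A stable canary: only noise, no step.
stable = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0, 9.9, 10.1, 10.0, 10.0]
print(detect_step(stable))   # None

# An environment change makes the canary ~10% slower from run 5 onward.
shifted = [10.0, 10.1, 9.9, 10.0, 10.1, 11.0, 11.1, 10.9, 11.0, 11.1]
print(detect_step(shifted))  # 5
```

In practice the threshold would be calibrated against the measured noise level of the environment, which is exactly what running the same canary revision repeatedly provides.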
