# Test pyramid in Datadog iOS SDK project
iOS SDK tests follow the standard test pyramid model, distinguishing several product areas and adopting a different testing approach for each:
- small (S) and medium (M) size components are tested as units, maximising their test coverage;
- tests for large (L) components (e.g. the Logger or RUMMonitor interface) are less focused on heroic code coverage and more on asserting all of their behaviours;
- at the highest level, we define a dozen tests that integrate the largest components together (e.g. Logging with RUM); these focus on performing an action in one and expecting a result in the other.
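To make the behaviour-focused style concrete, here is a minimal sketch of a test for a "large" component that injects a mock and asserts on observable behaviour rather than internals. All names (`LogOutput`, `LogOutputMock`, `Logger`) are illustrative stand-ins, not the SDK's real API:

```swift
import Foundation

// Hypothetical output abstraction - a stand-in for whatever the real
// SDK writes logs to. Injecting it lets the test observe behaviour
// without touching networking or storage.
protocol LogOutput {
    func write(_ message: String)
}

// Mock that records everything written to it.
final class LogOutputMock: LogOutput {
    private(set) var recorded: [String] = []
    func write(_ message: String) { recorded.append(message) }
}

// Illustrative "large" component under test.
struct Logger {
    let output: LogOutput
    func info(_ message: String) { output.write("[INFO] \(message)") }
}

// Behaviour assertion: performing the action produces the expected output.
let mock = LogOutputMock()
let logger = Logger(output: mock)
logger.info("app started")
assert(mock.recorded == ["[INFO] app started"])
```

In a real test bundle the same shape would live in an `XCTestCase` method; the point is that the assertion targets the component's behaviour (what gets written), not its code paths.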
All of these are part of the "unit test" bundle, where mocks and other convenience techniques are heavily leveraged. On top of that, we define two "integration test" bundles that assert business-critical behaviours without altering any implementation (no mocks):
- **SDK integration tests with local server** - critical behaviours of all products (Logging, Tracing, RUM, WebView Tracking, …) are covered with predefined scenarios played with the Xcode UI Tests runner:
  - each curated scenario covers one or multiple behaviours of one or more products;
  - in each scenario, the SDK transmits data to a local server;
  - after each scenario, the data is retrieved back from the server and assertions check its consistency;
- **E2E tests with real Datadog instance** - (work in progress, so not all products are covered yet) all public APIs of the SDK are covered with individual tests:
  - each test executes one public API from one product;
  - each test sends data to a real Datadog instance;
  - Datadog Monitors are used to assert data consistency for the last X hours.
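The "play a scenario, then retrieve the data back and assert" flow of the local-server tests can be sketched as follows. The in-memory `LocalServerMock` stands in for the real local HTTP server the UI-test runner talks to; the type name and payload are illustrative only:

```swift
import Foundation

// In-memory stand-in for the local server: it records every request
// body the SDK-under-test would upload during a scenario.
final class LocalServerMock {
    private(set) var recordedRequests: [Data] = []
    func receive(_ body: Data) { recordedRequests.append(body) }
}

// 1. Play a scenario - here simulated by "uploading" one log payload.
let server = LocalServerMock()
server.receive(Data(#"{"status":"info","message":"scenario started"}"#.utf8))

// 2. After the scenario, retrieve the data back from the server and
//    assert its consistency.
let payloads = server.recordedRequests.map { String(decoding: $0, as: UTF8.self) }
assert(payloads.count == 1)
assert(payloads[0].contains(#""message":"scenario started""#))
```

The real tests run the SDK unmodified (no mocks inside the SDK itself); only the upload destination is redirected to the local server, which is what makes the recorded requests trustworthy evidence of production behaviour.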
The "components (S/M/L)" and "SDK integration" tests form a pyramid, which balances the effort of adding new coverage (slow vs. fast) against its practical benefit (code coverage vs. functional coverage):