@@ -1,257 +1,48 @@
 ---
 agent: "agent"
-description: "Generate Google Test unit tests for C++ functions or methods in Graphitti"
----
-
-## What is a Unit Test?
-
-A **unit test** is a type of software testing that verifies the correctness of individual, isolated components (or "units") of code—typically functions, methods, or classes.
-
-### Key Characteristics
-
-- **Isolated**: Tests a single piece of functionality in isolation from the rest of the system
-- **Fast**: Should execute quickly (milliseconds)
-- **Repeatable**: Produces the same result every time
-- **Independent**: Doesn't depend on other tests or external systems (databases, networks, files)
-
-### Purpose
-
-1. **Catch bugs early** – Find issues before they propagate
-2. **Enable refactoring** – Safely change code knowing tests will catch regressions
-3. **Document behavior** – Tests serve as executable documentation of how code should work
-
----
-
-## What is Google Test (gtest)?
-
-**Google Test** is a popular open-source C++ testing framework developed by Google for writing and running unit tests.
-
-### Key Features
-
-- **Rich assertions** – Provides `EXPECT_*` (non-fatal) and `ASSERT_*` (fatal) macros
-- **Test discovery** – Automatically finds and runs tests
-- **Test fixtures** – Share setup/teardown code between tests using `TEST_F()`
-- **Death tests** – Verify code crashes or exits as expected
-- **Parameterized tests** – Run the same test with different inputs
-- **XML/JSON output** – Integrates with CI/CD systems
-
-### EXPECT vs ASSERT
-
-| Type | Behavior on Failure |
-| ---------- | ------------------------------------------------------ |
-| `EXPECT_*` | Records failure but continues test execution |
-| `ASSERT_*` | Records failure and stops the current test immediately |
-
-Use `EXPECT_*` when you want to see all failures in a test. Use `ASSERT_*` when continuing doesn't make sense (e.g., a null pointer would crash subsequent code).
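A minimal sketch of that distinction (illustrative only; assumes the file is compiled and linked against gtest and gtest_main):

```cpp
#include "gtest/gtest.h"

#include <vector>

// Sketch only. ASSERT_EQ is fatal: if the size check fails, the test stops
// before the element accesses below could misbehave. The EXPECT_EQ checks are
// non-fatal, so every mismatch among them is reported, not just the first.
TEST(AssertVsExpect, AssertGuardsLaterChecks)
{
   std::vector<int> values {1, 2, 3};

   ASSERT_EQ(values.size(), 3u);   // fatal: stop if later access is unsafe

   EXPECT_EQ(values.front(), 1);   // non-fatal: test continues on failure
   EXPECT_EQ(values.back(), 3);
}
```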
-
+description: "Generate concise Google Test unit tests for Graphitti (C++17)"
 ---
 
 ## Task
 
-Analyze the selected C++ function/method and generate or improve Google Test unit tests that thoroughly validate its behavior.
+Generate or update Google Test unit tests for the selected C++ function/method.
 
-## Project Context
+## Project context
 
-This is the Graphitti project - a neural network simulator. Tests are located in `Testing/UnitTesting/` and use the Google Test framework.
+- Graphitti is a C++17 neural simulation project.
+- Tests live in `Testing/UnitTesting/` and use Google Test.
 
----
-
-## Pre-Generation Check: Existing Test Analysis
-
-**Before generating new tests, perform these steps:**
-
-### Step 1: Check for Existing Test File
-
-Look for an existing test file at:
-
-- `Testing/UnitTesting/{ClassName}Tests.cpp`
-- `Testing/UnitTesting/{ClassName}Test.cpp`
-- `Testing/UnitTesting/Test{ClassName}.cpp`
-
-### Step 2: If Tests Exist, Analyze Them
-
-If a test file already exists, **analyze it before generating new tests**:
+## Before you write tests
 
-#### 2a. Error Analysis
+1. Check for existing tests:
+   - `Testing/UnitTesting/{ClassName}Tests.cpp`
+   - `Testing/UnitTesting/{ClassName}Test.cpp`
+   - `Testing/UnitTesting/Test{ClassName}.cpp`
+2. If tests exist, scan for errors and coverage gaps (missing methods, edge cases, error paths, branches).
+3. Only add what is missing; do not duplicate tests.
 
-- Check for syntax errors or incorrect Google Test macro usage
-- Identify deprecated assertions or patterns
-- Look for tests that may produce false positives/negatives
-- Verify proper use of `EXPECT_*` vs `ASSERT_*`
-- Check for memory leaks or improper resource handling in tests
-- Identify flaky test patterns (timing-dependent, order-dependent)
+## What to generate
 
-#### 2b. Coverage Gap Analysis
+- 5-8 focused tests covering:
+  - Core behavior
+  - Boundary/invalid inputs (nullptr, empty containers, min/max)
+  - Error handling (exceptions or error returns)
+  - Resource ownership or side effects if relevant
+- Prefer behavior checks over implementation details.
+- Use realistic Graphitti data where possible.
 
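A sketch of those categories in practice, where `meanFiringRate` is a hypothetical stand-in for the real Graphitti function under test (gtest linkage assumed):

```cpp
#include "gtest/gtest.h"

#include <stdexcept>
#include <vector>

// `meanFiringRate` is a hypothetical helper, defined inline only so the test
// categories above have a concrete target; substitute the real function.
static double meanFiringRate(const std::vector<double> &rates)
{
   if (rates.empty())
      throw std::invalid_argument("rates must not be empty");
   double sum = 0.0;
   for (double r : rates)
      sum += r;
   return sum / static_cast<double>(rates.size());
}

// Core behavior with typical input
TEST(MeanFiringRate, TypicalInput_ReturnsAverage)
{
   EXPECT_DOUBLE_EQ(meanFiringRate({10.0, 20.0}), 15.0);
}

// Boundary input and error handling in one case
TEST(MeanFiringRate, EmptyInput_ThrowsInvalidArgument)
{
   EXPECT_THROW(meanFiringRate({}), std::invalid_argument);
}
```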
-Compare the existing tests against the source code to identify:
+## Test structure
 
-- **Untested public methods** – Methods with no corresponding test cases
-- **Missing edge cases** – Boundary conditions not covered
-- **Missing error paths** – Exception handling or error returns not tested
-- **Missing input variations** – Only happy path tested, no invalid inputs
-- **Untested branches** – Conditional logic not fully exercised
+- Use `TEST()` or `TEST_F()` and AAA (Arrange, Act, Assert).
+- Use `EXPECT_*` for non-fatal checks and `ASSERT_*` when continuation is unsafe.
+- Use clear names: `TEST(ClassName, Method_Condition_Expected)`.
 
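Putting those structure rules together in one sketch, with a plain `std::vector` standing in for a hypothetical Graphitti class (gtest linkage assumed):

```cpp
#include "gtest/gtest.h"

#include <vector>

// Fixture shares the Arrange step across tests; the vector is a stand-in for
// the real class under test.
class SpikeBufferTest : public ::testing::Test {
protected:
   void SetUp() override
   {
      buffer_.reserve(8);   // runs before every TEST_F in this fixture
   }

   std::vector<int> buffer_;
};

// Name follows Method_Condition_Expected.
TEST_F(SpikeBufferTest, PushBack_SingleElement_SizeIsOne)
{
   // Arrange: fixture supplies an empty buffer

   // Act
   buffer_.push_back(42);

   // Assert
   ASSERT_EQ(buffer_.size(), 1u);   // fatal: front() below needs an element
   EXPECT_EQ(buffer_.front(), 42);
}
```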
-#### 2c. Report Findings
+## Output
 
-Provide a summary:
+- Place tests in `Testing/UnitTesting/{ClassName}Tests.cpp`.
+- If updating existing tests, mark new additions with a short comment.
 
-```
-## Existing Test Analysis: {ClassName}Tests.cpp
-
-### Errors Found
-- [List any errors or issues]
-
-### Coverage Gaps
-| Method/Function | Status | Missing Coverage |
-|-----------------|--------|------------------|
-| methodName() | Partial | Missing nullptr test, boundary cases |
-| otherMethod() | None | No tests exist |
-
-### Recommendations
-1. [Specific improvements needed]
-```
-
-### Step 3: Generate or Update Tests
-
-Based on the analysis:
-
-- **No existing tests**: Generate complete test suite
-- **Tests exist with errors**: Fix errors and add missing coverage
-- **Tests exist with gaps**: Generate only the missing test cases, clearly marked as additions
-
----
-
-## Test Generation Strategy
-
-1. **Core Functionality Tests**
-   - Test the main purpose/expected behavior
-   - Verify return values with typical inputs
-   - Test with realistic simulation data scenarios
-
-2. **Input Validation Tests**
-   - Test with nullptr/null pointers
-   - Test with empty containers (vectors, maps)
-   - Test boundary values (min/max array indices, zero, negative numbers)
-   - Test invalid enum values or out-of-range parameters
-
-3. **Error Handling Tests**
-   - Test expected exceptions are thrown (use `EXPECT_THROW`, `ASSERT_THROW`)
-   - Verify error states are handled gracefully
-   - Test edge cases specific to simulation parameters
-
-4. **Resource Management Tests** (if applicable)
-   - Verify proper memory allocation/deallocation
-   - Test RAII patterns work correctly
-   - Validate GPU/device memory handling for CUDA code
-
-5. **Side Effects Tests** (if applicable)
-   - Verify external calls are made correctly
-   - Test state changes in simulation objects
-   - Validate interactions with factory classes and managers
-
-## Test Structure Requirements
-
-- Use Google Test framework (`gtest/gtest.h`)
-- Use `TEST()` for standalone tests, `TEST_F()` for fixture-based tests
-- Follow AAA pattern: Arrange, Act, Assert
-- Use descriptive test names: `TEST(ClassName, MethodName_Condition_ExpectedResult)`
-- Group related tests in test fixtures when sharing setup/teardown
-- Use `EXPECT_*` for non-fatal assertions, `ASSERT_*` for fatal assertions
-
-## Google Test Assertion Reference
-
-```cpp
-// Equality
-EXPECT_EQ(expected, actual);
-EXPECT_NE(val1, val2);
-
-// Boolean
-EXPECT_TRUE(condition);
-EXPECT_FALSE(condition);
-
-// Comparisons
-EXPECT_LT(val1, val2);   // less than
-EXPECT_LE(val1, val2);   // less than or equal
-EXPECT_GT(val1, val2);   // greater than
-EXPECT_GE(val1, val2);   // greater than or equal
-
-// Floating point (handles precision issues)
-EXPECT_FLOAT_EQ(expected, actual);
-EXPECT_DOUBLE_EQ(expected, actual);
-EXPECT_NEAR(val1, val2, abs_error);
-
-// Strings
-EXPECT_STREQ(expected, actual);   // C-strings equal
-EXPECT_STRNE(str1, str2);         // C-strings not equal
-
-// Exceptions
-EXPECT_THROW(statement, exception_type);
-EXPECT_NO_THROW(statement);
-EXPECT_ANY_THROW(statement);
-
-// Pointers
-EXPECT_EQ(nullptr, ptr);
-EXPECT_NE(nullptr, ptr);
-```
-
-## Test File Template
-
-```cpp
-// {ClassName}Tests.cpp
-
-#include "gtest/gtest.h"
-#include "{HeaderFile}.h"
-// Include other necessary headers
-
-// Test fixture for tests requiring shared setup
-class {ClassName}Fixture : public ::testing::Test {
-protected:
-   void SetUp() override {
-      // Initialize test objects before each test
-   }
-
-   void TearDown() override {
-      // Clean up resources after each test
-   }
-
-   // Shared test data members
-};
-
-// Standalone test example (no fixture needed)
-TEST({ClassName}, MethodName_ValidInput_ReturnsExpected) {
-   // Arrange - set up test data
-
-   // Act - call the function under test
-
-   // Assert - verify the results
-}
-
-// Fixture-based test example (uses fixture for shared setup)
-TEST_F({ClassName}Fixture, MethodName_EdgeCase_HandlesCorrectly) {
-   // Arrange - fixture provides shared setup
-
-   // Act
-
-   // Assert
-}
-```
-
-## Input Parameters
+## Inputs
 
 Target function: ${input:function_name:Which C++ function or method should be tested?}
 Target class: ${input:class_name:Which class does this function belong to? (or 'standalone' for free functions)}
-
-## Guidelines
-
-- **Always check for existing tests first** before generating new ones
-- Generate 5-8 focused test cases covering the most important scenarios
-- When updating existing tests, clearly mark new additions with comments
-- Include realistic test data matching Graphitti simulation parameters
-- Add comments explaining complex test setup or non-obvious assertions
-- Ensure tests are independent and can run in any order
-- Focus on testing behavior, not implementation details
-- Consider GPU/CUDA implications if testing device code
-- Reference existing test patterns in the UnitTesting folder
-- Place output files in `Testing/UnitTesting/{ClassName}Tests.cpp`
-
-Create tests that give confidence the function works correctly and help catch regressions in the neural simulation.