The purpose of this analysis is to assess how well the new marketing campaign performs compared to the existing one. By comparing the conversion rates (the percentage of visitors who make a purchase after clicking through to the website) between the Test Group (new campaign) and the Control Group (existing campaign), we can determine whether the new campaign is more or less effective.
- Conversion Rate = (Number of Purchases / Total Website Clicks) × 100. This tells us how successful each campaign is at turning clicks into purchases.
- A higher Z-score means a bigger difference between the campaigns relative to natural sampling variation.
- A low Z-score means the difference might be small or due to chance.
- A p-value below 0.05 means the difference is statistically significant and unlikely to be due to chance.
- A p-value above 0.05 means the difference could be random, so we cannot draw a firm conclusion (a minimal sketch of this decision rule follows below).
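As a quick illustration, here is a minimal Python sketch of the 0.05 decision rule described above; the p-value passed in is a placeholder, not a result from this analysis.

```python
ALPHA = 0.05  # conventional significance threshold used throughout this analysis

def interpret(p_value: float) -> str:
    """Apply the 0.05 threshold from the bullets above."""
    if p_value < ALPHA:
        return "significant: the difference is unlikely to be due to chance"
    return "not significant: the difference could be random"

print(interpret(0.03))  # placeholder p-value -> significant
```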
- Null Hypothesis (H₀): The "New Campaign" strategy does not have a higher conversion rate than the "Existing Campaign" strategy.
- Alternative Hypothesis (H₁): The "New Campaign" strategy has a higher conversion rate than the "Existing Campaign" strategy.
The dataset consists of two groups:
- Control Group: Users exposed to the existing campaign.
- Test Group: Users exposed to the new campaign.
Columns in the Dataset:
Campaign Name | Date | Spend (USD) | Impressions | Reach | Website Clicks | Searches | View Content | Add to Cart | Purchase
Steps Taken:
- Checked for missing values and handled them appropriately.
- Converted the date column to a consistent date format where necessary.
- Ensured numerical consistency in monetary values and conversion rates (these steps are sketched below).
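Here is a minimal pandas sketch of these preparation steps. It assumes the two groups live in separate CSV files; the file names and exact column labels are assumptions based on the schema listed above.

```python
import pandas as pd

# Hypothetical file names; adjust to the actual dataset.
control = pd.read_csv("control_group.csv")
test = pd.read_csv("test_group.csv")

for df in (control, test):
    # Check for missing values, then drop the affected rows
    # (imputation is an alternative if too many rows would be lost).
    print(df.isna().sum())
    df.dropna(inplace=True)

    # Convert the date column to a proper datetime type.
    df["Date"] = pd.to_datetime(df["Date"], errors="coerce")

    # Coerce monetary and count columns to numeric, flagging bad entries as NaN.
    for col in ("Spend (USD)", "Website Clicks", "Purchase"):
        df[col] = pd.to_numeric(df[col], errors="coerce")
```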
General Campaign Performance (Control Campaign)
- Ad Spend: The average ad spend per campaign is $2,288, with campaigns reaching an average of 88,844 people.
- Website Clicks: On average, 5,321 people click on the ads, but engagement gradually drops through the conversion funnel.
- Purchases: The final conversion (purchase) is significantly lower, averaging 523 purchases per campaign.
- Key Issue: There is a large drop-off between website visits and actual purchases, indicating potential conversion inefficiencies.
General Campaign Performance (Test Campaign)
- Ad Spend: The average ad spend per campaign is $2,563, with campaigns reaching an average of 53,491 people.
- Website Clicks: On average, 6,032 people click on the ads, but this gradually drops through the conversion funnel.
- Purchases: The final conversion (purchase) is significantly lower, averaging 521 purchases per campaign.
- Key Issue: There is a large drop-off between website visits and actual purchases, indicating potential conversion inefficiencies.
We compare how well each campaign gets people to make a purchase.
- Control Group Conversion Rate = (Purchases in Control / Website Clicks in Control) × 100
- Test Group Conversion Rate = (Purchases in Test / Website Clicks in Test) × 100 (computed in the sketch below)
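A minimal sketch of this computation, using totals reconstructed from the reported 30-day averages; this reconstruction is an assumption, so the exact daily data may differ slightly.

```python
# Totals reconstructed from the reported averages over an assumed 30-day period.
purch_ctrl, clicks_ctrl = 523 * 30, 5_321 * 30   # control group
purch_test, clicks_test = 521 * 30, 6_032 * 30   # test group

ctrl_rate = purch_ctrl / clicks_ctrl * 100   # ~9.83%
test_rate = purch_test / clicks_test * 100   # ~8.64%
print(f"Control: {ctrl_rate:.2f}%  Test: {test_rate:.2f}%")
```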
- Pooled proportion represents the overall conversion rate across both groups combined.
- We estimate how much the results might naturally fluctuate (the standard error) to ensure we're making a fair comparison.
- A standard error of 0.0010008 means the difference in conversion rates is measured very precisely, so even a small gap can be distinguished from random noise.
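A sketch of the pooled proportion and standard error, again using the reconstructed totals; the small gap from the reported 0.0010008 comes from rounding in the averages.

```python
from math import sqrt

# Reconstructed totals, as in the conversion-rate sketch above (an assumption).
x1, n1 = 523 * 30, 5_321 * 30    # control: purchases, website clicks
x2, n2 = 521 * 30, 6_032 * 30    # test: purchases, website clicks

p_pool = (x1 + x2) / (n1 + n2)                        # overall conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
print(f"pooled p = {p_pool:.4f}, SE = {se:.7f}")      # SE ~ 0.001
```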
We calculate a Z-score, which expresses the difference between the two conversion rates in units of standard error.
A Z-score of 11.84 means the difference is extremely large and very unlikely to be random.
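The Z-score is simply the difference in rates divided by the standard error. A sketch with the same reconstructed totals, which land near, but not exactly on, the reported 11.84:

```python
from math import sqrt

x1, n1 = 523 * 30, 5_321 * 30    # control: purchases, clicks (reconstructed)
x2, n2 = 521 * 30, 6_032 * 30    # test: purchases, clicks (reconstructed)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

z = (p1 - p2) / se       # difference in conversion rates, in standard-error units
print(f"z = {z:.2f}")    # ~12 here; the reported 11.84 uses the exact daily data
```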
We calculate a p-value to check whether the difference could plausibly have happened by chance.
A p-value of effectively 0 means the result is not random: the new campaign truly performed worse.
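A sketch of the one-sided p-value calculation using SciPy's normal distribution, plugging in the reported Z-score directly:

```python
from scipy.stats import norm

z = 11.84                 # Z-score reported above
# One-sided p-value: the probability of seeing a difference at least this
# large if the campaigns actually performed equally; norm.sf(z) = 1 - cdf(z).
p_value = norm.sf(z)
print(p_value)            # ~1e-32, effectively zero
```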
- Control Group Conversion Rate = 9.83%
- Test Group Conversion Rate = 8.64%. The test group performed worse than the control group (by 1.19 percentage points).
- A Z-score of 11.84 means the observed difference between the two conversion rates is 11.84 standard errors away from zero, the value expected if the campaigns performed equally. This is an extremely large Z-score, indicating a highly significant difference.
- The p-value of 0 (more precisely, effectively 0) means there is almost no chance that this difference arose randomly (cross-checked in the sketch below).
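As a cross-check, the whole test can be run in one call with statsmodels' two-proportion z-test. This sketch uses the reconstructed totals (an assumption), so its outputs may differ slightly from the reported figures.

```python
from statsmodels.stats.proportion import proportions_ztest

counts = [523 * 30, 521 * 30]        # purchases: control, test (reconstructed)
nobs = [5_321 * 30, 6_032 * 30]      # website clicks: control, test
# alternative="larger" tests whether the first (control) rate is the larger one.
z, p = proportions_ztest(counts, nobs, alternative="larger")
print(f"z = {z:.2f}, p = {p:.2e}")   # ~12 and effectively zero
```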
Decision: The new campaign negatively impacted conversion rates and should not be implemented. We therefore fail to reject the null hypothesis (the new campaign does not have a higher conversion rate); in fact, the evidence points strongly in the opposite direction.
- The new campaign resulted in a lower conversion rate.
- The results are statistically significant.
- Sticking with the control campaign is the best decision.
- Investigate Possible Causes of lower conversion rates (e.g., ad design, messaging, targeting).
- Optimize the Test Group Strategy before re-running the campaign.
- Explore Alternative Campaign Variations for future testing.