Performance Regression: Significant Redis Hit Rate Drop and CPU Spikes after Implementation #212

@Aqours

Description

After migrating to nextjs-cache-handler with a Redis backend, we observed a significant regression in cache efficiency and resource utilization:

  1. Cache Hit Rate: Dropped from >90% (previous implementation) to approximately 60%.
  2. CPU Usage: Average CPU usage increased by 5-10%, accompanied by frequent CPU spikes in our Kubernetes pods.

Environment Details

  • Next.js Version: 15.3.8
  • Cache Handler Version: 2.5.3
  • Deployment: Kubernetes (K8s)

Questions & Potential Issues

1. Root Cause of Hit Rate Drop
We noticed a sharp decline in the Redis hit rate. Is this expected due to the way cache-handler manages tags or metadata? Are there specific "auxiliary" queries (like tag validation) that might be inflating the "Total Gets" and lowering the overall hit percentage?
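For illustration, auxiliary GETs alone can explain the drop even if page-level hits are unchanged. A rough sketch of the arithmetic (the one-auxiliary-GET-per-lookup ratio and the 30% auxiliary hit rate are assumptions for illustration, not measurements):

```typescript
// Hypothetical illustration: how auxiliary (e.g. tag-validation) GETs can
// dilute the observed hit rate reported against "Total Gets".
function observedHitRate(
  pageGets: number,
  pageHitRate: number,
  auxPerGet: number, // assumed auxiliary GETs issued per page GET
  auxHitRate: number // assumed hit rate of those auxiliary GETs
): number {
  const pageHits = pageGets * pageHitRate;
  const auxGets = pageGets * auxPerGet;
  const auxHits = auxGets * auxHitRate;
  return (pageHits + auxHits) / (pageGets + auxGets);
}

// A 90% page-level hit rate with one extra auxiliary GET per lookup
// (at a 30% auxiliary hit rate) already reads as ~60% overall:
console.log(observedHitRate(1000, 0.9, 1, 0.3)); // ≈ 0.6
```

If this is what is happening, the page-level cache may be as healthy as before and only the metric denominator changed.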

2. CPU Spikes & Resource Overhead
Our pods are experiencing 5-10% higher CPU load with visible spikes. Could this be related to:

  • High frequency of serialization/deserialization?
  • Intensive Redis SCAN operations or tag-matching logic?
  • Lock contention during cache revalidation?
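To attribute the extra CPU to one of these causes, one option is to wrap the handler's reads with timers that separate the Redis round-trip from CPU-bound deserialization. A minimal sketch, assuming a `get(key): Promise<string | null>` shape for the raw Redis call (the real handler's method names may differ):

```typescript
// Hypothetical profiling wrapper: splits each cache read into Redis
// round-trip time vs. JSON.parse (deserialization) time, so spiky CPU
// can be attributed to serialization rather than network I/O.
type RawGet = (key: string) => Promise<string | null>;

async function profiledGet(rawGet: RawGet, key: string) {
  const t0 = performance.now();
  const raw = await rawGet(key); // network + Redis server time
  const t1 = performance.now();
  const value = raw === null ? null : JSON.parse(raw); // CPU-bound parse
  const t2 = performance.now();
  return { value, redisMs: t1 - t0, parseMs: t2 - t1 };
}
```

Aggregating `parseMs` across requests would show whether deserialization of large payloads lines up with the observed spikes; `redis-cli INFO commandstats` on the server side can similarly show whether SCAN-heavy commands dominate.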

3. Integrated In-Memory (L1) Cache Support
We are looking for a hybrid caching strategy similar to nextjs-turbo-redis-cache that utilizes a local In-Memory L1 cache to shield Redis and reduce CPU overhead from network I/O and serialization.

  • Does nextjs-cache-handler support a built-in "Local Memory + Redis" multi-tier strategy out of the box?
  • If so, what is the recommended configuration to ensure L1 hits reduce the CPU load on the main thread?
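In case it helps frame the question, the kind of wrapper we have in mind looks roughly like the sketch below: a small in-process TTL map in front of Redis, so repeated reads of hot keys skip both the network round-trip and deserialization. The `redisGet` parameter and class name are placeholders, not part of any real API; a short L1 TTL is assumed to bound staleness, since L1 would not see Redis-side tag invalidations:

```typescript
// Hypothetical two-tier cache: in-process L1 (TTL map) shielding Redis.
type AsyncGet = (key: string) => Promise<string | null>;

class L1RedisCache {
  private l1 = new Map<string, { value: string; expires: number }>();

  constructor(
    private redisGet: AsyncGet, // placeholder for the real Redis client call
    private ttlMs = 5_000, // short TTL bounds staleness vs. tag invalidation
    private maxEntries = 1_000
  ) {}

  async get(key: string): Promise<string | null> {
    const hit = this.l1.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // L1 hit: no I/O
    const value = await this.redisGet(key); // L1 miss: fall through to Redis
    if (value !== null) this.store(key, value);
    return value;
  }

  private store(key: string, value: string) {
    if (this.l1.size >= this.maxEntries) {
      // Evict the oldest insertion (Map preserves insertion order).
      const oldest = this.l1.keys().next().value;
      if (oldest !== undefined) this.l1.delete(oldest);
    }
    this.l1.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

If the library already ships something equivalent, we would prefer the built-in configuration over maintaining a wrapper like this ourselves.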
