content/develop/ai/langcache/_index.md (4 additions & 4 deletions)
@@ -5,7 +5,7 @@ categories:
 - docs
 - develop
 - ai
-description: Redis LangCache provides semantic caching-as-a-service to reduce LLM costs and improve response times for AI applications.
+description: Store LLM responses for AI apps in a semantic cache.
 linkTitle: LangCache
 hideListLinks: true
 weight: 30
@@ -29,13 +29,13 @@ Using LangCache as a semantic caching service has the following benefits:
 
 -**Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently-requested responses.
 -**Faster AI app responses**: Get faster AI responses by retrieving previously-stored requests from memory.
--**Simpler Deployments**: Access our managed service via a REST API with automated embedding generation, configurable controls, and no database management required.
+-**Simpler Deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
 -**Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
 
 LangCache works well for the following use cases:
 
 -**AI assistants and chatbots**: Optimize conversational AI applications by caching common responses and reducing latency for frequently asked questions.
--**RAG applications**: Enhance retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time..
+-**RAG applications**: Enhance retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time.
 -**AI agents**: Improve multi-step reasoning chains and agent workflows by caching intermediate results and common reasoning patterns.
 -**AI gateways**: Integrate LangCache into centralized AI gateway services to manage and control LLM costs across multiple applications..
 
@@ -62,7 +62,7 @@ See the [LangCache API reference]({{< relref "/develop/ai/langcache/api-referenc
 
 ## Get started
 
-LangCache is currently in preview through two different ways:
+LangCache is currently in preview:
 
 - Public preview on [Redis Cloud]({{< relref "/operate/rc/langcache" >}})
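
For readers skimming this diff, the semantic caching that the new description line refers to works by embedding prompts as vectors and reusing a stored LLM response when a new prompt lands close enough in embedding space. The sketch below shows that idea only; the toy `embed()` function, the `SemanticCache` class, and the 0.9 threshold are illustrative assumptions, not LangCache internals (the diff above notes that the managed service handles embedding generation automatically on the server side).

```python
# Minimal sketch of the semantic-caching idea, NOT the LangCache implementation:
# prompts are embedded as vectors, and a new prompt reuses a stored LLM response
# when its embedding is close enough to a cached one. embed() is a toy stand-in
# for a real embedding model; the 0.9 threshold is an arbitrary example value.
import math


def embed(text: str) -> list[float]:
    # Toy embedding: normalized character-frequency vector (illustration only).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))


class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []

    def get(self, prompt: str) -> str | None:
        query = embed(prompt)
        scored = [(cosine(query, vec), resp) for vec, resp in self.entries]
        if scored:
            score, response = max(scored)
            if score >= self.threshold:
                return response      # cache hit: reuse the stored LLM response
        return None                  # cache miss: caller must call the LLM

    def set(self, prompt: str, response: str) -> None:
        self.entries.append((embed(prompt), response))


cache = SemanticCache()
cache.set("What is Redis?", "Redis is an in-memory data store.")
print(cache.get("what is redis?"))             # similar wording hits the cache
print(cache.get("explain quantum computing"))  # unrelated prompt misses
```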
content/operate/rc/langcache/_index.md (1 addition & 1 deletion)
@@ -13,7 +13,7 @@ bannerText: LangCache on Redis Cloud is currently available as a public preview.
 bannerChildren: true
 ---
 
-LangCache is a semantic caching service available as a REST API that stores LLM responses for fast and cheaper retrieval, built on the Redis vector database. By using semantic caching, customers can significantly reduce API costs and lower the average latency of their generative AI applications.
+LangCache is a semantic caching service available as a REST API that stores LLM responses for fast and cheaper retrieval, built on the Redis vector database. By using semantic caching, you can significantly reduce API costs and lower the average latency of your generative AI applications.
 
 For more information about how LangCache works, see the [LangCache overview]({{< relref "/develop/ai/langcache" >}}).
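
The second file describes LangCache as a semantic caching service available as a REST API. A common way an application consumes such a service is the cache-aside pattern sketched below: search the cache first, and only call the LLM (then store the result) on a miss. The base URL, endpoint paths, header, JSON field names, and the `call_llm()` helper are assumptions made for illustration, not the documented LangCache contract; see the LangCache API reference for the actual endpoints.

```python
# Hedged sketch of the cache-aside pattern against a semantic-cache REST API.
# The URL, paths, headers, and JSON field names below are illustrative
# assumptions, not the documented LangCache API.
import requests

BASE_URL = "https://langcache.example.com/v1/caches/my-cache-id"  # assumed
HEADERS = {"Authorization": "Bearer <API_KEY>"}                    # assumed


def call_llm(prompt: str) -> str:
    """Placeholder for your actual LLM client call."""
    raise NotImplementedError


def answer(prompt: str) -> str:
    # 1. Look for a semantically similar prompt already stored in the cache.
    search = requests.post(f"{BASE_URL}/entries/search",
                           json={"prompt": prompt}, headers=HEADERS, timeout=10)
    hits = search.json().get("data", []) if search.ok else []
    if hits:
        return hits[0]["response"]      # cache hit: skip the costly LLM call

    # 2. Cache miss: call the LLM, then store the response for future reuse.
    response = call_llm(prompt)
    requests.post(f"{BASE_URL}/entries",
                  json={"prompt": prompt, "response": response},
                  headers=HEADERS, timeout=10)
    return response
```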