
Commit 6247f38

Apply suggestions from code review
Co-authored-by: mich-elle-luna <[email protected]>
1 parent 72c5045 commit 6247f38

3 files changed, +6 -6 lines changed


content/develop/ai/langcache/_index.md

Lines changed: 4 additions & 4 deletions

@@ -5,7 +5,7 @@ categories:
 - docs
 - develop
 - ai
-description: Redis LangCache provides semantic caching-as-a-service to reduce LLM costs and improve response times for AI applications.
+description: Store LLM responses for AI apps in a semantic cache.
 linkTitle: LangCache
 hideListLinks: true
 weight: 30
@@ -29,13 +29,13 @@ Using LangCache as a semantic caching service has the following benefits:
 
 - **Lower LLM costs**: Reduce costly LLM calls by easily storing the most frequently-requested responses.
 - **Faster AI app responses**: Get faster AI responses by retrieving previously-stored requests from memory.
-- **Simpler Deployments**: Access our managed service via a REST API with automated embedding generation, configurable controls, and no database management required.
+- **Simpler Deployments**: Access our managed service using a REST API with automated embedding generation, configurable controls, and no database management required.
 - **Advanced cache management**: Manage data access and privacy, eviction protocols, and monitor usage and cache hit rates.
 
 LangCache works well for the following use cases:
 
 - **AI assistants and chatbots**: Optimize conversational AI applications by caching common responses and reducing latency for frequently asked questions.
-- **RAG applications**: Enhance retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time..
+- **RAG applications**: Enhance retrieval-augmented generation performance by caching responses to similar queries, reducing both cost and response time.
 - **AI agents**: Improve multi-step reasoning chains and agent workflows by caching intermediate results and common reasoning patterns.
 - **AI gateways**: Integrate LangCache into centralized AI gateway services to manage and control LLM costs across multiple applications..
 
@@ -62,7 +62,7 @@ See the [LangCache API reference]({{< relref "/develop/ai/langcache/api-referenc
 
 ## Get started
 
-LangCache is currently in preview through two different ways:
+LangCache is currently in preview:
 
 - Public preview on [Redis Cloud]({{< relref "/operate/rc/langcache" >}})
 - Fully-managed [private preview](https://redis.io/langcache/)
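Since the docs above describe LangCache as a managed service accessed through a REST API, a small sketch of how a client might assemble its requests may be useful. This is only an illustration: the `/v1/caches/{cacheId}/entries` and `/entries/search` paths, the `prompt`/`response` field names, and the bearer-token header are assumptions modeled on the API-reference naming, not a verified copy of the published spec.

```python
# Hypothetical request helpers for the LangCache REST API.
# Endpoint paths, body fields, and auth header are assumptions; consult the
# LangCache API reference for the real contract before using them.

def entries_url(base_url: str, cache_id: str, search: bool = False) -> str:
    """Build the (assumed) entries endpoint for a given cache."""
    url = f"{base_url}/v1/caches/{cache_id}/entries"
    return url + "/search" if search else url

def search_body(prompt: str) -> dict:
    """Body for a semantic lookup: find cached responses similar to prompt."""
    return {"prompt": prompt}

def store_body(prompt: str, response: str) -> dict:
    """Body for storing a new prompt/response pair in the cache."""
    return {"prompt": prompt, "response": response}

# Example flow (not executed here): POST search_body(...) to
# entries_url("https://<langcache-host>", "<cache-id>", search=True) with an
# "Authorization: Bearer <api-key>" header; on a miss, call the LLM, then
# POST store_body(...) to the non-search URL so future similar prompts hit.
```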

content/develop/ai/langcache/api-reference.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ categories:
 - docs
 - develop
 - ai
-description: null
+description: Learn to use the Redis LangCache API for semantic caching.
 hideListLinks: true
 linktitle: API reference
 title: LangCache API reference

content/operate/rc/langcache/_index.md

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ bannerText: LangCache on Redis Cloud is currently available as a public preview.
 bannerChildren: true
 ---
 
-LangCache is a semantic caching service available as a REST API that stores LLM responses for fast and cheaper retrieval, built on the Redis vector database. By using semantic caching, customers can significantly reduce API costs and lower the average latency of their generative AI applications.
+LangCache is a semantic caching service available as a REST API that stores LLM responses for fast and cheaper retrieval, built on the Redis vector database. By using semantic caching, you can significantly reduce API costs and lower the average latency of your generative AI applications.
 
 For more information about how LangCache works, see the [LangCache overview]({{< relref "/develop/ai/langcache" >}}).
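The overview paragraph describes the core mechanic of semantic caching: store each LLM response keyed by its prompt's embedding, and return the stored response when a new prompt is semantically close enough. A toy, self-contained sketch of that idea (using a stand-in character-bigram hash in place of a real embedding model, and none of LangCache's actual machinery) might look like:

```python
# Toy illustration of semantic caching; NOT the LangCache implementation.
# A real service embeds prompts with a model and stores vectors in Redis.
import math

def embed(text: str) -> list[float]:
    # Stand-in "embedding": hash character bigrams into a small unit vector.
    vec = [0.0] * 16
    low = text.lower()
    for a, b in zip(low, low[1:]):
        vec[(ord(a) * 31 + ord(b)) % 16] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u: list[float], v: list[float]) -> float:
    # Vectors are unit-length, so the dot product is cosine similarity.
    return sum(a * b for a, b in zip(u, v))

class SemanticCache:
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []  # (embedding, response)

    def get(self, prompt: str):
        # Return the stored response closest to the prompt, if similar enough.
        e = embed(prompt)
        best = max(self.entries, key=lambda ent: cosine(e, ent[0]), default=None)
        if best and cosine(e, best[0]) >= self.threshold:
            return best[1]  # cache hit: the costly LLM call is skipped
        return None         # cache miss: caller invokes the LLM, then put()

    def put(self, prompt: str, response: str):
        self.entries.append((embed(prompt), response))

cache = SemanticCache(threshold=0.95)
cache.put("What is Redis?", "Redis is an in-memory data store.")
hit = cache.get("What is Redis?")        # semantically identical -> hit
miss = cache.get("How do I bake bread?") # unrelated prompt -> miss
```

The similarity threshold is the key tuning knob: too low and unrelated prompts return stale answers, too high and paraphrased prompts never hit the cache.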

0 commit comments
