Currently, the best serverless implementation of DeepLX, optimized for Cloudflare Workers. Through intelligent proxy endpoint rotation, advanced rate limiting algorithms, and circuit breaker mechanisms, it almost completely avoids HTTP 429 errors, providing higher request rate limits and lower network round-trip times than traditional translation APIs. Now supports both DeepL and Google Translate services.
Unlike paid translation APIs, DeepLX is completely free to use - no API keys, no subscription fees, no usage limits. Simply deploy once and enjoy unlimited translation requests without any cost concerns.
- DeepL Translation (`/deepl`) - High-quality translation with advanced AI
- Google Translate (`/google`) - Wide language support and fast processing
- Legacy Compatibility (`/translate`) - Backward-compatible endpoint using DeepL
DeepLX offers significant improvements in performance and stability over the DeepL API. The key metrics below were measured in a specific network environment:
| Metric | DeepL API | DeepLX (Pre-deployed Instance) |
|---|---|---|
| Rate Limit | 50 requests/sec | 80 requests/sec (8 req/sec × 10 proxy endpoints) |
| Average Network RTT | ~450 ms | ~180 ms (edge network acceleration) |
| HTTP 429 Error Rate | 10-30% | <1% |
| Concurrent Support | Single endpoint limit | Multi-endpoint load balancing |
| Geographic Distribution | Limited | 330+ global edge nodes |
- Higher Rate Limits: Intelligent load balancing supports more concurrent requests than the DeepL API
- Lower Latency: Global edge network deployment based on Cloudflare Workers
- Zero Cold Start: Serverless architecture with instant response
- Intelligent Caching: Dual-layer caching system (memory + KV storage) reduces duplicate requests
- Intelligent Load Balancing: Multiple proxy endpoints automatically distribute requests
- Dynamic Rate Limiting Algorithm: Automatically adjusts rate limits based on proxy count
- Dual-layer Caching System: Memory cache + KV storage reduces duplicate requests
- Circuit Breaker Mechanism: Automatic failover for failed endpoints ensures service continuity
- Edge Computing: Global deployment on Cloudflare Workers reduces latency
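The dual-layer lookup described above (memory first, then KV) can be sketched roughly as follows. The `KVLike` interface, cache-key scheme, and 1-hour TTL are illustrative assumptions, not the repository's actual implementation:

```typescript
// Sketch of a dual-layer cache: a per-isolate Map is consulted first,
// then Workers KV (shared across edge locations). On a miss, the upstream
// translation service is called once and both layers are filled.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

const memoryCache = new Map<string, string>();

async function cachedTranslate(
  kv: KVLike,
  key: string,
  translate: () => Promise<string>
): Promise<string> {
  // Layer 1: in-memory cache (fastest, per isolate)
  const hit = memoryCache.get(key);
  if (hit !== undefined) return hit;

  // Layer 2: KV storage (slower, but shared globally)
  const stored = await kv.get(key);
  if (stored !== null) {
    memoryCache.set(key, stored);
    return stored;
  }

  // Miss: call the upstream service, then populate both layers.
  const result = await translate();
  memoryCache.set(key, result);
  await kv.put(key, result, { expirationTtl: 3600 }); // 1 h TTL (assumed)
  return result;
}
```

With this shape, a repeated request for the same text/language pair never reaches the upstream API a second time.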
- Avoid HTTP 429 Errors: Almost completely avoids rate limiting through proxy endpoint rotation and token bucket algorithm
- Circuit Breaker Mechanism: Automatically detects failed endpoints and performs failover
- Exponential Backoff Retry: Intelligent retry mechanism improves success rate
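A minimal sketch of the exponential-backoff retry, using the parameters shown in the configuration section (3 retries, 1 s initial delay, factor 2); the helper's name and signature are illustrative, not the repository's API:

```typescript
// Retry a failing async operation with exponentially growing delays:
// wait 1 s, 2 s, 4 s between attempts, then give up and rethrow.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  initialDelay = 1000,
  backoffFactor = 2
): Promise<T> {
  let delay = initialDelay;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted
      await new Promise((r) => setTimeout(r, delay));
      delay *= backoffFactor; // double the wait each round
    }
  }
}
```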
- Input Validation: Comprehensive parameter validation and text sanitization
- Rate Limiting: Multi-dimensional rate limiting based on client IP and proxy endpoints
- CORS Support: Flexible cross-origin resource sharing configuration
- Security Headers: Automatically adds security-related HTTP headers
- Error Sanitization: Sensitive information is never exposed
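The token-bucket algorithm behind the rate limiting can be sketched as follows. The constants mirror the `RATE_LIMIT_CONFIG` values documented below (8 tokens per proxy per second, 16-token burst), but the class shape is an assumption for illustration:

```typescript
// One bucket per proxy endpoint (or client IP): tokens refill continuously
// at a fixed rate up to a burst cap; each request consumes one token.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private tokensPerSecond = 8,
    private maxTokens = 16,
    now = Date.now()
  ) {
    this.tokens = maxTokens;
    this.lastRefill = now;
  }

  // Returns true if the request may proceed, consuming one token;
  // false means the caller should answer with HTTP 429.
  tryConsume(now = Date.now()): boolean {
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.maxTokens, this.tokens + elapsed * this.tokensPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With 10 proxies each holding such a bucket, the aggregate steady-state rate is 10 × 8 = 80 requests/sec, matching the comparison table above.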
graph TB
%% Client Layer
subgraph "Client Layer"
APIClient[API Client]
end
%% Cloudflare Workers Layer
subgraph "Cloudflare Workers"
direction TB
Router[Hono Router]
subgraph "API Endpoints"
DeepL[POST /deepl]
Google[POST /google]
Translate[POST /translate]
Debug[POST /debug]
end
subgraph "Core Middleware & Components"
CORS[CORS Handler]
Security[Security Middleware]
RateLimit[Rate Limiting System]
Cache[Dual-layer Cache<br/>Memory + KV]
end
subgraph "Translation Services"
QueryEngine[DeepL Query Engine]
GoogleService[Google Translate Service]
end
subgraph "Support Systems"
ProxyManager[Proxy Manager<br/>& Load Balancer]
CircuitBreaker[Circuit Breaker]
RetryLogic[Retry Logic]
ErrorHandler[Error Handler]
end
end
%% Storage Layer
subgraph "Cloudflare Storage"
CacheKV[(Cache KV<br/>Translation Results)]
RateLimitKV[(Rate Limit KV<br/>Token Buckets)]
Analytics[(Analytics Engine<br/>Metrics & Monitoring)]
end
%% External Services
subgraph "External Translation APIs"
DeepLAPI[DeepL JSONRPC API<br/>www2.deepl.com]
GoogleAPI[Google Translate API<br/>translate.google.com]
XDPL[XDPL Proxy Cluster<br/>Multiple Vercel Instances]
end
%% Request Flow Connections
APIClient --> Router
Router --> CORS
CORS --> DeepL
CORS --> Google
CORS --> Translate
CORS --> Debug
DeepL --> Security
Google --> Security
Translate --> Security
Debug --> Security
Security --> RateLimit
RateLimit --> Cache
Cache --> QueryEngine
Cache --> GoogleService
QueryEngine --> ProxyManager
GoogleService --> GoogleAPI
ProxyManager --> CircuitBreaker
CircuitBreaker --> RetryLogic
RetryLogic --> ErrorHandler
%% External API Connections
ProxyManager -.-> XDPL
XDPL -.-> DeepLAPI
%% Storage Connections
Cache -.-> CacheKV
RateLimit -.-> RateLimitKV
Router -.-> Analytics
%% Styles
classDef clientClass fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef workerClass fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef middlewareClass fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
classDef serviceClass fill:#fff3e0,stroke:#f57c00,stroke-width:2px
classDef storageClass fill:#fce4ec,stroke:#e91e63,stroke-width:2px
classDef externalClass fill:#ffebee,stroke:#d32f2f,stroke-width:2px
class APIClient clientClass
class Router,DeepL,Google,Translate,Debug workerClass
class CORS,Security,RateLimit,Cache middlewareClass
class QueryEngine,GoogleService,ProxyManager,CircuitBreaker,RetryLogic,ErrorHandler serviceClass
class CacheKV,RateLimitKV,Analytics storageClass
class DeepLAPI,GoogleAPI,XDPL externalClass
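The Circuit Breaker component in the diagram above can be sketched as a small per-endpoint state machine; the threshold and cooldown values here are illustrative assumptions, not the repository's actual configuration:

```typescript
// After `failureThreshold` consecutive failures an endpoint is skipped
// (breaker "open") until `cooldownMs` elapses, at which point one probe
// request is allowed through ("half-open"); a success closes it again.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private failureThreshold = 5, private cooldownMs = 30_000) {}

  isAvailable(now = Date.now()): boolean {
    if (this.failures < this.failureThreshold) return true; // closed
    return now - this.openedAt >= this.cooldownMs; // half-open after cooldown
  }

  recordSuccess(): void {
    this.failures = 0; // close the breaker
  }

  recordFailure(now = Date.now()): void {
    this.failures++;
    if (this.failures === this.failureThreshold) this.openedAt = now;
  }
}
```

The proxy manager would consult `isAvailable()` before routing a request to an endpoint, which is how failed XDPL instances are skipped without failing the whole service.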
Pre-deployed Instance: https://dplx.xi-xu.me
curl -X POST https://dplx.xi-xu.me/deepl \
-H "Content-Type: application/json" \
-d '{
"text": "Hello, world!",
"source_lang": "EN",
"target_lang": "ZH"
}'
curl -X POST https://dplx.xi-xu.me/google \
-H "Content-Type: application/json" \
-d '{
"text": "Hello, world!",
"source_lang": "EN",
"target_lang": "ZH"
}'
curl -X POST https://dplx.xi-xu.me/translate \
-H "Content-Type: application/json" \
-d '{
"text": "Hello, world!",
"source_lang": "EN",
"target_lang": "ZH"
}'
async function translateWithDeepL(text, sourceLang = 'auto', targetLang = 'zh') {
  const response = await fetch('https://dplx.xi-xu.me/deepl', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      text: text,
      source_lang: sourceLang,
      target_lang: targetLang
    })
  });
  const result = await response.json();
  // Surface API-level failures instead of silently returning undefined
  if (result.code !== 200) {
    throw new Error(`Translation failed: ${result.message || 'Unknown error'}`);
  }
  return result.data;
}

// Usage example
translateWithDeepL('Hello, world!', 'en', 'zh')
  .then(result => console.log(result))
  .catch(error => console.error(error));
async function translateWithGoogle(text, sourceLang = 'auto', targetLang = 'zh') {
  const response = await fetch('https://dplx.xi-xu.me/google', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      text: text,
      source_lang: sourceLang,
      target_lang: targetLang
    })
  });
  const result = await response.json();
  // Surface API-level failures instead of silently returning undefined
  if (result.code !== 200) {
    throw new Error(`Translation failed: ${result.message || 'Unknown error'}`);
  }
  return result.data;
}

// Usage example
translateWithGoogle('Hello, world!', 'en', 'zh')
  .then(result => console.log(result))
  .catch(error => console.error(error));
import requests

def translate_with_deepl(text, source_lang='auto', target_lang='zh'):
    url = 'https://dplx.xi-xu.me/deepl'
    data = {
        'text': text,
        'source_lang': source_lang,
        'target_lang': target_lang
    }
    response = requests.post(url, json=data)
    result = response.json()
    if result['code'] == 200:
        return result['data']
    raise Exception(f"Translation failed: {result.get('message', 'Unknown error')}")

# Usage example
try:
    result = translate_with_deepl('Hello, world!', 'en', 'zh')
    print(result)
except Exception as e:
    print(f"Error: {e}")
import requests

def translate_with_google(text, source_lang='auto', target_lang='zh'):
    url = 'https://dplx.xi-xu.me/google'
    data = {
        'text': text,
        'source_lang': source_lang,
        'target_lang': target_lang
    }
    response = requests.post(url, json=data)
    result = response.json()
    if result['code'] == 200:
        return result['data']
    raise Exception(f"Translation failed: {result.get('message', 'Unknown error')}")

# Usage example
try:
    result = translate_with_google('Hello, world!', 'en', 'zh')
    print(result)
except Exception as e:
    print(f"Error: {e}")
Configure API clients to use the pre-deployed instance:
DeepLX App (Open-source web app)
A modern, free web-based translation app powered by the DeepLX API. Features include:
- Multi-language auto-detection support
- Real-time translation as you type
- Translation history and language switching
- Responsive design for all devices
- RTL language support
Live Demo: https://deeplx.xi-xu.me
Pot (Open-source cross-platform app for Windows, macOS, and Linux)
- Download and install Pot for your platform
- Open Pot settings and navigate to Service Settings
- Configure the DeepL service type as DeepLX and set the custom URL to `https://dplx.xi-xu.me/deepl`
Zotero (Open-source reference management app)
- Download and install Zotero for your platform
- Download and install the Translate for Zotero plugin
- Open Zotero settings and navigate to the Services section under Translation
- Configure the translation service as DeepLX (API) and, after clicking the config button, set the endpoint to `https://dplx.xi-xu.me/deepl`
PDFMathTranslate (pdf2zh) (Open-source PDF document translation tool)
Refer to Advanced Options and Translate with different services.
Immersive Translate (Closed-source browser extension)
- Install Immersive Translate
- Go to developer settings and enable beta testing features
- Go to translation services, add a custom translation service DeepLX, and configure the API URL as `https://dplx.xi-xu.me/deepl`
- Configure the maximum requests per second and maximum text length per request to appropriate values (e.g., `80` and `5000`) to ensure stability and performance
Bob (Closed-source macOS app)
- Download and install Bob from the Mac App Store
- Download and install the bob-plugin-deeplx plugin
- Configure the plugin to use `https://dplx.xi-xu.me/deepl`
- Node.js 18+
- Cloudflare Workers account
- Wrangler CLI
git clone https://github.com/xixu-me/DeepLX.git
cd DeepLX
npm install
Edit the `wrangler.jsonc` file and update the following configuration:
# Create cache KV namespace
npx wrangler kv namespace create "CACHE_KV"
# Create rate limit KV namespace
npx wrangler kv namespace create "RATE_LIMIT_KV"
Add the returned namespace IDs to the `kv_namespaces` configuration in `wrangler.jsonc`.
# Development environment
npx wrangler dev
# Production deployment
npx wrangler deploy
For optimal performance and stability, it's recommended to deploy as many XDPL proxy endpoints as possible:
- Deploy multiple XDPL instances
- Add the deployed URLs to DeepLX's `PROXY_URLS` environment variable:
{
  "vars": {
    "PROXY_URLS": "https://your-xdpl-1.vercel.app/jsonrpc,https://your-xdpl-2.vercel.app/jsonrpc,https://your-xdpl-3.vercel.app/jsonrpc,https://your-xdpl-n.vercel.app/jsonrpc"
  }
}
| Endpoint | Provider | Description | Status |
|---|---|---|---|
| `/deepl` | DeepL | Primary DeepL translation endpoint | Recommended |
| `/google` | Google Translate | Google Translate endpoint | Active |
| `/translate` | DeepL | Legacy endpoint (uses DeepL) | Legacy |
Request Method: POST
Request Headers: Content-Type: application/json
Request Parameters:
| Parameter | Type | Description | Required |
|---|---|---|---|
| `text` | string | Text to translate | Yes |
| `source_lang` | string | Source language code | No, defaults to `AUTO` |
| `target_lang` | string | Target language code | No, defaults to `EN` |
Response:
{
  "code": 200,
  "data": "Translation result",
  "id": "Random identifier",
  "source_lang": "Detected source language code",
  "target_lang": "Target language code"
}
Request Method: POST
Request Headers: Content-Type: application/json
Request Parameters:
| Parameter | Type | Description | Required |
|---|---|---|---|
| `text` | string | Text to translate | Yes |
| `source_lang` | string | Source language code | No, defaults to `AUTO` |
| `target_lang` | string | Target language code | No, defaults to `EN` |
Response:
{
  "code": 200,
  "data": "Translation result",
  "id": "Random identifier",
  "source_lang": "Detected source language code",
  "target_lang": "Target language code"
}
Request Method: POST
Request Headers: Content-Type: application/json
Note: This is a legacy endpoint that uses DeepL. For new integrations, please use `/deepl` instead.
Request Parameters:
| Parameter | Type | Description | Required |
|---|---|---|---|
| `text` | string | Text to translate | Yes |
| `source_lang` | string | Source language code | No, defaults to `AUTO` |
| `target_lang` | string | Target language code | No, defaults to `EN` |
Response:
{
  "code": 200,
  "data": "Translation result",
  "id": "Random identifier",
  "source_lang": "Detected source language code",
  "target_lang": "Target language code"
}
Supported Language Codes:
- `AUTO` - Auto-detect (source language only)
- `AR` - Arabic
- `BG` - Bulgarian
- `CS` - Czech
- `DA` - Danish
- `DE` - German
- `EL` - Greek
- `EN` - English
- `ES` - Spanish
- `ET` - Estonian
- `FI` - Finnish
- `FR` - French
- `HE` - Hebrew
- `HU` - Hungarian
- `ID` - Indonesian
- `IT` - Italian
- `JA` - Japanese
- `KO` - Korean
- `LT` - Lithuanian
- `LV` - Latvian
- `NB` - Norwegian Bokmål
- `NL` - Dutch
- `PL` - Polish
- `PT` - Portuguese
- `RO` - Romanian
- `RU` - Russian
- `SK` - Slovak
- `SL` - Slovenian
- `SV` - Swedish
- `TH` - Thai
- `TR` - Turkish
- `UK` - Ukrainian
- `VI` - Vietnamese
- `ZH` - Chinese
For the latest language support list, please refer to Languages supported - DeepL Documentation.
Request Method: POST
Used to verify request format and troubleshoot issues.
| Code | Description |
|---|---|
| 200 | Translation successful |
| 400 | Request parameter error |
| 429 | Request rate too high |
| 500 | Internal server error |
| 503 | Service temporarily unavailable |
| Variable | Description | Default |
|---|---|---|
| `DEBUG_MODE` | Debug mode switch | `false` |
| `PROXY_URLS` | Comma-separated list of proxy endpoints | None |
These can be adjusted in `src/lib/config.ts`:
// Request timeout
export const REQUEST_TIMEOUT = 10000; // 10 seconds

// Retry configuration
export const DEFAULT_RETRY_CONFIG = {
  maxRetries: 3, // Maximum retry attempts
  initialDelay: 1000, // Initial delay (ms)
  backoffFactor: 2, // Backoff factor
};

// Rate limit configuration
export const RATE_LIMIT_CONFIG = {
  PROXY_TOKENS_PER_SECOND: 8, // Tokens per proxy per second
  PROXY_MAX_TOKENS: 16, // Maximum tokens per proxy
  BASE_TOKENS_PER_MINUTE: 480, // Base tokens per minute
};

// Payload limits
export const PAYLOAD_LIMITS = {
  MAX_TEXT_LENGTH: 5000, // Maximum text length (characters)
  MAX_REQUEST_SIZE: 32768, // Maximum request size (bytes)
};
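As a sketch of how the payload limits above might be enforced before a request reaches the translation services (the function name and error messages here are assumptions, not the repository's actual code):

```typescript
// Reject oversized or malformed request bodies early, mirroring
// PAYLOAD_LIMITS: a 32 KiB request cap and a 5000-character text cap.
const MAX_TEXT_LENGTH = 5000;
const MAX_REQUEST_SIZE = 32768;

function validatePayload(body: string): { text: string } {
  // Size check uses encoded bytes, since the limit is a wire-size limit.
  if (new TextEncoder().encode(body).length > MAX_REQUEST_SIZE) {
    throw new Error("413: request body too large");
  }
  const parsed = JSON.parse(body);
  if (typeof parsed.text !== "string" || parsed.text.length === 0) {
    throw new Error("400: 'text' is required");
  }
  if (parsed.text.length > MAX_TEXT_LENGTH) {
    throw new Error("400: text exceeds 5000 characters");
  }
  return { text: parsed.text };
}
```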
# Run all tests
npm test
# Run unit tests
npm run test:unit
# Run integration tests
npm run test:integration
# Run performance tests
npm run test:performance
# Generate coverage report
npm run test:coverage
- Check if proxy endpoint configuration is correct
- Increase the number of proxy endpoints
- Adjust rate limiting configuration
- Confirm source language detection is correct
- Check if text encoding is correct
- Verify language code format
- Check Cloudflare account configuration
- Verify KV namespaces are created
- Confirm wrangler.jsonc configuration is correct
Enable debug mode for detailed information:
{
  "vars": {
    "DEBUG_MODE": "true"
  }
}
Then use the debug endpoint:
curl -X POST https://your-domain.workers.dev/debug \
-H "Content-Type: application/json" \
-d '{"text": "test", "source_lang": "EN", "target_lang": "ZH"}'
- OwO-Network/DeepLX - Original implementation, written in Go
- Cloudflare Workers - Hosting platform
- Hono - Fast web framework
- XDPL - Proxy endpoint solution
We welcome all forms of contributions! Please check the Contributing Guide to learn how to participate in repository development.
- Report Issues: Use issue templates to report bugs or request features
- Submit Code: Fork the repository, create feature branches, submit pull requests
- Improve Documentation: Fix errors, add examples, improve descriptions
- Test Feedback: Test in different environments and provide feedback
- Author: Xi Xu
- Email: Contact Email
- Sponsor: Sponsor Link
This repository is for learning and research purposes only. When using this repository, please comply with the following terms:
- Compliant Use: Users are responsible for ensuring that use of this repository complies with local laws and regulations and relevant terms of service
- Commercial Use: Before commercial use, please confirm compliance with DeepL's terms of service and usage policies
- Service Stability: This repository relies on third-party services and does not guarantee 100% service availability
- Data Privacy: Translation content is processed through third-party services, please do not translate sensitive or confidential information
- The author is not responsible for any direct or indirect losses caused by using this repository
- Users should bear the risks of use, including but not limited to service interruption, data loss, etc.
- This repository provides no warranty of any kind, including merchantability, fitness for a particular purpose, etc.
By using this repository, you agree to:
- Not use this repository for any illegal or harmful purposes
- Not abuse the service or conduct malicious attacks
- Follow reasonable use principles and avoid excessive load on the service
Please use this repository only after fully understanding and agreeing to the above terms.
This repository is licensed under the MIT License - see the LICENSE file for details.
If this repository is helpful to you, please consider giving it a ⭐ star!
Made with ❤️ by Xi Xu