
Commit

Init the to chatgpt
SimFG committed Jun 18, 2023
1 parent b988174 commit bdbc1c4
Showing 17 changed files with 2,234 additions and 0 deletions.
132 changes: 132 additions & 0 deletions .gitignore
@@ -0,0 +1,132 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
*.DS_Store
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
*.db

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

.idea
16 changes: 16 additions & 0 deletions Dockerfile
@@ -0,0 +1,16 @@
FROM python:3.10-slim-buster

WORKDIR /app

COPY . .

RUN pip install poetry

RUN poetry install

ENV PATH=$PATH:/usr/local/bin

EXPOSE 8000

# Set the command to run the application
CMD ["poetry", "run", "python", "app.py", "-a", "new_bing"]
135 changes: 135 additions & 0 deletions README.md
@@ -0,0 +1,135 @@
# TO CHATGPT

**Feel free to use ALL ChatGPT applications!!!**

There are many LLM online services now, but most desktop clients, browser plugins, and ChatGPT-enhanced web apps currently only support OpenAI interfaces. Breaking this restriction, I believe, will let us **get off work earlier**, by opening them up to services such as Claude, Cohere, New Bing, Google Bard, and more.

For this reason, and inspired by the [claude to chatgpt](https://github.com/jtsang4/claude-to-chatgpt) project, I decided to give it a try, go go go !!!

## Usage

Before using, make sure your current environment can access the corresponding LLM services.

### adapter param

It is worth noting that there are currently **two types of adapters**: asynchronous and synchronous.
Which category an adapter falls into depends on the SDK used to access the online LLM service:
if the SDK provides an asynchronous interface, the asynchronous implementation is preferred.

Specify the adapter with `-a`. The adapters implemented so far are listed below.

**async adapters**:

- claude, implemented using [anthropic-sdk-python](https://github.com/anthropics/anthropic-sdk-python). You need to apply for an account; once approved, it is **free** for personal daily use.
- cohere, implemented using [cohere-python](https://github.com/cohere-ai/cohere-python). You only need to register an account, and it is **free** for individual use, though with a rate limit of five requests per minute.
- new_bing, implemented using [EdgeGPT](https://github.com/acheong08/EdgeGPT), a reverse-engineered API for Microsoft's Bing Chat AI. Because New Bing does not currently provide an official SDK, this adapter is **unstable**.

**sync adapters**:

- bard, implemented using [Bard](https://github.com/acheong08/Bard/), a reverse-engineered API for Google Bard. It is also **unstable**.
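
The async/sync split can be sketched as two tiny interfaces. This is an illustrative sketch only: the class and method names here are assumptions, not the project's actual `BaseAdapter` API.

```python
import asyncio
from abc import ABC, abstractmethod


class AsyncAdapter(ABC):
    """Used when the LLM SDK exposes an async interface (claude, cohere, new_bing)."""

    @abstractmethod
    async def chat(self, messages: list) -> dict:
        ...


class SyncAdapter(ABC):
    """Used when the SDK is synchronous only (bard); served by app_sync.py."""

    @abstractmethod
    def chat(self, messages: list) -> dict:
        ...


class EchoAsyncAdapter(AsyncAdapter):
    # A stand-in backend that echoes the last user message,
    # wrapped in an OpenAI-style response shape.
    async def chat(self, messages):
        prompt = messages[-1]["content"]
        return {
            "choices": [
                {"message": {"role": "assistant", "content": f"echo: {prompt}"}}
            ]
        }


resp = asyncio.run(EchoAsyncAdapter().chat([{"role": "user", "content": "hi"}]))
print(resp["choices"][0]["message"]["content"])  # echo: hi
```

Whichever backend it talks to, an adapter's job is the same: translate an OpenAI-style request into the target SDK's call and translate the answer back.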

If you want to use the `cohere` API in ChatGPT applications, you only need to start the service, like:

```bash
python app.py -a cohere
```

Another point to note is that some adapters are synchronous, such as the current `bard`; for those you need to run the `app_sync.py` file instead, like:

```bash
python app_sync.py -a bard
```

### source code

```bash
git clone https://github.com/SimFG/to-chatgpt
cd to-chatgpt

pip install poetry
poetry install
python app.py -a new_bing
```

### docker

```bash
docker pull simfg/to_chatgpt:latest

docker run -d -p 8000:8000 simfg/to_chatgpt:0.1
```

To run the service with a specific adapter:

```bash
docker run -d -p 8000:8000 simfg/to_chatgpt:latest poetry run python app.py -a new_bing
```

## How to use it

If the service **does not respond normally, check the service console for error output.** Most likely there is a problem accessing the LLM service.

If you want to specify the port of the service, you can use the `-p` parameter.

Set the **OpenAI base URL** in the ChatGPT application to this service's address. This option is generally located near the OpenAI API key setting.
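
Once the base URL points at the service, any OpenAI-style client request hits `/v1/chat/completions` (the route registered in `app.py`). Here is a minimal stdlib sketch of such a request, assuming the default port 8000; the bearer value is a placeholder for whatever the chosen adapter expects (API key for claude/cohere, the `__Secure-1PSID` cookie value for bard):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default host/port of this service

payload = {
    # The model field is kept for OpenAI compatibility; the adapter
    # decides which backend actually answers.
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_LLM_SERVICE_KEY",  # placeholder
    },
)

# With the service running, uncomment to send the request:
# print(urllib.request.urlopen(req).read().decode())
print(req.full_url)
```

This is the same shape of request every ChatGPT client sends, which is why swapping the base URL is all it takes.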

Different adapters are used in different ways; the instructions are as follows.

1. claude

After starting the service, supply the Claude API key wherever the application asks for OPENAI_API_KEY.

2. cohere

After starting the service, supply the Cohere API key wherever the application asks for OPENAI_API_KEY.

3. new bing

Nothing extra is required, but it's **unstable**.

4. bard

After starting the service, supply the `__Secure-1PSID` cookie value wherever the application asks for OPENAI_API_KEY.

The way to get the cookie:

- Press F12 to open the developer tools.
- Go to Application → Cookies → `__Secure-1PSID`.
- Copy the value of that cookie.

more details: [Bard](https://github.com/acheong08/Bard/)

## Roadmap

### Support more llm services

- [text-generation-inference](https://github.com/huggingface/text-generation-inference)
- open-assistant

If there are other llm services, **welcome to open a pr and write it here**!!!

### Manage your llm service and its data

Beyond protocol conversion, this service could also manage LLM requests and related data.

**Of course, I may not have time to build all of the features below; they are just my personal ideas.**

- Customize the service key to limit service requests
- LLM service key management
- Request limit
- Whitelist and Blacklist IP
- Record the request history
- A page to manage request info, with show/download/delete/various charts...

## Awesome chatgpt applications

### plugins
- [openai-translator](https://github.com/openai-translator/openai-translator)
- [chathub](https://github.com/chathub-dev/chathub)

### client

- [raycast chatgpt extension](https://github.com/raycast/extensions/blob/c0f80c73f39b1cd7159e53b706c452c12648f0a9/extensions/chatgpt/README.md)

If there are other awesome chatgpt applications, **welcome to open a pr and write it here**!!!

62 changes: 62 additions & 0 deletions app.py
@@ -0,0 +1,62 @@
import argparse
from typing import Optional

import uvicorn
from fastapi import Request

from to_chatgpt import common
from to_chatgpt.common import BaseAdapter, init_app

adapter: Optional[BaseAdapter] = None
app = init_app()
server_name = "new_bing"


@app.get("/")
async def hello():
return f"hello, {server_name} to chatgpt server"


@app.api_route(
"/v1/chat/completions", methods=["POST", "OPTIONS"],
)
async def chat(request: Request):
return await common.achat(adapter, request)


def run():
parser = argparse.ArgumentParser()
parser.add_argument(
"-s", "--host", default="0.0.0.0", help="the hostname to listen on"
)
parser.add_argument(
"-p", "--port", type=int, default=8000, help="the port to listen on"
)
parser.add_argument(
"-a", "--adapter", default=server_name, help="the name of server adapter"
)
parser.add_argument("-l", "--log", default="debug", help="the log level")

args = parser.parse_args()

global adapter
if args.adapter == "new_bing":
from to_chatgpt.new_bing import NewBingAdapter

adapter = NewBingAdapter()
elif args.adapter == "claude":
from to_chatgpt.claude import ClaudeAdapter

adapter = ClaudeAdapter()
elif args.adapter == "cohere":
from to_chatgpt.cohere import CohereAdapter

adapter = CohereAdapter()
else:
raise ValueError(f"unknown adapter: {args.adapter}")

uvicorn.run(app, host=args.host, port=args.port, log_level=args.log)


if __name__ == "__main__":
run()