Community Integration: Making Open-Assistant/ChatGPT cheaper, faster, and more efficient #3471
binmakeswell started this conversation in Ideas
Replies: 1 comment
-
Thanks for sharing! I am curious about your comparison with DeepSpeed, which is what we are currently using. Was the comparison done on a fair basis? And if so, to what do you attribute the added throughput?
-
Thank you for your outstanding contribution to Open-Assistant!
AIGC, e.g., ChatGPT, has recently become one of the hottest topics in AI. We are happy to share a solution that can substantially reduce the cost of training and inference for Open-Assistant/ChatGPT!
Colossal-AI provides an optimized, open-source, low-cost solution that replicates the ChatGPT training process. Compared to PyTorch, single-machine training can be up to 7.73 times faster, and a mini demo training run requires only 1.62 GB of GPU memory. More details can be found in the blog.
Open-source code: https://github.com/hpcaitech/ColossalAI#chatgpt
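For context on the training process being replicated: ChatGPT-style training uses RLHF, whose policy-update stage typically relies on PPO. As an illustrative sketch only (this is not Colossal-AI's actual API; the function name and signature here are hypothetical), the clipped surrogate objective at the heart of that stage looks like:

```python
import math


def ppo_clipped_objective(log_prob_new, log_prob_old, advantage, clip_eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017),
    the policy-gradient step commonly used in ChatGPT-style RLHF.

    log_prob_new/log_prob_old: log-probabilities of the sampled action
    under the current and the behavior policy; advantage: estimated
    advantage of that action; clip_eps: PPO clipping range epsilon.
    """
    ratio = math.exp(log_prob_new - log_prob_old)  # pi_new / pi_old
    unclipped = ratio * advantage
    # Clamp the ratio to [1 - eps, 1 + eps] before weighting the advantage.
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps) * advantage
    # Take the pessimistic minimum so overly large policy updates are damped.
    return min(unclipped, clipped)
```

For example, when the new and old policies agree (ratio = 1), the objective is simply the advantage; when the ratio drifts outside the clipping range, the update is capped, which is what keeps RLHF fine-tuning stable.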
We would appreciate the opportunity to build this integration with you to benefit the community, and we are willing to provide any help you need in this cooperation for free, working towards the era of large AI models starting from the replication of ChatGPT!
Thank you very much.