Releases: mudler/LocalAI
v2.15.0
🎉 LocalAI v2.15.0! 🚀
Hey awesome people! I'm happy to announce the release of LocalAI version 2.15.0! This update introduces several significant improvements and features, enhancing usability, functionality, and user experience across the board. Dive into the key highlights below, and don't forget to check out the full changelog for more detailed updates.
🌍 WebUI Upgrades: Turbocharged!
🚀 Vision API Integration
The Chat WebUI now integrates seamlessly with the Vision API, making it easier to test image-processing models directly in the browser - it's a simple, hackable interface in fewer than 400 lines of code built with Alpine.js and HTMX!
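Since the chat endpoint is OpenAI-compatible, a request like the following exercises the same Vision path the WebUI uses. This is a sketch: the model name `llava` and the image URL are placeholders, and your instance must have a vision-capable model installed:

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
      ]
    }]
  }'
```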
💬 System Prompts in Chat
System prompts can now be set in the WebUI chat, guiding interactions more intuitively and making the chat interface smarter and more responsive.
🌟 Revamped Welcome Page
New to LocalAI or haven't installed any models yet? No worries! The updated welcome page now guides users through the model installation process, ensuring you're set up and ready to go without any hassle. This is a great first step for newcomers - thanks for your precious feedback!
🔄 Background Operations Indicator
Never lose track of what's happening: the new background operations indicator in the WebUI shows when tasks are running in the background.
🔍 Filter Models by Tag and Category
As our model gallery grows, you can now effortlessly sift through models by tag and category, making it a breeze to find what you need.
🔧 Single Binary Release
LocalAI is expanding into offering single binary releases, simplifying the deployment process and making it easier to get LocalAI up and running on any system.
For the moment we ship condensed builds with the AVX and SSE instruction sets disabled. We are planning to include CUDA builds as well.
🧠 Expanded Model Gallery
This release introduces several exciting new models to our gallery, such as 'Soliloquy', 'tess', 'moondream2', 'llama3-instruct-coder' and 'aurora', enhancing the diversity and capability of our AI offerings. Our selection of one-click-install models is growing! We carefully pick models from the most trending ones on Hugging Face; feel free to submit your requests in a GitHub issue, hop into our Discord, contribute by hosting your own gallery, or even add models directly to LocalAI!
Want to share your model configurations and customizations? See the docs: https://localai.io/docs/getting-started/customize-model/
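Gallery entries are plain YAML model configuration files. A minimal one might look like the sketch below - the model name, file, and context size are hypothetical placeholders, not an actual gallery entry; see the customization docs above for the full set of fields:

```yaml
# Hypothetical model configuration (values are placeholders)
name: my-llama3-instruct          # name the model is exposed under by the API
backend: llama-cpp                # backend used to load the model
parameters:
  model: Meta-Llama-3-8B-Instruct.Q4_K_M.gguf  # file under your models path
context_size: 8192
```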
📣 Let's Make Some Noise!
A gigantic THANK YOU to everyone who’s contributed—your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise, you’re the real MVPs!
Remember, LocalAI thrives on community support—not big corporate bucks. If you love what we're building, show some love! A shoutout on social (@LocalAI_OSS and @mudler_it on twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Thanks a ton, and.. enjoy this release!
What's Changed
Bug fixes 🐛
- fix(webui): correct documentation URL for text2img by @mudler in #2233
- fix(ux): fix small glitches by @mudler in #2265
Exciting New Features 🎉
- feat: update ROCM and use smaller image by @cryptk in #2196
- feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants by @mudler in #2232
- fix(webui): display small navbar with smaller screens by @mudler in #2240
- feat(startup): show CPU/GPU information with --debug by @mudler in #2241
- feat(single-build): generate single binaries for releases by @mudler in #2246
- feat(webui): ux improvements by @mudler in #2247
- fix: OpenVINO winograd always disabled by @fakezeta in #2252
- UI: flag `trust_remote_code` to users // favicon support by @dave-gray101 in #2253
- feat(ui): prompt for chat, support vision, enhancements by @mudler in #2259
🧠 Models
- fix(gallery): hermes-2-pro-llama3 models checksum changed by @Nold360 in #2236
- models(gallery): add moondream2 by @mudler in #2237
- models(gallery): add llama3-llava by @mudler in #2238
- models(gallery): add llama3-instruct-coder by @mudler in #2242
- models(gallery): update poppy porpoise by @mudler in #2243
- models(gallery): add lumimaid by @mudler in #2244
- models(gallery): add openbiollm by @mudler in #2245
- gallery: Added some OpenVINO models by @fakezeta in #2249
- models(gallery): Add Soliloquy by @mudler in #2260
- models(gallery): add tess by @mudler in #2266
- models(gallery): add lumimaid variant by @mudler in #2267
- models(gallery): add kunocchini by @mudler in #2268
- models(gallery): add aurora by @mudler in #2270
- models(gallery): add tiamat by @mudler in #2269
📖 Documentation and examples
- docs: updated Transformer parameters description by @fakezeta in #2234
- Update readme: add ShellOracle to community integrations by @djcopley in #2254
- Add missing Homebrew dependencies by @michaelmior in #2256
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #2228
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2229
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2230
- build(deps): bump tqdm from 4.65.0 to 4.66.3 in /examples/langchain/langchainpy-localai-example in the pip group across 1 directory by @dependabot in #2231
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2239
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2251
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2255
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2263
Other Changes
- test: check the response URL during image gen in `app_test.go` by @dave-gray101 in #2248
New Contributors
- @Nold360 made their first contribution in #2236
- @djcopley made their first contribution in #2254
- @michaelmior made their first contribution in #2256
Full Changelog: v2.14.0...v2.15.0
v2.14.0
🚀 AIO Image Update: llama3 has landed!
We're excited to announce that our AIO image has been upgraded with the latest LLM model, llama3, enhancing our capabilities with more accurate and dynamic responses. Behind the scenes it uses https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF, which is ready for function calling, yay!
💬 WebUI enhancements: Updates in Chat, Image Generation, and TTS
(Screenshots: Chat, TTS, and Image generation interfaces)
Our interfaces for Chat, Text-to-Speech (TTS), and Image Generation have finally landed. Enjoy streamlined and simple interactions thanks to the efforts of our team, led by @mudler, who have worked tirelessly to enhance your experience. The WebUI serves as a quick way to debug and assess models loaded in LocalAI - there is much to improve, but we now have a small, hackable interface!
🖼️ Many new models in the model gallery!
The model gallery has received a substantial upgrade with numerous new models, including Einstein v6.1, SOVL, and several specialized Llama3 iterations. These additions are designed to cater to a broader range of tasks, making LocalAI more versatile than ever. Kudos to @mudler for spearheading these exciting updates - now you can select the model you like with a couple of clicks!
🛠️ Robust Fixes and Optimizations
This update brings a series of crucial bug fixes and security enhancements to ensure our platform remains secure and efficient. Special thanks to @dave-gray101, @cryptk, and @fakezeta for their diligent work in rooting out and resolving these issues 🤗
✨ OpenVINO and more
We're introducing OpenVINO acceleration, and many OpenVINO models in the gallery. You can now enjoy fast-as-hell speed on Intel CPU and GPUs. Applause to @fakezeta for the contributions!
📚 Documentation and Dependency Upgrades
We've updated our documentation and dependencies to keep you equipped with the latest tools and knowledge. These updates ensure that LocalAI remains a robust and dependable platform.
👥 A Community Effort
A special shout-out to our new contributors, @QuinnPiers and @LeonSijiaLu, who have enriched our community with their first contributions. Welcome aboard, and thank you for your dedication and fresh insights!
Each update in this release not only enhances our platform's capabilities but also ensures a safer and more user-friendly experience. We are excited to see how our users leverage these new features in their projects - feel free to drop us a line on Twitter or any other social platform; we'd be happy to hear how you use LocalAI!
📣 Spread the word!
First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and.. exciting times ahead with LocalAI!
What's Changed
Bug fixes 🐛
- fix: `config_file_watcher.go` - root all file reads for safety by @dave-gray101 in #2144
- fix: github bump_docs.sh regex to drop emoji and other text by @dave-gray101 in #2180
- fix: undefined symbol: iJIT_NotifyEvent in import torch #2153 by @fakezeta in #2179
- fix: security scanner warning noise: error handlers part 2 by @dave-gray101 in #2145
- fix: ensure GNUMake jobserver is passed through to whisper.cpp build by @cryptk in #2187
- fix: bring everything onto the same GRPC version to fix tests by @cryptk in #2199
Exciting New Features 🎉
- feat(gallery): display job status also during navigation by @mudler in #2151
- feat: cleanup Dockerfile and make final image a little smaller by @cryptk in #2146
- fix: swap to WHISPER_CUDA per deprecation message from whisper.cpp by @cryptk in #2170
- feat: only keep the build artifacts from the grpc build by @cryptk in #2172
- feat(gallery): support model deletion by @mudler in #2173
- refactor(application): introduce application global state by @dave-gray101 in #2072
- feat: organize Dockerfile into distinct sections by @cryptk in #2181
- feat: OpenVINO acceleration for embeddings in transformer backend by @fakezeta in #2190
- chore: update go-stablediffusion to latest commit with Make jobserver fix by @cryptk in #2197
- feat: user defined inference device for CUDA and OpenVINO by @fakezeta in #2212
- feat(ux): Add chat, tts, and image-gen pages to the WebUI by @mudler in #2222
- feat(aio): switch to llama3-based for LLM by @mudler in #2225
- feat(ui): support multiline and style `ul` by @mudler in #2226
🧠 Models
- models(gallery): add Einstein v6.1 by @mudler in #2152
- models(gallery): add SOVL by @mudler in #2154
- models(gallery): add average_normie by @mudler in #2155
- models(gallery): add solana by @mudler in #2157
- models(gallery): add poppy porpoise by @mudler in #2158
- models(gallery): add Undi95/Llama-3-LewdPlay-8B-evo-GGUF by @mudler in #2160
- models(gallery): add biomistral-7b by @mudler in #2161
- models(gallery): add llama3-32k by @mudler in #2183
- models(gallery): add openvino models by @mudler in #2184
- models(gallery): add lexifun by @mudler in #2193
- models(gallery): add suzume-llama-3-8B-multilingual-gguf by @mudler in #2194
- models(gallery): add guillaumetell by @mudler in #2195
- models(gallery): add wizardlm2 by @mudler in #2209
- models(gallery): Add Hermes-2-Pro-Llama-3-8B-GGUF by @mudler in #2218
📖 Documentation and examples
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #2149
- draft:Update model-gallery.md with correct gallery file by @QuinnPiers in #2163
- docs: update gallery, add rerankers by @mudler in #2166
- docs: enhance and condense few sections by @mudler in #2178
- [Documentations] Removed invalid numberings from `troubleshooting mac` by @LeonSijiaLu in #2174
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2150
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2159
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2176
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2177
- update go-tinydream to latest commit by @cryptk in #2182
- build(deps): bump dependabot/fetch-metadata from 2.0.0 to 2.1.0 by @dependabot in #2186
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2189
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2188
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2203
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2213
Other Changes
- Revert ":arrow_up: Update docs version mudler/LocalAI" by @mudler in #2165
- Issue-1720: Updated `Build on mac` documentations by @LeonSijiaLu in #2171
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pu...
🖼️ v2.13.0 - Model gallery edition
Hello folks, Ettore here - I'm happy to announce the v2.13.0 LocalAI release is out, with many features!
Below is a small breakdown of the hottest features introduced in this release - however, there are many other improvements as well (especially from the community), so don't miss the changelog!
Check out the full changelog below for an overview of all the changes that went into this release (this one is quite packed).
🖼️ Model gallery
This is the first release with a model gallery in the WebUI: you can now see a "Models" button in the WebUI which takes you to a selection of models:
You can now choose models from stablediffusion, llama3, TTS, embeddings and more! The gallery is growing steadily and is kept up-to-date.
The models are simple YAML files which are hosted in this repository: https://github.com/mudler/LocalAI/tree/master/gallery - you can host your own repository with your model index, or if you want you can contribute to LocalAI.
If you want to contribute models, you can do so by opening a PR against the gallery directory: https://github.com/mudler/LocalAI/tree/master/gallery.
Rerankers
I'm excited to introduce a new backend for rerankers. LocalAI now implements the Jina API (https://jina.ai/reranker/#apiform) as a compatibility layer, so you can point existing Jina clients at the LocalAI address. Under the hood, it uses https://github.com/AnswerDotAI/rerankers.
You can test this by using container images with Python (this does NOT work with `core` images) and a model config file like the one below, or by installing `cross-encoder` from the gallery in the UI:
```yaml
name: jina-reranker-v1-base-en
backend: rerankers
parameters:
  model: cross-encoder
```
and test it with:
```shell
curl http://localhost:8080/v1/rerank \
  -H "Content-Type: application/json" \
  -d '{
    "model": "jina-reranker-v1-base-en",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
      "Eco-friendly kitchenware for modern homes",
      "Biodegradable cleaning supplies for eco-conscious consumers",
      "Organic cotton baby clothes for sensitive skin",
      "Natural organic skincare range for sensitive skin",
      "Tech gadgets for smart homes: 2024 edition",
      "Sustainable gardening tools and compost solutions",
      "Sensitive skin-friendly facial cleansers and toners",
      "Organic food wraps and storage solutions",
      "All-natural pet food for dogs with allergies",
      "Yoga mats made from recycled materials"
    ],
    "top_n": 3
  }'
```
Parler-tts
There is a new backend available for TTS now: `parler-tts` (https://github.com/huggingface/parler-tts). It is possible to install and configure the model directly from the gallery.
🎈 Lot of small improvements behind the scenes!
Thanks to our outstanding community, we have enhanced the performance and stability of LocalAI across various modules. From backend optimizations to front-end adjustments, every tweak helps make LocalAI smoother and more robust.
📣 Spread the word!
First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI!
What's Changed
Bug fixes 🐛
- fix(autogptq): do not use_triton with qwen-vl by @thiner in #1985
- fix: respect concurrency from parent build parameters when building GRPC by @cryptk in #2023
- ci: fix release pipeline missing dependencies by @mudler in #2025
- fix: remove build path from help text documentation by @cryptk in #2037
- fix: previous CLI rework broke debug logging by @cryptk in #2036
- fix(fncall): fix regression introduced in #1963 by @mudler in #2048
- fix: adjust some sources names to match the naming of their repositories by @cryptk in #2061
- fix: move the GRPC cache generation workflow into it's own concurrency group by @cryptk in #2071
- fix(llama.cpp): set -1 as default for max tokens by @mudler in #2087
- fix(llama.cpp-ggml): fixup `max_tokens` for old backend by @mudler in #2094
- fix missing TrustRemoteCode in OpenVINO model load by @fakezeta in #2114
- Incl ocv pkg for diffsusers utils by @jtwolfe in #2115
Exciting New Features 🎉
- feat: kong cli refactor fixes #1955 by @cryptk in #1974
- feat: add flash-attn in nvidia and rocm envs by @golgeek in #1995
- feat: use tokenizer.apply_chat_template() in vLLM by @golgeek in #1990
- feat(gallery): support ConfigURLs by @mudler in #2012
- fix: dont commit generated files to git by @cryptk in #1993
- feat(parler-tts): Add new backend by @mudler in #2027
- feat(grpc): return consumed token count and update response accordingly by @mudler in #2035
- feat(store): add Golang client by @mudler in #1977
- feat(functions): support models with no grammar, add tests by @mudler in #2068
- refactor(template): isolate and add tests by @mudler in #2069
- feat: fiber logs with zerlog and add trace level by @cryptk in #2082
- models(gallery): add gallery by @mudler in #2078
- Add tensor_parallel_size setting to vllm setting items by @Taikono-Himazin in #2085
- Transformer Backend: Implementing use_tokenizer_template and stop_prompts options by @fakezeta in #2090
- feat: Galleries UI by @mudler in #2104
- Transformers Backend: max_tokens adherence to OpenAI API by @fakezeta in #2108
- Fix cleanup sonarqube findings by @cryptk in #2106
- feat(models-ui): minor visual enhancements by @mudler in #2109
- fix(gallery): show a fake image if no there is no icon by @mudler in #2111
- feat(rerankers): Add new backend, support jina rerankers API by @mudler in #2121
🧠 Models
- models(llama3): add llama3 to embedded models by @mudler in #2074
- feat(gallery): add llama3, hermes, phi-3, and others by @mudler in #2110
- models(gallery): add new models to the gallery by @mudler in #2124
- models(gallery): add more models by @mudler in #2129
📖 Documentation and examples
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1988
- docs: fix stores link by @adrienbrault in #2044
- AMD/ROCm Documentation update + formatting fix by @jtwolfe in #2100
👒 Dependencies
- deps: Update version of vLLM to add support of Cohere Command_R model in vLLM inference by @holyCowMp3 in #1975
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1991
- build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.33.0 by @dependabot in #1998
- build(deps): bump github.com/docker/docker from 20.10.7+incompatible to 24.0.9+incompatible by @dependabot in #1999
- build(deps): bump github.com/gofiber/fiber/v2 from 2.52.0 to 2.52.1 by @dependabot in #2001
- build(deps): bump actions/checkout from 3 to 4 by @dependabot in #2002
- build(deps): bump actions/setup-go from 4 to 5 by @dependabot in #2003
- build(deps): bump peter-evans/create-pull-request from 5 to 6 by @dependabot in #2005
- build(deps): bump actions/cache from ...
v2.12.4
v2.12.3
I'm happy to announce the v2.12.3 LocalAI release is out!
🌠 Landing page and Swagger
Ever wondered what to do after LocalAI is up and running? Integration with a simple web interface has been started, and you can see now a landing page when hitting the LocalAI front page:
You can also now enjoy Swagger to try out the API calls directly:
🌈 AIO images changes
Now the default model for CPU images is https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!
🚀 OpenVINO and transformers enhancements
There is now support for OpenVINO, and transformers got token streaming support, thanks to @fakezeta!
To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples
🎈 Lot of small improvements behind the scenes!
Thanks to our outstanding community, we have enhanced several areas:
- The build time of LocalAI was sped up significantly! Thanks to @cryptk for the efforts in enhancing the build system
- @thiner worked hard to get Vision support for AutoGPTQ
- ... and much more! see down below for a full list, be sure to star LocalAI and give it a try!
📣 Spread the word!
First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI!
What's Changed
Bug fixes 🐛
- fix: downgrade torch by @mudler in #1902
- fix(aio): correctly detect intel systems by @mudler in #1931
- fix(swagger): do not specify a host by @mudler in #1930
- fix(tools): correctly render tools response in templates by @mudler in #1932
- fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
- fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
- fix(functions): respect when selected from string by @mudler in #1940
- fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
- fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
- fix(welcome): stable model list by @mudler in #1949
- fix(ci): manually tag latest images by @mudler in #1948
- fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
- fix regression #1971 by @fakezeta in #1972
Exciting New Features 🎉
- feat(aio): add intel profile by @mudler in #1901
- Enhance autogptq backend to support VL models by @thiner in #1860
- feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
- feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
- feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
- feat(welcome): add simple welcome page by @mudler in #1912
- fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
- feat(webui): add partials, show backends associated to models by @mudler in #1922
- feat(swagger): Add swagger API doc by @mudler in #1926
- feat(build): adjust number of parallel make jobs by @cryptk in #1915
- feat(swagger): update by @mudler in #1929
- feat: first pass at improving logging by @cryptk in #1956
- fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961
📖 Documentation and examples
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1903
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1904
- ⬆️ Update M0Rf30/go-tiny-dream by @M0Rf30 in #1911
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1913
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1914
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1923
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1924
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1928
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1933
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1934
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1937
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1941
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1953
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1958
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1959
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1964
Other Changes
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1927
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1960
- fix(hermes-2-pro-mistral): correct dashes in template to suppress newlines by @mudler in #1966
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1969
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1970
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1973
New Contributors
Full Changelog: v2.11.0...v2.12.3
v2.12.1
I'm happy to announce the v2.12.1 LocalAI release is out!
🌠 Landing page and Swagger
Ever wondered what to do after LocalAI is up and running? Integration with a simple web interface has been started, and you can see now a landing page when hitting the LocalAI front page:
You can also now enjoy Swagger to try out the API calls directly:
🌈 AIO images changes
Now the default model for CPU images is https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!
🚀 OpenVINO and transformers enhancements
There is now support for OpenVINO, and transformers got token streaming support, thanks to @fakezeta!
To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples
🎈 Lot of small improvements behind the scenes!
Thanks to our outstanding community, we have enhanced several areas:
- The build time of LocalAI was sped up significantly! Thanks to @cryptk for the efforts in enhancing the build system
- @thiner worked hard to get Vision support for AutoGPTQ
- ... and much more! see down below for a full list, be sure to star LocalAI and give it a try!
📣 Spread the word!
First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI!
What's Changed
Bug fixes 🐛
- fix: downgrade torch by @mudler in #1902
- fix(aio): correctly detect intel systems by @mudler in #1931
- fix(swagger): do not specify a host by @mudler in #1930
- fix(tools): correctly render tools response in templates by @mudler in #1932
- fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
- fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
- fix(functions): respect when selected from string by @mudler in #1940
- fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
- fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
- fix(welcome): stable model list by @mudler in #1949
- fix(ci): manually tag latest images by @mudler in #1948
- fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
- fix regression #1971 by @fakezeta in #1972
Exciting New Features 🎉
- feat(aio): add intel profile by @mudler in #1901
- Enhance autogptq backend to support VL models by @thiner in #1860
- feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
- feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
- feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
- feat(welcome): add simple welcome page by @mudler in #1912
- fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
- feat(webui): add partials, show backends associated to models by @mudler in #1922
- feat(swagger): Add swagger API doc by @mudler in #1926
- feat(build): adjust number of parallel make jobs by @cryptk in #1915
- feat(swagger): update by @mudler in #1929
- feat: first pass at improving logging by @cryptk in #1956
- fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961
📖 Documentation and examples
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1903
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1904
- ⬆️ Update M0Rf30/go-tiny-dream by @M0Rf30 in #1911
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1913
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1914
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1923
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1924
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1928
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1933
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1934
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1937
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1941
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1953
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1958
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1959
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1964
Other Changes
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1927
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1960
- fix(hermes-2-pro-mistral): correct dashes in template to suppress newlines by @mudler in #1966
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1969
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1970
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1973
New Contributors
Full Changelog: v2.11.0...v2.12.1
v2.12.0
I'm happy to announce the v2.12.0 LocalAI release is out!
🌠 Landing page and Swagger
Ever wondered what to do after LocalAI is up and running? Integration with a simple web interface has been started, and you can now see a landing page when hitting the LocalAI front page:
You can also now enjoy Swagger to try out the API calls directly:
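The same calls you can try from Swagger can also be issued from the command line. A minimal sketch, assuming a LocalAI instance on `localhost:8080` and a model named `hermes-2-pro-mistral` (adjust to whatever you have installed):

```shell
# Build an OpenAI-style chat completion request body.
cat > chat-request.json <<'EOF'
{
  "model": "hermes-2-pro-mistral",
  "messages": [{"role": "user", "content": "Hello, who are you?"}]
}
EOF
# Uncomment to send it against a running LocalAI instance:
# curl http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d @chat-request.json
cat chat-request.json
```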
🌈 AIO images changes
Now the default model for CPU images is https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
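Since the AIO default model is pre-configured for the functions and tools API, you can exercise it with an OpenAI-style tools request. A hedged sketch (the payload follows the OpenAI chat-completions tools schema; the `gpt-4` alias and endpoint assume the CPU AIO image defaults, and `get_weather` is a made-up tool for illustration):

```shell
# OpenAI-style tools request payload.
cat > tools-request.json <<'EOF'
{
  "model": "gpt-4",
  "messages": [{"role": "user", "content": "What is the weather in Rome?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}
EOF
# Send it to a running AIO instance:
# curl http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d @tools-request.json
cat tools-request.json
```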
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!
🚀 OpenVINO and transformers enhancements
The transformers backend now supports the OpenVINO runtime, and token streaming support landed for both OpenVINO and CUDA, thanks to @fakezeta!
To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples
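Token streaming uses the standard OpenAI-style `stream` field. A minimal sketch (the model name is a placeholder for whatever OpenVINO or transformers model you have configured):

```shell
# Setting "stream": true makes the server return SSE token chunks.
cat > stream-request.json <<'EOF'
{
  "model": "your-openvino-model",
  "messages": [{"role": "user", "content": "Tell me a short story"}],
  "stream": true
}
EOF
# Stream against a running instance (-N disables curl buffering):
# curl -N http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d @stream-request.json
cat stream-request.json
```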
🎈 Lot of small improvements behind the scenes!
Thanks to our outstanding community, we have enhanced several areas:
- The build time of LocalAI was sped up significantly, thanks to @cryptk's efforts in enhancing the build system
- @thiner worked hard to add Vision (VL) model support to the AutoGPTQ backend
- ... and much more! See the full list down below, be sure to star LocalAI and give it a try!
📣 Spread the word!
First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI!
What's Changed
Bug fixes 🐛
- fix: downgrade torch by @mudler in #1902
- fix(aio): correctly detect intel systems by @mudler in #1931
- fix(swagger): do not specify a host by @mudler in #1930
- fix(tools): correctly render tools response in templates by @mudler in #1932
- fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
- fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
- fix(functions): respect when selected from string by @mudler in #1940
- fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
- fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
- fix(welcome): stable model list by @mudler in #1949
- fix(ci): manually tag latest images by @mudler in #1948
- fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
- fix regression #1971 by @fakezeta in #1972
Exciting New Features 🎉
- feat(aio): add intel profile by @mudler in #1901
- Enhance autogptq backend to support VL models by @thiner in #1860
- feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
- feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
- feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
- feat(welcome): add simple welcome page by @mudler in #1912
- fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
- feat(webui): add partials, show backends associated to models by @mudler in #1922
- feat(swagger): Add swagger API doc by @mudler in #1926
- feat(build): adjust number of parallel make jobs by @cryptk in #1915
- feat(swagger): update by @mudler in #1929
- feat: first pass at improving logging by @cryptk in #1956
- fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961
📖 Documentation and examples
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1903
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1904
- ⬆️ Update M0Rf30/go-tiny-dream by @M0Rf30 in #1911
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1913
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1914
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1923
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1924
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1928
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1933
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1934
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1937
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1941
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1953
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1958
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1959
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1964
Other Changes
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1927
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1960
- fix(hermes-2-pro-mistral): correct dashes in template to suppress newlines by @mudler in #1966
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1969
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1970
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1973
New Contributors
Full Changelog: v2.11.0...v2.12.0
v2.11.0
Introducing LocalAI v2.11.0: All-in-One Images!
Hey everyone! 🎉 I'm super excited to share what we've been working on at LocalAI - the launch of v2.11.0. This isn't just any update; it's a massive leap forward, making LocalAI easier to use, faster, and more accessible for everyone.
🌠 The Spotlight: All-in-One Images, OpenAI in a box
Imagine having a magic box that, once opened, gives you everything you need to get your AI project off the ground with generative AI. A full clone of OpenAI in a box. That's exactly what our AIO images are! Designed for both CPU and GPU environments, these images come pre-packed with a full suite of models and backends, ready to go right out of the box.
Whether you're using Nvidia, AMD, or Intel, we've got an optimized image for you. If you are running CPU-only, you can enjoy even smaller and lighter images.
To start LocalAI, pre-configured with function calling, LLM, TTS, speech-to-text, and image generation, just run:
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
## Do you have an Nvidia GPU? Use one of these instead
## CUDA 11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-11
## CUDA 12
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-12
❤️ Why You're Going to Love AIO Images:
- Ease of Use: Say goodbye to the setup blues. With AIO images, everything is configured upfront, so you can dive straight into the fun part - hacking!
- Flexibility: CPU, Nvidia, AMD, Intel? We support them all. These images are made to adapt to your setup, not the other way around.
- Speed: Spend less time configuring and more time innovating. Our AIO images are all about getting you across the starting line as fast as possible.
🌈 Jumping In Is a Breeze:
Getting started with AIO images is as simple as pulling from Docker Hub or Quay and running the image. We take care of the rest, downloading all necessary models for you. For all the details, including how to customize your setup with environment variables, our updated docs have got you covered here, while you can find more details about the AIO images here.
🎈 Vector Store
Thanks to the great contribution from @richiejp, LocalAI now has a new backend type, "vector stores", that allows you to use LocalAI as an in-memory vector DB (#1792). You can learn more about it here!
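A hedged sketch of the new stores API (endpoint and field names as described in the stores documentation; double-check them against your LocalAI version). First store some vectors with their values, then query for the nearest neighbour:

```shell
# Payload to store two toy embeddings with associated values.
cat > store-set.json <<'EOF'
{
  "keys": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
  "values": ["first document", "second document"]
}
EOF
# Payload to find the closest stored entry to a query vector.
cat > store-find.json <<'EOF'
{
  "key": [0.1, 0.2, 0.3],
  "topk": 1
}
EOF
# Run against a live instance:
# curl http://localhost:8080/stores/set  -H "Content-Type: application/json" -d @store-set.json
# curl http://localhost:8080/stores/find -H "Content-Type: application/json" -d @store-find.json
cat store-set.json store-find.json
```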
🐛 Bug fixes
This release contains major bug fixes to the watchdog component, and a fix for a regression introduced in v2.10.x which caused `--f16`, `--threads` and `--context-size` to not be applied as model defaults.
🎉 New Model defaults for llama.cpp
Model defaults have changed to automatically offload the maximum number of GPU layers if a GPU is available, and saner defaults are now applied to models to enhance the LLM's output.
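If you prefer to pin these values yourself, the model YAML config still takes precedence over the new defaults. A minimal sketch (field names as used in LocalAI model configs; the file name, model file and values are illustrative only):

```yaml
# example-model.yaml - explicit overrides for a llama.cpp model
name: my-model
parameters:
  model: my-model.Q4_K_M.gguf
context_size: 4096
f16: true
gpu_layers: 35   # cap offloaded layers instead of offloading the maximum
```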
🧠 New pre-configured models
You can now run `llava-1.6-vicuna`, `llava-1.6-mistral` and `hermes-2-pro-mistral`; see Run other models for a list of all the pre-configured models available in the release.
📣 Spread the word!
First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI!
🔗 Links
- Quickstart docs (how to run with AIO images): https://localai.io/basics/getting_started/
- More reference on AIO image: https://localai.io/docs/reference/aio-images/
- List of embedded models that can be started: https://localai.io/docs/getting-started/run-other-models/
🎁 What's More in v2.11.0?
Bug fixes 🐛
- fix(config): pass by config options, respect defaults by @mudler in #1878
- fix(watchdog): use ShutdownModel instead of StopModel by @mudler in #1882
- NVIDIA GPU detection support for WSL2 environments by @enricoros in #1891
- Fix NVIDIA VRAM detection on WSL2 environments by @enricoros in #1894
Exciting New Features 🎉
- feat(functions/aio): all-in-one images, function template enhancements by @mudler in #1862
- feat(aio): entrypoint, update workflows by @mudler in #1872
- feat(aio): add tests, update model definitions by @mudler in #1880
- feat(stores): Vector store backend by @richiejp in #1795
- ci(aio): publish hipblas and Intel GPU images by @mudler in #1883
- ci(aio): add latest tag images by @mudler in #1884
🧠 Models
📖 Documentation and examples
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1856
- docs(mac): improve documentation for mac build by @tauven in #1873
- docs(aio): Add All-in-One images docs by @mudler in #1887
- fix(aio): make image-gen for GPU functional, update docs by @mudler in #1895
👒 Dependencies
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1508
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1857
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1864
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1866
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1867
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1874
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1875
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1881
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1885
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1889
Other Changes
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1896
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1897
New Contributors
- @enricoros made their first contribution in #1891
Full Changelog: v2.10.1...v2.11.0
v2.10.1
What's Changed
Bug fixes 🐛
- fix(llama.cpp): fix eos without cache by @mudler in #1852
- fix(config): default to debug=false if not set by @mudler in #1853
- fix(config-watcher): start only if config-directory exists by @mudler in #1854
Exciting New Features 🎉
Other Changes
- fixes #1051: handle openai presence and request penalty parameters by @blob42 in #1817
- fix(make): allow to parallelize jobs by @cryptk in #1845
- fix(go-llama): use llama-cpp as default by @mudler in #1849
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1847
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1848
- test/fix: OSX Test Repair by @dave-gray101 in #1843
Full Changelog: v2.10.0...v2.10.1
v2.10.0
LocalAI v2.10.0 Release Notes
Excited to announce the release of LocalAI v2.10.0! This version introduces significant changes, including breaking changes, numerous bug fixes, exciting new features, dependency updates, and more. Here's a summary of what's new:
Breaking Changes 🛠
- The `trust_remote_code` setting in the model's YAML config file is now consumed, for enhanced security measures, also by the AutoGPTQ and transformers backends, thanks to @dave-gray101's contribution (#1799). If your model relied on the old behavior and you are sure of what you are doing, set `trust_remote_code: true` in the YAML config file.
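For example, a model config that opts back in might look like this (a sketch: only `trust_remote_code` is the setting introduced here, the other fields are illustrative placeholders):

```yaml
# my-transformers-model.yaml
name: my-transformers-model
backend: transformers
parameters:
  model: some-org/some-model
trust_remote_code: true  # only enable if you trust the model's custom code
```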
Bug Fixes 🐛
- Various fixes have been implemented to enhance the stability and performance of LocalAI:
  - SSE no longer omits empty `finish_reason` fields for better compatibility with the OpenAI API, fixed by @mudler (#1745).
  - Functions now correctly handle scenarios with no results, also addressed by @mudler (#1758).
  - A Command Injection Vulnerability has been fixed by @ouxs-19 (#1778).
  - OpenCL-based builds for llama.cpp have been restored, thanks to @cryptk's efforts (#1828, #1830).
  - An issue with the OSX build `default.metallib` has been resolved, which should now allow running the llama-cpp backend on Apple arm64, fixed by @dave-gray101 (#1837).
Exciting New Features 🎉
- LocalAI continues to evolve with several new features:
  - Ongoing implementation of the assistants API, making great progress thanks to community contributions, including an initial implementation by @christ66 (#1761).
  - Addition of diffusers/transformers support for Intel GPU - now you can generate images and use the `transformers` backend also on Intel GPUs, implemented by @mudler (#1746).
  - Introduction of Bitsandbytes quantization for the transformers backend, plus a fix for a transformers backend error on CUDA, by @fakezeta (#1823).
  - Compatibility layers for Elevenlabs and OpenAI TTS, enhancing text-to-speech capabilities: LocalAI is now compatible with the Elevenlabs and OpenAI TTS APIs, thanks to @mudler (#1834).
  - vLLM now supports `stream: true`! This feature was introduced by @golgeek (#1749).
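A hedged sketch of the OpenAI-compatible TTS call (the endpoint and request fields mirror OpenAI's `/v1/audio/speech` API, which the compatibility layer targets; model and voice names are placeholders, so check the TTS docs for what your setup accepts):

```shell
# OpenAI-style text-to-speech request body.
cat > tts-request.json <<'EOF'
{
  "model": "tts-1",
  "input": "Hello from LocalAI!",
  "voice": "alloy"
}
EOF
# Generate audio from a running instance:
# curl http://localhost:8080/v1/audio/speech \
#   -H "Content-Type: application/json" -d @tts-request.json -o speech.wav
cat tts-request.json
```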
Dependency Updates 👒
- Our continuous effort to keep dependencies up-to-date includes multiple updates to `ggerganov/llama.cpp`, `donomii/go-rwkv.cpp`, `mudler/go-stable-diffusion`, and others, ensuring that LocalAI is built on the latest and most secure libraries.
Other Changes
- Several internal changes have been made to improve the development process and documentation, including updates to integration guides, stress reduction on self-hosted runners, and more.
Details of What's Changed
Breaking Changes 🛠
- feat(autogpt/transformers): consume `trust_remote_code` by @dave-gray101 in #1799
Bug fixes 🐛
- fix(sse): do not omit empty finish_reason by @mudler in #1745
- fix(functions): handle correctly when there are no results by @mudler in #1758
- fix(tests): re-enable tests after code move by @mudler in #1764
- Fix Command Injection Vulnerability by @ouxs-19 in #1778
- fix: the correct BUILD_TYPE for OpenCL is clblas (with no t) by @cryptk in #1828
- fix: missing OpenCL libraries from docker containers during clblas docker build by @cryptk in #1830
- fix: osx build default.metallib by @dave-gray101 in #1837
Exciting New Features 🎉
- fix: vllm - use AsyncLLMEngine to allow true streaming mode by @golgeek in #1749
- refactor: move remaining api packages to core by @dave-gray101 in #1731
- Bump vLLM version + more options when loading models in vLLM by @golgeek in #1782
- feat(assistant): Initial implementation of assistants api by @christ66 in #1761
- feat(intel): add diffusers/transformers support by @mudler in #1746
- fix(config): set better defaults for inferencing by @mudler in #1822
- fix(docker-compose): update docker compose file by @mudler in #1824
- feat(model-help): display help text in markdown by @mudler in #1825
- feat: Add Bitsandbytes quantization for transformer backend enhancement #1775 and fix: Transformer backend error on CUDA #1774 by @fakezeta in #1823
- feat(tts): add Elevenlabs and OpenAI TTS compatibility layer by @mudler in #1834
- feat(embeddings): do not require to be configured by @mudler in #1842
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1752
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1753
- deps(llama.cpp): update by @mudler in #1759
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1756
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1767
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1772
- ⬆️ Update donomii/go-rwkv.cpp by @localai-bot in #1771
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1779
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1789
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1791
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1794
- depedencies(sentencentranformers): update dependencies by @TwinFinz in #1797
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1801
- ⬆️ Update mudler/go-stable-diffusion by @localai-bot in #1802
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1805
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1811
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1827
Other Changes
- ci: add stablediffusion to release by @sozercan in #1757
- Update integrations.md by @Joshhua5 in #1765
- ci: reduce stress on self-hosted runners by @mudler in #1776
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1785
- Revert "feat(assistant): Initial implementation of assistants api" by @mudler in #1790
- Edit links in readme and integrations page by @lunamidori5 in #1796
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1813
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1816
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1818
- fix(doc/examples): set defaults to mirostat by @mudler in #1820
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1821
- fix: OSX Build Files for llama.cpp by @dave-gray101 in #1836
- ⬆️ Update go-skynet/go-llama.cpp by @localai-bot in #1835
- docs(transformers): add docs section about transformers by @mudler in #1841
- ⬆️ Update mudler/go-piper by @localai-bot in #1844
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1840
New Contributors
- @golgeek made their first contribution in #1749
- @Joshhua5 made their first contribution in #1765
- @ouxs-19 made their first contribution in #1778
- @TwinFinz made their first contribution in #1797
- @cryptk made their first contribution in #1828
- @fakezeta made their first contribution in #1823
Thank you to all contributors and users for your continued support and feedback, making LocalAI better with each release!
Full Changelog: v2.9.0...v2.10.0