diff --git a/README.md b/README.md
index 86bd851d..a4715e27 100644
--- a/README.md
+++ b/README.md
@@ -60,7 +60,7 @@ We warmly welcome contributions of all kinds! For guidelines on how to get invol
 | Model | Provider | HuggingFace Collection | Blog | Description |
 |:-------------|:-------------|:----------------------------:|:----------------------------:|:----------------------------|
 |DeepSeek | DeepSeek | [DeepSeek-V3.2](https://huggingface.co/deepseek-ai/DeepSeek-V3.2)<br>[DeepSeek-R1](https://huggingface.co/collections/deepseek-ai/deepseek-r1) | [DeepSeek AI Launches Revolutionary Language Model](https://deepseek.ai/blog/deepseek-v32) | DeepSeek's language-model family, including DeepSeek-V3.2 and the reasoning-focused DeepSeek-R1, offering strong capabilities in text generation, comprehension, and analysis. |
-|MiniMax-M2 | MiniMax AI | [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2) | [MiniMax M2 & Agent: Ingenious in Simplicity](https://www.minimax.io/news/minimax-m2) | MiniMax-M2 is a compact, fast, and cost-effective MoE model (230B parameters, 10B active) built for advanced coding and agentic workflows. It offers state-of-the-art intelligence and coding abilities, delivering efficient, reliable tool use and strong multi-step reasoning for developers and agents, with high throughput and low latency for easy deployment. |
+|MiniMax-M2 | MiniMax AI | [MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)<br>[MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1) | [MiniMax M2.1: Significantly Enhanced Multi-Language Programming](https://www.minimax.io/news/minimax-m21) | MiniMax-M2.1 is an enhanced sparse MoE model (230B parameters, 10B active) built for advanced coding and agentic workflows. It offers state-of-the-art intelligence, delivering efficient, reliable tool use and strong multi-step reasoning. |
 |GLM | Z AI | [GLM-4.7](https://huggingface.co/zai-org/GLM-4.7)<br>[GLM-4.6](https://huggingface.co/zai-org/GLM-4.6) | [GLM-4.7: Advancing the Coding Capability](https://z.ai/blog/glm-4.7) | "GLM" is an advanced large language model series from Z AI, including GLM-4.6 and GLM-4.7. These models feature long-context support, strong coding and reasoning performance, enhanced tool-use and agent integration, and competitive results across leading open-source benchmarks. |
 |Kimi-K2 | Moonshot AI | [Kimi-K2](https://huggingface.co/collections/moonshotai/kimi-k2-6871243b990f2af5ba60617d) | [Kimi K2: Open Agentic Intelligence](https://moonshotai.github.io/Kimi-K2/) | "Kimi-K2" is Moonshot AI's Kimi-K2 model family, including Kimi-K2-Base, Kimi-K2-Instruct and Kimi-K2-Thinking. Kimi K2 Thinking is a state-of-the-art open-source agentic model designed for deep, step-by-step reasoning and dynamic tool use. It features native INT4 quantization and a 256k context window for fast, memory-efficient inference. Uniquely stable in long-horizon tasks, Kimi K2 enables reliable autonomous workflows with consistent performance across hundreds of tool calls. |
 |Qwen | Qwen | [Qwen3-Next](https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d)<br>[Qwen3](https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f)<br>[Qwen2.5](https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e) | [Qwen3-Next: Towards Ultimate Training & Inference Efficiency](https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9cb7d27cd&from=research.latest-advancements-list) | The Qwen series is a family of large language models developed by Alibaba's Qwen team. It includes multiple generations such as Qwen2.5, Qwen3, and Qwen3-Next, which improve upon model architecture, efficiency, and capabilities. The models are available in various sizes and instruction-tuned versions, support cutting-edge features like long context and quantization, and are suitable for a wide range of language tasks and open-source use cases. |
diff --git a/src/backend/server/static_config.py b/src/backend/server/static_config.py
index 1933f5bb..d3b0b4d2 100644
--- a/src/backend/server/static_config.py
+++ b/src/backend/server/static_config.py
@@ -33,6 +33,7 @@
"zai-org/GLM-4.5-Air": "lmstudio-community/GLM-4.5-Air-MLX-8bit",
"zai-org/GLM-4.7": "mlx-community/GLM-4.7-4bit",
# Other Models
+ "MiniMaxAI/MiniMax-M2.1": "mlx-community/MiniMax-M2.1-4bit",
"MiniMaxAI/MiniMax-M2": "mlx-community/MiniMax-M2-4bit",
# ======================================= End ========================================#
#