[1.0.0-rc.3]: https://github.com/agentjido/req_llm/releases/tag/v1.0.0-rc.3
[1.0.0-rc.2]: https://github.com/agentjido/req_llm/releases/tag/v1.0.0-rc.2
[1.0.0-rc.1]: https://github.com/agentjido/req_llm/releases/tag/v1.0.0-rc.1

<!-- changelog -->

## [v1.1.0](https://github.com/agentjido/req_llm/compare/v1.0.0...v1.1.0) (2025-12-21)

### Features:

* preserve cache_control metadata in OpenAI content encoding (#291) by Itay Adler

* add load_dotenv config option to control .env file loading (#287) by mikehostetler

* Support inline JSON credentials for Google Vertex AI (#260) by shelvick

* anthropic: Add message caching support for conversation prefixes (#281) by shelvick

* anthropic: Add offset support to message caching by shelvick

* vertex: Add Google Search grounding support for Gemini models (#284) by shelvick

* add AI PR review workflow by mikehostetler

* change to typedstruct (#256) by JoeriDijkstra

* Add Google Context Caching support for Gemini models (#193) by neilberkman

* Add Google Vertex Gemini support by Neil Berkman

* Add credential fallback for fixture recording (#218) by neilberkman

* Integrate llm_db for model metadata (v1.1.0) (#212) by mikehostetler

* req_llm: accept LLMDB.Model; remove runtime fields from Model struct by mikehostetler

* allow task_type with google embeddings by Kasun Vithanage

* add StreamResponse.process_stream/2 for real-time callbacks (#178) by Edgar Gomes

### Bug Fixes:

* Propagate streaming errors to process_stream result (#286) by mikehostetler

* Add anthropic_cache_messages to Bedrock and Vertex schemas by shelvick

* bedrock: Remove incorrect Converse API requirement for inference profiles by shelvick

* vertex: Extract google_grounding from nested provider_options by shelvick

* vertex: Remove incorrect camelCase transformation for grounding tools by shelvick

* increase default timeout for OpenAI reasoning models (#252) by mikehostetler

* merge consecutive tool results into single user message (#243) (#250) by mikehostetler

* respect existing env vars when loading .env (#239) (#249) by mikehostetler

* typespec on object generation to allow zoi schemas (#208) by Kasun Vithanage

### Refactoring:

* req_llm: move max_retries to request options by mikehostetler

* req_llm: delegate model metadata to LLMDB; keep provider registry by mikehostetler