A TypeScript/JavaScript library for building composable, chainable, and transactional chat flows with Ollama models. ollama-chain provides a fluent API for constructing, managing, and executing chat conversations, with streaming and transaction support. Based on the Ollama JavaScript library.
- Fluent, chainable API for building chat prompts
- System, user, and assistant message management
- Streaming and non-streaming responses
- Transaction support (begin, commit, rollback message history)
- Customizable model, options, and response format
- Easy integration with Ollama's API
- Language support for multilingual conversations
- Short response mode for concise answers
- Logging capabilities for debugging and monitoring
- TypeScript support with full type definitions
```bash
npm install ollama-chain
```
TypeScript

```typescript
import OllamaChain from "ollama-chain";

const main = async () => {
  const ollamachain = OllamaChain();
  const response = await ollamachain()
    .model("gemma3:4b")
    .logger(true) // Enable logging
    .setLanguage("eng") // Set response language
    .shortResponse() // Enable short response mode
    .systemMessage("You are a helpful assistant.")
    .userMessage("What is the capital of France?")
    .stream({ temperature: 0.7, top_p: 0.9 });

  let responseText = "";
  for await (const chunk of response) {
    responseText += chunk.message?.content || "";
    console.log("responseText:", responseText);
  }
  console.log("Response finished.");
};

main();
```

```typescript
const response = await ollamachain()
  .model("gemma3:4b")
  .systemMessage("You are a helpful assistant.")
  .userMessage("Tell me a joke.")
  .chat({ temperature: 0.7 });

console.log(response.message.content);
```

```typescript
const chain = ollamachain();
chain.trx();
chain.userMessage("First message");
// ... add more messages
chain.rollback();
```

CommonJS
```javascript
const { OllamaChain } = require("ollama-chain");

const main = async () => {
  const ollamachain = OllamaChain();
  const response = await ollamachain()
    .model("gemma3:4b")
    .systemMessage("You are a helpful assistant.")
    .userMessage("What is the capital of France?")
    .stream({ temperature: 0.7, top_p: 0.9 });

  let responseText = "";
  for await (const chunk of response) {
    responseText += chunk.message?.content || "";
    console.log("responseText:", responseText);
  }
  console.log("Response finished.");
};

main();
```

```typescript
ollamachain()
  .setLanguage("ukr") // Set Ukrainian language
  .userMessage("Tell me about Ukraine");
```

```typescript
ollamachain()
  .shortResponse() // Enable concise responses
  .userMessage("What is quantum computing?");
```

```typescript
ollamachain()
  .stepByStep() // Enable step-by-step problem solving
  .userMessage("How do I solve a quadratic equation?");
```

```typescript
ollamachain()
  .thinking() // Instruct the model to show its thought process before answering
  .userMessage("Why is the sky blue?");
```

```typescript
ollamachain()
  .logger(true) // Enable logging for debugging
  .userMessage("Debug this conversation");
```

Check out more examples in the examples directory:
- `chat.ts` - Interactive chat examples
- `prompt-builder.ts` - Advanced prompt construction
- `stream.ts` - Streaming response handling
- `transaction.ts` - Message history management
- `set-language.ts` - Multilingual conversation examples
Tested with gemma3:4b and compatible with other Ollama models.
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.
- `.model(modelName: string)` — Set the Ollama model to use
- `.chat(options?: object)` — Get a single response (non-streaming)
- `.stream(options?: object)` — Get a streaming response (async iterable)
- `.execute(query: ChatRequestStream | ChatRequestBase)` — Execute a custom query object directly
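Since `.execute()` takes a query object directly, it helps to see the shape such an object can take. This sketch builds a plain request object whose field names (`model`, `messages`, `stream`, `options`) follow the Ollama chat API; the exact `ChatRequestBase` type in ollama-chain is assumed to mirror them, so treat the interface name and shape as illustrative:

```typescript
// Illustrative request shape; field names follow the Ollama chat API, but
// ollama-chain's real ChatRequestBase type may differ in detail.
interface SketchChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  stream?: boolean;
  options?: { temperature?: number; top_p?: number };
}

const query: SketchChatRequest = {
  model: "gemma3:4b",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of France?" },
  ],
  stream: false,
  options: { temperature: 0.7 },
};
// With the real library installed, an object like this would be
// handed to .execute(query).
```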
- `.systemMessage(message: string, overload?: boolean)` — Add or update a system message. When `overload` is true, replaces the existing system message completely. When false (the default), appends the new message to the existing system message with a newline separator
- `.userMessage(message: string)` — Add a user message to the conversation
- `.assistantMessage(message: string)` — Add an assistant message to the conversation
- `.getHistory()` — Get the current message history array
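The append-versus-replace behaviour of `systemMessage` can be pinned down with a small self-contained sketch. `setSystemMessage` here is a hypothetical helper reproducing the documented semantics, not the library's actual internals:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper mirroring the documented semantics: overload=true
// replaces the system message outright; overload=false (default) appends
// to the existing system message with a newline separator.
function setSystemMessage(history: Message[], text: string, overload = false): Message[] {
  const existing = history.find((m) => m.role === "system");
  if (!existing) return [{ role: "system", content: text }, ...history];
  existing.content = overload ? text : `${existing.content}\n${text}`;
  return history;
}

let history: Message[] = [];
history = setSystemMessage(history, "You are a helpful assistant.");
history = setSystemMessage(history, "Answer briefly."); // appended with "\n"
history = setSystemMessage(history, "You are a pirate.", true); // replaced
```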
- `.setLanguage(language: string)` — Set the response language (e.g., "eng", "ukr")
- `.detailedResponse()` — Configure the model to provide detailed, comprehensive responses
- `.shortResponse()` — Configure the model to provide concise, brief responses
- `.thinking()` — Instructs the model to write its thought process and reasoning before answering the question
- `.stepByStep()` — Instructs the model to break down the problem and solve it step by step
- `.format(format?: ResponseFormat)` — Set custom response format parameters
- `.trx()` — Begin a transaction to track message history changes
- `.commit()` — Save changes and end the current transaction
- `.rollback()` — Revert the message history to its state before the transaction started
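One simple way to obtain these transaction semantics is a snapshot of the history taken at `trx()` time, restored on `rollback()` and discarded on `commit()`. The class below is a self-contained illustration of that behaviour, not ollama-chain's actual implementation:

```typescript
type Msg = { role: string; content: string };

// Snapshot-based transaction sketch illustrating the documented
// trx/commit/rollback behaviour (hypothetical, not the library's source).
class HistoryTransaction {
  private history: Msg[] = [];
  private snapshot: Msg[] | null = null;

  add(role: string, content: string): void {
    this.history.push({ role, content });
  }
  trx(): void {
    // Copy the history at transaction start.
    this.snapshot = this.history.map((m) => ({ ...m }));
  }
  commit(): void {
    this.snapshot = null; // keep the changes, drop the snapshot
  }
  rollback(): void {
    if (this.snapshot) {
      this.history = this.snapshot; // restore pre-transaction history
      this.snapshot = null;
    }
  }
  getHistory(): Msg[] {
    return this.history;
  }
}

const chain = new HistoryTransaction();
chain.add("user", "kept message");
chain.trx();
chain.add("user", "discarded message");
chain.rollback(); // history is back to the single pre-transaction message
```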
- `.logger(isActive: boolean)` — Enable or disable query logging for debugging
- `.keepAlive(param: string | number)` — Set how long to keep the model loaded. Accepts a number (seconds) or a duration string ("300ms", "1.5h", "2h45m")
- `.toQuery(options?: object)` — Get the raw query object for inspection or custom execution
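The duration strings accepted by `keepAlive` appear to follow Go-style duration syntax, which Ollama itself uses for `keep_alive`. A hypothetical parser (not part of ollama-chain's API) that pins down the semantics of the documented examples:

```typescript
// Hypothetical parser converting keepAlive duration strings such as
// "300ms", "1.5h", or "2h45m" into seconds; plain numbers pass through.
// Illustrative only; not part of ollama-chain's API.
function keepAliveSeconds(param: string | number): number {
  if (typeof param === "number") return param;
  const unitMs: Record<string, number> = { ms: 1, s: 1000, m: 60000, h: 3600000 };
  const re = /(\d+(?:\.\d+)?)(ms|s|m|h)/g;
  let totalMs = 0;
  let match: RegExpExecArray | null;
  while ((match = re.exec(param)) !== null) {
    totalMs += parseFloat(match[1]) * unitMs[match[2]];
  }
  return totalMs / 1000;
}
```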