Your Local AI Pair Programmer: Point this AI assistant to your project folder and start asking questions!
It reads your code locally to provide truly context-aware help, explanations, and suggestions directly related to your files. Code smarter, locally.
- Python: Version 3.11 or higher recommended.
- `tree` command: The `get_tree_folders` tool relies on this.
  - Linux (Debian/Ubuntu): `sudo apt update && sudo apt install tree`
  - macOS (using Homebrew): `brew install tree`
- Clone the repository:

  ```shell
  git clone https://github.com/Bessouat40/coding-assistant
  cd coding-assistant
  ```

- Install Python dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- API Keys & Settings: Sensitive information like LLM API keys should be stored in a `.env` file in the project's root directory.
  - Copy the example file, or create a new file named `.env`:

    ```shell
    cp .env.example .env
    ```

  - Edit the `.env` file and add your keys:

    ```
    # .env
    MISTRAL_API_KEY=your_mistral_api_key_here
    # GOOGLE_API_KEY=your_google_api_key_here
    # Add other variables if needed, e.g., MAX_CONTEXT_TOKENS=7000
    ```

- LLM Provider: To change the LLM provider, open `api/utils.py` and modify the `load_llm` function: uncomment the line for the provider you want. In the snippet below, the active default is `ChatOllama`:
```python
from langchain_ollama import ChatOllama
# from langchain_mistralai import ChatMistralAI
# from langchain_google_genai import ChatGoogleGenerativeAI

def load_llm(logger):
    try:
        model = ChatOllama(model="llama3.1:8b")
        # model = ChatMistralAI(model="codestral-latest")
        # model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-001")
        logger.info(f"LLM model '{model.model}' initialized successfully.")
        return model
    except Exception as e:
        logger.error(f"Failed to initialize the LLM model: {e}")
        raise RuntimeError(f"Could not initialize LLM: {e}") from e
```
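Returning to the `.env` file above: for reference, here is a minimal, hypothetical sketch of how `KEY=VALUE` pairs from `.env` could be exported into the process environment at startup (the project may well use a library such as `python-dotenv` instead; this is not the project's actual code):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Export simple KEY=VALUE lines from a .env-style file.

    Hypothetical sketch: skips blank lines and comments, and does
    not overwrite variables that are already set in the environment.
    """
    if not os.path.isfile(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip comments, blanks, malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```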
A convenience script, `launch_assistant.sh`, is provided to start all components.

- Make the script executable:

  ```shell
  chmod +x launch_assistant.sh
  ```

- Run the script:

  ```shell
  ./launch_assistant.sh
  ```
This script will:

- Load environment variables from `.env`.
- Start the MCP Tool Server (`agent_tools.py`) in the background (logs to `mcp_server.log`).
- Start the FastAPI Backend (`api.py`) in the background (logs to `fastapi_api.log`).
- Start the Streamlit UI (`streamlit_app.py`) in the background (logs to `streamlit_ui.log`).
- Print the URLs and PIDs for each component.

You can view logs in the specified `.log` files for debugging. Press `Ctrl+C` in the terminal where you ran the script to stop all components gracefully.
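The steps above correspond roughly to a script of this shape (a simplified, hypothetical sketch; consult `launch_assistant.sh` itself for the exact commands):

```shell
#!/bin/bash
# Simplified sketch of launch_assistant.sh (hypothetical, not the real script)

# Export variables from .env if it exists
set -a
if [ -f .env ]; then . ./.env; fi
set +a

# Start each component in the background, redirecting its logs
python agent_tools.py          > mcp_server.log   2>&1 & MCP_PID=$!
python api.py                  > fastapi_api.log  2>&1 & API_PID=$!
streamlit run streamlit_app.py > streamlit_ui.log 2>&1 & UI_PID=$!

echo "PIDs: MCP=$MCP_PID API=$API_PID UI=$UI_PID"

# On Ctrl+C, stop all background components
trap 'kill $MCP_PID $API_PID $UI_PID 2>/dev/null' INT
wait
```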
- Access the UI: Open your web browser and navigate to the Streamlit URL (usually `http://localhost:8501`).
- Set Project Directory: In the sidebar, enter the absolute path to the local project directory you want the assistant to work with. Click "Set Directory".
- Chat: Use the chat input at the bottom to ask questions about the code in the specified directory. Examples:
  - "Explain the purpose of the `run_agent` function in `api.py`."
  - "Show me the file structure of this project." (Uses `tree`)
  - "What arguments does the `generate_response_api` function take?"
  - "Read the contents of `prompt.py`." (Uses `cat`)
  - "Can you suggest improvements to the error handling in `streamlit_app.py`?"
- Follow-up: The assistant remembers the context of your current chat session. Ask follow-up questions naturally.
- Clear History: Use the "Clear Chat History" button in the sidebar to start a fresh conversation.
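Session memory is necessarily bounded by the model's context window; below is a hypothetical sketch of how chat history could be trimmed to a budget like the `MAX_CONTEXT_TOKENS` variable mentioned above (the project's actual trimming logic, if any, may differ, and the 4-characters-per-token heuristic is an assumption):

```python
def trim_history(messages: list[str], max_tokens: int = 7000) -> list[str]:
    """Keep the most recent messages that fit a rough token budget.

    Hypothetical sketch: estimates ~4 characters per token and walks
    the history newest-first, dropping the oldest messages that no
    longer fit.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):    # newest message first
        cost = max(1, len(msg) // 4)  # crude token estimate
        if used + cost > max_tokens:
            break                     # older messages are dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))       # restore chronological order
```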