PhenoAssistant

This is the official code repository for our paper "PhenoAssistant: A Conversational Multi-Agent AI System for Automated Plant Phenotyping".

(✨New) PhenoAssistant is a multi-agent AI system designed to streamline complex plant phenotyping workflows through natural language interaction. It integrates a central manager agent with a specialised toolkit that combines modules (e.g., an extendable vision model zoo) and other LLM agents with specific roles (e.g., coding and data visualisation). Together, these components enable phenotype extraction and subsequent data analysis tasks, including statistical testing, plot creation, and pipeline reproduction, allowing users to conduct image-based plant phenotyping with less technical effort.

[Figure: PhenoAssistant overview]

[Figure: Chat logs]

Key components of PhenoAssistant

  • The implementation of the agents is available in agents.py
  • The implementation of the tools is available in the functions directory

Environment setup and demo

To try the demo, make sure you have a GPU (to run the deep learning models) and an Azure OpenAI (or OpenAI) API key available, then follow these steps:

  1. Clone the repository:
    git clone https://github.com/fengchen025/PhenoAssistant.git
  2. Navigate into the project directory:
    cd PhenoAssistant
  3. Create the conda environment (this may take ~15 minutes):
    conda env create -f environment.yml
  4. Activate the environment:
    conda activate phenoassistant
  5. Install the requirements for Leaf-only-sam (a quick check that the downloaded checkpoint loads correctly is sketched after this list):
    • mkdir -p ./models
    • pip install git+https://github.com/facebookresearch/segment-anything.git
    • wget -O ./models/sam_vit_h_4b8939.pth https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
  6. Set up .env.yaml with your API key. See comments inside the file for guidance.
  7. Run the demo in demo.ipynb. Depending on your machine, it may take ~15 minutes to complete. Example outputs are shown in the notebook and saved to ./results/demo.
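
A minimal sketch for verifying that the segment-anything install and the SAM checkpoint from step 5 load correctly; the image path is a placeholder for any plant image you have, and this is an optional check, separate from the demo itself:

    # Optional sanity check: load the ViT-H SAM checkpoint downloaded in step 5
    # and run it on one image. The image path below is a placeholder.
    import numpy as np
    import torch
    from PIL import Image
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    device = "cuda" if torch.cuda.is_available() else "cpu"
    sam = sam_model_registry["vit_h"](checkpoint="./models/sam_vit_h_4b8939.pth")
    sam.to(device)

    # SAM expects an RGB uint8 array of shape (H, W, 3).
    image = np.array(Image.open("path/to/plant_image.png").convert("RGB"))
    masks = SamAutomaticMaskGenerator(sam).generate(image)
    print(f"SAM produced {len(masks)} masks")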

Note: All case studies, evaluations, and demo results were generated using GPT-4o (version: 2024-08-06) via Azure OpenAI. Using a different model or provider may lead to different results.
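
If you want to confirm that your Azure OpenAI credentials and GPT-4o deployment respond before running the notebooks, a minimal standalone sketch is below; the environment variable names, API version, and deployment name are placeholders, and PhenoAssistant itself reads its credentials from .env.yaml rather than from this code:

    # Standalone check that an Azure OpenAI GPT-4o deployment responds.
    # Endpoint/key variables, API version, and deployment name are placeholders.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-08-01-preview",
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # use the name of your own deployment
        messages=[{"role": "user", "content": "Reply with OK."}],
    )
    print(reply.choices[0].message.content)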


Adding External Tools and Agents

  • To add new tools to PhenoAssistant, see the example in new_tools.py. You can either copy and modify an existing tool or define your own using the @pheno_tool decorator in that file (a minimal sketch follows this list).
  • Similarly, you can add new agents in new_agents.py.
  • (✨New) You can also prompt PhenoAssistant to train new computer vision models on your own annotated data. Please refer to case3.ipynb for an example. PhenoAssistant supports both LoRA and full fine-tuning approaches. Be sure to check your GPU configuration and choose the method that best matches your computational resources and needs.
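
For orientation, a minimal sketch of what a new tool could look like is below. The exact @pheno_tool signature, import path, and any required metadata are defined in new_tools.py, so treat the decorator usage here as an assumption and copy an existing tool when in doubt:

    # Hypothetical new tool; the real @pheno_tool signature and registration
    # details live in new_tools.py -- this only illustrates the general shape.
    import pathlib

    from new_tools import pheno_tool  # assumed import path

    @pheno_tool  # assumed to register a plain Python function as a tool
    def count_images(folder: str) -> int:
        """Count the PNG/JPEG images in a folder."""
        exts = {".png", ".jpg", ".jpeg"}
        return sum(1 for p in pathlib.Path(folder).iterdir() if p.suffix.lower() in exts)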

HuggingFace Inference API

(This feature is disabled by default.)

If you want to connect PhenoAssistant to the HuggingFace Inference API:

  1. Uncomment the code starting from Line 15 in new_tools.py
  2. Provide your HF_TOKEN in .env.yaml.

Once enabled, you can prompt PhenoAssistant to call a vision model via the HuggingFace Inference API by supplying its model identifier.
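
Before handing a model identifier to PhenoAssistant, it can help to query the Inference API directly and inspect the model's output format. A minimal standalone sketch using huggingface_hub's InferenceClient is below; the model identifier and image path are placeholders, this bypasses PhenoAssistant entirely, and within PhenoAssistant the HF_TOKEN is read from .env.yaml:

    # Standalone check of a vision model on the HuggingFace Inference API.
    # The model identifier and image path are placeholders.
    import os
    from huggingface_hub import InferenceClient

    client = InferenceClient(token=os.environ["HF_TOKEN"])
    predictions = client.image_classification(
        "path/to/plant_image.png",           # local path or URL to an image
        model="some-org/some-vision-model",  # hypothetical model identifier
    )
    for p in predictions:
        print(p.label, round(p.score, 3))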

⚠️ Note:

  • Please make sure you understand the model’s input/output format to minimise the risk of errors or unexpected results.
  • We recommend monitoring and controlling your HuggingFace API and OpenAI API usage to avoid unnecessary costs or overuse.
