
Dolphin Whistle Detection

A comprehensive tool for detecting and extracting dolphin whistles from audio and video recordings using deep neural networks.

🌟 Features

  • Automated detection of dolphin whistles using deep learning
  • User-friendly GUI interface
  • Command-line interface for advanced users
  • Multi-threaded processing for improved performance
  • Exportable results in CSV format
  • Automatic extraction of detected whistle segments

📋 Prerequisites

  • Python 3.9

Installation

  1. Clone the repository:

     git clone https://github.com/AEmanuelli/Dolphins.git
     cd Dolphins/DNN_whistle_detection/

  2. Set up a virtual environment (recommended):

     # Using conda (recommended)
     conda create -n DolphinWhistleExtractor python=3.9
     conda activate DolphinWhistleExtractor
     conda install tk

     # OR using venv
     python -m venv venv
     source venv/bin/activate  # On Windows: venv\Scripts\activate

  3. Install tkinter:
     • Ubuntu: sudo apt-get install python3-tk
     • Fedora: sudo dnf install python3-tkinter
     • macOS: brew install python-tk

  4. Install dependencies:
     • Option 1: Use the GUI's "Install Dependencies" button
     • Option 2: Manual installation via the command line:

       pip install -r requirements.txt
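
To confirm the environment is ready, a quick check can verify the Python version and that tkinter imports. This is a minimal sketch, not part of the repository:

```python
# check_env.py -- illustrative sanity check, not included in the repository
import sys
import tkinter  # the GUI depends on tkinter being installed system-wide

# The project targets Python 3.9 (see Prerequisites).
assert sys.version_info[:2] == (3, 9), f"Expected Python 3.9, got {sys.version.split()[0]}"
print(f"Python {sys.version.split()[0]} with Tk {tkinter.TkVersion} -- environment looks good")
```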

💻 Usage

GUI Interface (Recommended)

  1. Launch the GUI:

     python GUI_app.py

  2. In the GUI:
     • Select your pre-trained model
     • Choose the input folder containing recordings
     • Select the output folder for results
     • Configure processing parameters:
       • Start/End times
       • Batch size
       • Frequency cutoffs (CLF/CHF), illustrated in the sketch after this list
       • Image normalization options
     • Click "Start Processing"
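
The following sketch only illustrates what the frequency cutoffs control, assuming CLF and CHF are the lower and upper bounds (in Hz) of the spectrogram band kept for classification; it is not the repository's preprocessing code, and the file name and cutoff values are placeholders.

```python
# Illustration of CLF/CHF band-limiting on a spectrogram (values are placeholders).
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

clf_hz, chf_hz = 5000, 20000               # hypothetical low/high cutoffs
fs, audio = wavfile.read("recording.wav")  # placeholder file name
if audio.ndim > 1:
    audio = audio[:, 0]                    # keep a single channel

freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=2048)
band = (freqs >= clf_hz) & (freqs <= chf_hz)
sxx_band = sxx[band, :]                    # only this band is kept for the model
print(f"Kept {band.sum()} of {freqs.size} frequency bins between {clf_hz} and {chf_hz} Hz")
```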

Command Line Interface

For advanced users who prefer command-line operations:

# Run complete pipeline
python main.py --input_folder /path/to/recordings --output_folder /path/to/output

# Process specific files
python predict_and_extract_online.py --input_folder /path/to/recordings --model_path /path/to/model

# Process predictions
python process_predictions.py --predictions_folder /path/to/predictions
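
To run the pipeline over several recording folders in one go, the documented --input_folder and --output_folder flags can be driven from a short wrapper script. This is a sketch under the assumption that each session sits in its own subfolder; the paths are placeholders:

```python
# batch_run.py -- illustrative wrapper around main.py (paths are placeholders)
import subprocess
from pathlib import Path

RECORDINGS_ROOT = Path("/path/to/recordings")  # one subfolder per session
OUTPUT_ROOT = Path("/path/to/output")

for session in sorted(p for p in RECORDINGS_ROOT.iterdir() if p.is_dir()):
    out_dir = OUTPUT_ROOT / session.name
    out_dir.mkdir(parents=True, exist_ok=True)
    # Uses only the flags shown above.
    subprocess.run(
        ["python", "main.py",
         "--input_folder", str(session),
         "--output_folder", str(out_dir)],
        check=True,
    )
```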

📁 Project Structure

├── __init__.py
├── models/
├── Models_vs_years_comparison
│   ├── figs/
│   └── models_vs_years_predictions.ipynb
├── Predict_and_extract
│   ├── Extraction_with_kaggle
│   │   ├── classify_again.ipynb
│   │   ├── utility_script (extraction_using_kaggle).py
│   │   └── Whistle_pipeline_(extraction_with_kaggle).ipynb
│   ├── __init__.py
│   ├── main.py
│   ├── GUI_app.py
│   ├── maintenance.py
│   ├── pipeline.ipynb
│   ├── predict_and_extract_online.py
│   ├── predict_online.py
│   ├── process_predictions.py
│   ├── Show_newly_detected_stuff.py
│   ├── utils.py
│   └── vidéoaudio.py
├── README.md (this file)
├── requirements.txt
└── Train_model
    ├── Create_dataset
    │   ├── AllWhistlesSubClustering_final.csv
    │   ├── create_batch_data.py
    │   ├── DNN précision - CSV_EVAL.csv
    │   ├── dolphin_signal_train.csv
    │   ├── save_spectrogram.m
    │   └── timings
    │       ├── negatives.csv
    │       └── positives.csv
    ├── csv2dataset.py
    ├── dataset2csv.py
    ├── fine-tune2.ipynb
    ├── fine-tune.ipynb
    ├── __init__.py
    └── Train.py

🔧 Key Components

Prediction Pipeline

The system uses a three-stage pipeline:

  1. Detection: Neural network processes audio/video files
  2. Extraction: Identified whistles are extracted
  3. Post-processing: Results are organized and saved
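
The outline below sketches how these stages fit together. It is a conceptual illustration only: the energy-threshold "detector" stands in for the neural network, and none of the names correspond to the repository's actual functions, which live in predict_and_extract_online.py and process_predictions.py.

```python
# Conceptual outline of detection -> extraction -> post-processing.
# The threshold "detector" is a stand-in for the DNN; all names are illustrative.
import csv
from pathlib import Path

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram


def run_pipeline(wav_path: str, output_folder: str, threshold: float = 1e6) -> None:
    fs, audio = wavfile.read(wav_path)
    if audio.ndim > 1:
        audio = audio[:, 0]

    # 1. Detection: score spectrogram windows (here: total energy per window).
    freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=2048)
    hits = np.flatnonzero(sxx.sum(axis=0) > threshold)

    # 2. Extraction: cut roughly one second of audio around each detection.
    out = Path(output_folder)
    out.mkdir(parents=True, exist_ok=True)
    rows = []
    for i in hits:
        start = max(0, int((times[i] - 0.5) * fs))
        wavfile.write(str(out / f"whistle_{i}.wav"), fs, audio[start:start + fs])
        rows.append({"file": wav_path, "time_s": round(float(times[i]), 2)})

    # 3. Post-processing: summarize the detections in a CSV, as the tool does.
    with open(out / "detections.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "time_s"])
        writer.writeheader()
        writer.writerows(rows)
```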

Available Tools

  • GUI_app.py: User-friendly interface for all operations
  • predict_and_extract_online.py: Core detection engine
  • process_predictions.py: Results processor
  • Show_newly_detected_stuff.py: Comparison tool for new model outputs

📚 Additional Resources

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📧 Contact

[email protected]
