This repository was archived by the owner on May 13, 2025. It is now read-only.


Introduction

This repository aims to help researchers and users quickly and systematically build deep-learning-based algorithms for the video anomaly detection (VAD) problem. This project provides:

  • preprocessing pipeline script
  • YouTube video crawling
  • VAD train test splitter
  • training script
  • testing script
  • mlflow export & import

You can find all of them here, and the associated run scripts here. At the moment, we have implemented only two algorithms due to time and resource constraints. The former is referenced from Real-world Anomaly Detection in Surveillance Videos and the latter from Distilling Aggregated Knowledge for Weakly-Supervised Video Anomaly Detection.

Installation

Prerequisites

1/ NVIDIA CUDA on Linux (Optional)

Check here.

2/ Docker (Required)

On Linux (Ubuntu): check here.
On Windows: check here.

3/ NVIDIA Container Toolkit (Optional)

Check here.

4/ FFmpeg

CPU only:

# Ubuntu 24.04
sudo apt update
sudo apt install ffmpeg

With GPU support (tested on Ubuntu 24.04):

# Install ffnvcodec
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers && sudo make install && cd ~

# Install necessary packages
sudo apt-get install build-essential yasm cmake libtool libc6 libc6-dev unzip wget libnuma1 libnuma-dev

# Install ffmpeg
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg/ && cd ffmpeg
./configure --enable-nonfree --enable-cuda-nvcc --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --disable-static --enable-shared
make -j 8
sudo make install
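
To confirm the build succeeded and CUDA acceleration was compiled in, you can query the resulting binary (a quick sanity check, assuming `ffmpeg` is now on your PATH):

```shell
# Print the installed version; "cuda" should appear among the
# hardware accelerators when --enable-cuda-nvcc took effect.
if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -version | head -n 1
    ffmpeg -hide_banner -hwaccels
else
    echo "ffmpeg not found on PATH"
fi
```

Since the build uses `--enable-shared`, running `sudo ldconfig` after `make install` usually resolves any "error while loading shared libraries" messages.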

5/ Swap memory (Linux only)

If you run into insufficient memory during the training or testing phases, we suggest monitoring your RAM and swap usage and increasing swap as needed. For increasing swap memory on Ubuntu, check here.
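
As a starting point, the commands below show current RAM and swap usage (standard procps/util-linux tools; no root needed):

```shell
# Show RAM and swap usage in human-readable units
free -h
# List active swap devices/files (empty output means no swap is configured)
swapon --show
```

Creating additional swap is typically done with `fallocate`, `mkswap`, and `swapon` as root; see the linked Ubuntu guide for the exact steps.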

6/ Shared memory - shm (Linux only)

Due to the multiprocessing mechanism of the PyTorch DataLoader class, a large amount of shared memory is required for loading a torch-tensor-converted video.

# Add this line to the end of the /etc/fstab file.
# It increases shm capacity up to 80 GB.
tmpfs /dev/shm tmpfs defaults,size=80G 0 0
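
After editing `/etc/fstab`, remount (or reboot) and verify the new capacity. When training inside Docker, an alternative is to enlarge shm per container; the flag value below is just an example:

```shell
# Apply the fstab change without rebooting (requires root):
#   sudo mount -o remount /dev/shm
# Then confirm the mounted size:
df -h /dev/shm

# Docker alternative: set shm size for a single container instead of
# changing the host-wide fstab entry:
#   docker run --shm-size=80g ...
```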

7/ Virtual memory (Windows only)

For the same reason as above, you should enlarge your virtual memory if you are using Windows. For more specific details, please check here.

8/ Python virtual environment

To avoid dependency conflicts with other environments, we highly recommend creating a completely new Python virtual environment via venv.
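
A minimal setup with the standard-library `venv` module (the `.venv` directory name is just a convention):

```shell
# Create an isolated environment in ./.venv
python3 -m venv .venv
# Activate it for the current shell session
. .venv/bin/activate
python --version
```

After activation, `pip install` commands install into `.venv` rather than the system interpreter; run `deactivate` to leave the environment.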

Dependencies

Via pip

pip install -r requirement.txt

Usage

1/ For crawling data from YouTube

Please read this.

2/ For data preprocessing

Please read this.

3/ For training model

Please read this.

4/ For testing model

Please read this.

5/ For deploying model

Please read this.