quadrotor-simulation-unity is a high-fidelity, Unity-based quadrotor simulation for reinforcement learning and AI-based navigation. It is designed for:
- AI Training: Train reinforcement learning models to control drones
- Computer Vision: Develop and test vision-based navigation systems
- Autonomous Racing: Develop AI for drone racing competitions
- Robot Learning: Experiment with flight physics and control models
✔ Quadrotor Flight Simulation:
Simulates realistic aerodynamics and drone motion
✔ Physics-Based Flight Controls:
Customizable control response and physics parameters
✔ ML-Agents Integration:
Train AI models and integrate with Python using the Unity ML-Agents Toolkit
The drone's movement is simulated using rigidbody physics in Unity. The key aspects of the physics model include:
- The drone applies an upward force (`engineForce`) to counteract gravity.
- Forces are applied via `rb.AddForce()`, ensuring realistic acceleration and deceleration.
- Initial drag and angular drag are stored and can be adjusted for stability.
- The agent receives continuous action inputs to control:
  - Pitch (forward/backward tilt)
  - Roll (side-to-side tilt)
  - Yaw (rotation around the vertical axis)
  - Throttle (up/down movement)
- Movements are smoothly interpolated (see the fuller sketch after this list):

```csharp
finalPitch = Mathf.Lerp(finalPitch, pitch, Time.deltaTime * lerpSpeed);
finalRoll = Mathf.Lerp(finalRoll, roll, Time.deltaTime * lerpSpeed);
finalYaw = Mathf.Lerp(finalYaw, yaw, Time.deltaTime * lerpSpeed);
```
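Putting these pieces together, the per-step physics might look roughly like the sketch below. This is a minimal sketch, not the project's actual controller: the class structure, the yaw accumulation, and the rotation handling are assumptions, while `engineForce`, `rb.AddForce()`, the `Mathf.Lerp` smoothing, and the parameter names come from this README (see the table below).

```csharp
using UnityEngine;

// Hedged sketch of the flight physics loop; the project's actual
// controller class and method layout may differ.
[RequireComponent(typeof(Rigidbody))]
public class QuadrotorPhysicsSketch : MonoBehaviour
{
    [SerializeField] private float maxPower = 100f;   // maximum engine thrust
    [SerializeField] private float minMaxPitch = 20f; // max forward/back tilt
    [SerializeField] private float minMaxRoll = 20f;  // max side tilt
    [SerializeField] private float YawPower = 5f;     // yaw rotation speed
    [SerializeField] private float lerpSpeed = 2f;    // control smoothing

    private Rigidbody rb;
    private float finalPitch, finalRoll, finalYaw, yawAngle;

    // Set each step from the agent's four continuous actions, in [-1, 1].
    public float pitch, roll, yaw, throttle;

    private void Awake() => rb = GetComponent<Rigidbody>();

    private void FixedUpdate()
    {
        // Smooth the raw commands so the control response is gradual.
        finalPitch = Mathf.Lerp(finalPitch, pitch, Time.deltaTime * lerpSpeed);
        finalRoll = Mathf.Lerp(finalRoll, roll, Time.deltaTime * lerpSpeed);
        finalYaw = Mathf.Lerp(finalYaw, yaw, Time.deltaTime * lerpSpeed);

        // Upward engine force counteracts gravity; AddForce lets the
        // Rigidbody integrate it, giving realistic accel/decel.
        float engineForce = throttle * maxPower;
        rb.AddForce(transform.up * engineForce);

        // Yaw command is treated as a rotation rate; pitch/roll are
        // clamped tilt targets scaled by the min/max parameters.
        yawAngle += finalYaw * YawPower;
        Quaternion target = Quaternion.Euler(
            finalPitch * minMaxPitch, yawAngle, -finalRoll * minMaxRoll);
        rb.MoveRotation(Quaternion.Slerp(
            rb.rotation, target, Time.fixedDeltaTime * lerpSpeed));
    }
}
```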
Parameter | Description | Default |
---|---|---|
`maxPower` | Maximum engine thrust power | 100f |
`minMaxPitch` | Max forward/back tilt (pitch) | 20f |
`minMaxRoll` | Max side tilt (roll) | 20f |
`YawPower` | Rotation speed around the vertical axis | 5f |
`horizontalSpeedFactor` | Speed multiplier for horizontal movement | 2f |
`lerpSpeed` | Smoothing speed for control response | 2f |
`weightLbs` | Drone weight (lbs) | 1f |
You can modify these parameters in the Agent's behavior configuration to customize flight dynamics.
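One detail worth noting: `weightLbs` is given in pounds, while Unity's Rigidbody mass is in kilograms. How the project applies this value isn't shown here; the sketch below is only an assumption about the conversion:

```csharp
using UnityEngine;

// Hedged assumption: Unity's Rigidbody mass is in kilograms, so a weight
// specified in pounds would need converting before it affects the physics.
public static class WeightConversionSketch
{
    public const float KgPerLb = 0.453592f; // 1 lb ≈ 0.4536 kg

    public static void ApplyWeight(Rigidbody rb, float weightLbs)
    {
        rb.mass = weightLbs * KgPerLb;
    }
}
```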
Action | Key Bindings |
---|---|
Pitch | W (forward) / S (backward) |
Roll | A (left) / D (right) |
Yaw | Q (left) / E (right) |
Throttle | Space (up) / Left Shift (down) |
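In ML-Agents, keyboard control like this is typically implemented in the agent's `Heuristic()` override. The sketch below is an assumption about how the bindings above map onto the four continuous actions; the class name `DroneAgentSketch` and the action index order (pitch, roll, yaw, throttle) are hypothetical, and it uses the Input System package added during setup:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine.InputSystem;

// Hedged sketch: class name and action ordering are assumptions.
public class DroneAgentSketch : Agent
{
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        var actions = actionsOut.ContinuousActions;
        var kb = Keyboard.current;
        if (kb == null) return;

        // Each axis is the difference of its two opposing keys, in [-1, 1].
        actions[0] = (kb.wKey.isPressed ? 1f : 0f) - (kb.sKey.isPressed ? 1f : 0f);             // pitch
        actions[1] = (kb.dKey.isPressed ? 1f : 0f) - (kb.aKey.isPressed ? 1f : 0f);             // roll
        actions[2] = (kb.eKey.isPressed ? 1f : 0f) - (kb.qKey.isPressed ? 1f : 0f);             // yaw
        actions[3] = (kb.spaceKey.isPressed ? 1f : 0f) - (kb.leftShiftKey.isPressed ? 1f : 0f); // throttle
    }
}
```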
- Install Unity and Unity Hub (recommended version: Unity 2022.3.12f1 LTS).
- Clone the repository:

```bash
git clone https://github.com/Oneiben/quadrotor-simulation-unity.git
cd quadrotor-simulation-unity
```

- Open in Unity: launch Unity Hub, select Open Project, and choose the cloned folder.
- Open the Package Manager and add:
  - ml-agents
  - Input System
For reinforcement learning training or testing AI models:
🔗 RL Quadrotor Navigation Repository - A complementary repository for training RL-based drone navigation.
This project supports Python integration for AI-based control using ML-Agents.
Before running the simulation in Python, build the Unity environment:
- In Unity, go to File → Build Settings.
- Select Windows/Linux/Mac based on your OS.
- Click Build and save the executable.
First, create a conda environment with Python 3.10.12:
```bash
conda create -n quad-sim python=3.10.12 -y
conda activate quad-sim
```
Now, install the necessary dependencies:
```bash
pip install mlagents_envs==1.0.0
```
Tip: Make sure you have the latest version of pip before installing packages:

```bash
pip install --upgrade pip
```
The following wrapper connects to the built executable and exposes it through ML-Agents' Gym-style interface:

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper


class UnityEnvironmentWrapper:
    def __init__(self, env_path):
        """
        Initialize the Unity environment wrapper with the given path.

        Args:
            env_path (str): Path to the Unity environment application.
        """
        self.unity_env = UnityEnvironment(env_path, no_graphics_monitor=False, no_graphics=False)
        self.env = UnityToGymWrapper(self.unity_env, uint8_visual=True, allow_multiple_obs=True)

    def reset(self):
        """
        Reset the Unity environment and return the initial observation.

        Returns:
            list: Initial observation(s) from the environment.
        """
        return self.env.reset()

    def step(self, action):
        """
        Step the environment forward with the given action.

        Args:
            action (list): Action to take in the environment.

        Returns:
            tuple: Updated observation, reward, done flag, and additional info.
        """
        return self.env.step(action)

    def close(self):
        """
        Close the Unity environment when done.
        """
        self.env.close()


# Example usage
if __name__ == "__main__":
    env_path = "path/to/quad_sim"  # Update with the actual build path
    env = UnityEnvironmentWrapper(env_path)
    obs = env.reset()
    done = False
    while not done:
        action = [0, 0, 0, 0]  # Replace with actual control inputs
        obs, reward, done, info = env.step(action)
    env.close()
```

Because the wrapper passes `allow_multiple_obs=True`, `reset()` and `step()` return a list of observations (visual observations first, vector observation last), so index into `obs` accordingly.
Contributions are welcome! If you have suggestions or improvements, feel free to fork the repository and create a pull request.
- Fork the repository.
- Create a new branch: `git checkout -b feature-name`
- Commit your changes: `git commit -m "Description of changes"`
- Push the changes and open a pull request.
This project is licensed under the MIT License. See the 📜 LICENSE file for more details.