
Motion Planning Transformers: One Model to Plan them All

The network architecture for Motion Planning Transformers (MPT).

[Figure: MPT network architecture]

Requirements

All our experiments were conducted on Ubuntu 18.04 with Python 3.6 and CUDA 11.0. To generate the data and evaluate the planner, you will need the OMPL-1.4.2 library with its Python bindings.

Other Python dependencies are listed in requirements.txt. You can install them using pip:

pip3 install -r requirements.txt

Using Docker

To replicate our testing environment, we highly recommend using our Docker container, which contains all the necessary libraries and packages. Please use the latest version of nvidia-docker2. Download the .tar file.

To load the image from the tar file, run the following:

docker load -i mpt_container.tar

To run the container, run the following command:

docker run -it --gpus all --shm-size="16G" -v ~/global_planner_data:/root/data -v <link-to-code-base>:/workspace <image-name> bash

Alternatively, you can run the script in docker/ompl_torch_docker.sh. Attach the folder containing the data to /root/data and the folder containing the code to /workspace.
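
If you plan to use the Jupyter notebook described in the Visualizing Trajectories section, you will also need to publish the notebook port when starting the container. A minimal sketch, assuming the loaded image is tagged mpt_container (substitute whatever tag docker load reports):

docker run -it --gpus all --shm-size="16G" -p 8888:8888 \
    -v ~/global_planner_data:/root/data \
    -v <link-to-code-base>:/workspace \
    mpt_container bash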

Creating Dataset

To generate a training or validation data set for the point environment, run the following command:

python3 rrt_star_map.py --start=... --numEnv=... --envType=... --numPaths=... --fileDir=... --mapFile
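
For example, a hypothetical invocation that collects 10 paths in each of 100 environments, starting at environment index 0, might look like the following. The argument values, the envType string, and the output directory are illustrative assumptions, not values taken from the repository:

python3 rrt_star_map.py --start=0 --numEnv=100 --envType=forest --numPaths=10 --fileDir=data/point/train --mapFile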

To collect data samples for the car environment, run the following command:

python3 sst_map.py --start=... --numEnv=... --numPaths=... --fileDir=...
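
As above, a sketch with illustrative argument values (not taken from the repository) might be:

python3 sst_map.py --start=0 --numEnv=100 --numPaths=10 --fileDir=data/car/train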

You can download all the data we used for training from here.

Training

To train the model, run the following command:

python3 train.py --batchSize=... --mazeDir=... --forestDir=... --fileDir=...
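
A hypothetical invocation, assuming the maze and forest data sets were generated into data/maze and data/forest and that models should be written to models/mpt (the batch size and all paths here are illustrative assumptions):

python3 train.py --batchSize=128 --mazeDir=data/maze --forestDir=data/forest --fileDir=models/mpt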

Pre-trained Models

You can download the pre-trained models for the point robot and the Dubins car model from here.

Evaluation

To evaluate the model on a set of validation paths, run the following command:

python3 eval_model.py --modelFolder=... --valDataFolder=... --start=... --numEnv=... --epoch=... --numPaths=...
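
For instance, a sketch that evaluates one checkpoint on 100 validation environments might look like the following; the epoch number, counts, and folder paths are all illustrative assumptions:

python3 eval_model.py --modelFolder=models/mpt --valDataFolder=data/point/val --start=0 --numEnv=100 --epoch=99 --numPaths=1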

Results

Environment    Random Forest                     Maze
Planner        Accuracy   Time (sec)  Vertices   Accuracy   Time (sec)  Vertices
RRT*           100%       5.44        3227.5     100%       5.36        2042
IRRT*          100%       0.42        267        100%       3.13        1393.5
UNet-RRT*      30.27%     0.13        168        21.4%      0.31        275.5
MPNet          92.35%     0.29        634        71.76%     1.72        1408.5
MPT-RRT*       99.4%      0.19        233        99.16%     0.84        626
MPT-IRRT*      99.4%      0.08        136        99.16%     0.73        566
MPT-RRT*-EE    100%       0.2         247        100%       0.82        585

Visualizing Trajectories

To visualize trajectories, use the VisualizeTrajectories.ipynb notebook. To spin up the Jupyter notebook server, run the following commands inside the container environment:

export JUPYTER_TOKEN=mpt
jupyter notebook --allow-root --no-browser --ip=0.0.0.0 --port=8888

When prompted for a token, enter mpt.

Contributing

This code base is currently for review purposes only. Please do not distribute.
