SpikeHD/CSMLC

Machine learning model trained on CS2, used for detecting the location of players with nothing more than an image of the screen.

Counter Strike Machine Learning Cheats is a project I created to learn image recognition machine learning. Using a modified CS2 ESP program to dump bounding box data into YOLO format, I am able to efficiently gather training data after just a little bit of cleanup!
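For context, the YOLO label format that the ESP dump produces is one plain-text line per object, with coordinates normalized to the image size. A minimal sketch of the conversion (the helper name is mine, not from the repo):

```python
# Hypothetical helper illustrating the YOLO label format the ESP dump targets.
# Each label line is "<class> <x_center> <y_center> <width> <height>",
# with all four coordinates normalized to [0, 1] by the image dimensions.

def to_yolo_line(cls, left, top, right, bottom, img_w, img_h):
    """Convert a pixel-space bounding box to a YOLO-format label line."""
    x_center = (left + right) / 2 / img_w
    y_center = (top + bottom) / 2 / img_h
    width = (right - left) / img_w
    height = (bottom - top) / img_h
    return f"{cls} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a player box on a 1920x1080 screenshot
print(to_yolo_line(0, 860, 400, 1060, 800, 1920, 1080))
# -> 0 0.500000 0.555556 0.104167 0.370370
```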

Inspired by Sequoia :)

Disclaimer: This is created both as practice for myself and as an educational example of practical image-recognition and machine learning applications. It is not intended for use outside of private matches.


Progress

Detection

Here is an example video of the current model (as of April 7, 2023)! There are a couple of misidentifications, but overall I think it's really accurate.

2023-04-09.mp4

Triggerbot

Two examples of the triggerbot implementation (for whole bodies, not just heads yet). At no point in either of these clips is there a single real mouse press:

reactiontime.mp4
trigger_fix.mp4
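The core trigger decision can be sketched simply: fire only when the crosshair (screen center) lies inside a detected player's bounding box. This is a hypothetical illustration, not the repo's actual logic:

```python
# Hypothetical sketch of a triggerbot decision, not the repo's actual code:
# fire only when the screen-center crosshair is inside a detected player box.

def should_fire(detections, screen_w, screen_h, min_conf=0.5):
    """detections: list of (x1, y1, x2, y2, confidence) boxes in pixels."""
    cx, cy = screen_w / 2, screen_h / 2  # crosshair position
    for x1, y1, x2, y2, conf in detections:
        if conf >= min_conf and x1 <= cx <= x2 and y1 <= cy <= y2:
            return True
    return False
```

The actual mouse press would then be simulated in software (hence no real clicks in the clips above).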

Setup

Both training and usage require Python 3. Trust me, I also wish this were not the case. Compiled C++ binaries one day, I promise.

This repo should always contain a best.pt model, kept up to date with the best model I have trained so far. You can use it instead of training your own.

For training

  1. (optional) Use a venv:
python -m venv [venv name]
.\[venv name]\Scripts\activate
  2. Install torch with CUDA (if your GPU supports it) before installing everything else: https://pytorch.org/get-started/locally/
  3. Install packages:
pip install numpy ultralytics opencv-python
  4. Set up the folder structure:
CSMLC/
└── training/
  ├── all/
  │   ├── input/
  │   └── labels/
  ├── train/
  │   ├── images/
  │   └── labels/
  ├── test/
  │   ├── images/
  │   └── labels/
  └── val/
      ├── images/
      └── labels/

The all folder is not required; you can split your data manually if you'd like. But if you want the folder splitting automated, put everything in the all folder and run node split.js (requires NodeJS; I will probably make a Python version soon).
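A Python version of the split could look roughly like this. The 80/10/10 ratio and the helper name are assumptions of mine; the actual split.js may behave differently:

```python
# Hypothetical Python equivalent of split.js: shuffle image/label pairs from
# training/all/ and copy them into train/test/val. The 80/10/10 split ratio
# is an assumption, not taken from the actual script.
import random
import shutil
from pathlib import Path

def split_dataset(root, ratios=(0.8, 0.1, 0.1), seed=0):
    root = Path(root)
    images = sorted((root / "all" / "input").glob("*"))
    random.Random(seed).shuffle(images)
    n = len(images)
    cut1 = int(n * ratios[0])
    cut2 = cut1 + int(n * ratios[1])
    splits = {"train": images[:cut1], "test": images[cut1:cut2], "val": images[cut2:]}
    for name, files in splits.items():
        for img in files:
            # copy the image and its matching YOLO label (same stem, .txt)
            label = root / "all" / "labels" / (img.stem + ".txt")
            (root / name / "images").mkdir(parents=True, exist_ok=True)
            (root / name / "labels").mkdir(parents=True, exist_ok=True)
            shutil.copy(img, root / name / "images" / img.name)
            if label.exists():
                shutil.copy(label, root / name / "labels" / label.name)
    return {k: len(v) for k, v in splits.items()}
```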

  5. Train it!
python training/train.py

Additional notes: You can tweak anything training-related in custom.yaml and train.py.
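For reference, a YOLOv8 dataset config typically points at the split folders and lists the class names. This is a sketch under assumptions (a single "player" class, paths matching the folder structure above), not necessarily the repo's actual custom.yaml:

```yaml
# Hypothetical sketch of a YOLOv8 dataset config; paths match the folder
# structure above, and the single class name is an assumption.
path: training
train: train/images
val: val/images
test: test/images

names:
  0: player
```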

For using

  1. (optional) Use a venv:
python -m venv [venv name]
.\[venv name]\Scripts\activate
  2. Install torch with CUDA (if your GPU supports it) before installing everything else: https://pytorch.org/get-started/locally/
  3. Install packages:
pip install numpy ultralytics opencv-python dxcam pywin32
  4. Run it!
python training/screen.py

This runs a script that opens a new window showing what the computer sees, with bounding boxes drawn where players are detected (see the example video above for how that looks).
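Drawing those boxes amounts to undoing the label normalization: converting a normalized YOLO-style box back to pixel corners. A small sketch (the helper name is mine; Ultralytics results can also give pixel corners directly):

```python
# Hypothetical helper: convert a normalized YOLO-style box (center x/y, width,
# height in [0, 1]) back to pixel corner coordinates for drawing with OpenCV.

def yolo_to_corners(x_c, y_c, w, h, img_w, img_h):
    x1 = round((x_c - w / 2) * img_w)
    y1 = round((y_c - h / 2) * img_h)
    x2 = round((x_c + w / 2) * img_w)
    y2 = round((y_c + h / 2) * img_h)
    return x1, y1, x2, y2

# then e.g.: cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```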

TODO

  • Trigger bot (train on heads only, perhaps)
  • Aim snapping
    • Natural-looking aim adjustment too?

Technical Details

This project is built on the backbone of YOLOv8, a super-fast and surprisingly accurate object-detection model. A lot of the projects you will see that do something similar to this one use slightly older versions (Sequoia uses v5, for example), so this one is currently the most state-of-the-art!

To train the custom model, I gather data using a CS2 ESP cheat that I modified (thank you to the original author!) to dump bounding box data directly to YOLO format. I then use my personal favorite labelling program, OpenLabeling, to clean up the data (for example, removing ESP boxes that show players through walls). A couple of games of Deathmatch were enough to get it to its current state.

The YOLOv8 model is awesome. In fact, I'm sure you're curious about the training time on the above example video:

+-------+
| specs |
+-------+
GPU: RTX 3070 8GB
CPU: i7-9700k (not overclocked)
RAM: 32GB

+------------------+
| training metrics |
+------------------+
Images: ~1.6k (about 17GB of uncompressed BMPs)
Epochs: 30
Resolution: 928
Batch: 8
Approx. Time Taken to Train: 1h

(I will try to remember to time the next training session properly)

An hour to train?? Only ~1.6k images?? I don't know what is considered "good" or "bad" in the machine learning world yet, but I'd say that's pretty impressive for the result it gives.
