Run Minecraft, Valheim, or any container automatically in AWS!
This CDK project spins up the container when someone connects, then spins it back down automatically when they're done! It's a great way to host game/container servers for your friends cheaply, without opening your home network to the outside world.
- First install aws_cdk.
- Setup your `~/.aws/credentials` file, along with the region you want to deploy to.
- Run `make cdk-bootstrap` to bootstrap cdk to your account, in both the region from the last step and `us-east-1` (which is required for Route53).
- Make sure `python3` and `npm` are installed on your system.
- Setup a python environment with:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  ```
- Update/Install everything with `make update`.
  - Note: If it complains about NPM not being run as root, follow this stackoverflow guide to let non-sudo work. (I couldn't get the `~/.profile` line working with vscode, so I added it to `~/.bashrc` instead.)
> [!NOTE]
> Now that you have it set up, you'll only have to run `source .venv/bin/activate` on new shells from here on out. (And `make update` once in a while to get the latest packages.)
There are two commands: one for the 'base' stack, and one for the 'leaf' stacks. You should only have to deploy the 'base' once. Multiple leaf stacks can (and should) use the same base to save costs. Deploy the base stack first; you shouldn't have to again unless you change something in it.
First setup your Environment Variables used for deploying, and just delete any sections you're not using:

```bash
source .venv/bin/activate
cp vars.env.example vars.env
nano vars.env   # Use the text editor that's better than vim :)
source vars.env # Do this after every edit you make too!
```
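What exactly goes in `vars.env` depends on which sections you keep. As a rough sketch of the shape (the variable names here are illustrative guesses; `vars.env.example` itself is the source of truth):

```shell
# Hypothetical names -- check vars.env.example for the real ones:
export DOMAIN_NAME="example.com"          # The (all-lowercase) domain your games will live under
export HOSTED_ZONE_ID="Z0123456789ABCDEF" # The HostedZoneId copied from the Route53 console
```

Since the file is just `export` lines, re-running `source vars.env` after every edit is all it takes to refresh your shell.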
- For more Advanced Customization while Deploying: see (cdk) Synth / Deploy / Destroy below.
The config options for the stack are in `./base-stack-config.yaml`. Info on each option is in `./ContainerManager/README.md`.
If you need a `HostedZoneId`, you can buy a domain from AWS, then copy the Id from the console. (AWS won't let you automate this step.)

```bash
make cdk-deploy-base
```
The config examples are in `./Examples/*.example.yaml`. Info on each config option and writing your own config is in `./Examples/README.md`.
For a QuickStart example, if you're running Minecraft, just run:

```bash
# Edit the config to what you want:
cp ./Examples/Minecraft.java.example.yaml ./Minecraft.yaml
nano ./Minecraft.yaml
# Actually deploy:
make cdk-deploy-leaf config-file=./Minecraft.yaml
```
- Info on how it works behind the scenes: see `./ContainerManager`'s README.md.
Now your game should be live at `<FileName>.<DOMAIN_NAME>`! (So `minecraft.<DOMAIN_NAME>` in this case; no ".yaml".) This means one file per stack. If you want to override this, see the container-id section below.
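In other words, the subdomain is just the config's file name with the `.yaml` dropped and the rest lowercased. A quick sanity-check in shell (assuming `example.com` is your DOMAIN_NAME):

```shell
config_file="./Minecraft.yaml"
# Drop the extension and lowercase the rest -- that's the leaf stack's subdomain:
subdomain="$(basename "$config_file" .yaml | tr '[:upper:]' '[:lower:]')"
echo "${subdomain}.example.com" # -> minecraft.example.com
```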
> [!NOTE]
> It takes ~2-4 minutes for the game to spin up when it sees the first DNS connection come in. Just spam refresh.
> If it's installing updates, keep spamming refresh. It sees those connection attempts, and resets the watchdog threshold (the time before spinning down).
You have to clean up all the 'leaf stacks' first, then the 'base stack'.
If your config has `Volume.KeepOnDelete` set to `True` (the default), it'll keep the server files inside AWS but still remove the stack.
```bash
# Destroying one leaf:
make cdk-destroy-leaf config-file=./Minecraft.yaml
# Destroying the base stack:
make cdk-destroy-base
```
Core AWS docs for this are here.
(I can't get it automated. Use the SSH method below for now. Details are here if you're interested!).
> [!NOTE]
> There likely won't be enough traffic from JUST ssh to stop the container from spinning down. Just connect to the container with whatever client it needs (Minecraft, Valheim, etc) to keep it up.
The files are mounted to `/mnt/efs/<Volumes>` on the HOST of the container, to give easy access to modify them with SFTP/SSH/etc.
To connect to the container:

1. Get the SSH private key from AWS Systems Manager (SSM) Parameter Store.

   If you have more than one key: Go to `EC2` => `Network & Security` => `Key Pairs`. Look for `ContainerManager-BaseStack-SshKey`, and copy its `ID`. Now go to `SSM` => `Parameter Store`, and select the key that matches `/ec2/keypair/<ID>`. (I've tried adding tags/descriptions to the SSM key to skip the first step; they don't go through.)

2. Add it to the agent:

   ```bash
   nano ~/.ssh/container-manager # Paste the key from SSM
   chmod 600 ~/.ssh/container-manager
   ssh-add ~/.ssh/container-manager
   ```

3. Add this to your `~/.ssh/config`:

   NOTE: The DOMAIN_NAME must be all lowercase! Otherwise it won't be case-insensitive when you `ssh` later.

   ```
   Host *.<DOMAIN_NAME>                  # <- i.e: "Host *.example.com"
       StrictHostKeyChecking=accept-new  # Don't have to say `yes` the first time connecting
       CheckHostIP no                    # IP changes on every startup
       UserKnownHostsFile=/dev/null      # Keep quiet that the IP is changing
       User=ec2-user                     # Default AWS user
       IdentityFile=~/.ssh/container-manager # The key we just set up
   ```

4. Access the host!

   - `ssh` into the instance:

     ```bash
     ssh <CONTAINER_ID>.<DOMAIN_NAME>
     ```

     And now you can use docker commands if you need to jump into the container! Or view the files with `ls -halt /mnt/efs`.

   - Use FileZilla to add/backup files:
     - To add the private key, go to `Edit -> Settings -> Connection -> SFTP` and add the key file there.
     - For the URL, put `sftp://<GAME_URL>`. The username is `ec2-user`. Password is blank. Port is 22.
If you have an existing EFS left over from deleting a stack, there's no way to tell the new stack to "just use it". You have to transfer the files over.

- Using SFTP: The easiest, but most expensive, since the files leave AWS and then come back in. Follow the SSH guide above to set up an SFTP application.
- Using DataSync: Probably the cheapest, but I haven't figured it out yet. If you do this a lot, it's worth looking into.
The config examples are in `./Examples/*.example.yaml`. Info on each config option and writing your own config is in `./Examples/README.md`.
There are a few alarms inside the app that are supposed to shut down the system when specific events happen. Check the Dashboard to see which alarm is (or isn't) triggering. (If you disabled the dashboard, view the alarms in CloudWatch.) Details on each alarm are also in the leaf_stack Watchdog README.
- If the `Container Activity` alarm is the problem, adjust the `Watchdog.Threshold` config key.
- If the `Instance Left Up` alarm is triggered, adjust its corresponding config keys.
- If the `Break Crash Loop` alarm is triggered, the container either crashed or is refusing to start. View the container in the console to see what's going on. (Select your cluster from ECS Clusters -> `*/* Tasks running`. Debug info is likely in either `Logs` or `Events`, depending on what is causing this.)
- TODO: Create Cost Estimate (It's not much).
The point of the base stack is exactly to combine resources to save costs. You only have to count the following once:
- Buying a domain from AWS is an extra `$3/year` for the cheapest I could find (`Register domains` -> `Standard pricing` -> `Price` to sort by price).
- The Hosted Zone that holds that domain is `$0.50/month` (or `$6/year`).
- The EC2 costs aren't included because they're the biggest factor, and they vary: you're only charged while people are actively online, but the bigger instances are also pricier.
- The EFS costs are `$0.30/GB/month`.
- The Backup costs are `$0.05/GB/month`.
Those are the only charges I've seen of note in my account.
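To make that concrete, here's a back-of-envelope estimate for a hypothetical 10 GB server (EC2 hours and the domain's yearly fee excluded, since those depend on your usage and registrar):

```shell
gb=10
# EFS ($0.30/GB/month) + Backup ($0.05/GB/month) + Hosted Zone ($0.50/month):
total=$(awk -v gb="$gb" 'BEGIN { printf "%.2f", gb*0.30 + gb*0.05 + 0.50 }')
echo "~\$${total}/month before EC2" # -> ~$4.00/month before EC2
```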
These are the core commands of cdk. Both deploy and destroy are split in two, for the base and leaf stacks. So in total, you have: `cdk-synth`, `cdk-deploy-base`, `cdk-deploy-leaf`, `cdk-destroy-base`, and `cdk-destroy-leaf`.
With the exception of the `*-base` commands, the other three commands have three parameters for customization:
> [!NOTE]
> When deploying/destroying a stack, all three parameters must be exactly the same as the first deployment.
> If you change one and deploy again, you'll create a new stack. If the `cdk-destroy-leaf` command doesn't have the same params as the `cdk-deploy-leaf` did, it won't be able to find a stack to delete.
This controls which "leaf stack" you're working on. It's a path to the config yaml.
Optional for `cdk-synth`:

```bash
# Just lint the base stack:
make cdk-synth
# Lint the base stack, and a leaf stack with a config:
make cdk-synth config-file=./Examples/<MyConfig>.yaml
```
Required for both `*-leaf` commands:

```bash
make cdk-deploy-leaf config-file=./Examples/Minecraft.java.example.yaml
# Domain will be: `minecraft.java.example.<YOUR_DOMAIN>`
```
Optional for all three commands. This fixes two issues:

- The `container-id` has to be unique per AWS account. If you want to deploy two of the same yaml to your account, at least one will need to set this.
- This overrides the domain prefix. If you want a descriptive yaml name but a small domain name, use this.

```bash
make cdk-deploy-leaf config-file=./Examples/Minecraft.java.example.yaml container-id=minecraft
# Domain will be: `minecraft.<YOUR_DOMAIN>`
```
There are currently two maturities you can set, `devel` and `prod` (`prod` being the default). `devel` has defaults for developing (i.e. it removes any storage with it when deleted). It also keeps the containers you're testing with separate from any games you're actively running.

```bash
# Create the devel base stack:
make cdk-deploy-base maturity=devel
# Add an application to it:
make cdk-deploy-leaf maturity=devel config-file=<FILE>
# Delete said leaf stack:
make cdk-destroy-leaf maturity=devel config-file=<FILE>
# And never touch the stuff in the normal stacks!
```
Lints all python files. Useful when developing.

```bash
make pylint
```
Prints your current user ARN, including the account ID. Useful for checking that aws-cli is set up correctly, and that you're using the right AWS account before deploying.
Updates both `npm` and python `pip` packages.

```bash
make update
```
For setting up cdk in your AWS account. See the AWS QuickStart section at the top for more details.

```bash
make cdk-bootstrap
```
To automatically deploy your stack with the latest cdk changes as they come out, see the workflows docs.
See ./ContainerManager/README.md for diagrams and an overview of the app's architecture.
Each directory has a `README.md` that explains what's in that directory. The farther you get down a path, the more detailed the info gets. This `README.md` in the root of the project is a high-level overview of the project.
I made the maturity key when deploying specifically to help developers. There are a few other nice commands in the Makefile to help out too!