This module supports the following configurations:
- Create a Virtual Machine from a QEMU-Enabled template.
- Combines the syntax of ipconfig0-15 and networks into a single networks block for easier multi-network adapter configuration
- Support for multi-disk deployments
This module assumes you have a Virtual Machine template with QEMU Guest Agent installed. If you do not, follow the instructions under Getting Started to create a template.
Terraform Module for Proxmox Cloudinit Virtual Machines

Table of Contents
- Usage
- Examples
- Getting Started
- Common Issues
- Terraform Module Information
NOTE: To utilize this module, you MUST have a QEMU-Enabled Template. To see how to build one, follow the instructions under Getting Started
module "cloudinit_vm" {
source = "github.com/ZacksHomeLab/terraform-proxmox-cloudinit-vm"
vm_name = "ubuntu-simple-vm"
target_node = "pve1"
clone = "name-of-template"
# Disk virtio0
disks = [{
size = "10G"
storage = "local-pve"
}]
# Network Adapter net0
networks = [{
dhcp = true
}]
}
module "cloudinit_vm" {
source = "github.com/ZacksHomeLab/terraform-proxmox-cloudinit-vm"
vm_name = "ubuntu-simple-vm"
target_node = "pve1"
clone = "name-of-template"
cores = 2
memory = 2048
# Disk 1: virtio0
# Disk 2: scsi0
disks = [
{
size = "20G"
storage = "local-pve"
},
{
size = "10G"
type = "scsi"
storage = "my-other-storage"
}
]
# Network Adapter net0
networks = [{
dhcp = true
}]
}
module "cloudinit_vm" {
source = "github.com/ZacksHomeLab/terraform-proxmox-cloudinit-vm"
vm_name = "ubuntu-simple-vm"
target_node = "pve1"
clone = "name-of-template"
cores = 2
memory = 2048
# Disk 1: virtio0
disks [{
size = "20G"
storage = "local-pve"
}]
# Network Adapter 1: net0
# Network Adapter 2: net1
networks = [
{
ip = "192.168.2.51/24"
gateway = "192.168.2.1"
bridge = "vmbr0
},
{
ip = "192.168.3.51/24"
gateway = "192.168.3.1"
bridge = "vmbr1"
vlan_tag = 3
}
]
}
- Simple-VM - Basic Virtual Machine deployment in Proxmox.
- Multiple Network Adapters - Virtual Machine with multiple Network Adapters.
- Complete - An advanced deployment with misc. configurations.
You must have an image, template, or clone that supports Cloudinit and has QEMU Guest Agent installed. If you DO NOT have QEMU Guest Agent installed on your image, template, or clone, Terraform will time out during the deployment while the Proxmox API responds with 500 status codes, because it cannot see the IP address of the machine.
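If you already have a template and are unsure whether the agent option is enabled on it, a quick check from the Proxmox node's shell looks something like the following (a sketch assuming your template's VM ID is 900, the ID used later in this guide):
# Replace 900 with your template's VM ID
qm config 900 | grep agent
# If the agent option is enabled, you should see something like:
# agent: enabled=1,fstrim_cloned_disks=1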
If you want to set up a template to test this module, steps 1 through 5 will demonstrate how to create a Virtual Machine template with Ubuntu 22.04.
- First, we'll need to access our node's shell.
- Log into your Proxmox node via SSH or the web UI
- We will need a Cloudinit-based image. In this example, I will be downloading Ubuntu 22.04 (Jammy). For other releases, you can retrieve the image URL from Ubuntu's cloud images website (cloud-images.ubuntu.com).
- Download the image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
- We're required to install QEMU Guest Agent in our image for Terraform to work with Proxmox. To achieve this, we'll need to install libguestfs-tools on our Proxmox node. Run the following command
apt-get -y install libguestfs-tools
- Once installed, install QEMU Guest Agent into your downloaded image and enable its service
virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent
virt-customize -a jammy-server-cloudimg-amd64.img --run-command "systemctl enable qemu-guest-agent"
- (OPTIONAL): If you're NOT providing an SSH public key via the Cloudinit drive on the Virtual Machine, you'll need to modify the SSH configuration in the image to allow password authentication, which can be done by running the following command
# Write an SSH server drop-in that allows password authentication
virt-customize -a jammy-server-cloudimg-amd64.img --run-command "echo 'PasswordAuthentication yes' > /etc/ssh/sshd_config.d/ssh_changes.conf"
With our image downloaded, QEMU Agent installed, and (optionally) SSH configured, we can now create our template in Proxmox.
First, we'll need to set our environment variables for this process. Modify these variables to meet your needs:
export STORAGE_POOL="local-lvm"
export VM_ID="900"
export VM_NAME="ubuntu2204"
With our variables created, we can move onward to create the virtual machine in Proxmox.
Create the Virtual Machine with 1GB of RAM, add a virtio network adapter net0, and attach it to bridge vmbr0
qm create $VM_ID --memory 1024 --net0 virtio,bridge=vmbr0
Import the Virtual Machine's disk into the provided Storage Pool (this will allow us to see the VM in Proxmox's Web-UI)
qm importdisk $VM_ID jammy-server-cloudimg-amd64.img $STORAGE_POOL
Set the Virtual Machine's name, enable QEMU Guest Agent, and enable trimming of the disk upon cloning
qm set $VM_ID --name $VM_NAME && \
qm set $VM_ID --agent enabled=1,fstrim_cloned_disks=1
The following commands will:
- Create a variable to determine the location of our imported disk. This is necessary because some storage pools create the destination directory differently; a variable smooths over these inconsistencies.
- Add a CD-ROM to ide0 (in case you need to reinstall the Virtual Machine at a later date).
- Import the unused disk image as a virtio0 disk. This requires scsihw to be set to virtio-scsi-pci.
- Add a Cloudinit drive to ide2.
- Set the boot order to CD-ROM (ide0) -> Disk (virtio0) -> Network Adapter (net0) and set the boot disk to virtio0.
- Set the OS Type to Linux: 6.x - 2.6 Kernel.
- Add a serial adapter serial0 and update our display to use serial0.
- (Optional): Set the CPU type to host. This is necessary if you plan on running any sort of nested virtualization on the Virtual Machine (e.g., Docker, Hyper-V, etc.).
export DISK_LOC="$STORAGE_POOL:$(qm config $VM_ID | grep -Po "(?<=unused\d: $STORAGE_POOL:).*")"
qm set $VM_ID --ide0 file=none && \
qm set $VM_ID --scsihw virtio-scsi-pci --virtio0 $DISK_LOC && \
qm set $VM_ID --ide2 $STORAGE_POOL:cloudinit && \
qm set $VM_ID --boot "order=ide0;virtio0;net0" --bootdisk virtio0 && \
qm set $VM_ID --ostype l26 && \
qm set $VM_ID --serial0 socket --vga serial0
# Optional
qm set $VM_ID --cpu cputype=host
With our template mostly configured, we can set Cloudinit settings
If you are using a username/password for authentication or would like to use a different username and password instead of the Cloudinit defaults, you can do so by setting these two environment variables
export CI_USER="administrator"
export CI_PASS="my_admin_password"
Add these options to your Cloudinit template by running
qm set $VM_ID --ciuser $CI_USER --cipassword $CI_PASS
(Optional) If you want to use SSH key authentication, you'll need to add your public SSH key to your Cloudinit template. On the machine that will be connecting to Virtual Machines created from this template, you'll need to either:
- Retrieve SSH Key
- Generate SSH Key
Option 1. Retrieve SSH Key
To retrieve the public SSH key on your machine, there are two common locations where the key will reside:
- On Linux:
cat ~/.ssh/id_rsa.pub
- On Windows:
type %USERPROFILE%\.ssh\id_rsa.pub
The above command(s) will generate something along the lines of
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXTcvRHItt6hRmWq3q5UbtDsg6byjJMm/6gApTiDj46caI7DfYZ+EI3Yi+LZJC7/M+fZLP+bRQVWo7ZG/IuWIp2fy1JzafSSlnoZo/hexeD3dzkn3ERPA6QJlHoVR7fyMxwhqMT0IPmc10Werv8Etd4W0Kq7fY1j1L33aCADe4WsOrXEorU4qxSjSbc0KbVc4j6NYcWDYakZ+PzUTDIyDyMLutUgM1BYcZ63kKNUDdUXmymE7SjpvdNk7....= zack@zackshomelab.com
Option 2. Generate SSH Key
If you do NOT have a public SSH key, you can create one by running the following command on your system
ssh-keygen -t rsa -b 4096
The above command will output where the key file is located. Once you have run it, you can follow the steps under Option 1. Retrieve SSH Key
Add SSH Key to Cloudinit Template
With our SSH key retrieved, we will need to create a temporary file on our Proxmox host to add the key to our template. Run the following command to create a temporary file with the contents of your id_rsa.pub file
# Replace the contents between the 'EOF' and EOF with your public SSH Key
tee /tmp/id_rsa.pub <<'EOF'
YOUR_PUBLIC_SSH_KEY_INFO_HERE
EOF
Example of what mine would look like:
tee /tmp/id_rsa.pub <<'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxxUTK7ZgN8F7r+HZRUy6z2EhaCMYcS+LkeTl9JaW/XzZrzGplDf+uTv0ZCBpDs0wl23zAukDOrG0hnLENs/liwxM/LZMcDEy8WMcBVS4UJzJNpMpAEdJiERvC+3bN36F7EMhAchVj0evqHqjjk3Dcre5CvwarRs9BG/YZKC25ZsraoEMAWIeTi6G5sMk3qUvRW+0kGjzJxNOs8/JeXq5++xKP7RyxGBjTeIHltawgT06yFtFWIh0/6GU8hdQJ3LKHch9PowSspTfUvR//CFCRGcavEnoGBqOtNHC1plpCcdr51yiLLPBwhXlsxKaMGA2YbmpUB4BFDFdLXteaGVtQvFukIlPiYCJNoFRR62xKGrW0a3B8i1RBNKnZH4SswsIyJfEIduwI4DGE2vZNH1sqJXRAx4mK3Z9l3srW2zhYDcSpi7SlpfVVF/XYishDApFLf8Vh44sukffImA7LnyFi8lRFdsKJOL4t03XFUMdpVyv21fTe9B9eyFjs9EivXEh2MUiI9mJfwHfphxMnsA07pAQKv7ykhil4KgdoDj3jM2ypvDLhIRHaw+1dgZftlimF68cLPRmrqAgHusouu5t/T7IX8RBPXrtLoMp50EF2g6bDkoJFhH9FG9mf5EFfUpen3NPc+WWDk5qOoe5Zc5ZuLPTIXxYJpub5kQhBNXoSXQ== zackshomelab\zack@ZHLDT01
EOF
With /tmp/id_rsa.pub created, add the SSH key to your template
qm set $VM_ID --sshkeys /tmp/id_rsa.pub
You can preconfigure network settings for your Cloudinit template by setting the following options (NOTE: if searchdomain or nameserver are NOT set, the guest will use the Proxmox host's settings)
export SEARCH_DOMAIN='yourdomain.com'
# If you have more than one DNS Server, you can't use a variable.
export DNS_SERVER="192.168.1.2"
export IP_CONFIG="ip=dhcp"
qm set $VM_ID --searchdomain $SEARCH_DOMAIN --nameserver $DNS_SERVER --ipconfig0 $IP_CONFIG
# Example using more than one DNS Server
qm set $VM_ID --searchdomain $SEARCH_DOMAIN --nameserver "192.168.1.2 192.168.1.3" --ipconfig0 $IP_CONFIG
With our Virtual Machine configured, the last step would be to convert our Virtual Machine to a template, which we can do by running the following command
qm template $VM_ID
Once the Virtual Machine has been converted, you're ready to use this module!
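As a quick sanity check, a minimal module call against the template created above might look like the following sketch (it assumes the template name ubuntu2204 and storage pool local-lvm from the earlier environment variables, plus a node named pve1; adjust these to your environment):
module "cloudinit_vm" {
  source = "github.com/ZacksHomeLab/terraform-proxmox-cloudinit-vm"

  # These values are assumptions taken from the Getting Started steps; change them to match your setup.
  vm_name     = "ubuntu-test-vm"
  target_node = "pve1"
  clone       = "ubuntu2204"

  disks = [{
    size    = "10G"
    storage = "local-lvm"
  }]

  networks = [{
    dhcp = true
  }]
}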
If you run this Terraform module and notice Terraform timing out (roughly after 5 minutes), you may have forgotten to install QEMU Guest Agent, or QEMU Guest Agent is NOT enabled on your Virtual Machine template. If you export the following variable
export TF_LOG=TRACE
Terraform should display status code 500.
NOTE: This issue should be resolved in v1.6.1
From my testing, if you have your Cloudinit drive on anything other than ide2, you may experience the following error
Cloudinit drive already exists on drive ...
This error occurs frequently when you try to add additional hardware that was NOT present on your Virtual Machine template. To resolve this, you may need to adjust your Virtual Machine template so the Cloudinit drive is mounted on ide2. Follow the steps under Getting Started, or adjust the drive directly as sketched below.
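As a rough sketch of the manual fix, you can detach the misplaced Cloudinit drive and re-add it on ide2 from the Proxmox node's shell (this assumes the drive currently sits on ide3 and that VM_ID and STORAGE_POOL are set as in the Getting Started steps; adjust to your setup):
# Remove the misplaced Cloudinit drive (assumed here to be on ide3)
qm set $VM_ID --delete ide3
# Re-add the Cloudinit drive on ide2, where this module expects it
qm set $VM_ID --ide2 $STORAGE_POOL:cloudinit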
NOTE: This issue should be resolved in v1.6.1
If your Virtual Machine template has preconfigured Cloudinit settings (for example, a preset ciuser, searchdomain, nameserver, or sshkeys) and you do NOT mention these settings in your Terraform code, Terraform will see this as a change and will attempt to apply it:
Terraform will perform the following actions:
# module.Cloudinit_vm.proxmox_vm_qemu.Cloudinit[0] will be updated in-place
~ resource "proxmox_vm_qemu" "Cloudinit" {
- ciuser = "zack" -> null
Plan: 0 to add, 1 to change, 0 to destroy.
However, Terraform will not be able to modify these settings after the deployment, but it will perpetually report that changes need to be made to your Virtual Machine(s).
To prevent this issue, you MUST reference these Cloudinit settings in your Terraform code. For example, to match the preconfigured settings described above, I would have the following references in my Terraform code
# In main.tf
ciuser = var.ciuser
searchdomain = var.searchdomain
nameserver = var.nameserver
sshkeys = var.sshkeys
# In variables.tfvars
ciuser = "zack"
searchdomain = "zackshomelab.com"
nameserver = "192.168.2.15 192.168.2.16"
sshkeys = <<EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCxxUTK7ZgN8F7r+HZRUy6z2EhaCMYcS+LkeTl9JaW/XzZrzGplDf+uTv0ZCBpDs0wl23zAukDOrG0hnLENs/liwxM/LZMcDEy8WMcBVS4UJzJNpMpAEdJiERvC+3bN36F7EMhAchVj0evqHqjjk3Dcre5CvwarRs9BG/YZKC25ZsraoEMAWIeTi6G5sMk3qUvRW+0kGjzJxNOs8/JeXq5++xKP7RyxGBjTeIHltawgT06yFtFWIh0/6GU8hdQJ3LKHch9PowSspTfUvR//CFCRGcavEnoGBqOtNHC1plpCcdr51yiLLPBwhXlsxKaMGA2YbmpUB4BFDFdLXteaGVtQvFukIlPiYCJNoFRR62xKGrW0a3B8i1RBNKnZH4SswsIyJfEIduwI4DGE2vZNH1sqJXRAx4mK3Z9l3srW2zhYDcSpi7SlpfVVF/XYishDApFLf8Vh44sukffImA7LnyFi8lRFdsKJOL4t03XFUMdpVyv21fTe9B9eyFjs9EivXEh2MUiI9mJfwHfphxMnsA07pAQKv7ykhil4KgdoDj3jM2ypvDLhIRHaw+1dgZftlimF68cLPRmrqAgHusouu5t/T7IX8RBPXrtLoMp50EF2g6bDkoJFhH9FG9mf5EFfUpen3NPc+WWDk5qOoe5Zc5ZuLPTIXxYJpub5kQhBNXoSXQ== zackshomelab\zack@ZHLDT01
EOF
If done correctly, you should see the following results upon running
terraform plan -lock=false
module.Cloudinit_vm.proxmox_vm_qemu.Cloudinit[0]: Refreshing state... [id=pve1/qemu/113]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
If you provision a Virtual Machine with a non-existent bridge, you may get the following error:
generating cloud-init ISO
kvm: -netdev type=user,id=net0,hostname=test123,queues=1: Invalid parameter 'queues'
TASK ERROR: start failed: QEMU exited with code 1
I did not have a bridge named nat created on my Proxmox host, which generated the above error. Setting the network bridge to an existing bridge resolved the error.
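For reference, a network block that points at a bridge that actually exists on the Proxmox host (vmbr0, as in the earlier examples) would look something like this:
# Use a bridge that exists on the target Proxmox node
networks = [{
  bridge = "vmbr0"
  dhcp   = true
}]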
During development of this module, I've encountered numerous Proxmox provider crashes. All of them happened during the creation of the VM disk(s).
It is VERY important that you configure the correct scsihw associated with the type of disk that you have.
For example, this is a correct configuration:
scsihw = "virtio-scsi-pci"

disks = [
  # Disk #1
  {
    type    = "virtio"
    storage = "pve1-zfs"
    size    = "20G"
  }
]
This is an INCORRECT configuration (a disk of type virtio requires scsihw = "virtio-scsi-pci"; it is not compatible with scsihw = "lsi"):
scsihw = "lsi"

disks = [
  # Disk #1
  {
    type    = "virtio"
    storage = "pve1-zfs"
    size    = "20G"
  }
]
SSD Emulation for disks has been removed from this module. If your template does NOT have SSD Emulation enabled by default, the Proxmox provider will crash; to prevent accidental crashes, that feature was removed.
Requirements:

Name | Version |
---|---|
terraform | >=1.3.0 |
proxmox | 2.9.14 |

Providers:

Name | Version |
---|---|
proxmox | 2.9.14 |

Modules:

No modules.

Resources:

Name | Type |
---|---|
proxmox_vm_qemu.cloudinit | resource |
Inputs:

Name | Description | Type | Default | Required |
---|---|---|---|---|
clone | The base VM from which to clone to create the new VM. Note that clone is mutually exclusive with pxe and iso modes. | string | n/a | yes |
disks | The disk(s) of the Virtual Machine. | list(object({ | n/a | yes |
target_node | The name of the Proxmox Node on which to place the VM. | string | n/a | yes |
vm_name | The virtual machine name. | string | n/a | yes |
agent | Set to 1 to enable the QEMU Guest Agent. Note, you must run the qemu-guest-agent daemon in the guest for this to have any effect. | number | 1 | no |
automatic_reboot | Automatically reboot the VM when parameter changes require this. If disabled, the provider will emit a warning when the VM needs to be rebooted. | bool | true | no |
balloon | The minimum amount of memory to allocate to the VM in Megabytes, when Automatic Memory Allocation is desired. Proxmox will enable a balloon device on the guest to manage dynamic allocation. See the docs about memory for more info. | number | 0 | no |
bios | The BIOS to use; options are seabios or ovmf for UEFI. | string | "seabios" | no |
boot | The boot order for the VM. For example: order=scsi0;ide2;net0. | string | "" | no |
bootdisk | Enable booting from the specified disk. You shouldn't need to change it under most circumstances. | string | "" | no |
ci_wait | How long, in seconds, to wait before provisioning. | number | 30 | no |
cicustom | Instead of specifying ciuser, cipassword, etc., you can specify the path to a custom cloud-init config file here. Grants more flexibility in configuring cloud-init. | string | "" | no |
cipassword | Override the default cloud-init user's password. Sensitive. | string | "" | no |
ciuser | Override the default cloud-init user for provisioning. | string | "" | no |
cloudinit_cdrom_storage | Set the storage location for the cloud-init drive. Required when specifying cicustom. | string | "" | no |
cores | The number of CPU cores per CPU socket to allocate to the VM. | number | 1 | no |
cpu | The type of CPU to emulate in the Guest. See the docs about CPU Types for more info. | string | "host" | no |
create_vm | Controls if the virtual machine should be created. | bool | true | no |
description | The description of the VM. Shows as the 'Notes' field in the Proxmox GUI. | string | "" | no |
force_create | If false, and a VM of the same name on the same node exists, Terraform will attempt to reconfigure that VM with these settings. Set to true to always create a new VM (note, the name of the VM must still be unique, otherwise an error will be produced). | bool | false | no |
force_recreate_on_change_of | If the value of this string changes, the VM will be recreated. Useful for allowing this resource to be recreated when arbitrary attributes change. An example where this is useful is a cloudinit configuration (as the cicustom attribute points to a file, not the content). | string | "" | no |
full_clone | Set to true to create a full clone, or false to create a linked clone. See the docs about cloning for more info. Only applies when clone is set. | bool | true | no |
hagroup | The HA group identifier the resource belongs to (requires hastate to be set!). | string | "" | no |
hastate | Requested HA state for the resource. One of 'started', 'stopped', 'enabled', 'disabled', or 'ignored'. See the docs about HA for more info. | string | "" | no |
hotplug | Comma-delimited list of hotplug features to enable. Options: network, disk, cpu, memory, usb. Set to 0 to disable hotplug. | string | "cpu,network,disk,usb" | no |
memory | The amount of memory to allocate to the VM in Megabytes. | number | 1024 | no |
nameserver | Sets default DNS server for guest. | string | "" | no |
networks | The network adapters affiliated with the Virtual Machine. | list(object({ | [] | no |
numa | Whether to enable Non-Uniform Memory Access in the guest. | bool | false | no |
onboot | Whether to have the VM start up after the PVE node starts. | bool | false | no |
oncreate | Whether to have the VM start up after the VM is created. | bool | true | no |
os_type | Which provisioning method to use, based on the OS type. Options: ubuntu, centos, cloud-init. | string | "cloud-init" | no |
pool | The resource pool to which the VM will be added. | string | "" | no |
qemu_os | The type of OS in the guest. Set properly to allow Proxmox to enable optimizations for the appropriate guest OS. It takes the value from the source template and ignores any changes to the resource configuration parameter. | string | "l26" | no |
scsihw | The SCSI controller to emulate. Options: lsi, lsi53c810, megasas, pvscsi, virtio-scsi-pci, virtio-scsi-single. | string | "virtio-scsi-pci" | no |
searchdomain | Sets default DNS search domain suffix. | string | "" | no |
serials | Creates a serial device inside the Virtual Machine (up to a max of 4). | list(object({ | [] | no |
sockets | The number of CPU sockets for the Master Node. | number | 1 | no |
sshkeys | Newline-delimited list of SSH public keys to add to the authorized keys file for the cloud-init user. | string | "" | no |
startup | The startup and shutdown behaviour. | string | "" | no |
tablet | Enable/disable the USB tablet device. This device is usually needed to allow absolute mouse positioning with VNC. | bool | true | no |
tags | Tags of the VM. This is only meta information. | list(string) | [] | no |
usbs | The usb block is used to configure USB devices. It may be specified multiple times. | list(object({ | [] | no |
vgas | The vga block is used to configure the display device. It may be specified multiple times; however, only the first instance of the block will be used. | list(object({ | [] | no |
vmid | The ID of the VM in Proxmox. The default value of 0 indicates it should use the next available ID in the sequence. | number | 0 | no |
Outputs:

Name | Description |
---|---|
disks | The Disk(s) affiliated with said Virtual Machine. |
ip | The Virtual Machine's IP on the first Network Adapter. |
name | The Virtual Machine's name. |
nics | The Network Adapter(s) affiliated with said Virtual Machine. |
node | The Proxmox Node the Virtual Machine was created on. |
ssh_settings | The Virtual Machine's SSH Settings. |
template | The name of the template from which the Virtual Machine was created. |
vmid | The Virtual Machine's Id. |
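As a small, hypothetical example of consuming these outputs from your root module (assuming the module block is named cloudinit_vm as in the Usage examples), you could surface the first adapter's IP and the VM name:
# In the root module, e.g. outputs.tf
output "vm_ip" {
  description = "IP address of the deployed Virtual Machine (first network adapter)"
  value       = module.cloudinit_vm.ip
}

output "vm_name" {
  description = "Name of the deployed Virtual Machine"
  value       = module.cloudinit_vm.name
}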