Release 1.6.2
Merge branch 'develop' into master
lae committed Oct 12, 2019
2 parents f85ec2f + 71b1212 commit 2c98a0c
Showing 22 changed files with 358 additions and 210 deletions.
5 changes: 4 additions & 1 deletion .gitignore
@@ -2,7 +2,10 @@
*.*~*
*.bak

-ansible.cfg
+/ansible.cfg
+*.retry
+/.project
+/.pydevproject

fetch/
.vagrant/
10 changes: 5 additions & 5 deletions .travis.yml
@@ -15,7 +15,7 @@ env:
install:
- if [ "$ANSIBLE_GIT_VERSION" ]; then pip install "https://github.com/ansible/ansible/archive/${ANSIBLE_GIT_VERSION}.tar.gz";
  else pip install "ansible${ANSIBLE_VERSION}"; fi;
-  pip install --pre ansible-lint; pip install jmespath
+  pip install --pre ansible-lint; pip install jmespath netaddr
- ansible --version
- ansible-galaxy install lae.travis-lxc,v0.8.1
- ansible-playbook tests/install.yml -i tests/inventory
@@ -25,11 +25,11 @@ before_script: cd tests/
script:
- ansible-lint ../ || true
- ansible-playbook -i inventory deploy.yml --syntax-check
-- ansible-playbook -i inventory -v deploy.yml --skip skiponlxc
-- 'ANSIBLE_STDOUT_CALLBACK=debug unbuffer ansible-playbook --skip skiponlxc -vv
-  -i inventory deploy.yml > idempotency.log 2>&1 || (e=$?; cat idempotency.log; exit $e)'
+- 'ansible-playbook -i inventory -v deploy.yml --skip skiponlxc & pid=$!; { while true; do sleep 1; kill -0 $pid 2>/dev/null || break; printf "\0"; done }'
+- 'ANSIBLE_STDOUT_CALLBACK=debug ANSIBLE_DISPLAY_SKIPPED_HOSTS=no ANSIBLE_DISPLAY_OK_HOSTS=no
+  unbuffer ansible-playbook --skip skiponlxc -vv -i inventory deploy.yml &> idempotency.log'
- 'grep -A1 "PLAY RECAP" idempotency.log | grep -qP "changed=0 .*failed=0 .*" &&
(echo "Idempotence: PASS"; exit 0) || (echo "Idempotence: FAIL"; exit 1)'
(echo "Idempotence: PASS"; exit 0) || (echo "Idempotence: FAIL"; cat idempotency.log; exit 1)'
- ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -i inventory -v test.yml
notifications:
webhooks:
46 changes: 32 additions & 14 deletions README.md
@@ -340,9 +340,9 @@ For example:
This will ask for a sudo password, then login to the `admin1` user (using public
key auth - add `-k` for pw) and run the playbook.
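For reference, an invocation matching that description might look like the following (inventory path and remote user are illustrative):

```sh
# -u selects the remote user; --ask-become-pass prompts for the sudo password.
# Swap in -k (--ask-pass) if you use SSH password auth instead of public keys.
ansible-playbook -i inventory deploy.yml -u admin1 --ask-become-pass
```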

-That's it! You should now have a fully deployed Proxmox cluster. You may want to
-create Ceph storage on it afterward, which this role does not (yet?) do, and
-other tasks possibly, but the hard part is mostly complete.
+That's it! You should now have a fully deployed Proxmox cluster. You may want
+to create Ceph storage on it afterwards (see Ceph for more info) and other
+tasks possibly, but the hard part is mostly complete.


## Example Playbook
@@ -394,7 +394,8 @@ pve_zfs_enabled: no # Specifies whether or not to install and configure ZFS pack
# pve_zfs_zed_email: "" # Should be set to an email to receive ZFS notifications
pve_ceph_enabled: false # Specifies whether or not to install and configure Ceph packages. See below for an example configuration.
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-nautilus buster main" # apt-repository configuration. Will be automatically set for 5.x and 6.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
-pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}" # Ceph cluster network
+pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}" # Ceph public network
+# pve_ceph_cluster_network: "" # Optional, if the ceph cluster network is different from the public network (see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_install_wizard)
pve_ceph_mon_group: "{{ pve_group }}" # Host group containing all Ceph monitor hosts
pve_ceph_mds_group: "{{ pve_group }}" # Host group containing all Ceph metadata server hosts
pve_ceph_osds: [] # List of OSD disks
@@ -417,18 +418,18 @@ pve_cluster_enabled: no # Set this to yes to configure hosts to be clustered tog
pve_cluster_clustername: "{{ pve_group }}" # Should be set to the name of the PVE cluster
```

-Information about the following can be found in the PVE Documentation in the
-[Cluster Manager][pvecm-network] chapter.
+The following variables are used to provide networking information to corosync.
+These are known as ring0_addr/ring1_addr or link0_addr/link1_addr, depending on
+PVE version. They should be IPv4 or IPv6 addresses. For more information, refer
+to the [Cluster Manager][pvecm-network] chapter in the PVE Documentation.

```
-pve_cluster_ring0_addr: "{{ ansible_default_ipv4.address }}"
-pve_cluster_bindnet0_addr: "{{ pve_cluster_ring0_addr }}"
-# pve_cluster_ring1_addr: "another interface's IP address or hostname"
-# pve_cluster_bindnet1_addr: "{{ pve_cluster_ring1_addr }}"
+# pve_cluster_addr0: "{{ ansible_default_ipv4.address }}"
+# pve_cluster_addr1: "another interface's IP address or hostname"
```
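As a concrete sketch, a host with a dedicated corosync network could pin both links in its host_vars (the variable names come from the diff above; the addresses are made up):

```yaml
pve_cluster_addr0: 10.0.10.11  # primary corosync link
pve_cluster_addr1: 10.0.20.11  # optional redundant link on a second interface
```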

You can set options in the datacenter.cfg configuration file:

```
pve_datacenter_cfg:
  keyboard: en-us
@@ -553,11 +554,22 @@ documentation.

## Ceph configuration

-*This section could use a little more love. If you are actively using this role
-to manage your PVE Ceph cluster, please feel free to flesh this section more
-thoroughly and open a pull request! See issue #68.*

+**PVE Ceph management with this role is experimental.** While users have
+successfully used this role to deploy PVE Ceph, it is not fully tested in CI
+(due to a lack of usable block devices to use as OSDs in Travis CI). Please
+deploy a test environment with your configuration first, prior to production
+use, and report any issues you run into.

This role can configure the Ceph storage system on your Proxmox hosts.

```
pve_ceph_enabled: true
pve_ceph_network: '172.10.0.0/24'
+pve_ceph_cluster_network: '172.10.1.0/24'
pve_ceph_osds:
  # OSD with everything on the same device
  - device: /dev/sdc
@@ -592,15 +604,21 @@ pve_ceph_fs:
    mountpoint: /srv/proxmox/backup
```

+`pve_ceph_network` by default uses the `ipaddr` filter, which requires the
+`netaddr` library to be installed and usable by your Ansible controller.
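If the play fails on that filter, installing the library on the controller is usually enough, for example:

```sh
pip install netaddr
```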

## Contributors

-Musee Ullah ([@lae](https://github.com/lae), <lae@lae.is>)
+Musee Ullah ([@lae](https://github.com/lae), <lae@lae.is>) - Main developer
+Fabien Brachere ([@Fbrachere](https://github.com/Fbrachere)) - Storage config support
+Gaudenz Steinlin ([@gaudenz](https://github.com/gaudenz)) - Ceph support, etc
+Thoralf Rickert-Wendt ([@trickert76](https://github.com/trickert76)) - PVE 6.x support, etc
Engin Dumlu ([@roadrunner](https://github.com/roadrunner))
Jonas Meurer ([@mejo-](https://github.com/mejo-))
-Ondrej Flider ([@SniperCZE](https://github.com/SniperCZE))
+Ondrej Flidr ([@SniperCZE](https://github.com/SniperCZE))
niko2 ([@niko2](https://github.com/niko2))
Christian Aublet ([@caublet](https://github.com/caublet))
-Fabien Brachere ([@Fbrachere](https://github.com/Fbrachere))
+Michael Holasek ([@mholasek](https://github.com/mholasek))

[pve-cluster]: https://pve.proxmox.com/wiki/Cluster_Manager
[install-ansible]: http://docs.ansible.com/ansible/intro_installation.html
23 changes: 23 additions & 0 deletions Vagrantfile
@@ -0,0 +1,23 @@
+Vagrant.configure("2") do |config|
+  config.vm.box = "debian/buster64"
+
+  config.vm.provider :libvirt do |libvirt|
+    libvirt.memory = 2048
+    libvirt.cpus = 2
+  end
+
+  N = 3
+  (1..N).each do |machine_id|
+    config.vm.define "pve-#{machine_id}" do |machine|
+      machine.vm.hostname = "pve-#{machine_id}"
+
+      if machine_id == N
+        machine.vm.provision :ansible do |ansible|
+          ansible.limit = "all"
+          ansible.playbook = "tests/vagrant/provision.yml"
+          ansible.verbose = true
+        end
+      end
+    end
+  end
+end
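Assuming the vagrant-libvirt plugin is installed, a typical workflow against this Vagrantfile would be:

```sh
vagrant up --provider=libvirt
```

Note that the Ansible provisioner block only fires on the last box (`pve-3`), but because `ansible.limit = "all"` it runs the playbook across all three nodes once the full set is up.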
14 changes: 4 additions & 10 deletions defaults/main.yml
@@ -17,7 +17,7 @@ pve_zfs_enabled: no
# pve_zfs_options: "parameters to pass to zfs module"
# pve_zfs_zed_email: "email address for zfs events"
pve_ceph_enabled: false
pve_ceph_repository_line: "{{ pve_ceph_repo }}"
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/{% if ansible_distribution_release == 'stretch' %}ceph-luminous stretch{% else %}ceph-nautilus buster{% endif %} main"
pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}"
pve_ceph_mon_group: "{{ pve_group }}"
pve_ceph_mds_group: "{{ pve_group }}"
@@ -29,18 +29,12 @@ pve_ceph_crush_rules: []
# pve_ssl_certificate: "contents of certificate"
pve_cluster_enabled: no
pve_cluster_clustername: "{{ pve_group }}"
-# PVE 5.x (Debian Stretch) clustering options
-pve_cluster_ring0_addr: "{{ ansible_default_ipv4.address }}"
-pve_cluster_bindnet0_addr: "{{ pve_cluster_ring0_addr }}"
-# pve_cluster_ring1_addr: "another interface's IP address or hostname"
-# pve_cluster_bindnet1_addr: "{{ pve_cluster_ring1_addr }}"
-# PVE 6.x (Debian Buster) clustering options
-pve_cluster_link0_addr: "{{ ansible_default_ipv4.address }}"
-# pve_cluster_link1_addr: "another interface's IP address or hostname"
+# pve_cluster_addr0: "{{ ansible_default_ipv4.address }}"
+# pve_cluster_addr1: "{{ ansible_eth1.ipv4.address }}"
pve_datacenter_cfg: {}
pve_ssl_letsencrypt: false
pve_groups: []
pve_users: []
pve_acls: []
pve_storages: []
-pve_ssh_port: 22
\ No newline at end of file
+pve_ssh_port: 22
7 changes: 4 additions & 3 deletions library/collect_kernel_info.py
@@ -4,6 +4,7 @@
import subprocess

from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils._text import to_text

def main():
    module = AnsibleModule(
@@ -51,17 +52,17 @@ def main():

    # This will likely output a path that considers the boot partition as /
    # e.g. /vmlinuz-4.4.44-1-pve
-    booted_kernel = subprocess.check_output(["grep", "-o", "-P", "(?<=BOOT_IMAGE=).*?(?= )", "/proc/cmdline"]).strip()
+    booted_kernel = to_text(subprocess.check_output(["grep", "-o", "-P", "(?<=BOOT_IMAGE=).*?(?= )", "/proc/cmdline"]).strip())

    booted_kernel_package = ""
    old_kernel_packages = []

    if params['lookup_packages']:
        for kernel in kernels:
            if kernel.split("/")[-1] == booted_kernel.split("/")[-1]:
-                booted_kernel_package = subprocess.check_output(["dpkg-query", "-S", kernel]).split(":")[0]
+                booted_kernel_package = to_text(subprocess.check_output(["dpkg-query", "-S", kernel])).split(":")[0]
            elif kernel != latest_kernel:
-                old_kernel_packages.append(subprocess.check_output(["dpkg-query", "-S", kernel]).split(":")[0])
+                old_kernel_packages.append(to_text(subprocess.check_output(["dpkg-query", "-S", kernel])).split(":")[0])

    # returns True if we're not booted into the latest kernel
    new_kernel_exists = booted_kernel.split("/")[-1] != latest_kernel.split("/")[-1]
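The `to_text` wrapping added above matters on Python 3, where `subprocess.check_output` returns `bytes`, and mixing bytes with str operations raises `TypeError`. A minimal sketch of the failure mode being avoided (the command is illustrative):

```python
import subprocess
from ansible.module_utils._text import to_text

out = subprocess.check_output(["uname", "-r"])  # bytes under Python 3
# out.split("/") would raise TypeError on Python 3 (bytes vs str argument);
# to_text() normalizes to a text string on both Python 2 and 3.
kernel = to_text(out).strip()
```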
7 changes: 5 additions & 2 deletions module_utils/pvesh.py
@@ -4,6 +4,8 @@
import json
import re

+from ansible.module_utils._text import to_text

class ProxmoxShellError(Exception):
"""Exception raised when an unexpected response code is thrown from pvesh."""
def __init__(self, response):
@@ -23,12 +25,13 @@ def run_command(handler, resource, **params):
        handler,
        resource,
        "--output=json"]
-    for parameter, value in params.iteritems():
+    for parameter, value in params.items():
        command += ["-{}".format(parameter), "{}".format(value)]

    pipe = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, stderr) = pipe.communicate()
-    stderr = stderr.splitlines()
+    result = to_text(result)
+    stderr = to_text(stderr).splitlines()

    if len(stderr) == 0:
        if not result:
