This repository has been archived by the owner on Aug 13, 2024. It is now read-only.

nginx role doesn't work with with_items #164

Open
msm-code opened this issue Mar 1, 2017 · 7 comments

Comments

@msm-code

msm-code commented Mar 1, 2017

I'm new to ansible, but I feel like this should work:

dependencies:
  - role: jdauphant.nginx
    with_items: "{{vhost_proxies}}"
    become: true
    keep_only_specified: True
    nginx_sites:
      "{{item.hostname}}":
        - listen "*:80"

But instead the result is:

fatal: [web.lxc]: FAILED! => {"failed": true, "msg": "The conditional check 'item.key not in nginx_remove_sites' failed. The error was: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}' (
(...)
}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: {'value': [u'listen \"*:80\"'], 'key': u'{{item.hostname}}'}: recursive loop detected in template string: {{item.hostname}}\n\nThe error appears to have been in '/etc/ansible/roles/jdauphant.nginx/tasks/configuration.yml': line 18, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Create the configurations for sites\n  ^ here\n"}

Is something wrong with my config, or is this specific to ansible-role-nginx?

(I understand that my rule is recursive for some reason, but I don't understand why)

@msm-code
Author

msm-code commented Mar 1, 2017

OK, I've read more about this issue, and it turns out that roles are not supposed to be used with with_items.

So, to rephrase my question: I have a central nginx reverse proxy and a list of "hosts". Is there a way to set everything up using ansible-role-nginx so that when I add a new host to some configuration file, a proxy vhost is added to nginx automatically?
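
A minimal sketch of that idea (untested; the exact shape of vhost_proxies is an assumption): instead of looping the role with with_items, build the nginx_sites dict up front with set_fact + combine and pass it to the role once:

- hosts: web.lxc
  become: true
  vars:
    vhost_proxies:                        # assumed structure of the existing variable
      - { hostname: aaa.example.com }
      - { hostname: bbb.example.com }
  pre_tasks:
    - name: Build the nginx_sites dict from the vhost list
      set_fact:
        generated_sites: >-
          {{ generated_sites | default({})
             | combine({ item.hostname: ['listen *:80',
                                         'server_name ' ~ item.hostname] }) }}
      with_items: "{{ vhost_proxies }}"
  roles:
    - role: jdauphant.nginx
      nginx_sites: "{{ generated_sites }}"

This keeps "{{item.hostname}}" out of the dict keys that the role later templates inside its own loop, which appears to be what produces the recursive loop error above.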

@philipwigg
Contributor

philipwigg commented Mar 1, 2017

Can you explain more about what the end result configuration would look like?

i.e. do you need a separate server block for each host?

Are the proxy back-end servers the same for each host you want to add?

@msm-code
Author

msm-code commented Mar 1, 2017

I'm rather flexible about the exact configuration format. I thought about configuring the hosts like this (websrv and website are my role names):

- hosts: web.lxc
  roles:
    - websrv

- hosts: web-aaa.lxc
  roles:
    - role: website
      hostname: aaa.example.com


- hosts: web-bbb.lxc
  roles:
    - role: website
      hostname: bbb.example.com

The end result I want to achieve is:

an nginx instance on web.lxc with two proxy vhosts (aaa.example.com and bbb.example.com),

and two additional nginx instances (but that should be easy):

  • an nginx instance on web-aaa.lxc
  • an nginx instance on web-bbb.lxc

The hard part (at least for me) is using ansible-role-nginx with a dynamically generated list of sites.
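
One possible way to get that dynamic list (a sketch; it assumes the website hosts are collected in an inventory group called websites and that hostname is set as a host variable, not only as a role parameter): collect the hostnames from the inventory on web.lxc and build nginx_sites from them with set_fact + combine, as in the sketch further up:

- hosts: web.lxc
  pre_tasks:
    - name: Gather the vhost names declared by the website hosts
      set_fact:
        vhost_proxies: "{{ groups['websites'] | map('extract', hostvars, 'hostname') | list }}"
  roles:
    - websrv   # websrv (or a play-level role call) could then build nginx_sites from vhost_proxies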

@llybin

llybin commented May 13, 2017

@philipwigg I think I have the same problem. How can I optimize this config? I need a similar configuration for many vhosts.

  roles:
    - role: jdauphant.nginx

      project_path: /path

      nginx_configs:
        upstream:
          - |
            upstream domain1_backend {
                server unix:{{ project_path }}/run/gunicorn_domain1_app.sock fail_timeout=0;
            }
          - |
            upstream domain2_backend {
                server unix:{{ project_path }}/run/gunicorn_domain2_app.sock fail_timeout=0;
            }
          - |
            upstream domain3_backend {
                server unix:{{ project_path }}/run/gunicorn_domain3_app.sock fail_timeout=0;
            }

      nginx_sites:
        https_domain1.com:
          - listen *:443 ssl
          - server_name domain1.com
          - |
            location / {
                proxy_pass http://domain1_backend;
            }
        https_domain2.com:
          - listen *:443 ssl
          - server_name domain2.com
          - |
            location / {
                proxy_pass http://domain2_backend;
            }
        https_domain3.com:
          - listen *:443 ssl
          - server_name domain3.com
          - |
            location / {
                proxy_pass http://domain3_backend;
            }
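
A possible way to shrink that (a sketch, not tested against the role; app_domains and the play target are made up for illustration): keep only a list of domain names and generate both the upstream entries and the nginx_sites dict from it, so adding a domain is a one-line change:

- hosts: webserver
  vars:
    project_path: /path
    app_domains: [domain1, domain2, domain3]
  pre_tasks:
    - name: Build upstream blocks and vhosts from the domain list
      set_fact:
        generated_upstreams: >-
          {{ generated_upstreams | default([]) + [
               'upstream ' ~ item ~ '_backend { server unix:' ~ project_path ~
               '/run/gunicorn_' ~ item ~ '_app.sock fail_timeout=0; }'] }}
        generated_sites: >-
          {{ generated_sites | default({}) | combine({
               'https_' ~ item ~ '.com': [
                 'listen *:443 ssl',
                 'server_name ' ~ item ~ '.com',
                 'location / { proxy_pass http://' ~ item ~ '_backend; }'] }) }}
      with_items: "{{ app_domains }}"
  roles:
    - role: jdauphant.nginx
      nginx_configs:
        upstream: "{{ generated_upstreams }}"
      nginx_sites: "{{ generated_sites }}"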

@jorgeuriarte

I got bitten by the same problem and just came up with a simple solution (via a patch, that's it).

Let's assume that we extend the configuration with this 'alias', so any of the defined sites can use a var instead of the hardcoded name:

    nginx_site_alias:
      site_80: "{{nginx_sitename}}_80"
      site_443: "{{nginx_sitename}}_443"
    nginx_sites:
      site_80:
        - listen 80
        - server_name *.........
        - return 301 https://$host$request_uri
      site_443:
        - access_log  /var/log/nginx/labs-access.log
        - error_log  /var/log/nginx/labs-error.log
        - listen 443 ssl
        - ssl_certificate {{ server_home }}/{{ server_cert_name }}.crt
        - ssl_certificate_key {{ server_home }}/{{ server_cert_name }}.key

In this example, I use site_80 and site_443 as hardcoded placeholders, but I can optionally add the nginx_site_alias block to transform the hardcoded names into whatever dynamic names we'd like to use.

If you omit the nginx_site_alias block, the behaviour defaults to the current one: site_80 and site_443 will be the site names.

The patch needed for this to work would be:

diff --git a/setup/ansible/nginx/tasks/configuration.yml b/setup/ansible/nginx/tasks/configuration.yml
index d30d77a..6373ece 100644
--- a/setup/ansible/nginx/tasks/configuration.yml
+++ b/setup/ansible/nginx/tasks/configuration.yml
@@ -11,14 +11,14 @@
   tags: [configuration,nginx]
 
 - name: Create the configurations for sites
-  template: src=site.conf.j2 dest={{nginx_conf_dir}}/sites-available/{{ item }}.conf
+  template: src=site.conf.j2 dest={{nginx_conf_dir}}/sites-available/{{ nginx_site_alias[item] | default(item) }}.conf
   with_items: "{{ nginx_sites.keys() | difference(nginx_remove_sites) }}"
   notify:
    - reload nginx
   tags: [configuration,nginx]
 
 - name: Create links for sites-enabled
-  file: state=link src={{nginx_conf_dir}}/sites-available/{{ item }}.conf dest={{nginx_conf_dir}}/sites-enabled/{{ item }}.conf
+  file: state=link src={{nginx_conf_dir}}/sites-available/{{ nginx_site_alias[item] | default(item) }}.conf dest={{nginx_conf_dir}}/sites-enabled/{{ nginx_site_alias[item] | default(item) }}.conf
   with_items: "{{ nginx_sites.keys() | difference(nginx_remove_sites) }}"
   notify:
    - reload nginx
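
For completeness, a hedged usage sketch of the patch above (the play layout is an assumption; nginx_sitename and the alias keys come from the example earlier in this comment): each host only overrides nginx_sitename, and the patched role then writes aaa.example.com_80.conf instead of site_80.conf:

- hosts: web-aaa.lxc
  roles:
    - role: jdauphant.nginx
      nginx_sitename: aaa.example.com      # the only per-host difference
      nginx_site_alias:
        site_80: "{{ nginx_sitename }}_80"
        site_443: "{{ nginx_sitename }}_443"
      nginx_sites:
        site_80:
          - listen 80
          - server_name {{ nginx_sitename }}
          - return 301 https://$host$request_uri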

@jorgeuriarte

(Sorry I just made a diff, because I'm not working with the most recent version, I fear.)

@HerrmannHinz

HerrmannHinz commented Aug 30, 2017

I was stumbling over the same requirements; after a while of tinkering I came up with something that works:

- hosts: nginx-proxy
  become: true
  vars:
    sites:
      api.log.mycicd.internal:
        template:             proxy_vhost_ssl.conf.j2 
        resolver:             "10.241.44.2"
        server_name:          "api.log.mycicd.internal"
        proxy_pass_backend:   "https://google.de"

      om.log.mycicd.internal:
        template:             proxy_vhost_ssl.conf.j2 
        resolver:             "10.241.44.2"
        server_name:          "om.log.mycicd.internal"
        proxy_pass_backend:   "https://google.de"

      mqs.log.mycicd.internal:
        template:             proxy_vhost_ssl.conf.j2 
        resolver:             "10.241.44.2"
        server_name:          "mqs.log.mycicd.internal"
        proxy_pass_backend:   "https://google.de"
        
  roles:
  - role: jdauphant.nginx
    nginx_http_params:
      - proxy_send_timeout "120"
      - proxy_read_timeout "300"
      - proxy_buffering    "off"
      - keepalive_timeout  5 5
      - tcp_nodelay "on"
      - server_tokens "off"
      - sendfile "on"
      - access_log "/var/log/nginx/access.log"
      - error_log "/var/log/nginx/error.log"

    nginx_sites:
      "{{ sites }}"

    nginx_configs:
      gzip:
        - gzip on
        - gzip_disable msie6

The template (proxy_vhost_ssl.conf.j2) looks like this:

server {
    listen 80;
    listen 443 ssl;
    server_name {{ item.value.server_name }};
    ssl_certificate /etc/pki/tls/certs/{{ item.value.server_name }}/cert.pem;
    ssl_certificate_key /etc/pki/tls/private/{{ item.value.server_name }}/key.pem;
    resolver {{ item.value.resolver }} valid=300s;
    resolver_timeout 10s;
    client_max_body_size 1G;

    if ($scheme = http) {
      return 301 https://{{ item.value.server_name }}$request_uri;
    }

    location / {
      proxy_pass {{ item.value.proxy_pass_backend }};
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
}

this is what ends up on my server:

total 20K
drwxr-xr-x  2 root nginx 4.0K Aug 30 11:47 .
drwxr-xr-x 10 root root  4.0K Aug 30 11:47 ..
-rw-r--r--  1 root root   703 Aug 30 11:47 api.log.mycicd.internal.conf
-rw-r--r--  1 root root   703 Aug 30 11:47 mqs.log.mycicd.internal.conf
-rw-r--r--  1 root root   699 Aug 30 11:47 om.log.mycicd.internal.conf

and some file content:
cat /etc/nginx/sites-available/api.log.mycicd.internal.conf

server {
    listen 80;
    listen 443 ssl;
    server_name api.log.mycicd.internal;
    ssl_certificate /etc/pki/tls/certs/api.log.mycicd.internal/cert.pem;
    ssl_certificate_key /etc/pki/tls/private/api.log.mycicd.internal/key.pem;
    resolver 10.241.44.2 valid=300s;
    resolver_timeout 10s;
    client_max_body_size 1G;

    if ($scheme = http) {
      return 301 https://api.log.mycicd.internal$request_uri;
    }

    location / {
      proxy_pass https://google.de;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
}

\o/

Maybe this is of help.
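
A small follow-up sketch on the same idea (the file path is an assumption): the sites dict can live in group_vars for the proxy group, so adding another vhost is just one more key there and the playbook itself never changes:

# group_vars/nginx-proxy.yml (assumed location)
sites:
  new.log.mycicd.internal:
    template:             proxy_vhost_ssl.conf.j2
    resolver:             "10.241.44.2"
    server_name:          "new.log.mycicd.internal"
    proxy_pass_backend:   "https://google.de"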
