
Celery worker deploy on Azure fails: didn't respond to HTTP pings on port: 80 #1917

Open
sglebs opened this issue Feb 5, 2024 · 4 comments



sglebs commented Feb 5, 2024

If we could have some port that these PaaS platforms could ping to check for liveness, it would make things so much easier.

2024-02-05T18:59:27.316Z ERROR - Container staging-foobarcelery_0_3ed668ba for site staging-foobarcelery has exited, failing site start
2024-02-05T18:59:27.318Z ERROR - Container staging-foobarcelery_0_3ed668ba didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.

How can I run Celery in worker mode and still have a port that can be pinged for something? Even a "nothing here" reply would do.

Any tips on how to do this? Subclass? Or is it in the roadmap? Thanks!
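One way to answer the "even a 'nothing here' reply would do" requirement without a second process is to start a tiny stdlib HTTP server in a background thread before handing the main thread to the worker. This is only a minimal sketch, not anything Celery ships: the `app` import and the exact worker arguments are assumptions you would adapt to your project.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Replies 200 "ok" to anything; enough to satisfy a liveness ping."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the ping noise out of the worker logs


def start_health_server(port=80):
    """Serve health pings on a daemon thread; returns the server object."""
    server = HTTPServer(("", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


# Typical use at worker startup (assumes `app` is your Celery instance;
# binding :80 may require root inside the container):
#   start_health_server(port=80)
#   app.worker_main(["worker", "--loglevel=info", "-E"])
```

Because the server thread is a daemon, it dies with the worker process, so a crashed worker also fails the liveness check, which is usually what you want.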


sglebs commented Feb 5, 2024

#1177 seems to be related, but I am using Redis rather than RabbitMQ, and I am deploying to Azure (free credits!) rather than Kubernetes, using this GitHub Action: https://github.com/Azure/webapps-deploy

I have not been able to disable the liveness check on the exposed port at deploy time, and I have not been able to bundle a tiny web server that just replies "go away". Docker Compose support is experimental on Azure (Compose with celery+nginx might be a valid hack).

I am willing to write my own code, something like celery-worker+FastAPI serving a single trivial endpoint, but I am not sure how to achieve this.

Help is welcome. Thanks.


sglebs commented Feb 12, 2024

The current workaround I am trying requires two more Python processes:

  1. supervisord
  2. http.server

Here's the supervisord.conf:

[supervisord]
# http://supervisord.org/configuration.html
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

[program:celery]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
command=celery worker --loglevel=info -E

[program:web]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
#https://realpython.com/python-http-server/
command=python3 -m http.server -b "::" -d ./app/static 80
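With that config, supervisord keeps both the worker and the `http.server` process running, and the platform's liveness check can be approximated locally with a small probe. The snippet below is just a sketch of that check; the URL, port, and timeout are assumptions, and Azure's real probe may differ in detail.

```python
import urllib.request
import urllib.error


def ping(url="http://127.0.0.1:80/", timeout=5):
    """Roughly what a PaaS liveness check does: any non-error HTTP
    response on the exposed port counts as alive."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False
```

Running `ping()` inside the container after startup is a quick way to confirm the `http.server` side of the supervisord config actually answers before pushing a deploy.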

@iloveitaly

Running into the same issue—curious if anyone has a workaround?


sglebs commented May 27, 2024

I ended up bundling Celery and Flower together, so there is an HTTP server to answer the pings (separate processes, managed by supervisord). It sucks, but it works, and as a bonus I get a web viewer for the Celery workers. Here:

[supervisord]
# http://supervisord.org/configuration.html
nodaemon=true
logfile=/dev/stdout
logfile_maxbytes=0

[program:celery]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
command=newrelic-admin run-program celery worker --loglevel=info -E

[program:httpd]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
#https://realpython.com/python-http-server/
#command=python3 -m http.server -b "::" -d ./app/static 80
# --basic_auth comes as default from $FLOWER_BASIC_AUTH - https://flower.readthedocs.io/en/latest/config.html#environment-variables
command=celery flower --port=80
