It was also CloudAMQP's support team who first pointed out an issue with the way I ran my Celery workers (see the very first code block above): if you run multiple worker processes inside a single dyno the way I did, Heroku has no way to notice when one of them dies, so the dyno is never restarted. ...
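A hypothetical sketch of the difference (the project and queue names below are made up, not the setup from the code block above). Running several workers behind a single Procfile entry means Heroku only supervises the dyno as a whole:

    worker: celery -A proj worker -Q queue_a & celery -A proj worker -Q queue_b & wait

whereas giving each worker its own process type puts each one in its own dyno, where a crash is noticed and the dyno restarted:

    worker_a: celery -A proj worker -Q queue_a
    worker_b: celery -A proj worker -Q queue_b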
To process all the activity within our service, we run about 150 Celery workers with the setting -w 92 on our m3.2xlarge, which means each worker got roughly 320 MB of RAM. This should have been plenty, yet it wasn't uncommon to have 8-12 GB of swap usage on a worker after ...
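Leaks like this are often contained by recycling the pool's child processes before they grow too large. A minimal sketch, assuming Celery 4.x setting names (the app name, broker URL, and thresholds are placeholders, not our production values):

    from celery import Celery

    app = Celery("proj", broker="amqp://localhost//")

    app.conf.update(
        # replace a child process after it has executed 100 tasks
        worker_max_tasks_per_child=100,
        # replace a child process once its resident memory exceeds ~300 MB
        # (the value is given in kilobytes)
        worker_max_memory_per_child=300_000,
    )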
    time.sleep(seconds)

If the attributes are not set, then the worker's default time limits will be used.

New in this version you can also change the time limits for a task at runtime using the :func:`time_limit` remote control command::

    >>> from celery.task import control
    ...
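For completeness, here is a small sketch of setting the limits where the task is defined, using the modern Celery API (the task body, broker URL, and limit values are illustrative only):

    from time import sleep

    from celery import Celery
    from celery.exceptions import SoftTimeLimitExceeded

    app = Celery("proj", broker="amqp://localhost//")

    @app.task(soft_time_limit=60, time_limit=120)
    def crawl_the_web(url):
        try:
            sleep(300)  # stand-in for long-running work
        except SoftTimeLimitExceeded:
            # the soft limit raises inside the task, leaving a window to
            # clean up before the hard limit terminates the child process
            pass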