Commit 51b0a571 authored by Eliot Berriot

Fix #997: Added support for CELERYD_CONCURRENCY env var to control the number of worker processes

parent b38a72e1
Added support for CELERYD_CONCURRENCY env var to control the number of worker processes (#997)
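Note (a usage sketch, not part of the commit itself): once this lands, the worker's process count can be pinned from the instance environment. This assumes the production env file lives at /srv/funkwhale/config/.env, as referenced in the systemd unit further down:

    # Cap the Celery worker at 4 processes; the default of 0 lets
    # Celery spawn one process per CPU instead.
    echo "CELERYD_CONCURRENCY=4" >> /srv/funkwhale/config/.env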
@@ -36,7 +36,7 @@ services:
     # of CPUs. You can adjust this, by explicitly setting the --concurrency
     # flag:
     # celery -A funkwhale_api.taskapp worker -l INFO --concurrency=4
-    command: celery -A funkwhale_api.taskapp worker -l INFO
+    command: celery -A funkwhale_api.taskapp worker -l INFO --concurrency=${CELERYD_CONCURRENCY-0}
     environment:
       - C_FORCE_ROOT=true
     volumes:
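The ${CELERYD_CONCURRENCY-0} form used above is POSIX default-value expansion, which docker-compose also applies when substituting variables into the command line: the variable's value if it is set, otherwise 0. A quick demonstration in any POSIX shell:

    unset CELERYD_CONCURRENCY
    echo "--concurrency=${CELERYD_CONCURRENCY-0}"   # prints --concurrency=0
    CELERYD_CONCURRENCY=4
    echo "--concurrency=${CELERYD_CONCURRENCY-0}"   # prints --concurrency=4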
@@ -86,6 +86,12 @@ REVERSE_PROXY_TYPE=nginx
 # For the Celery/asynchronous tasks part:
 # CELERY_BROKER_URL=redis+socket:///run/redis/redis.sock?virtual_host=0
+# Number of worker processes to execute. Defaults to 0, in which case it uses your number of CPUs.
+# Celery workers handle background tasks (such as file imports or federation
+# messaging). The more processes a worker gets, the more tasks
+# can be processed in parallel. However, more processes also mean
+# a bigger memory footprint.
+# CELERYD_CONCURRENCY=0
 # Where media files (such as album covers or audio tracks) should be stored
 # on your system?
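As an illustration of the new setting (the value 2 is an example, not a default from this commit), a memory-constrained host could trade parallelism for footprint:

    # In .env: limit the worker to two processes.
    CELERYD_CONCURRENCY=2
    # The worker command then expands to:
    #   celery -A funkwhale_api.taskapp worker -l INFO --concurrency=2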
@@ -175,4 +181,4 @@ AWS_STORAGE_BUCKET_NAME=
 # If you are using Amazon S3, use this setting to configure how long generated URLs should stay
 # valid. The default value is 3600 (60 minutes). The maximum accepted value is 604800 (7 days)
-# AWS_QUERYSTRING_EXPIRE=
\ No newline at end of file
+# AWS_QUERYSTRING_EXPIRE=
@@ -8,15 +8,8 @@ User=funkwhale
 # adapt this depending on the path of your funkwhale installation
 WorkingDirectory=/srv/funkwhale/api
 EnvironmentFile=/srv/funkwhale/config/.env
-# Celery workers handle background tasks (such file imports or federation
-# messaging). The more processes a worker gets, the more tasks
-# can be processed in parallel. However, more processes also means
-# a bigger memory footprint.
-# By default, a worker will span a number of process equal to your number
-# of CPUs. You can adjust this, by explicitly setting the --concurrency
-# flag:
-# celery -A funkwhale_api.taskapp worker -l INFO --concurrency=4
-ExecStart=/srv/funkwhale/virtualenv/bin/celery -A funkwhale_api.taskapp worker -l INFO
+ExecStart=/srv/funkwhale/virtualenv/bin/celery -A funkwhale_api.taskapp worker -l INFO --concurrency=${CELERYD_CONCURRENCY-0}

 [Install]
 WantedBy=multi-user.target
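To apply the unit change on an existing install, something like the following would be needed; the unit name funkwhale-worker is an assumption, as it is not shown in this diff:

    # Reload unit definitions, then restart the worker so the new
    # ExecStart (and any CELERYD_CONCURRENCY value from the
    # EnvironmentFile) takes effect.
    sudo systemctl daemon-reload
    sudo systemctl restart funkwhale-worker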
@@ -86,7 +86,7 @@ services:
       - postgres
       # - minio
       - redis
-    command: celery -A funkwhale_api.taskapp worker -l debug -B
+    command: celery -A funkwhale_api.taskapp worker -l debug -B --concurrency=${CELERYD_CONCURRENCY-0}
     environment:
       - "FUNKWHALE_HOSTNAME=${FUNKWHALE_HOSTNAME-localhost}"
       - "FUNKWHALE_HOSTNAME_SUFFIX=funkwhale.test"
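For the development compose file, the variable can be exported before starting the worker, since docker-compose reads it from the host environment; the service name celeryworker is assumed from the surrounding compose context:

    export CELERYD_CONCURRENCY=2
    docker-compose up celeryworker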