Funkwhale periodically fetches own nodeinfo
Funkwhale seems to periodically fetch its own nodeinfo. I have recently set up a fresh Funkwhale instance and, before it has ever federated with any other instance, I am seeing hits to nodeinfo every hour on the hour, from its own IP address, with the stock `python-requests/2.28.2` user agent. When Funkwhale is shut down, the hourly requests to nodeinfo stop (so it's not something else on my machine).
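To confirm the cadence, nodeinfo hits can be bucketed by hour straight from the access-log lines. A quick stdlib sketch (the sample lines are copied from the log excerpt below; the log format is assumed to be the nginx combined format shown there):

```python
import re
from collections import Counter

# Sample access-log lines in the format the front container emits
# (in practice, pipe `docker-compose logs front` into this instead).
log_lines = [
    '10.7.1.11 - - [27/Jul/2023:01:00:00 +0000] "GET /api/v1/instance/nodeinfo/2.0/ HTTP/1.1" 200 1225',
    '10.7.1.11 - - [27/Jul/2023:02:00:00 +0000] "GET /api/v1/instance/nodeinfo/2.0/ HTTP/1.1" 200 1225',
]

# Capture the day-and-hour prefix of the timestamp for any nodeinfo request.
pattern = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}):\d{2}:\d{2}.*GET /api/v1/instance/nodeinfo")

hits = Counter()
for line in log_lines:
    match = pattern.search(line)
    if match:
        hits[match.group(1)] += 1

for hour, count in sorted(hits.items()):
    print(hour, count)  # one line per hour bucket, e.g. "27/Jul/2023:01 1"
```

Every bucket coming out with exactly one hit, on the hour, matches what I see.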
`docker-compose logs` from 01:00 to 01:01:

```
funkwhale-celerybeat-1 | [2023-07-27 01:00:00,072: INFO/MainProcess] Scheduler: Sending due task federation.check_all_remote_instance_availability (federation.check_all_remote_instance_availability)
funkwhale-celerybeat-1 | [2023-07-27 01:00:00,080: INFO/MainProcess] Scheduler: Sending due task federation.refresh_nodeinfo_known_nodes (federation.refresh_nodeinfo_known_nodes)
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,081: INFO/MainProcess] Task federation.check_all_remote_instance_availability[6c22a3f5-2b72-4af8-91be-9413d60067b7] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,081: INFO/MainProcess] Task federation.check_all_remote_instance_availability[6c22a3f5-2b72-4af8-91be-9413d60067b7] received
funkwhale-celerybeat-1 | [2023-07-27 01:00:00,081: INFO/MainProcess] Scheduler: Sending due task music.clean_transcoding_cache (music.clean_transcoding_cache)
funkwhale-celerybeat-1 | [2023-07-27 01:00:00,082: INFO/MainProcess] Scheduler: Sending due task common.prune_unattached_attachments (common.prune_unattached_attachments)
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,082: INFO/MainProcess] Task federation.refresh_nodeinfo_known_nodes[aa8c06fb-a875-442c-b035-a3b9d29e2cef] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,082: INFO/MainProcess] Task federation.refresh_nodeinfo_known_nodes[aa8c06fb-a875-442c-b035-a3b9d29e2cef] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,084: INFO/MainProcess] Task music.clean_transcoding_cache[01573d10-84a4-4d79-9ec5-17b0ed95c8dd] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,084: INFO/MainProcess] Task music.clean_transcoding_cache[01573d10-84a4-4d79-9ec5-17b0ed95c8dd] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,088: INFO/MainProcess] Task common.prune_unattached_attachments[05d7c71a-dee6-42be-b587-eae8aad1f37c] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,088: INFO/MainProcess] Task common.prune_unattached_attachments[05d7c71a-dee6-42be-b587-eae8aad1f37c] received
funkwhale-celerybeat-1 | [2023-07-27 01:00:00,089: INFO/MainProcess] Scheduler: Sending due task audio.fetch_rss_feeds (audio.fetch_rss_feeds)
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,094: INFO/MainProcess] Task audio.fetch_rss_feeds[560b5d85-5d58-4ff0-b694-ea3a7894eed5] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,094: INFO/MainProcess] Task audio.fetch_rss_feeds[560b5d85-5d58-4ff0-b694-ea3a7894eed5] received
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,097: INFO/ForkPoolWorker-4] Task federation.refresh_nodeinfo_known_nodes[aa8c06fb-a875-442c-b035-a3b9d29e2cef] succeeded in 0.014368977397680283s: None
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,097: INFO/ForkPoolWorker-4] Task federation.refresh_nodeinfo_known_nodes[aa8c06fb-a875-442c-b035-a3b9d29e2cef] succeeded in 0.014368977397680283s: None
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,104: INFO/ForkPoolWorker-1] Task music.clean_transcoding_cache[01573d10-84a4-4d79-9ec5-17b0ed95c8dd] succeeded in 0.019608423113822937s: (0, {})
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,104: INFO/ForkPoolWorker-1] Task music.clean_transcoding_cache[01573d10-84a4-4d79-9ec5-17b0ed95c8dd] succeeded in 0.019608423113822937s: (0, {})
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,109: INFO/ForkPoolWorker-4] Task audio.fetch_rss_feeds[560b5d85-5d58-4ff0-b694-ea3a7894eed5] succeeded in 0.010215703397989273s: None
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,109: INFO/ForkPoolWorker-4] Task audio.fetch_rss_feeds[560b5d85-5d58-4ff0-b694-ea3a7894eed5] succeeded in 0.010215703397989273s: None
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,124: INFO/ForkPoolWorker-3] Task common.prune_unattached_attachments[05d7c71a-dee6-42be-b587-eae8aad1f37c] succeeded in 0.03442799672484398s: None
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,124: INFO/ForkPoolWorker-3] Task common.prune_unattached_attachments[05d7c71a-dee6-42be-b587-eae8aad1f37c] succeeded in 0.03442799672484398s: None
funkwhale-front-1 | 10.7.1.11 - - [27/Jul/2023:01:00:00 +0000] "GET /api/v1/instance/nodeinfo/2.0/ HTTP/1.1" 200 1225 "-" "python-requests/2.28.2" "[redacted, public IP address of the machine running funkwhale]"
funkwhale-front-1 | 10.7.1.11 - - [27/Jul/2023:01:00:00 +0000] "GET /api/v1/instance/nodeinfo/2.0/ HTTP/1.1" 200 1225 "-" "python-requests/2.28.2" "[redacted, ditto]"
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,334: INFO/ForkPoolWorker-2] Task federation.check_all_remote_instance_availability[6c22a3f5-2b72-4af8-91be-9413d60067b7] succeeded in 0.2507992200553417s: None
funkwhale-celeryworker-1 | [2023-07-27 01:00:00,334: INFO/ForkPoolWorker-2] Task federation.check_all_remote_instance_availability[6c22a3f5-2b72-4af8-91be-9413d60067b7] succeeded in 0.2507992200553417s: None
```
I don't know why some of these log lines are doubled up. The reverse proxy I have set up in front of this only saw one request to nodeinfo at 01:00:00.
The `check_all_remote_instance_availability` task looks like a likely culprit, as it:

- is the only one to finish after the front container logs the request to nodeinfo
- does run every hour on the hour
- uses `requests.get()` directly without setting a user agent
- requests `/api/v1/instance/nodeinfo/2.0/` directly, whereas `refresh_nodeinfo_known_nodes`, the only(?) other task concerned with nodeinfo, requests `/.well-known/nodeinfo` first (which does not appear in my logs at all)
It looks like it should skip checking the local instance, though. I do have `FUNKWHALE_HOSTNAME` correctly set up in `.env`.
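For illustration, this is the kind of guard I would expect the task to apply. Purely hypothetical code, not Funkwhale's actual implementation; `domains_to_check` and both hostnames are made-up names:

```python
def domains_to_check(known_domains, local_hostname):
    """Hypothetical guard: drop our own domain before checking availability.

    This is NOT Funkwhale's actual code, just the filter I would expect
    the task to apply using the configured FUNKWHALE_HOSTNAME.
    """
    return [domain for domain in known_domains if domain != local_hostname]

# The local instance should be excluded from the availability check:
print(domains_to_check(["pod.example.org", "music.example.net"], "music.example.net"))
# → ['pod.example.org']
```

If no such filter exists (or it fails to match, e.g. because of a port suffix or case difference in the stored domain), the instance would end up probing itself every hour, which is consistent with what I observe.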
I'm not sure what to do next to diagnose this.
Environment details: Funkwhale 1.3.1 deployed in Docker following the official setup guide.