funkwhale / funkwhale · issue #2136

Prune skipped uploads is OOM killed

Steps to reproduce

  1. Set a job to import in-place:
     python manage.py import_files <lib_id> "/usr/local/src/funkwhale/data/music/" --recursive --async --noinput --in-place --prune --skip-checks
  2. Wait a long time, until a lot of uploads have been skipped (due to the known in-place import bug), something like 1 million skipped uploads.
  3. Try to clean the skipped uploads using:
     python manage.py prune_skipped_uploads --force FORCE --skip-checks

What happens?

The prune command starts consuming all available RAM and is eventually OOM-killed by Linux.

What is expected?

The prune command keeps its RAM usage bounded and terminates without issue, even with a very large number of skipped uploads.
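If the command materialises the whole queryset of skipped uploads (or all of their cascade-delete targets) in memory at once, usage grows linearly with the number of rows; deleting in fixed-size batches of primary keys would keep it flat. A minimal sketch of that idea, assuming Funkwhale's `Upload` model and its `import_status` field (just an illustration, not what `prune_skipped_uploads` actually does today):

```python
# Sketch: batched pruning with bounded memory usage. Assumes funkwhale_api's
# Upload model; the batch size is arbitrary.
from funkwhale_api.music.models import Upload

BATCH_SIZE = 1000

while True:
    # Fetch only a small batch of primary keys instead of whole rows
    pks = list(
        Upload.objects.filter(import_status="skipped")
        .values_list("pk", flat=True)[:BATCH_SIZE]
    )
    if not pks:
        break
    # Each .delete() only ever collects BATCH_SIZE rows for cascading
    Upload.objects.filter(pk__in=pks).delete()
```

Each iteration holds at most `BATCH_SIZE` primary keys, so memory stays roughly constant no matter how many uploads are skipped.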

Context

I used to have the import and prune commands running daily via a cronjob. The prune command failed (because of a typo on my side) and the import kept running for days (months?). Now I have 1.222 million uploads in that library, of which 1.119 million are skipped uploads. Trying to clean them up through the web UI either times out on nginx or gets OOM-killed, just like the CLI.

I've tried setting a docker-compose resource limit, but the container ends up being killed anyway after consuming all of its allocated RAM.
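For reference, this is the kind of limit I mean (illustrative values only; `api` is the service name in Funkwhale's stock compose file):

```yaml
# docker-compose.override.yml (illustrative values only)
services:
  api:
    mem_limit: 2g        # hard cap; the kernel OOM-kills the container past it
    memswap_limit: 2g    # no extra swap on top of the RAM cap
```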

I'm running the Funkwhale multi-container docker-compose setup inside a Proxmox VM.

The VM is dedicated to running Funkwhale, with 4 vCPUs and 8 GB of RAM. Average RAM usage is around 4 GB, so the prune command eats up to 4 GB on its own. I've tried shutting Funkwhale down and running the command directly with docker compose run --rm api python manage.py prune_skipped_uploads --force FORCE (so nothing starts except postgres, redis and the api), but it eats up all of the 8 GB available...

Allocating 32 GB of RAM might work around the issue, but I don't have that kind of machine XD

I don't have any logs to share, even with --verbosity 3 set on the prune command.

My Funkwhale instance is super slow, as you can expect. I've disabled the import and prune jobs for now.

Funkwhale version(s) affected: 1.2.10


I'll be glad to test things and provide logs and stats. That said, I think the quickest way to unblock my users might be a script that calls the API and deletes skipped uploads in batches of 50. I've tried that, but my coding skills apparently aren't enough for the OAuth part, so if someone has the beginnings of a script I could adapt, it would be highly appreciated; a rough sketch of what I have in mind is below.
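Something along these lines, untested: the /api/v1/uploads/ path, the import_status filter, and the per-upload DELETE route are guesses to verify against your instance's API reference, and you still need a valid OAuth access token (e.g. from an application registered on the instance):

```python
# Untested sketch: delete skipped uploads in batches of 50 through the API.
# Endpoint paths and the import_status filter are assumptions; verify them
# against your instance's API docs before running this.
import time

import requests

FUNKWHALE_URL = "https://funkwhale.example.org"  # hypothetical instance URL
ACCESS_TOKEN = "REPLACE_ME"                      # OAuth2 access token
BATCH_SIZE = 50

session = requests.Session()
session.headers["Authorization"] = f"Bearer {ACCESS_TOKEN}"

while True:
    # Always request the first page: the uploads fetched last round are gone
    # by the time we ask again, so pagination offsets would drift.
    resp = session.get(
        f"{FUNKWHALE_URL}/api/v1/uploads/",
        params={"import_status": "skipped", "page_size": BATCH_SIZE},
    )
    resp.raise_for_status()
    uploads = resp.json()["results"]
    if not uploads:
        break
    for upload in uploads:
        # Assumes DELETE /api/v1/uploads/<uuid>/ removes a single upload
        session.delete(
            f"{FUNKWHALE_URL}/api/v1/uploads/{upload['uuid']}/"
        ).raise_for_status()
    print(f"deleted {len(uploads)} skipped uploads")
    time.sleep(1)  # don't hammer the already struggling instance
```

Deleting one upload per request is slow but keeps every call cheap, which matters here since the instance is already starved for memory.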

Edited May 26, 2023 by nouts