Compare revisions

Showing 2600 additions and 123 deletions
from django.core.management.base import BaseCommand
from django.db import transaction
from django.db.models import Q
from funkwhale_api.common import utils as common_utils
from funkwhale_api.music import models, utils
class Command(BaseCommand):
    help = "Run common checks and fixes against imported tracks"

    def add_arguments(self, parser):
        parser.add_argument(
            "--dry-run",
            action="store_true",
            dest="dry_run",
            default=False,
            help="Do not execute anything",
        )
        parser.add_argument(
            "--mimetype",
            action="store_true",
            dest="mimetype",
            default=True,
            help="Check and fix mimetypes",
        )
        parser.add_argument(
            "--audio-data",
            action="store_true",
            dest="data",
            default=False,
            help="Check and fix bitrate and duration; can be really slow because it needs to access files",
        )
        parser.add_argument(
            "--size",
            action="store_true",
            dest="size",
            default=False,
            help="Check and fix file size; can be really slow because it needs to access files",
        )
        parser.add_argument(
            "--checksum",
            action="store_true",
            dest="checksum",
            default=False,
            help="Check and fix file checksums; can be really slow because it needs to access files",
        )
        parser.add_argument(
            "--batch-size",
            "-s",
            dest="batch_size",
            default=1000,
            type=int,
            help="Size of each updated batch",
        )

    def handle(self, *args, **options):
        if options["dry_run"]:
            self.stdout.write("Dry-run on, will not commit anything")
        if options["mimetype"]:
            self.fix_mimetypes(**options)
        if options["data"]:
            self.fix_file_data(**options)
        if options["size"]:
            self.fix_file_size(**options)
        if options["checksum"]:
            self.fix_file_checksum(**options)
    @transaction.atomic
    def fix_mimetypes(self, dry_run, **kwargs):
        self.stdout.write("Fixing missing mimetypes...")
        matching = models.Upload.objects.filter(
            Q(source__startswith="file://") | Q(source__startswith="upload://")
        ).exclude(mimetype__startswith="audio/")
        total = matching.count()
        self.stdout.write(f"[mimetypes] {total} entries found with bad or no mimetype")
        if not total:
            return
        for extension, mimetype in utils.EXTENSION_TO_MIMETYPE.items():
            qs = matching.filter(source__endswith=f".{extension}")
            self.stdout.write(
                f"[mimetypes] setting {qs.count()} {extension} files to {mimetype}"
            )
            if not dry_run:
                self.stdout.write("[mimetypes] committing...")
                qs.update(mimetype=mimetype)
    def fix_file_data(self, dry_run, **kwargs):
        self.stdout.write("Fixing missing bitrate or length...")
        matching = models.Upload.objects.filter(
            Q(bitrate__isnull=True) | Q(duration__isnull=True)
        )
        total = matching.count()
        self.stdout.write(f"[bitrate/length] {total} entries found with missing values")
        if dry_run:
            return
        chunks = common_utils.chunk_queryset(
            matching.only("id", "audio_file", "source"), kwargs["batch_size"]
        )
        handled = 0
        for chunk in chunks:
            updated = []
            for upload in chunk:
                handled += 1
                self.stdout.write(
                    f"[bitrate/length] {handled}/{total} fixing file #{upload.pk}"
                )
                try:
                    audio_file = upload.get_audio_file()
                    data = utils.get_audio_file_data(audio_file)
                    upload.bitrate = data["bitrate"]
                    upload.duration = data["length"]
                except Exception as e:
                    self.stderr.write(
                        f"[bitrate/length] error with file #{upload.pk}: {e}"
                    )
                else:
                    updated.append(upload)
            models.Upload.objects.bulk_update(updated, ["bitrate", "duration"])
    def fix_file_size(self, dry_run, **kwargs):
        self.stdout.write("Fixing missing size...")
        matching = models.Upload.objects.filter(size__isnull=True)
        total = matching.count()
        self.stdout.write(f"[size] {total} entries found with missing values")
        if dry_run:
            return
        chunks = common_utils.chunk_queryset(
            matching.only("id", "audio_file", "source"), kwargs["batch_size"]
        )
        handled = 0
        for chunk in chunks:
            updated = []
            for upload in chunk:
                handled += 1
                self.stdout.write(f"[size] {handled}/{total} fixing file #{upload.pk}")
                try:
                    upload.size = upload.get_file_size()
                except Exception as e:
                    self.stderr.write(f"[size] error with file #{upload.pk}: {e}")
                else:
                    updated.append(upload)
            models.Upload.objects.bulk_update(updated, ["size"])
    def fix_file_checksum(self, dry_run, **kwargs):
        self.stdout.write("Fixing missing checksums...")
        matching = models.Upload.objects.filter(
            Q(checksum=None)
            & (Q(audio_file__isnull=False) | Q(source__startswith="file://"))
        )
        total = matching.count()
        self.stdout.write(f"[checksum] {total} entries found with missing values")
        if dry_run:
            return
        chunks = common_utils.chunk_queryset(
            matching.only("id", "audio_file", "source"), kwargs["batch_size"]
        )
        handled = 0
        for chunk in chunks:
            updated = []
            for upload in chunk:
                handled += 1
                self.stdout.write(
                    f"[checksum] {handled}/{total} fixing file #{upload.pk}"
                )
                try:
                    upload.checksum = common_utils.get_file_hash(
                        upload.get_audio_file()
                    )
                except Exception as e:
                    self.stderr.write(
                        f"[checksum] error with file #{upload.pk}: {e}"
                    )
                else:
                    updated.append(upload)
            models.Upload.objects.bulk_update(updated, ["checksum"])
from django.core.management.base import BaseCommand

from funkwhale_api.typesense import tasks


class Command(BaseCommand):
    help = """
    Trigger the generation of a new typesense index for canonical Funkwhale track metadata.
    This is used to resolve Funkwhale tracks to MusicBrainz ids."""

    def handle(self, *args, **kwargs):
        tasks.build_canonical_index.delay()
        self.stdout.write("Tasks launched in celery worker.")
import collections
import datetime
import itertools
import os
import queue
import sys
import threading
import time
import urllib.parse
import watchdog.events
import watchdog.observers
from django.conf import settings
from django.core.cache import cache
from django.core.files import File
from django.core.management import call_command
from django.core.management.base import BaseCommand, CommandError
from django.db.models import Q
from django.db.utils import IntegrityError
from django.utils import timezone
from rest_framework import serializers
from funkwhale_api.common import utils as common_utils
from funkwhale_api.music import models, tasks, utils
def crawl_dir(dir, extensions, recursive=True, ignored=[]):
    if os.path.isfile(dir):
        yield dir
        return
    try:
        scanner = os.scandir(dir)
    except Exception as e:
        m = f"Error while reading {dir}: {e.__class__.__name__} {e}\n"
        sys.stderr.write(m)
        return
    try:
        for entry in scanner:
            try:
                if entry.is_file():
                    for e in extensions:
                        if entry.name.lower().endswith(f".{e.lower()}"):
                            if entry.path not in ignored:
                                yield entry.path
                elif recursive and entry.is_dir():
                    yield from crawl_dir(
                        entry.path, extensions, recursive=recursive, ignored=ignored
                    )
            except Exception as e:
                m = f"Error while reading {entry.name}: {e.__class__.__name__} {e}\n"
                sys.stderr.write(m)
    finally:
        if hasattr(scanner, "close"):
            scanner.close()
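To illustrate the crawler's behaviour, here is a standalone, slightly simplified copy of `crawl_dir` (per-entry error handling trimmed) run against a hypothetical temporary directory tree; note that extension matching is case-insensitive:

```python
import os
import sys
import tempfile

def crawl_dir(dir, extensions, recursive=True, ignored=[]):
    # standalone copy of the crawler above, error handling trimmed
    if os.path.isfile(dir):
        yield dir
        return
    try:
        scanner = os.scandir(dir)
    except Exception as e:
        sys.stderr.write(f"Error while reading {dir}: {e.__class__.__name__} {e}\n")
        return
    try:
        for entry in scanner:
            if entry.is_file():
                for e in extensions:
                    if entry.name.lower().endswith(f".{e.lower()}"):
                        if entry.path not in ignored:
                            yield entry.path
            elif recursive and entry.is_dir():
                yield from crawl_dir(
                    entry.path, extensions, recursive=recursive, ignored=ignored
                )
    finally:
        scanner.close()

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "album"))
    for name in ("a.mp3", os.path.join("album", "b.FLAC"), os.path.join("album", "cover.jpg")):
        open(os.path.join(root, name), "w").close()
    found = sorted(os.path.relpath(p, root) for p in crawl_dir(root, ["mp3", "flac"]))
    print(found)  # ['a.mp3', 'album/b.FLAC'] on POSIX
```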
def batch(iterable, n=1):
    has_entries = True
    while has_entries:
        current = []
        for i in range(0, n):
            try:
                current.append(next(iterable))
            except StopIteration:
                has_entries = False
        yield current
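One subtlety of the `batch` helper worth noting: it can yield a final empty chunk when the iterator is exhausted exactly at a batch boundary, which is why the import loop further down guards with `if entries:`. A standalone copy demonstrating both cases:

```python
def batch(iterable, n=1):
    # pull up to n items at a time until the iterator is exhausted
    has_entries = True
    while has_entries:
        current = []
        for _ in range(n):
            try:
                current.append(next(iterable))
            except StopIteration:
                has_entries = False
        yield current

print(list(batch(iter(range(5)), 2)))  # [[0, 1], [2, 3], [4]]
print(list(batch(iter(range(4)), 2)))  # [[0, 1], [2, 3], []] -- trailing empty chunk
```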
class CacheWriter:
    """
    Output to cache instead of console
    """

    def __init__(self, key, stdout, buffer_size=10):
        self.key = key
        cache.set(self.key, [])
        self.stdout = stdout
        self.buffer_size = buffer_size
        self.buffer = []

    def write(self, message):
        # we redispatch the message to the console, for debugging
        self.stdout.write(message)
        self.buffer.append(message)
        if len(self.buffer) > self.buffer_size:
            self.flush()

    def flush(self):
        current = cache.get(self.key)
        cache.set(self.key, current + self.buffer)
        self.buffer = []
class Command(BaseCommand):
    help = "Import audio files matching given glob pattern"

    def add_arguments(self, parser):
        parser.add_argument(
            "library_id",
            type=str,
            help=(
                "A local library identifier where the files should be imported. "
                "You can use the full uuid such as e29c5be9-6da3-4d92-b40b-4970edd3ee4b "
                "or only a small portion of it, starting from the beginning, such as "
                "e29c5be9"
            ),
        )
        parser.add_argument("path", nargs="+", type=str)
        parser.add_argument(
            "--recursive",
            action="store_true",
            dest="recursive",
            default=False,
            help="Will match the pattern recursively (including subdirectories)",
        )
        parser.add_argument(
            "--username",
            dest="username",
            help="The username of the user you want to be bound to the import",
        )
        parser.add_argument(
            "--async",
            action="store_true",
            dest="async_",
            default=False,
            help="Will launch celery tasks for each file to import instead of doing it synchronously and blocking the CLI",
        )
        parser.add_argument(
            "--exit",
            "-x",
            action="store_true",
            dest="exit_on_failure",
            default=False,
            help="Use this flag to disable error catching",
        )
        parser.add_argument(
            "--in-place",
            "-i",
            action="store_true",
            dest="in_place",
            default=False,
            help=(
                "Import files without duplicating them into the media directory. "
                "For in-place import to work, the music files must be readable "
                "by the web-server and funkwhale api and celeryworker processes. "
                "You may want to use this if you have a big music library to "
                "import and not much disk space available."
            ),
        )
        parser.add_argument(
            "--replace",
            action="store_true",
            dest="replace",
            default=False,
            help=(
                "Use this flag to replace duplicates (tracks with the same "
                "musicbrainz mbid, or the same artist, album and title) on import "
                "with their newest version."
            ),
        )
        parser.add_argument(
            "--outbox",
            action="store_true",
            dest="outbox",
            default=False,
            help=(
                "Use this flag to notify library followers of newly imported files. "
                "You'll likely want to keep this disabled for CLI imports, especially if "
                "you plan to import hundreds or thousands of files, as it will cause a lot "
                "of overhead on your server and on servers you are federating with."
            ),
        )
        parser.add_argument(
            "--watch",
            action="store_true",
            dest="watch",
            default=False,
            help=(
                "Start the command in watch mode. Instead of running a full import "
                "and exiting, watch the given path and import new files, remove deleted "
                "files, and update metadata for updated files."
            ),
        )
        parser.add_argument("-e", "--extension", nargs="+")
        parser.add_argument(
            "--broadcast",
            action="store_true",
            dest="broadcast",
            default=False,
            help=(
                "Use this flag to enable realtime updates about the import in the UI. "
                "This causes some overhead, so it's disabled by default."
            ),
        )
        parser.add_argument(
            "--prune",
            action="store_true",
            dest="prune",
            default=False,
            help=(
                "Once the import is completed, prune tracks, albums and artists that aren't linked to any upload."
            ),
        )
        parser.add_argument(
            "--reference",
            action="store",
            dest="reference",
            default=None,
            help=(
                "A custom reference for the import. Leave this empty to have a random "
                "reference generated for you."
            ),
        )
        parser.add_argument(
            "--noinput",
            "--no-input",
            action="store_false",
            dest="interactive",
            help="Do NOT prompt the user for input of any kind.",
        )
        parser.add_argument(
            "--batch-size",
            "-s",
            dest="batch_size",
            default=1000,
            type=int,
            help="Size of each batch, only used when crawling large collections",
        )
    def handle(self, *args, **kwargs):
        cache.set("fs-import:status", "started")
        if kwargs.get("update_cache", False):
            self.stdout = CacheWriter("fs-import:logs", self.stdout)
            self.stderr = self.stdout
        try:
            return self._handle(*args, **kwargs)
        except CommandError as e:
            self.stdout.write(str(e))
            raise
        finally:
            if kwargs.get("update_cache", False):
                cache.set("fs-import:status", "finished")
                self.stdout.flush()
    def _handle(self, *args, **options):
        # handle relative directories
        options["path"] = [os.path.abspath(path) for path in options["path"]]
        self.is_confirmed = False
        try:
            library = models.Library.objects.select_related("actor__user").get(
                uuid__startswith=options["library_id"]
            )
        except models.Library.DoesNotExist:
            raise CommandError("Invalid library id")
        if not library.actor.get_user():
            raise CommandError(f"Library {library.uuid} is not a local library")
        if options["in_place"]:
            self.stdout.write(
                "Checking imported paths against settings.MUSIC_DIRECTORY_PATH"
            )
            for import_path in options["path"]:
                p = settings.MUSIC_DIRECTORY_PATH
                if not p:
                    raise CommandError(
                        "Importing in-place requires setting the "
                        "MUSIC_DIRECTORY_PATH variable"
                    )
                if p and not import_path.startswith(p):
                    raise CommandError(
                        "Importing in-place only works if importing "
                        "from {} (MUSIC_DIRECTORY_PATH), as this directory "
                        "needs to be accessible by the webserver. "
                        "Culprit: {}".format(p, import_path)
                    )
        reference = options["reference"] or f"cli-{timezone.now().isoformat()}"
        import_url = "{}://{}/library/{}/upload?{}"
        import_url = import_url.format(
            settings.FUNKWHALE_PROTOCOL,
            settings.FUNKWHALE_HOSTNAME,
            str(library.uuid),
            urllib.parse.urlencode([("import", reference)]),
        )
        self.stdout.write(
            "For details, please refer to import reference '{}' or URL {}".format(
                reference, import_url
            )
        )
        extensions = options.get("extension") or utils.SUPPORTED_EXTENSIONS
        if options["watch"]:
            if len(options["path"]) > 1:
                raise CommandError("Watch only works with a single directory")
            return self.setup_watcher(
                extensions=extensions,
                path=options["path"][0],
                reference=reference,
                library=library,
                in_place=options["in_place"],
                prune=options["prune"],
                recursive=options["recursive"],
                replace=options["replace"],
                dispatch_outbox=options["outbox"],
                broadcast=options["broadcast"],
            )
        update = True
        checked_paths = set()
        if options["in_place"] and update:
            self.stdout.write("Checking existing files for updates…")
            message = (
                "Are you sure you want to do this?\n\n"
                "Type 'yes' to continue, or 'no' to skip checking for updates in "
                "already imported files: "
            )
            if options["interactive"] and input(message) != "yes":
                pass
            else:
                checked_paths = check_updates(
                    stdout=self.stdout,
                    paths=options["path"],
                    extensions=extensions,
                    library=library,
                    batch_size=options["batch_size"],
                )
                self.stdout.write("Existing files checked, moving on to next step!")
        crawler = itertools.chain(
            *[
                crawl_dir(
                    p,
                    extensions=extensions,
                    recursive=options["recursive"],
                    ignored=checked_paths,
                )
                for p in options["path"]
            ]
        )
        errors = []
        total = 0
        start_time = time.time()
        batch_start = None
        batch_duration = None
        self.stdout.write("Starting import of new files…")
        for i, entries in enumerate(batch(crawler, options["batch_size"])):
            if options.get("update_cache", False):
                # check to see if the scan was cancelled
                if cache.get("fs-import:status") == "canceled":
                    raise CommandError("Import cancelled")
            total += len(entries)
            batch_start = time.time()
            time_stats = ""
            if i > 0:
                time_stats = " - running for {}s, previous batch took {}s".format(
                    int(time.time() - start_time),
                    int(batch_duration),
                )
            if entries:
                self.stdout.write(
                    "Handling batch {} ({} items){}".format(
                        i + 1,
                        len(entries),
                        time_stats,
                    )
                )
                batch_errors = self.handle_batch(
                    library=library,
                    paths=entries,
                    batch=i + 1,
                    reference=reference,
                    options=options,
                )
                if batch_errors:
                    errors += batch_errors
            batch_duration = time.time() - batch_start
        message = "Successfully imported {} new tracks in {}s"
        if options["async_"]:
            message = "Successfully launched import for {} new tracks in {}s"
        self.stdout.write(
            message.format(total - len(errors), int(time.time() - start_time))
        )
        if len(errors) > 0:
            self.stderr.write(f"{len(errors)} tracks could not be imported:")
            for path, error in errors:
                self.stderr.write(f"- {path}: {error}")
        self.stdout.write(
            "For details, please refer to import reference '{}' or URL {}".format(
                reference, import_url
            )
        )
        if options["prune"]:
            self.stdout.write(
                "Pruning dangling tracks, albums and artists from library…"
            )
            prune()
    def handle_batch(self, library, paths, batch, reference, options):
        matching = []
        for m in paths:
            # In some situations, the path is encoded incorrectly on the filesystem,
            # so we filter out faulty paths and display a warning to the user.
            # see https://dev.funkwhale.audio/funkwhale/funkwhale/issues/138
            try:
                m.encode("utf-8")
                matching.append(m)
            except UnicodeEncodeError:
                try:
                    previous = matching[-1]
                except IndexError:
                    previous = None
                self.stderr.write(
                    self.style.WARNING(
                        f"[warning] Ignoring undecodable path. Previous ok file was {previous}"
                    )
                )
        if not matching:
            raise CommandError("No file matching pattern, aborting")
        if options["replace"]:
            filtered = {"initial": matching, "skipped": [], "new": matching}
            message = " - {} files to be replaced"
            import_paths = matching
        else:
            filtered = self.filter_matching(matching, library)
            message = " - {} files already found in database"
            import_paths = filtered["new"]
        self.stdout.write(" Import summary:")
        self.stdout.write(
            " - {} files found matching this pattern: {}".format(
                len(matching), options["path"]
            )
        )
        self.stdout.write(message.format(len(filtered["skipped"])))
        self.stdout.write(" - {} new files".format(len(filtered["new"])))
        if batch == 1:
            self.stdout.write(
                " Selected options: {}".format(
                    ", ".join(
                        ["in place" if options["in_place"] else "copy music files"]
                    )
                )
            )
        if len(filtered["new"]) == 0:
            self.stdout.write(" Nothing new to import, exiting")
            return
        if options["interactive"] and not self.is_confirmed:
            message = (
                "Are you sure you want to do this?\n\n"
                "Type 'yes' to continue, or 'no' to cancel: "
            )
            if input(message) != "yes":
                raise CommandError("Import cancelled.")
            self.is_confirmed = True
        errors = self.do_import(
            import_paths,
            library=library,
            reference=reference,
            batch=batch,
            options=options,
        )
        return errors
    def filter_matching(self, matching, library):
        sources = [f"file://{p}" for p in matching]
        # we skip reimport for paths that are already present
        # as an Upload.source
        existing = library.uploads.filter(source__in=sources, import_status="finished")
        existing = existing.values_list("source", flat=True)
        existing = {p.replace("file://", "", 1) for p in existing}
        skipped = set(matching) & existing
        result = {
            "initial": matching,
            "skipped": sorted(skipped),
            "new": sorted(set(matching) - skipped),
        }
        return result
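The skip logic in `filter_matching` is plain set arithmetic over path strings. A minimal standalone sketch (the paths are hypothetical) of how already-imported sources are subtracted from the candidate list:

```python
matching = ["/music/a.mp3", "/music/b.mp3", "/music/c.mp3"]
# sources already recorded in the database, prefixed with file://
existing_sources = ["file:///music/a.mp3"]

# strip the scheme once, exactly as filter_matching does
existing = {s.replace("file://", "", 1) for s in existing_sources}
skipped = set(matching) & existing
new = sorted(set(matching) - skipped)

print(sorted(skipped))  # ['/music/a.mp3']
print(new)              # ['/music/b.mp3', '/music/c.mp3']
```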
    def do_import(self, paths, library, reference, batch, options):
        message = "[batch {batch}] {i}/{total} Importing {path}..."
        if options["async_"]:
            message = "[batch {batch}] {i}/{total} Launching import for {path}..."
        # we create an upload bound to the library
        async_ = options["async_"]
        errors = []
        for i, path in enumerate(paths):
            if options["verbosity"] > 1:
                self.stdout.write(
                    message.format(batch=batch, path=path, i=i + 1, total=len(paths))
                )
            try:
                create_upload(
                    path=path,
                    reference=reference,
                    library=library,
                    async_=async_,
                    replace=options["replace"],
                    in_place=options["in_place"],
                    dispatch_outbox=options["outbox"],
                    broadcast=options["broadcast"],
                )
            except Exception as e:
                if options["exit_on_failure"]:
                    raise
                m = f"Error while importing {path}: {e.__class__.__name__} {e}"
                self.stderr.write(m)
                errors.append((path, f"{e.__class__.__name__} {e}"))
        return errors
    def setup_watcher(self, path, extensions, recursive, **kwargs):
        watchdog_queue = queue.Queue()
        # Set up a worker thread to process the database load
        worker = threading.Thread(
            target=process_load_queue(self.stdout, **kwargs),
            args=(watchdog_queue,),
        )
        worker.daemon = True
        worker.start()
        # set up watchdog to monitor the directory for trigger files
        patterns = [f"*.{e}" for e in extensions]
        event_handler = Watcher(
            stdout=self.stdout,
            queue=watchdog_queue,
            patterns=patterns,
        )
        observer = watchdog.observers.Observer()
        observer.schedule(event_handler, path, recursive=recursive)
        observer.start()
        try:
            while True:
                self.stdout.write(f"Watching for changes at {path}", ending="\r")
                time.sleep(10)
                if kwargs["prune"] and GLOBAL["need_pruning"]:
                    self.stdout.write("Some files were deleted, pruning library…")
                    prune()
                    GLOBAL["need_pruning"] = False
        except KeyboardInterrupt:
            self.stdout.write("Exiting…")
            observer.stop()
        observer.join()
GLOBAL = {"need_pruning": False}


def prune():
    call_command(
        "prune_library",
        dry_run=False,
        prune_artists=True,
        prune_albums=True,
        prune_tracks=True,
    )
def create_upload(
    path,
    reference,
    library,
    async_,
    replace,
    in_place,
    dispatch_outbox,
    broadcast,
):
    import_handler = tasks.process_upload.delay if async_ else tasks.process_upload
    upload = models.Upload(library=library, import_reference=reference)
    upload.source = "file://" + path
    upload.import_metadata = {
        "funkwhale": {
            "config": {
                "replace": replace,
                "dispatch_outbox": dispatch_outbox,
                "broadcast": broadcast,
            }
        }
    }
    if not in_place:
        name = os.path.basename(path)
        with open(path, "rb") as f:
            upload.audio_file.save(name, File(f), save=False)
    upload.save()
    import_handler(upload_id=upload.pk)
def process_load_queue(stdout, **kwargs):
    def inner(q):
        # we batch events to avoid calling the same methods multiple times
        # if a file is modified a lot in a really short time
        flush_delay = 2
        batched_events = collections.OrderedDict()
        while True:
            while True:
                if not q.empty():
                    event = q.get()
                    batched_events[event["path"]] = event
                else:
                    break
            for path, event in batched_events.copy().items():
                if time.time() - event["time"] <= flush_delay:
                    continue
                now = datetime.datetime.now(datetime.timezone.utc)
                stdout.write(
                    "{} -- Processing {}:{}...\n".format(
                        now.strftime("%Y/%m/%d %H:%M:%S"), event["type"], event["path"]
                    )
                )
                del batched_events[path]
                handle_event(event, stdout=stdout, **kwargs)
            time.sleep(1)

    return inner
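The loop above implements a simple debounce: events are keyed by path, and an event is only processed once it has been quiet for `flush_delay` seconds. A minimal standalone sketch of that idea, using an explicit `now` value in place of `time.time()` (function name and paths are hypothetical):

```python
import collections

def flush_ready(batched_events, flush_delay, now):
    """Pop and return events whose last update is older than flush_delay."""
    ready = []
    for path, event in list(batched_events.items()):
        if now - event["time"] <= flush_delay:
            continue  # file may still be written to; wait for the next pass
        del batched_events[path]
        ready.append(event)
    return ready

events = collections.OrderedDict()
events["/music/a.mp3"] = {"path": "/music/a.mp3", "time": 0.0}  # quiet for 10s
events["/music/b.mp3"] = {"path": "/music/b.mp3", "time": 9.5}  # touched 0.5s ago

done = flush_ready(events, flush_delay=2, now=10.0)
print([e["path"] for e in done])  # ['/music/a.mp3']
print(list(events))               # ['/music/b.mp3'] still pending
```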
class Watcher(watchdog.events.PatternMatchingEventHandler):
    def __init__(self, stdout, queue, patterns):
        self.stdout = stdout
        self.queue = queue
        super().__init__(patterns=patterns)

    def enqueue(self, event):
        e = {
            "is_directory": event.is_directory,
            "type": event.event_type,
            "path": event.src_path,
            "src_path": event.src_path,
            "dest_path": getattr(event, "dest_path", None),
            "time": time.time(),
        }
        self.queue.put(e)

    def on_moved(self, event):
        self.enqueue(event)

    def on_created(self, event):
        self.enqueue(event)

    def on_deleted(self, event):
        self.enqueue(event)

    def on_modified(self, event):
        self.enqueue(event)
def handle_event(event, stdout, **kwargs):
    handlers = {
        "modified": handle_modified,
        "created": handle_created,
        "moved": handle_moved,
        "deleted": handle_deleted,
    }
    handlers[event["type"]](event=event, stdout=stdout, **kwargs)
def handle_modified(event, stdout, library, in_place, **kwargs):
    existing_candidates = library.uploads.filter(import_status="finished")
    with open(event["path"], "rb") as f:
        checksum = common_utils.get_file_hash(f)
    existing = existing_candidates.filter(checksum=checksum).first()
    if existing:
        # found an existing file with the same checksum, nothing to do
        stdout.write(" File already imported and metadata is up-to-date")
        return
    to_update = None
    if in_place:
        source = "file://{}".format(event["path"])
        to_update = (
            existing_candidates.in_place()
            .filter(source=source)
            .select_related("track__attributed_to")
            .prefetch_related(
                "track__artist_credit__artist",
                "track__album__artist_credit__artist",
            )
            .first()
        )
    if to_update:
        if (
            to_update.track.attributed_to
            and to_update.track.attributed_to != library.actor
        ):
            stdout.write(
                " Cannot update track metadata, track belongs to someone else"
            )
            return
        else:
            stdout.write(
                f" Updating existing file #{to_update.pk} with new metadata…"
            )
            audio_metadata = to_update.get_metadata()
            try:
                tasks.update_track_metadata(audio_metadata, to_update.track)
            except serializers.ValidationError as e:
                stdout.write(f" Invalid metadata: {e}")
            else:
                to_update.checksum = checksum
                to_update.save(update_fields=["checksum"])
            return
    stdout.write(" Launching import for new file")
    create_upload(
        path=event["path"],
        reference=kwargs["reference"],
        library=library,
        async_=False,
        replace=kwargs["replace"],
        in_place=in_place,
        dispatch_outbox=kwargs["dispatch_outbox"],
        broadcast=kwargs["broadcast"],
    )
def handle_created(event, stdout, **kwargs):
    """
    Created is essentially an alias for modified because, for instance, when
    copying a file into the watched directory, a created event is fired on the
    initial touch, then many modified events follow (as the file is written).
    """
    return handle_modified(event, stdout, **kwargs)
def handle_moved(event, stdout, library, in_place, **kwargs):
if not in_place:
return
old_source = "file://{}".format(event["src_path"])
new_source = "file://{}".format(event["dest_path"])
existing_candidates = library.uploads.filter(import_status="finished")
existing_candidates = existing_candidates.in_place().filter(source=old_source)
existing = existing_candidates.first()
if existing:
stdout.write(f" Updating path of existing file #{existing.pk}")
existing.source = new_source
existing.save(update_fields=["source"])
def handle_deleted(event, stdout, library, in_place, **kwargs):
if not in_place:
return
source = "file://{}".format(event["path"])
existing_candidates = library.uploads.filter(import_status="finished")
existing_candidates = existing_candidates.in_place().filter(source=source)
if existing_candidates.count():
stdout.write(" Removing file from DB")
existing_candidates.delete()
GLOBAL["need_pruning"] = True
def check_updates(stdout, library, extensions, paths, batch_size):
existing = (
library.uploads.in_place()
.filter(import_status="finished")
.exclude(checksum=None)
.select_related("library", "track")
)
queries = []
checked_paths = set()
for path in paths:
for ext in extensions:
queries.append(
Q(source__startswith=f"file://{path}") & Q(source__endswith=f".{ext}")
)
query, remainder = queries[0], queries[1:]
for q in remainder:
query = q | query
existing = existing.filter(query)
total = existing.count()
stdout.write(f"Found {total} files to check in database!")
uploads = existing.order_by("source")
for i, rows in enumerate(batch(uploads.iterator(), batch_size)):
stdout.write(
"Handling batch {} ({} items)".format(
i + 1,
len(rows),
)
)
for upload in rows:
check_upload(stdout, upload)
checked_paths.add(upload.source.replace("file://", "", 1))
return checked_paths
def check_upload(stdout, upload):
try:
audio_file = upload.get_audio_file()
except (FileNotFoundError, PermissionError):
stdout.write(
" Removing file #{} missing from disk at {}".format(
upload.pk, upload.source
)
)
return upload.delete()
checksum = common_utils.get_file_hash(audio_file)
if upload.checksum != checksum:
stdout.write(
" File #{} at {} was modified, updating metadata…".format(
upload.pk, upload.source
)
)
if upload.library.actor_id != upload.track.attributed_to_id:
stdout.write(
" Cannot update track metadata, track belongs to someone else"
)
else:
track = models.Track.objects.prefetch_related(
"artist_credit__artist", "album__artist_credit__artist"
).get(pk=upload.track_id)
try:
tasks.update_track_metadata(upload.get_metadata(), track)
except serializers.ValidationError as e:
stdout.write(f" Invalid metadata: {e}")
return
except IntegrityError:
stdout.write(
" Duplicate key violation for metadata. Skipping...\n{}".format(
upload.source
)
)
return
else:
upload.checksum = checksum
upload.save(update_fields=["checksum"])
from argparse import RawTextHelpFormatter
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from funkwhale_api.music import models, tasks
class Command(BaseCommand):
help = """
Remove tracks, albums and artists that are not associated with any file from the instance library:
- Tracks without uploads are deleted, if the --tracks flag is passed
- Albums without tracks are deleted, if the --albums flag is passed
- Artists without albums are deleted, if the --artists flag is passed
    Tracks with associated favorites, playlists or listening history won't be deleted
    by default, unless you pass the corresponding --ignore-* flags.
"""
def create_parser(self, *args, **kwargs):
parser = super().create_parser(*args, **kwargs)
parser.formatter_class = RawTextHelpFormatter
return parser
def add_arguments(self, parser):
parser.add_argument(
"--no-dry-run",
action="store_false",
dest="dry_run",
default=True,
help="Disable dry run mode and apply pruning for real on the database",
)
parser.add_argument(
"--artists",
action="store_true",
dest="prune_artists",
default=False,
help="Prune artists without albums/tracks",
)
parser.add_argument(
"--albums",
action="store_true",
dest="prune_albums",
default=False,
help="Prune albums without tracks",
)
parser.add_argument(
"--tracks",
action="store_true",
dest="prune_tracks",
default=False,
help="Prune tracks without uploads",
)
parser.add_argument(
"--ignore-favorites",
action="store_false",
dest="exclude_favorites",
default=True,
help="Allow favorited tracks to be pruned",
)
parser.add_argument(
"--ignore-playlists",
action="store_false",
dest="exclude_playlists",
default=True,
help="Allow tracks included in playlists to be pruned",
)
parser.add_argument(
"--ignore-listenings",
action="store_false",
dest="exclude_listenings",
default=True,
help="Allow tracks with listening history to be pruned",
)
@transaction.atomic
def handle(self, *args, **options):
if not any(
[options["prune_albums"], options["prune_tracks"], options["prune_artists"]]
):
raise CommandError(
"You need to provide at least one of the --tracks, --albums or --artists flags"
)
if options["dry_run"]:
self.stdout.write("Dry-run on, will not commit anything")
else:
self.stdout.write("Dry-run off, *pruning for real*")
self.stdout.write("")
if options["prune_tracks"]:
prunable = tasks.get_prunable_tracks(
exclude_favorites=options["exclude_favorites"],
exclude_playlists=options["exclude_playlists"],
exclude_listenings=options["exclude_listenings"],
)
pruned_total = prunable.count()
total = models.Track.objects.count()
if options["dry_run"]:
self.stdout.write(f"Would prune {pruned_total}/{total} tracks")
else:
self.stdout.write(f"Deleting {pruned_total}/{total} tracks…")
prunable.delete()
if options["prune_albums"]:
prunable = tasks.get_prunable_albums()
pruned_total = prunable.count()
total = models.Album.objects.count()
if options["dry_run"]:
self.stdout.write(f"Would prune {pruned_total}/{total} albums")
else:
self.stdout.write(f"Deleting {pruned_total}/{total} albums…")
prunable.delete()
if options["prune_artists"]:
prunable = tasks.get_prunable_artists()
pruned_total = prunable.count()
total = models.Artist.objects.count()
if options["dry_run"]:
self.stdout.write(f"Would prune {pruned_total}/{total} artists")
else:
self.stdout.write(f"Deleting {pruned_total}/{total} artists…")
prunable.delete()
self.stdout.write("")
if options["dry_run"]:
self.stdout.write(
"Nothing was pruned, rerun this command with --no-dry-run to apply the changes"
)
else:
self.stdout.write("Pruning completed!")
self.stdout.write("")
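The `--no-dry-run` flag above uses `store_false` wiring: the option is named for the action it disables, while the stored value stays positive. A minimal, self-contained sketch of that behaviour (parser and flag names mirror the command above, but this is a standalone example):

```python
import argparse

# store_false: dry_run defaults to True; passing --no-dry-run flips it off.
demo_parser = argparse.ArgumentParser()
demo_parser.add_argument(
    "--no-dry-run",
    action="store_false",
    dest="dry_run",
    default=True,
)

assert demo_parser.parse_args([]).dry_run is True
assert demo_parser.parse_args(["--no-dry-run"]).dry_run is False
```

This pattern keeps the destructive path opt-in: forgetting the flag always yields the safe dry-run branch.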
from django.core.management.base import BaseCommand
from django.db import transaction
from funkwhale_api.music import models
class Command(BaseCommand):
help = """Deletes any tracks not tagged with a MusicBrainz ID from the database. By default, any tracks that
have been favorited by a user or added to a playlist are preserved."""
def add_arguments(self, parser):
parser.add_argument(
"--no-dry-run",
action="store_true",
dest="no_dry_run",
            default=False,
help="Disable dry run mode and apply pruning for real on the database",
)
parser.add_argument(
"--include-playlist-content",
action="store_true",
dest="include_playlist_content",
default=False,
help="Allow tracks included in playlists to be pruned",
)
parser.add_argument(
"--include-favorites-content",
action="store_true",
dest="include_favorited_content",
default=False,
help="Allow favorited tracks to be pruned",
)
parser.add_argument(
"--include-listened-content",
action="store_true",
dest="include_listened_content",
default=False,
help="Allow tracks with listening history to be pruned",
)
@transaction.atomic
def handle(self, *args, **options):
tracks = models.Track.objects.filter(mbid__isnull=True)
if not options["include_favorited_content"]:
tracks = tracks.filter(track_favorites__isnull=True)
if not options["include_playlist_content"]:
tracks = tracks.filter(playlist_tracks__isnull=True)
if not options["include_listened_content"]:
tracks = tracks.filter(listenings__isnull=True)
pruned_total = tracks.count()
total = models.Track.objects.count()
if options["no_dry_run"]:
self.stdout.write(f"Deleting {pruned_total}/{total} tracks…")
tracks.delete()
else:
self.stdout.write(f"Would prune {pruned_total}/{total} tracks")
from django.core.management.base import BaseCommand
from django.db import transaction
from funkwhale_api.music import models
class Command(BaseCommand):
help = """
This command makes it easy to prune all skipped Uploads from the database.
    Due to a bug, these skipped uploads might have caused the database to grow significantly,
    especially when using in-place imports on a regular basis. This command
helps to clean up the database again.
"""
def add_arguments(self, parser):
        parser.add_argument(
            "--force",
            action="store_true",
            default=False,
            help="Disable dry run mode and apply pruning for real on the database",
        )
@transaction.atomic
def handle(self, *args, **options):
skipped = models.Upload.objects.filter(import_status="skipped")
count = skipped.count()
if options["force"]:
skipped.delete()
print(f"Deleted {count} entries from the database.")
return
        print(
            f"Would delete {count} entries from the database. "
            "Run with --force to actually apply changes to the database"
        )
from argparse import RawTextHelpFormatter
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from django.db.models import Q
from funkwhale_api.federation.models import Actor
from funkwhale_api.music.models import Library, TrackActor
class Command(BaseCommand):
help = """
Rebuild audio permission table. You shouldn't have to do this by hand, but if you face
any weird things (tracks still shown when they shouldn't, or tracks not shown when they should),
this may help.
"""
def create_parser(self, *args, **kwargs):
parser = super().create_parser(*args, **kwargs)
parser.formatter_class = RawTextHelpFormatter
return parser
def add_arguments(self, parser):
parser.add_argument(
"username",
nargs="*",
help="Rebuild only for given users",
)
@transaction.atomic
def handle(self, *args, **options):
actor_ids = []
if options["username"]:
actors = Actor.objects.all().local(True)
actor_ids = list(
actors.filter(preferred_username__in=options["username"]).values_list(
"id", flat=True
)
)
if len(actor_ids) < len(options["username"]):
raise CommandError("Invalid username")
print("Emptying permission table for specified users…")
qs = TrackActor.objects.all().filter(
Q(actor__pk__in=actor_ids) | Q(actor=None)
)
qs._raw_delete(qs.db)
else:
print("Emptying permission table…")
qs = TrackActor.objects.all()
qs._raw_delete(qs.db)
libraries = Library.objects.all()
objs = []
total_libraries = len(libraries)
for i, library in enumerate(libraries):
print(
"[{}/{}] Populating permission table for library {}".format(
i + 1, total_libraries, library.pk
)
)
objs += TrackActor.get_objs(
library=library,
actor_ids=actor_ids,
upload_and_track_ids=[],
)
print("Committing changes…")
TrackActor.objects.bulk_create(objs, batch_size=5000, ignore_conflicts=True)
import base64
import datetime
import logging
from collections.abc import Mapping
import arrow
import mutagen
from django import forms
import magic
import mutagen._util
import mutagen.flac
import mutagen.oggtheora
import mutagen.oggvorbis
from rest_framework import serializers
logger = logging.getLogger(__name__)
NODEFAULT = object()
# default title used when imported tracks miss the `Album` tag, see #122
UNKNOWN_ALBUM = "[Unknown Album]"
class TagNotFound(KeyError):
@@ -13,14 +25,28 @@ class UnsupportedTag(KeyError):
pass
class ParseError(ValueError):
pass
def get_id3_tag(f, k):
if k == "pictures":
return f.tags.getall("APIC")
# First we try to grab the standard key
possible_attributes = [("text", True), ("url", False)]
for attr, select_first in possible_attributes:
try:
v = getattr(f.tags[k], attr)
if select_first:
v = v[0]
return v
except KeyError:
break
except IndexError:
break
except AttributeError:
continue
    # then we fall back on parsing non-standard tags
all_tags = f.tags.getall("TXXX")
try:
@@ -44,6 +70,45 @@ def clean_id3_pictures(apic):
return pictures
def get_mp4_tag(f, k):
if k == "pictures":
return f.get("covr", [])
raw_value = f.get(k, None)
if not raw_value:
raise TagNotFound(k)
value = raw_value[0]
try:
return value.decode()
except AttributeError:
return value
def get_mp4_position(raw_value):
return raw_value[0]
def clean_mp4_pictures(raw_pictures):
pictures = []
for p in list(raw_pictures):
if p.imageformat == p.FORMAT_JPEG:
mimetype = "image/jpeg"
elif p.imageformat == p.FORMAT_PNG:
mimetype = "image/png"
else:
continue
pictures.append(
{
"mimetype": mimetype,
"content": bytes(p),
"description": "",
"type": mutagen.id3.PictureType.COVER_FRONT,
}
)
return pictures
def get_flac_tag(f, k):
if k == "pictures":
return f.pictures
@@ -67,6 +132,52 @@ def clean_flac_pictures(apic):
return pictures
def clean_ogg_coverart(metadata_block_picture):
pictures = []
for b64_data in [metadata_block_picture]:
try:
data = base64.b64decode(b64_data)
except (TypeError, ValueError):
continue
        mime = magic.Magic(mime=True)
        mimetype = mime.from_buffer(data)
        pictures.append(
            {
                "mimetype": mimetype,
"content": data,
"description": "",
"type": mutagen.id3.PictureType.COVER_FRONT,
}
)
return pictures
def clean_ogg_pictures(metadata_block_picture):
pictures = []
for b64_data in [metadata_block_picture]:
try:
data = base64.b64decode(b64_data)
except (TypeError, ValueError):
continue
try:
picture = mutagen.flac.Picture(data)
except mutagen.flac.FLACError:
continue
pictures.append(
{
"mimetype": picture.mime,
"content": picture.data,
"description": "",
"type": picture.type.real,
}
)
return pictures
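Both Ogg picture cleaners above decode base64 payloads defensively: an invalid payload is skipped rather than raising. A self-contained sketch of that behaviour (the inputs are illustrative, and `validate=True` is used here only so the bad input reliably raises, unlike the lenient default in the code above):

```python
import base64

# Invalid base64 payloads are skipped; valid ones are collected.
decoded = []
for b64_data in ["aGVsbG8=", "@@not-base64@@"]:
    try:
        decoded.append(base64.b64decode(b64_data, validate=True))
    except (TypeError, ValueError):
        continue

assert decoded == [b"hello"]
```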
def get_mp3_recording_id(f, k):
try:
return [t for t in f.tags.getall("UFID") if "musicbrainz.org" in t.owner][
@@ -76,143 +187,282 @@ def get_mp3_recording_id(f, k):
raise TagNotFound(k)
def convert_track_number(v):
try:
return int(v)
except ValueError:
# maybe the position is of the form "1/4"
pass
try:
return int(v.split("/")[0])
except (ValueError, AttributeError, IndexError):
pass
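A worked example of the conversion above; the local copy mirrors `convert_track_number` so the example is self-contained:

```python
def convert_track_number(v):
    # plain integer first, then the "position/total" form, else None
    try:
        return int(v)
    except ValueError:
        pass
    try:
        return int(v.split("/")[0])
    except (ValueError, AttributeError, IndexError):
        pass

assert convert_track_number("7") == 7
assert convert_track_number("1/4") == 1  # "position/total" form
assert convert_track_number("n/a") is None  # unparseable falls through to None
```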
def get_mp3_comment(f, k):
    keys_to_try = ["COMM", "COMM::eng"]
    for key in keys_to_try:
        try:
            return get_id3_tag(f, key)
        except TagNotFound:
            pass
    raise TagNotFound("COMM")


VALIDATION = {}
CONF = {
"OggOpus": {
"getter": lambda f, k: f[k][0],
"fields": {
"position": {"field": "TRACKNUMBER"},
"disc_number": {"field": "DISCNUMBER"},
"title": {},
"artist": {},
"artists": {},
"album_artist": {"field": "albumartist"},
"album": {},
"date": {"field": "date"},
"musicbrainz_albumid": {},
"musicbrainz_artistid": {},
"musicbrainz_albumartistid": {},
"mbid": {"field": "musicbrainz_trackid"},
"license": {},
"copyright": {},
"genre": {},
"pictures": [
{
"field": "metadata_block_picture",
"to_application": clean_ogg_pictures,
},
{
"field": "coverart",
"to_application": clean_ogg_coverart,
},
],
"comment": {"field": "comment"},
},
},
"OggVorbis": {
"getter": lambda f, k: f[k][0],
"fields": {
"position": {"field": "TRACKNUMBER"},
"disc_number": {"field": "DISCNUMBER"},
"title": {},
"artist": {},
"artists": {},
"album_artist": {"field": "albumartist"},
"album": {},
"date": {"field": "date"},
"musicbrainz_albumid": {},
"musicbrainz_artistid": {},
"musicbrainz_albumartistid": {},
"mbid": {"field": "musicbrainz_trackid"},
"license": {},
"copyright": {},
"genre": {},
"pictures": [
{
"field": "metadata_block_picture",
"to_application": clean_ogg_pictures,
},
{
"field": "coverart",
"to_application": clean_ogg_coverart,
},
],
"comment": {"field": "comment"},
},
},
"OggTheora": {
"getter": lambda f, k: f[k][0],
"fields": {
"position": {"field": "TRACKNUMBER"},
"disc_number": {"field": "DISCNUMBER"},
"title": {},
"artist": {},
"artists": {},
"album_artist": {"field": "albumartist"},
"album": {},
"date": {"field": "date"},
"musicbrainz_albumid": {"field": "MusicBrainz Album Id"},
"musicbrainz_artistid": {"field": "MusicBrainz Artist Id"},
"musicbrainz_albumartistid": {"field": "MusicBrainz Album Artist Id"},
"mbid": {"field": "MusicBrainz Track Id"},
"license": {},
"copyright": {},
"genre": {},
"comment": {"field": "comment"},
},
},
"ID3": {
"getter": get_id3_tag,
"clean_pictures": clean_id3_pictures,
"fields": {
"position": {"field": "TRCK"},
"disc_number": {"field": "TPOS"},
"title": {"field": "TIT2"},
"artist": {"field": "TPE1"},
"artists": {"field": "ARTISTS"},
"album_artist": {"field": "TPE2"},
"album": {"field": "TALB"},
"date": {"field": "TDRC"},
"musicbrainz_albumid": {"field": "MusicBrainz Album Id"},
"musicbrainz_artistid": {"field": "MusicBrainz Artist Id"},
"genre": {"field": "TCON"},
"musicbrainz_albumartistid": {"field": "MusicBrainz Album Artist Id"},
"mbid": {"field": "UFID", "getter": get_mp3_recording_id},
"pictures": {},
"license": {"field": "WCOP"},
"copyright": {"field": "TCOP"},
"comment": {"field": "COMM", "getter": get_mp3_comment},
},
},
"MP4": {
"getter": get_mp4_tag,
"clean_pictures": clean_mp4_pictures,
"fields": {
"position": {"field": "trkn", "to_application": get_mp4_position},
"disc_number": {"field": "disk", "to_application": get_mp4_position},
"title": {"field": "©nam"},
"artist": {"field": "©ART"},
"artists": {"field": "----:com.apple.iTunes:ARTISTS"},
"album_artist": {"field": "aART"},
"album": {"field": "©alb"},
"date": {"field": "©day"},
"musicbrainz_albumid": {
"field": "----:com.apple.iTunes:MusicBrainz Album Id"
},
"musicbrainz_artistid": {
"field": "----:com.apple.iTunes:MusicBrainz Artist Id"
},
"genre": {"field": "©gen"},
"musicbrainz_albumartistid": {
"field": "----:com.apple.iTunes:MusicBrainz Album Artist Id"
},
"mbid": {"field": "----:com.apple.iTunes:MusicBrainz Track Id"},
"pictures": {},
"license": {"field": "----:com.apple.iTunes:LICENSE"},
"copyright": {"field": "cprt"},
"comment": {"field": "©cmt"},
},
},
"FLAC": {
"getter": get_flac_tag,
"clean_pictures": clean_flac_pictures,
"fields": {
"position": {"field": "tracknumber"},
"disc_number": {"field": "discnumber"},
"title": {},
"artist": {},
"artists": {},
"album_artist": {"field": "albumartist"},
"album": {},
"date": {"field": "date"},
"musicbrainz_albumid": {},
"musicbrainz_artistid": {},
"musicbrainz_albumartistid": {},
"genre": {},
"mbid": {"field": "musicbrainz_trackid"},
"test": {},
"pictures": {},
"license": {},
"copyright": {},
"comment": {},
},
},
}
CONF["MP3"] = CONF["ID3"]
CONF["AIFF"] = CONF["ID3"]
ALL_FIELDS = [
"position",
"disc_number",
"title",
"artist",
"album_artist",
"album",
"date",
"musicbrainz_albumid",
"musicbrainz_artistid",
"musicbrainz_albumartistid",
"mbid",
"license",
"copyright",
"comment",
]
class Metadata(Mapping):
def __init__(self, filething, kind=mutagen.File):
self._file = kind(filething)
if self._file is None:
raise ValueError(f"Cannot parse metadata from {filething}")
if len(self._file) == 0:
raise ValueError(f"No tags found in {filething}")
self.fallback = self.load_fallback(filething, self._file)
ft = self.get_file_type(self._file)
try:
self._conf = CONF[ft]
except KeyError:
raise ValueError(f"Unsupported format {ft}")
def get_file_type(self, f):
return f.__class__.__name__
def load_fallback(self, filething, parent):
"""
In some situations, such as Ogg Theora files tagged with MusicBrainz Picard,
part of the tags are only available in the ogg vorbis comments
"""
try:
filething.seek(0)
except AttributeError:
pass
if isinstance(parent, mutagen.oggtheora.OggTheora):
try:
return Metadata(filething, kind=mutagen.oggvorbis.OggVorbis)
except (ValueError, mutagen._util.MutagenError):
pass
def get(self, key, default=NODEFAULT):
try:
return self._get_from_self(key)
except TagNotFound:
if not self.fallback:
if default != NODEFAULT:
return default
else:
raise
else:
return self.fallback.get(key, default=default)
except UnsupportedTag:
if not self.fallback:
raise
else:
return self.fallback.get(key, default=default)
def all(self):
"""
        Return a dict with all supported metadata fields, if they are available
"""
final = {}
for field in self._conf["fields"]:
if field in ["pictures"]:
continue
value = self.get(field, None)
if value is None:
continue
final[field] = str(value)
return final
def _get_from_self(self, key, default=NODEFAULT):
try:
field_confs = self._conf["fields"][key]
except KeyError:
raise UnsupportedTag(f"{key} is not supported for this file format")
if not isinstance(field_confs, list):
field_confs = [field_confs]
for field_conf in field_confs:
real_key = field_conf.get("field", key)
try:
getter = field_conf.get("getter", self._conf["getter"])
v = getter(self._file, real_key)
except KeyError:
continue
converter = field_conf.get("to_application")
if converter:
@@ -221,9 +471,18 @@ class Metadata(object):
if field:
v = field.to_python(v)
return v
if default == NODEFAULT:
raise TagNotFound(real_key)
return default
def get_picture(self, *picture_types):
if not picture_types:
raise ValueError("You need to request at least one picture type")
ptypes = [
getattr(mutagen.id3.PictureType, picture_type.upper())
for picture_type in picture_types
]
try:
pictures = self.get("pictures")
except (UnsupportedTag, TagNotFound):
@@ -231,6 +490,350 @@ class Metadata(object):
cleaner = self._conf.get("clean_pictures", lambda v: v)
pictures = cleaner(pictures)
if not pictures:
return
for ptype in ptypes:
for p in pictures:
if p["type"] == ptype:
return p
def __getitem__(self, key):
return self.get(key)
def __len__(self):
return 1
def __iter__(self):
yield from self._conf["fields"]
class ArtistField(serializers.Field):
def __init__(self, *args, **kwargs):
self.for_album = kwargs.pop("for_album", False)
super().__init__(*args, **kwargs)
def get_value(self, data):
if self.for_album:
keys = [
("artists", "album_artist"),
("names", "artists"),
("mbids", "musicbrainz_albumartistid"),
]
else:
keys = [
("artists", "artists"),
("names", "artist"),
("mbids", "musicbrainz_artistid"),
]
final = {}
for field, key in keys:
final[field] = data.get(key, None)
return final
def _get_artist_credit_tuple(self, mbids, data):
from . import tasks
names_artists_credits_tuples = tasks.parse_credits(
data.get("names", ""), None, None
)
artist_artists_credits_tuples = tasks.parse_credits(
data.get("artists", ""), None, None
)
len_mbids = len(mbids)
if (
len(names_artists_credits_tuples) != len_mbids
and len(artist_artists_credits_tuples) != len_mbids
):
            logger.warning(
                "Error parsing artist data: mismatched number of mbids and parsed artists, "
                "probably because the artist parser found more artists than there are mbids."
            )
if len(names_artists_credits_tuples) > len(artist_artists_credits_tuples):
return names_artists_credits_tuples
return artist_artists_credits_tuples
def _get_mbids(self, raw_mbids):
# we have multiple mbid values that can be separated by various separators
separators = [";", ",", "/"]
# we get a list like that if tagged via musicbrainz
# ae29aae4-abfb-4609-8f54-417b1f4d64cc; 3237b5a8-ae44-400c-aa6d-cea51f0b9074;
mbids = [raw_mbids]
if raw_mbids:
for separator in separators:
if separator in raw_mbids:
mbids = [m.strip() for m in raw_mbids.split(separator)]
break
return mbids
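The separator handling in `_get_mbids` above can be shown with the sample value from the comment: the first matching separator wins, and each id is stripped of surrounding whitespace. This is a standalone sketch, not a call into the class:

```python
# First matching separator wins; entries are whitespace-stripped.
raw_mbids = "ae29aae4-abfb-4609-8f54-417b1f4d64cc; 3237b5a8-ae44-400c-aa6d-cea51f0b9074"
mbids = [raw_mbids]
for separator in [";", ",", "/"]:
    if separator in raw_mbids:
        mbids = [m.strip() for m in raw_mbids.split(separator)]
        break

assert mbids == [
    "ae29aae4-abfb-4609-8f54-417b1f4d64cc",
    "3237b5a8-ae44-400c-aa6d-cea51f0b9074",
]
```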
def _format_artist_credit_list(self, artists_credits_tuples, mbids):
final_artist_credits = []
for i, ac in enumerate(artists_credits_tuples):
artist_credit = {
"credit": ac[0],
"mbid": (mbids[i] if 0 <= i < len(mbids) else None),
"joinphrase": ac[1],
"index": i,
}
final_artist_credits.append(artist_credit)
return final_artist_credits
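A sketch of how `_format_artist_credit_list` pairs `(credit, joinphrase)` tuples with mbids; when there are fewer mbids than credits, the mbid falls back to `None`. The artist names and mbid below are purely illustrative:

```python
# Pair parsed (credit, joinphrase) tuples with mbids by index.
tuples = [("Artist A", " feat. "), ("Artist B", "")]
mbids = ["ae29aae4-abfb-4609-8f54-417b1f4d64cc"]  # fewer mbids than credits

credits = [
    {
        "credit": credit,
        "mbid": mbids[i] if 0 <= i < len(mbids) else None,
        "joinphrase": joinphrase,
        "index": i,
    }
    for i, (credit, joinphrase) in enumerate(tuples)
]

assert credits[0]["mbid"] == "ae29aae4-abfb-4609-8f54-417b1f4d64cc"
assert credits[1]["mbid"] is None  # no matching mbid for the second credit
```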
def to_internal_value(self, data):
if (
self.context.get("strict", True)
and not data.get("artists", [])
and not data.get("names", [])
):
raise serializers.ValidationError("This field is required.")
mbids = self._get_mbids(data["mbids"])
# now, we split on artist names
artists_credits_tuples = self._get_artist_credit_tuple(mbids, data)
final_artist_credits = self._format_artist_credit_list(
artists_credits_tuples, mbids
)
field = serializers.ListField(
child=ArtistSerializer(strict=self.context.get("strict", True)),
min_length=1,
)
return field.to_internal_value(final_artist_credits)
class AlbumField(serializers.Field):
def get_value(self, data):
return data
def to_internal_value(self, data):
try:
title = data.get("album") or ""
except TagNotFound:
title = ""
title = title.strip() or UNKNOWN_ALBUM
final = {
"title": title,
"release_date": data.get("date", None),
"mbid": data.get("musicbrainz_albumid", None),
}
artist_credit_field = ArtistField(for_album=True)
payload = artist_credit_field.get_value(data)
try:
artist_credit = artist_credit_field.to_internal_value(payload)
except serializers.ValidationError as e:
artist_credit = []
logger.debug("Ignoring validation error on album artist_credit: %s", e)
album_serializer = AlbumSerializer(data=final)
album_serializer.is_valid(raise_exception=True)
album_serializer.validated_data["artist_credit"] = artist_credit
return album_serializer.validated_data
class CoverDataField(serializers.Field):
def get_value(self, data):
return data
def to_internal_value(self, data):
return data.get_picture("cover_front", "other")
class PermissiveDateField(serializers.CharField):
def to_internal_value(self, value):
if not value:
return None
value = super().to_internal_value(str(value))
ADDITIONAL_FORMATS = [
"%Y-%d-%m %H:%M", # deezer date format
"%Y-%W", # weird date format based on week number, see #718
]
for date_format in ADDITIONAL_FORMATS:
try:
parsed = datetime.datetime.strptime(value, date_format)
except ValueError:
continue
else:
return datetime.date(parsed.year, parsed.month, parsed.day)
try:
parsed = arrow.get(str(value))
return datetime.date(parsed.year, parsed.month, parsed.day)
except (arrow.parser.ParserError, ValueError):
pass
return None
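The Deezer-style fallback format handled above (`"%Y-%d-%m %H:%M"`, day before month) can be demonstrated directly; the date value is illustrative:

```python
import datetime

# "%Y-%d-%m %H:%M" puts the day before the month (Deezer's format).
parsed = datetime.datetime.strptime("2018-25-12 00:00", "%Y-%d-%m %H:%M")
result = datetime.date(parsed.year, parsed.month, parsed.day)

assert result == datetime.date(2018, 12, 25)
```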
def extract_tags_from_genre(string):
tags = []
delimiter = "@@@@@"
for d in [" - ", ", ", ",", "; ", ";", "/"]:
# Replace common tags separators by a custom delimiter
string = string.replace(d, delimiter)
# loop on the parts (splitting on our custom delimiter)
for tag in string.split(delimiter):
if not tag:
continue
tags.append(tag)
return tags
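A worked example of the splitting above; the local copy mirrors `extract_tags_from_genre` so the example is self-contained:

```python
def extract_tags_from_genre(string):
    # normalize all known separators to one delimiter, then split
    delimiter = "@@@@@"
    for d in [" - ", ", ", ",", "; ", ";", "/"]:
        string = string.replace(d, delimiter)
    return [tag for tag in string.split(delimiter) if tag]

assert extract_tags_from_genre("Rock, Psychedelic Rock/Folk") == [
    "Rock",
    "Psychedelic Rock",
    "Folk",
]
```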
class TagsField(serializers.CharField):
def get_value(self, data):
return data
def to_internal_value(self, data):
try:
value = data.get("genre") or ""
except TagNotFound:
return []
value = super().to_internal_value(str(value))
return extract_tags_from_genre(value)
class MBIDField(serializers.UUIDField):
def __init__(self, *args, **kwargs):
kwargs.setdefault("allow_null", True)
kwargs.setdefault("required", False)
super().__init__(*args, **kwargs)
def to_internal_value(self, v):
if v in ["", None]:
return None
return super().to_internal_value(v)
class ArtistSerializer(serializers.Serializer):
credit = serializers.CharField(required=False, allow_null=True, allow_blank=True)
mbid = MBIDField()
joinphrase = serializers.CharField(
trim_whitespace=False, required=False, allow_null=True, allow_blank=True
)
def __init__(self, *args, **kwargs):
self.strict = kwargs.pop("strict", True)
super().__init__(*args, **kwargs)
def validate_credit(self, v):
if self.strict and not v:
raise serializers.ValidationError("This field is required.")
return v
class AlbumSerializer(serializers.Serializer):
title = serializers.CharField(required=False, allow_null=True)
mbid = MBIDField()
release_date = PermissiveDateField(
required=False, allow_null=True, allow_blank=True
)
def validate_title(self, v):
if self.context.get("strict", True) and not v:
raise serializers.ValidationError("This field is required.")
return v
def get_valid_position(v):
if v <= 0:
v = 1
return v
class PositionField(serializers.CharField):
def to_internal_value(self, v):
v = super().to_internal_value(v)
if not v:
return v
try:
return get_valid_position(int(v))
except ValueError:
# maybe the position is of the form "1/4"
pass
try:
return get_valid_position(int(v.split("/")[0]))
except (ValueError, AttributeError, IndexError):
return
class DescriptionField(serializers.CharField):
def get_value(self, data):
return data
def to_internal_value(self, data):
try:
value = data.get("comment") or None
except TagNotFound:
return None
if not value:
return None
value = super().to_internal_value(value)
return {"text": value, "content_type": "text/plain"}
class TrackMetadataSerializer(serializers.Serializer):
title = serializers.CharField(required=False, allow_null=True)
position = PositionField(allow_blank=True, allow_null=True, required=False)
disc_number = PositionField(allow_blank=True, allow_null=True, required=False)
copyright = serializers.CharField(allow_blank=True, allow_null=True, required=False)
license = serializers.CharField(allow_blank=True, allow_null=True, required=False)
mbid = MBIDField()
tags = TagsField(allow_blank=True, allow_null=True, required=False)
description = DescriptionField(allow_null=True, allow_blank=True, required=False)
album = AlbumField()
artist_credit = ArtistField()
cover_data = CoverDataField(required=False)
remove_blank_null_fields = [
"copyright",
"description",
"license",
"position",
"disc_number",
"mbid",
"tags",
]
def validate_title(self, v):
if self.context.get("strict", True) and not v:
raise serializers.ValidationError("This field is required.")
return v
def validate(self, validated_data):
validated_data = super().validate(validated_data)
for field in self.remove_blank_null_fields:
try:
v = validated_data[field]
except KeyError:
continue
if v in ["", None, []]:
validated_data.pop(field)
validated_data["album"]["cover_data"] = validated_data.pop("cover_data", None)
return validated_data
class FakeMetadata(Mapping):
def __init__(self, data, picture=None):
self.data = data
        self.picture = picture
def __getitem__(self, key):
return self.data[key]
def __len__(self):
return len(self.data)
def __iter__(self):
yield from self.data
def get_picture(self, *args):
return self.picture
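A usage sketch for the `Mapping` protocol implemented above; the local stand-in mirrors `FakeMetadata` so the example runs on its own, and the tag values are illustrative:

```python
from collections.abc import Mapping

class FakeMetadata(Mapping):
    # Implementing __getitem__, __len__ and __iter__ is enough for
    # Mapping to provide keys(), items(), get(), etc. for free.
    def __init__(self, data, picture=None):
        self.data = data
        self.picture = picture

    def __getitem__(self, key):
        return self.data[key]

    def __len__(self):
        return len(self.data)

    def __iter__(self):
        yield from self.data

md = FakeMetadata({"title": "Peer Gynt", "artist": "Grieg"})
assert md["title"] == "Peer Gynt"
assert dict(md) == {"title": "Peer Gynt", "artist": "Grieg"}
```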
@@ -2,25 +2,10 @@
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [("music", "0003_auto_20151222_2233")]
operations = []
# Generated by Django 2.0.3 on 2018-05-15 18:08
from django.db import migrations, models
import taggit.managers
class Migration(migrations.Migration):
@@ -19,15 +18,4 @@ class Migration(migrations.Migration):
name="size",
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name="track",
name="tags",
field=taggit.managers.TaggableManager(
blank=True,
help_text="A comma-separated list of tags.",
through="taggit.TaggedItem",
to="taggit.Tag",
verbose_name="Tags",
),
),
]
@@ -5,14 +5,12 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('music', '0027_auto_20180515_1808'),
]
dependencies = [("music", "0027_auto_20180515_1808")]
operations = [
migrations.AddField(
model_name='importjob',
name='replace_if_duplicate',
model_name="importjob",
name="replace_if_duplicate",
field=models.BooleanField(default=False),
),
)
]
# Generated by Django 2.0.7 on 2018-08-07 17:48
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import uuid
class Migration(migrations.Migration):
dependencies = [
("federation", "0007_auto_20180807_1748"),
("music", "0028_importjob_replace_if_duplicate"),
]
operations = [
migrations.CreateModel(
name="Library",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("fid", models.URLField(db_index=True, max_length=500, unique=True)),
("url", models.URLField(blank=True, max_length=500, null=True)),
(
"uuid",
models.UUIDField(db_index=True, default=uuid.uuid4, unique=True),
),
("followers_url", models.URLField(max_length=500)),
(
"creation_date",
models.DateTimeField(default=django.utils.timezone.now),
),
("name", models.CharField(max_length=100)),
(
"description",
models.TextField(blank=True, max_length=5000, null=True),
),
(
"privacy_level",
models.CharField(
choices=[
("me", "Only me"),
("instance", "Everyone on my instance, and my followers"),
(
"everyone",
"Everyone, including people on other instances",
),
],
default="me",
max_length=25,
),
),
(
"actor",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="libraries",
to="federation.Actor",
),
),
],
options={"abstract": False},
),
migrations.AddField(
model_name="importjob",
name="audio_file_size",
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name="importbatch",
name="import_request",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name="import_batches",
to="requests.ImportRequest",
),
),
migrations.AddField(
model_name="importbatch",
name="library",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="import_batches",
to="music.Library",
),
),
migrations.AddField(
model_name="trackfile",
name="library",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="files",
to="music.Library",
),
),
]
# Generated by Django 2.0.8 on 2018-08-25 14:11
import django.contrib.postgres.fields.jsonb
import django.core.serializers.json
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import funkwhale_api.music.models
class Migration(migrations.Migration):
dependencies = [
("federation", "0009_auto_20180822_1956"),
("music", "0029_auto_20180807_1748"),
]
operations = [
migrations.CreateModel(
name="LibraryScan",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("total_files", models.PositiveIntegerField(default=0)),
("processed_files", models.PositiveIntegerField(default=0)),
("errored_files", models.PositiveIntegerField(default=0)),
("status", models.CharField(default="pending", max_length=25)),
(
"creation_date",
models.DateTimeField(default=django.utils.timezone.now),
),
("modification_date", models.DateTimeField(blank=True, null=True)),
(
"actor",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="federation.Actor",
),
),
],
),
migrations.RemoveField(model_name="trackfile", name="library_track"),
migrations.AddField(
model_name="library",
name="files_count",
field=models.PositiveIntegerField(default=0),
),
migrations.AddField(
model_name="trackfile",
name="fid",
field=models.URLField(blank=True, max_length=500, null=True, unique=True),
),
migrations.AddField(
model_name="trackfile",
name="import_date",
field=models.DateTimeField(blank=True, null=True),
),
migrations.AddField(
model_name="trackfile",
name="import_details",
field=django.contrib.postgres.fields.jsonb.JSONField(
default=funkwhale_api.music.models.empty_dict,
encoder=django.core.serializers.json.DjangoJSONEncoder,
max_length=50000,
),
),
migrations.AddField(
model_name="trackfile",
name="import_metadata",
field=django.contrib.postgres.fields.jsonb.JSONField(
default=funkwhale_api.music.models.empty_dict,
encoder=django.core.serializers.json.DjangoJSONEncoder,
max_length=50000,
),
),
migrations.AddField(
model_name="trackfile",
name="import_reference",
field=models.CharField(
default=funkwhale_api.music.models.get_import_reference, max_length=50
),
),
migrations.AddField(
model_name="trackfile",
name="import_status",
field=models.CharField(
choices=[
("pending", "Pending"),
("finished", "Finished"),
("errored", "Errored"),
("skipped", "Skipped"),
],
default="pending",
max_length=25,
),
),
migrations.AddField(
model_name="trackfile",
name="metadata",
field=django.contrib.postgres.fields.jsonb.JSONField(
default=funkwhale_api.music.models.empty_dict,
encoder=django.core.serializers.json.DjangoJSONEncoder,
max_length=50000,
),
),
migrations.AlterField(
model_name="album",
name="release_date",
field=models.DateField(blank=True, null=True),
),
migrations.AlterField(
model_name="trackfile",
name="audio_file",
field=models.FileField(
max_length=255, upload_to=funkwhale_api.music.models.get_file_path
),
),
migrations.AlterField(
model_name="trackfile",
name="source",
field=models.CharField(blank=True, max_length=500, null=True),
),
migrations.AlterField(
model_name="trackfile",
name="track",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="files",
to="music.Track",
),
),
migrations.AddField(
model_name="libraryscan",
name="library",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="scans",
to="music.Library",
),
),
]
# Generated by Django 2.0.8 on 2018-09-14 20:07
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('federation', '0011_auto_20180910_1902'),
('music', '0030_auto_20180825_1411'),
]
operations = [
migrations.AddField(
model_name='album',
name='fid',
field=models.URLField(db_index=True, max_length=500, null=True, unique=True),
),
migrations.AddField(
model_name='album',
name='from_activity',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AddField(
model_name='artist',
name='fid',
field=models.URLField(db_index=True, max_length=500, null=True, unique=True),
),
migrations.AddField(
model_name='artist',
name='from_activity',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AddField(
model_name='track',
name='fid',
field=models.URLField(db_index=True, max_length=500, null=True, unique=True),
),
migrations.AddField(
model_name='track',
name='from_activity',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AddField(
model_name='trackfile',
name='from_activity',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AddField(
model_name='work',
name='fid',
field=models.URLField(db_index=True, max_length=500, null=True, unique=True),
),
migrations.AddField(
model_name='work',
name='from_activity',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AlterField(
model_name='trackfile',
name='modification_date',
field=models.DateTimeField(default=django.utils.timezone.now, null=True),
),
]
# Generated by Django 2.0.8 on 2018-09-21 16:47
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [("music", "0031_auto_20180914_2007")]
operations = [
migrations.RenameModel("TrackFile", "Upload"),
migrations.RenameField(
model_name="importjob", old_name="track_file", new_name="upload"
),
migrations.RenameField(
model_name="library", old_name="files_count", new_name="uploads_count"
),
migrations.AlterField(
model_name="upload",
name="library",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="uploads",
to="music.Library",
),
),
migrations.AlterField(
model_name="upload",
name="track",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="uploads",
to="music.Track",
),
),
]
# Generated by Django 2.0.9 on 2018-10-23 18:37
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import funkwhale_api.music.models
class Migration(migrations.Migration):
dependencies = [
('music', '0032_track_file_to_upload'),
]
operations = [
migrations.CreateModel(
name='UploadVersion',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('mimetype', models.CharField(choices=[('audio/ogg', 'ogg'), ('audio/mpeg', 'mp3'), ('audio/x-flac', 'flac')], max_length=50)),
('creation_date', models.DateTimeField(default=django.utils.timezone.now)),
('accessed_date', models.DateTimeField(blank=True, null=True)),
('audio_file', models.FileField(max_length=255, upload_to=funkwhale_api.music.models.get_file_path)),
('bitrate', models.PositiveIntegerField()),
('size', models.IntegerField()),
('upload', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='versions', to='music.Upload')),
],
),
migrations.AlterField(
model_name='album',
name='from_activity',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AlterField(
model_name='artist',
name='from_activity',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AlterField(
model_name='track',
name='from_activity',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AlterField(
model_name='work',
name='from_activity',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='federation.Activity'),
),
migrations.AlterUniqueTogether(
name='uploadversion',
unique_together={('upload', 'mimetype', 'bitrate')},
),
]
# Generated by Django 2.0.9 on 2018-11-27 03:25
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('music', '0033_auto_20181023_1837'),
]
operations = [
migrations.CreateModel(
name='License',
fields=[
('code', models.CharField(max_length=100, primary_key=True, serialize=False)),
('url', models.URLField(max_length=500)),
('copyleft', models.BooleanField()),
('commercial', models.BooleanField()),
('attribution', models.BooleanField()),
('derivative', models.BooleanField()),
('redistribute', models.BooleanField()),
],
),
migrations.AlterField(
model_name='uploadversion',
name='mimetype',
field=models.CharField(choices=[('audio/ogg', 'ogg'), ('audio/mpeg', 'mp3'), ('audio/x-flac', 'flac'), ('audio/flac', 'flac')], max_length=50),
),
migrations.AddField(
model_name='track',
name='license',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='tracks', to='music.License'),
),
]
# Generated by Django 2.0.9 on 2018-12-03 15:15
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('music', '0034_auto_20181127_0325'),
]
operations = [
migrations.AddField(
model_name='track',
name='copyright',
field=models.CharField(blank=True, max_length=500, null=True),
),
migrations.AlterField(
model_name='track',
name='license',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.DO_NOTHING, related_name='tracks', to='music.License'),
),
]
# Generated by Django 2.0.9 on 2018-12-04 15:10
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('music', '0035_auto_20181203_1515'),
]
operations = [
migrations.AddField(
model_name='track',
name='disc_number',
field=models.PositiveIntegerField(blank=True, null=True),
),
]
# Generated by Django 2.0.9 on 2019-01-03 17:57
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('music', '0036_track_disc_number'),
]
operations = [
migrations.AlterModelOptions(
name='track',
options={'ordering': ['album', 'disc_number', 'position']},
),
migrations.AlterField(
model_name='album',
name='creation_date',
field=models.DateTimeField(db_index=True, default=django.utils.timezone.now),
),
migrations.AlterField(
model_name='artist',
name='creation_date',
field=models.DateTimeField(db_index=True, default=django.utils.timezone.now),
),
migrations.AlterField(
model_name='track',
name='creation_date',
field=models.DateTimeField(db_index=True, default=django.utils.timezone.now),
),
migrations.AlterField(
model_name='upload',
name='creation_date',
field=models.DateTimeField(db_index=True, default=django.utils.timezone.now),
),
migrations.AlterField(
model_name='work',
name='creation_date',
field=models.DateTimeField(db_index=True, default=django.utils.timezone.now),
),
]