Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 4471 additions and 671 deletions
# Generated by Django 3.2.23 on 2024-01-30 11:58
import itertools
from django.db import migrations, models
from django.db.models import Q
from funkwhale_api.music import utils
def set_quality_upload(apps, schema_editor):
Upload = apps.get_model("music", "Upload")
extension_to_mimetypes = utils.get_extension_to_mimetype_dict()
# Low quality
mp3_query = Q(mimetype__in=extension_to_mimetypes["mp3"]) & Q(bitrate__lte=192)
OpusAACOGG_query = Q(
mimetype__in=list(
itertools.chain(
extension_to_mimetypes["opus"],
extension_to_mimetypes["ogg"],
extension_to_mimetypes["aac"],
)
)
) & Q(bitrate__lte=96)
low = Upload.objects.filter((mp3_query) | (OpusAACOGG_query))
low.update(quality=0)
# medium
mp3_query = Q(mimetype__in=extension_to_mimetypes["mp3"]) & Q(bitrate__lte=256)
ogg_query = Q(mimetype__in=extension_to_mimetypes["ogg"]) & Q(bitrate__lte=192)
aacopus_query = Q(
mimetype__in=list(
itertools.chain(
extension_to_mimetypes["aac"], extension_to_mimetypes["opus"]
)
)
) & Q(bitrate__lte=128)
medium = Upload.objects.filter((mp3_query) | (ogg_query) | (aacopus_query)).exclude(
pk__in=low.values_list("pk", flat=True)
)
medium.update(quality=1)
# high
mp3_query = Q(mimetype__in=extension_to_mimetypes["mp3"]) & Q(bitrate__lte=320)
ogg_query = Q(mimetype__in=extension_to_mimetypes["ogg"]) & Q(bitrate__lte=256)
aac_query = Q(mimetype__in=extension_to_mimetypes["aac"]) & Q(bitrate__lte=288)
opus_query = Q(mimetype__in=extension_to_mimetypes["opus"]) & Q(bitrate__lte=160)
high = (
Upload.objects.filter((mp3_query) | (ogg_query) | (aac_query) | (opus_query))
.exclude(pk__in=low.values_list("pk", flat=True))
.exclude(pk__in=medium.values_list("pk", flat=True))
)
high.update(quality=2)
# veryhigh
opus_query = Q(mimetype__in=extension_to_mimetypes["opus"]) & Q(bitrate__gte=510)
flacaifaiff_query = Q(
mimetype__in=list(
itertools.chain(
extension_to_mimetypes["flac"],
extension_to_mimetypes["aif"],
extension_to_mimetypes["aiff"],
)
)
)
Upload.objects.filter((opus_query) | (flacaifaiff_query)).exclude(
pk__in=low.values_list("pk", flat=True)
).exclude(pk__in=medium.values_list("pk", flat=True)).exclude(
pk__in=high.values_list("pk", flat=True)
).update(
quality=3
)
def skip(apps, schema_editor):
pass
class Migration(migrations.Migration):
dependencies = [
("music", "0057_auto_20221118_2108"),
]
operations = [
migrations.AddField(
model_name="upload",
name="quality",
field=models.IntegerField(
choices=[(0, "low"), (1, "medium"), (2, "high"), (3, "very_high")],
default=1,
),
),
migrations.RunPython(set_quality_upload, skip),
]
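
For readers cross-checking the thresholds, the following is a minimal standalone sketch, not part of the diff, of how set_quality_upload's queries bucket a single upload. classify_quality is a hypothetical name, and the extension labels stand in for the mimetype groups resolved via get_extension_to_mimetype_dict().

# Hypothetical helper mirroring the thresholds used in set_quality_upload above.
# quality: 0 = low, 1 = medium, 2 = high, 3 = very_high
def classify_quality(extension, bitrate):
    if extension in ("flac", "aif", "aiff"):
        return 3
    if extension == "opus" and bitrate is not None and bitrate >= 510:
        return 3
    thresholds = {
        # extension: [(max_bitrate, quality), ...] checked from lowest to highest
        "mp3": [(192, 0), (256, 1), (320, 2)],
        "ogg": [(96, 0), (192, 1), (256, 2)],
        "aac": [(96, 0), (128, 1), (288, 2)],
        "opus": [(96, 0), (128, 1), (160, 2)],
    }
    if extension in thresholds and bitrate is not None:
        for limit, quality in thresholds[extension]:
            if bitrate <= limit:
                return quality
    return 1  # anything unmatched keeps the AddField default of "medium"
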
# Generated by Django 4.2.9 on 2024-03-16 00:36
import django.contrib.postgres.search
from django.db import migrations, models, connection
import django.db.models.deletion
import django.utils.timezone
import uuid
def skip(apps, schema_editor):
pass
def save_artist_credit(obj, ArtistCredit):
artist_credit, created = ArtistCredit.objects.get_or_create(
artist=obj.artist,
joinphrase="",
credit=obj.artist.name,
)
return (obj.pk, artist_credit.pk)
def bulk_save_m2m(model, relations, obj_name):
table = model.artist_credit.through._meta.db_table
values_sql = ", ".join(["(%s, %s)"] * len(relations))
params = [x for pair in relations for x in pair]
query = f"""
INSERT INTO {table} ({obj_name}_id, artistcredit_id)
VALUES {values_sql}
ON CONFLICT DO NOTHING
"""
with connection.cursor() as cursor:
cursor.execute(query, params)
def set_all_artists_credit(apps, schema_editor):
Track = apps.get_model("music", "Track")
Album = apps.get_model("music", "Album")
ArtistCredit = apps.get_model("music", "ArtistCredit")
relations = []
for track in Track.objects.all():
relations.append(save_artist_credit(track, ArtistCredit))
if relations:
bulk_save_m2m(Track, relations, "track")
relations = []
for album in Album.objects.all():
relations.append(save_artist_credit(album, ArtistCredit))
if relations:
bulk_save_m2m(Album, relations, "album")
class Migration(migrations.Migration):
dependencies = [
("music", "0058_upload_quality"),
]
operations = [
migrations.CreateModel(
name="ArtistCredit",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"fid",
models.URLField(
db_index=True, max_length=500, null=True, unique=True
),
),
(
"mbid",
models.UUIDField(blank=True, db_index=True, null=True, unique=True),
),
(
"uuid",
models.UUIDField(db_index=True, default=uuid.uuid4, unique=True),
),
(
"creation_date",
models.DateTimeField(
db_index=True, default=django.utils.timezone.now
),
),
(
"body_text",
django.contrib.postgres.search.SearchVectorField(blank=True),
),
("credit", models.CharField(blank=True, max_length=500, null=True)),
("joinphrase", models.CharField(blank=True, max_length=250, null=True)),
("index", models.IntegerField(blank=True, null=True)),
(
"artist",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="artist_credit",
to="music.artist",
),
),
(
"from_activity",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="federation.activity",
),
),
],
options={
"ordering": ["index", "credit"],
},
),
migrations.AddField(
model_name="album",
name="artist_credit",
field=models.ManyToManyField(
related_name="albums",
to="music.artistcredit",
),
),
migrations.AddField(
model_name="track",
name="artist_credit",
field=models.ManyToManyField(
related_name="tracks",
to="music.artistcredit",
),
),
migrations.RunPython(set_all_artists_credit, skip),
migrations.RemoveField(
model_name="album",
name="artist",
),
migrations.RemoveField(
model_name="track",
name="artist",
),
]
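
This migration creates a single ArtistCredit per existing artist with an empty joinphrase. For context, here is a hedged illustration of how ordered credits are typically joined into a display string; get_artist_credit_string from funkwhale_api.music.utils is referenced later in this diff but not shown, so the helper below is an illustration rather than the actual implementation.

# Illustrative only: compose a display string such as "Artist A feat. Artist B"
# from ArtistCredit rows; Meta ordering is ["index", "credit"], and joinphrase
# is expected to carry its own surrounding whitespace (MusicBrainz style).
def credit_string(obj):
    parts = []
    for ac in obj.artist_credit.all():
        parts.append(ac.credit or ac.artist.name)
        if ac.joinphrase:
            parts.append(ac.joinphrase)
    return "".join(parts)
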
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("music", "0059_remove_album_artist_remove_track_artist_artistcredit_and_more"),
("playlists", "0007_alter_playlist_actor_alter_playlisttrack_uuid_and_more"),
]
operations = []
# Generated by Django 4.2.9 on 2025-01-03 16:12
from django.db import migrations, models, transaction
from django.conf import settings
from funkwhale_api.federation import utils as federation_utils
from django.urls import reverse
import uuid
def insert_tracks_to_playlist(apps, playlist, uploads):
PlaylistTrack = apps.get_model("playlists", "PlaylistTrack")
uploads_to_update = []
for i, upload in enumerate(uploads):
if upload.track:
PlaylistTrack.objects.create(
creation_date=playlist.creation_date,
playlist=playlist,
track=upload.track,
index=0 + i,
uuid=(new_uuid := uuid.uuid4()),
fid=federation_utils.full_url(
reverse(
"federation:music:playlist-tracks-detail",
kwargs={"uuid": new_uuid},
)
),
)
upload.library = None
uploads_to_update.append(upload)
apps.get_model("music", "Upload").objects.bulk_update(
uploads_to_update, fields=["library"], batch_size=1000
)
playlist.library.playlist_uploads.set(uploads)
@transaction.atomic
def migrate_libraries_to_playlist(apps, schema_editor):
Playlist = apps.get_model("playlists", "Playlist")
Library = apps.get_model("music", "Library")
Actor = apps.get_model("federation", "Actor")
Channel = apps.get_model("audio", "Channel")
to_instance_libs = []
to_public_libs = []
to_me_libs = []
for library in Library.objects.all():
if (
not federation_utils.is_local(library.actor.fid)
or library.actor.name == "service"
):
continue
if (
hasattr(library, "playlist")
and library.playlist
and library.uploads.all().exists()
):
uploads = library.uploads.all()
with transaction.atomic():
insert_tracks_to_playlist(apps, library.playlist, uploads)
continue
if (
Channel.objects.filter(library=library).exists()
or Playlist.objects.filter(library=library).exists()
or not federation_utils.is_local(library.fid)
or library.name in ["me", "instance", "everyone", "followers"]
):
continue
try:
playlist, created = Playlist.objects.get_or_create(
name=library.name,
library=library,
actor=library.actor,
creation_date=library.creation_date,
privacy_level=library.privacy_level,
description=library.description,
defaults={
"uuid": (new_uuid := uuid.uuid4()),
"fid": federation_utils.full_url(
reverse(
"federation:music:playlists-detail",
kwargs={"uuid": new_uuid},
)
),
},
)
playlist.save()
if library.uploads.all().exists():
uploads = library.uploads.all()
with transaction.atomic():
insert_tracks_to_playlist(apps, playlist, uploads)
if library.privacy_level == "me":
to_me_libs.append(library)
if library.privacy_level == "instance":
to_instance_libs.append(library)
if library.privacy_level == "everyone":
to_public_libs.append(library)
library.privacy_level = "me"
library.playlist = playlist
library.save()
except Exception as e:
print(f"An error occurred during library.playlist creation : {e}")
continue
# migrate uploads to new built-in libraries
for actor in Actor.objects.all():
if (
not federation_utils.is_local(actor.fid)
or actor.name == "service"
or hasattr(actor, "channel")
):
continue
privacy_levels = ["me", "instance", "followers", "everyone"]
for privacy_level in privacy_levels:
build_in_lib, created = Library.objects.filter(
channel__isnull=True
).get_or_create(
actor=actor,
privacy_level=privacy_level,
name=privacy_level,
defaults={
"uuid": (new_uuid := uuid.uuid4()),
"fid": federation_utils.full_url(
reverse(
"federation:music:libraries-detail",
kwargs={"uuid": new_uuid},
)
),
},
)
for library in actor.libraries.filter(privacy_level=privacy_level):
library.uploads.all().update(library=build_in_lib)
library.delete()
if privacy_level == "everyone":
for lib in to_public_libs:
lib.uploads.all().update(library=build_in_lib)
if privacy_level == "instance":
for lib in to_instance_libs:
lib.uploads.all().update(library=build_in_lib)
if privacy_level == "me":
for lib in to_me_libs:
lib.uploads.all().update(library=build_in_lib)
def check_successful_migration(apps, schema_editor):
Actor = apps.get_model("federation", "Actor")
Playlist = apps.get_model("playlists", "Playlist")
for actor in Actor.objects.all():
not_build_in_libs = (
actor.playlists.count()
+ actor.libraries.filter(channel__isnull=False).count()
)
if actor.name == "service" or not federation_utils.is_local(actor.fid):
continue
elif actor.playlists.filter(library__isnull=True).count() > 0:
raise Exception(
f"Incoherent playlist database state : all local playlists do not have lib or too many libs"
)
elif (
not hasattr(actor, "channel")
and actor.libraries.count() - 4 != not_build_in_libs
or (hasattr(actor, "channel") and actor.libraries.count() > 1)
):
raise Exception(
f"Incoherent library database state, check for errors in log and share them to the funkwhale team. Migration was abordted to prevent data loss.\
actor libs = {actor.libraries.count()} and acto not built-in lib = {not_build_in_libs} \
and acto pl ={actor.playlists.count()} and not channel lib = {actor.libraries.filter(channel__isnull=False).count()} \
and actor.name = {actor.name}"
)
for playlist in Playlist.objects.all():
if not federation_utils.is_local(playlist.fid):
continue
elif playlist.library.privacy_level != "me":
raise Exception(
"Incoherent playlist database state, check for errors in log and share them to the funkwhale team. Migration was abordted to prevent data loss"
)
class Migration(migrations.Migration):
dependencies = [
("music", "0060_empty_for_test"),
("playlists", "0009_playlist_library"),
]
operations = [
migrations.AlterField(
model_name="library",
name="privacy_level",
field=models.CharField(
choices=[
("me", "Only me"),
("followers", "Me and my followers"),
("instance", "Everyone on my instance, and my followers"),
("everyone", "Everyone, including people on other instances"),
],
default="me",
max_length=25,
),
),
migrations.AddField(
model_name="upload",
name="playlist_libraries",
field=models.ManyToManyField(
blank=True,
related_name="playlist_uploads",
to="music.library",
),
),
migrations.RunPython(
migrate_libraries_to_playlist, reverse_code=migrations.RunPython.noop
),
migrations.RunPython(
check_successful_migration, reverse_code=migrations.RunPython.noop
),
]
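
After this migration every local, non-channel actor is expected to own exactly four built-in libraries, one per privacy level, which is the invariant check_successful_migration enforces with its actor.libraries.count() - 4 comparison. A small illustrative sketch of querying that invariant follows; the function and constant names are not part of the diff.

# Illustrative only: the per-actor built-in libraries created by
# migrate_libraries_to_playlist, keyed by privacy level.
BUILT_IN_NAMES = ["me", "instance", "followers", "everyone"]

def built_in_libraries(actor):
    return {
        lib.name: lib
        for lib in actor.libraries.filter(channel__isnull=True, name__in=BUILT_IN_NAMES)
    }

# Expected after the migration, for local actors without a channel:
# len(built_in_libraries(actor)) == 4
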
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("music", "0061_migrate_libraries_to_playlist"),
]
operations = [
migrations.RemoveField(
model_name="library",
name="description",
),
migrations.RemoveField(
model_name="library",
name="followers_url",
),
]
# Generated by Django 4.2.9 on 2024-12-21 20:02
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("music", "0062_del_lib_description"),
]
operations = [
migrations.AddField(
model_name="upload",
name="third_party_provider",
field=models.CharField(blank=True, max_length=100, null=True),
),
migrations.AlterField(
model_name="uploadversion",
name="mimetype",
field=models.CharField(
choices=[
("audio/mp3", "mp3"),
("audio/mpeg3", "mp3"),
("audio/x-mp3", "mp3"),
("audio/mpeg", "mp3"),
("video/ogg", "ogg"),
("audio/ogg", "ogg"),
("audio/opus", "opus"),
("audio/x-m4a", "aac"),
("audio/x-m4a", "m4a"),
("audio/m4a", "m4a"),
("audio/x-flac", "flac"),
("audio/flac", "flac"),
("audio/aiff", "aif"),
("audio/x-aiff", "aif"),
("audio/aiff", "aiff"),
("audio/x-aiff", "aiff"),
],
max_length=50,
),
),
]
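
The migrations above rely on utils.get_extension_to_mimetype_dict(), which is not included in this diff. For orientation only, here is a plausible shape of that helper, derived from the UploadVersion.mimetype choices just above; the names are illustrative.

# Orientation only: invert (mimetype, extension) choices into extension -> [mimetypes].
MIMETYPE_CHOICES = [
    ("audio/mp3", "mp3"), ("audio/mpeg3", "mp3"), ("audio/x-mp3", "mp3"), ("audio/mpeg", "mp3"),
    ("video/ogg", "ogg"), ("audio/ogg", "ogg"),
    ("audio/opus", "opus"),
    ("audio/x-m4a", "aac"), ("audio/x-m4a", "m4a"), ("audio/m4a", "m4a"),
    ("audio/x-flac", "flac"), ("audio/flac", "flac"),
    ("audio/aiff", "aif"), ("audio/x-aiff", "aif"), ("audio/aiff", "aiff"), ("audio/x-aiff", "aiff"),
]

def get_extension_to_mimetype_dict():
    mapping = {}
    for mimetype, extension in MIMETYPE_CHOICES:
        mapping.setdefault(extension, []).append(mimetype)
    return mapping

# e.g. get_extension_to_mimetype_dict()["mp3"]
# -> ["audio/mp3", "audio/mpeg3", "audio/x-mp3", "audio/mpeg"]
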
import datetime import datetime
import itertools
import logging
import os import os
import shutil
import tempfile import tempfile
import urllib.parse
import uuid import uuid
from random import randint
import arrow import arrow
import markdown import slugify
from django.conf import settings from django.conf import settings
from django.core.files import File from django.contrib.contenttypes.fields import GenericRelation
from django.contrib.postgres.indexes import GinIndex
from django.contrib.postgres.search import SearchVectorField
from django.core.exceptions import ObjectDoesNotExist
from django.core.files.base import ContentFile from django.core.files.base import ContentFile
from django.db import models from django.core.serializers.json import DjangoJSONEncoder
from django.db.models.signals import post_save from django.db import models, transaction
from django.db.models import Count, JSONField, Max, Min, Prefetch
from django.db.models.expressions import OuterRef, Subquery
from django.db.models.query_utils import Q
from django.db.models.signals import post_save, pre_save
from django.dispatch import receiver from django.dispatch import receiver
from django.urls import reverse from django.urls import reverse
from django.utils import timezone from django.utils import timezone
from taggit.managers import TaggableManager
from versatileimagefield.fields import VersatileImageField
from funkwhale_api import downloader, musicbrainz from funkwhale_api import musicbrainz
from funkwhale_api.common import fields
from funkwhale_api.common import models as common_models
from funkwhale_api.common import preferences, session
from funkwhale_api.common import utils as common_utils
from funkwhale_api.federation import models as federation_models
from funkwhale_api.federation import utils as federation_utils from funkwhale_api.federation import utils as federation_utils
from funkwhale_api.tags import models as tags_models
from . import importers, metadata, utils from . import importers, metadata, utils
logger = logging.getLogger(__name__)
ARTIST_CONTENT_CATEGORY_CHOICES = [
("music", "music"),
("podcast", "podcast"),
("other", "other"),
]
def empty_dict():
return {}
class APIModelMixin(models.Model): class APIModelMixin(models.Model):
fid = models.URLField(unique=True, max_length=500, db_index=True, null=True)
mbid = models.UUIDField(unique=True, db_index=True, null=True, blank=True) mbid = models.UUIDField(unique=True, db_index=True, null=True, blank=True)
uuid = models.UUIDField(unique=True, db_index=True, default=uuid.uuid4) uuid = models.UUIDField(unique=True, db_index=True, default=uuid.uuid4)
from_activity = models.ForeignKey(
"federation.Activity", null=True, blank=True, on_delete=models.SET_NULL
)
api_includes = [] api_includes = []
creation_date = models.DateTimeField(default=timezone.now) creation_date = models.DateTimeField(default=timezone.now, db_index=True)
import_hooks = [] import_hooks = []
body_text = SearchVectorField(blank=True)
class Meta: class Meta:
abstract = True abstract = True
ordering = ["-creation_date"] ordering = ["-creation_date"]
indexes = [
GinIndex(fields=["body_text"]),
]
@classmethod @classmethod
def get_or_create_from_api(cls, mbid): def get_or_create_from_api(cls, mbid):
...@@ -53,9 +87,7 @@ class APIModelMixin(models.Model): ...@@ -53,9 +87,7 @@ class APIModelMixin(models.Model):
cls.musicbrainz_model cls.musicbrainz_model
] ]
else: else:
raw_data = cls.api.search(**kwargs)[ raw_data = cls.api.search(**kwargs)[f"{cls.musicbrainz_model}-list"][0]
"{0}-list".format(cls.musicbrainz_model)
][0]
cleaned_data = cls.clean_musicbrainz_data(raw_data) cleaned_data = cls.clean_musicbrainz_data(raw_data)
return importers.load(cls, cleaned_data, raw_data, cls.import_hooks) return importers.load(cls, cleaned_data, raw_data, cls.import_hooks)
...@@ -78,53 +110,192 @@ class APIModelMixin(models.Model): ...@@ -78,53 +110,192 @@ class APIModelMixin(models.Model):
self.musicbrainz_model, self.mbid self.musicbrainz_model, self.mbid
) )
def get_federation_id(self):
if self.fid:
return self.fid
return federation_utils.full_url(
reverse(
f"federation:music:{self.federation_namespace}-detail",
kwargs={"uuid": self.uuid},
)
)
def save(self, **kwargs):
if not self.pk and not self.fid:
self.fid = self.get_federation_id()
return super().save(**kwargs)
@property
def is_local(self) -> bool:
return federation_utils.is_local(self.fid)
@property
def domain_name(self):
if not self.fid:
return
parsed = urllib.parse.urlparse(self.fid)
return parsed.hostname
def get_tags(self):
return list(sorted(self.tagged_items.values_list("tag__name", flat=True)))
class License(models.Model):
code = models.CharField(primary_key=True, max_length=100)
url = models.URLField(max_length=500)
# if true, license is a copyleft license, meaning that derivative
# work must be shared under the same license
copyleft = models.BooleanField()
# if true, commercial use of the work is allowed
commercial = models.BooleanField()
# if true, attribution to the original author is required when reusing
# the work
attribution = models.BooleanField()
# if true, derivative work are allowed
derivative = models.BooleanField()
# if true, redistribution of the work is allowed
redistribute = models.BooleanField()
class ArtistQuerySet(models.QuerySet): @property
def conf(self):
from . import licenses
for row in licenses.LICENSES:
if self.code == row["code"]:
return row
logger.warning("%s do not match any registered license", self.code)
class ArtistQuerySet(common_models.LocalFromFidQuerySet, models.QuerySet):
def with_albums_count(self): def with_albums_count(self):
return self.annotate(_albums_count=models.Count("albums")) return self.annotate(_albums_count=models.Count("artist_credit__albums"))
def with_albums(self): def with_albums(self):
return self.prefetch_related( return self.prefetch_related(
models.Prefetch("albums", queryset=Album.objects.with_tracks_count()) models.Prefetch(
"artist_credit__albums",
queryset=Album.objects.with_tracks_count().select_related(
"attachment_cover", "attributed_to"
),
)
)
def annotate_playable_by_actor(self, actor):
tracks = (
Upload.objects.playable_by(actor)
.filter(track__artist_credit__artist=models.OuterRef("id"))
.order_by("id")
.values("id")[:1]
) )
subquery = models.Subquery(tracks)
return self.annotate(is_playable_by_actor=subquery)
def playable_by(self, actor, include=True):
tracks = Track.objects.playable_by(actor)
matches = self.filter(
pk__in=tracks.values("artist_credit__artist")
).values_list("pk")
if include:
return self.filter(pk__in=matches)
else:
return self.exclude(pk__in=matches)
class Artist(APIModelMixin): class Artist(APIModelMixin):
name = models.CharField(max_length=255) name = models.TextField()
federation_namespace = "artists"
musicbrainz_model = "artist" musicbrainz_model = "artist"
musicbrainz_mapping = { musicbrainz_mapping = {
"mbid": {"musicbrainz_field_name": "id"}, "mbid": {"musicbrainz_field_name": "id"},
"name": {"musicbrainz_field_name": "name"}, "name": {"musicbrainz_field_name": "name"},
} }
# Music entities are attributed to actors, to validate that updates occur
# from an authorized account. On top of that, we consider the instance actor
# can update anything under its own domain
attributed_to = models.ForeignKey(
"federation.Actor",
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name="attributed_artists",
)
tagged_items = GenericRelation(tags_models.TaggedItem)
fetches = GenericRelation(
"federation.Fetch",
content_type_field="object_content_type",
object_id_field="object_id",
)
description = models.ForeignKey(
"common.Content", null=True, blank=True, on_delete=models.SET_NULL
)
attachment_cover = models.ForeignKey(
"common.Attachment",
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name="covered_artist",
)
content_category = models.CharField(
max_length=30,
db_index=True,
default="music",
choices=ARTIST_CONTENT_CATEGORY_CHOICES,
null=False,
)
modification_date = models.DateTimeField(default=timezone.now, db_index=True)
api = musicbrainz.api.artists api = musicbrainz.api.artists
objects = ArtistQuerySet.as_manager() objects = ArtistQuerySet.as_manager()
def __str__(self): def __str__(self):
return self.name return self.name
@property def get_absolute_url(self):
def tags(self): return f"/library/artists/{self.pk}"
t = []
for album in self.albums.all(): def get_moderation_url(self):
for tag in album.tags: return f"/manage/library/artists/{self.pk}"
t.append(tag)
return set(t)
@classmethod @classmethod
def get_or_create_from_name(cls, name, **kwargs): def get_or_create_from_name(cls, name, **kwargs):
kwargs.update({"name": name}) kwargs.update({"name": name})
return cls.objects.get_or_create(name__iexact=name, defaults=kwargs) return cls.objects.get_or_create(name__iexact=name, defaults=kwargs)
@property
def cover(self):
return self.attachment_cover
def import_artist(v): def get_channel(self):
a = Artist.get_or_create_from_api(mbid=v[0]["artist"]["id"])[0] try:
return a return self.channel
except ObjectDoesNotExist:
return None
def import_artist_credit(v):
artists_credits = []
for i, ac in enumerate(v):
artist, create = Artist.get_or_create_from_api(mbid=ac["artist"]["id"])
if "joinphrase" in ac["artist"]:
joinphrase = ac["artist"]["joinphrase"]
elif i < len(v) - 1:
joinphrase = preferences.get("music__default_join_phrase")
else:
joinphrase = ""
artist_credit, created = ArtistCredit.objects.get_or_create(
artist=artist,
credit=ac["artist"]["name"],
index=i,
joinphrase=joinphrase,
)
artists_credits.append(artist_credit)
return artists_credits
def parse_date(v): def parse_date(v):
if len(v) == 4:
return datetime.date(int(v), 1, 1)
d = arrow.get(v).date() d = arrow.get(v).date()
return d return d
...@@ -137,24 +308,115 @@ def import_tracks(instance, cleaned_data, raw_data): ...@@ -137,24 +308,115 @@ def import_tracks(instance, cleaned_data, raw_data):
importers.load(Track, track_cleaned_data, track_data, Track.import_hooks) importers.load(Track, track_cleaned_data, track_data, Track.import_hooks)
class AlbumQuerySet(models.QuerySet): class ArtistCreditQuerySet(common_models.LocalFromFidQuerySet, models.QuerySet):
def albums(self):
albums_ids = self.prefetch_related("albums").values_list("albums")
return Album.objects.filter(id__in=albums_ids)
class ArtistCredit(APIModelMixin):
artist = models.ForeignKey(
Artist, related_name="artist_credit", on_delete=models.CASCADE
)
credit = models.CharField(
null=True,
blank=True,
max_length=500,
)
joinphrase = models.CharField(
null=True,
blank=True,
max_length=250,
)
index = models.IntegerField(
null=True,
blank=True,
)
federation_namespace = "artistcredit"
objects = ArtistCreditQuerySet.as_manager()
class Meta:
ordering = ["index", "credit"]
class AlbumQuerySet(common_models.LocalFromFidQuerySet, models.QuerySet):
def with_tracks_count(self): def with_tracks_count(self):
return self.annotate(_tracks_count=models.Count("tracks")) return self.annotate(_tracks_count=models.Count("tracks"))
def annotate_playable_by_actor(self, actor):
tracks = (
Upload.objects.playable_by(actor)
.filter(track__artist_credit__albums=models.OuterRef("id"))
.order_by("id")
.values("id")[:1]
)
subquery = models.Subquery(tracks)
return self.annotate(is_playable_by_actor=subquery)
def playable_by(self, actor, include=True):
tracks = Track.objects.playable_by(actor)
matches = self.filter(pk__in=tracks.values("album_id")).values_list("pk")
if include:
return self.filter(pk__in=matches)
else:
return self.exclude(pk__in=matches)
def with_duration(self):
# takes one upload per track
subquery = Subquery(
Upload.objects.filter(track_id=OuterRef("tracks"))
.order_by("id")
.values("id")[:1]
)
return self.annotate(
duration=models.Sum(
"tracks__uploads__duration",
filter=Q(tracks__uploads=subquery),
)
)
class Album(APIModelMixin): class Album(APIModelMixin):
title = models.CharField(max_length=255) title = models.TextField()
artist = models.ForeignKey(Artist, related_name="albums", on_delete=models.CASCADE) artist_credit = models.ManyToManyField(ArtistCredit, related_name="albums")
release_date = models.DateField(null=True) release_date = models.DateField(null=True, blank=True, db_index=True)
release_group_id = models.UUIDField(null=True, blank=True) release_group_id = models.UUIDField(null=True, blank=True)
cover = VersatileImageField( attachment_cover = models.ForeignKey(
upload_to="albums/covers/%Y/%m/%d", null=True, blank=True "common.Attachment",
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name="covered_album",
) )
TYPE_CHOICES = (("album", "Album"),) TYPE_CHOICES = (("album", "Album"),)
type = models.CharField(choices=TYPE_CHOICES, max_length=30, default="album") type = models.CharField(choices=TYPE_CHOICES, max_length=30, default="album")
# Music entities are attributed to actors, to validate that updates occur
# from an authorized account. On top of that, we consider the instance actor
# can update anything under its own domain
attributed_to = models.ForeignKey(
"federation.Actor",
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name="attributed_albums",
)
tagged_items = GenericRelation(tags_models.TaggedItem)
fetches = GenericRelation(
"federation.Fetch",
content_type_field="object_content_type",
object_id_field="object_id",
)
description = models.ForeignKey(
"common.Content", null=True, blank=True, on_delete=models.SET_NULL
)
api_includes = ["artist-credits", "recordings", "media", "release-groups"] api_includes = ["artist-credits", "recordings", "media", "release-groups"]
api = musicbrainz.api.releases api = musicbrainz.api.releases
federation_namespace = "albums"
musicbrainz_model = "release" musicbrainz_model = "release"
musicbrainz_mapping = { musicbrainz_mapping = {
"mbid": {"musicbrainz_field_name": "id"}, "mbid": {"musicbrainz_field_name": "id"},
...@@ -169,41 +431,38 @@ class Album(APIModelMixin): ...@@ -169,41 +431,38 @@ class Album(APIModelMixin):
"title": {"musicbrainz_field_name": "title"}, "title": {"musicbrainz_field_name": "title"},
"release_date": {"musicbrainz_field_name": "date", "converter": parse_date}, "release_date": {"musicbrainz_field_name": "date", "converter": parse_date},
"type": {"musicbrainz_field_name": "type", "converter": lambda v: v.lower()}, "type": {"musicbrainz_field_name": "type", "converter": lambda v: v.lower()},
"artist": { "artist_credit": {
"musicbrainz_field_name": "artist-credit", "musicbrainz_field_name": "artist-credit",
"converter": import_artist, "converter": import_artist_credit,
}, },
} }
objects = AlbumQuerySet.as_manager() objects = AlbumQuerySet.as_manager()
def get_image(self, data=None): @property
if data: def cover(self):
f = ContentFile(data["content"]) return self.attachment_cover
extensions = {"image/jpeg": "jpg", "image/png": "png", "image/gif": "gif"}
extension = extensions.get(data["mimetype"], "jpg")
self.cover.save("{}.{}".format(self.uuid, extension), f)
else:
image_data = musicbrainz.api.images.get_front(str(self.mbid))
f = ContentFile(image_data)
self.cover.save("{0}.jpg".format(self.mbid), f)
return self.cover.file
def __str__(self): def __str__(self):
return self.title return self.title
@property def get_absolute_url(self):
def tags(self): return f"/library/albums/{self.pk}"
t = []
for track in self.tracks.all(): def get_moderation_url(self):
for tag in track.tags.all(): return f"/manage/library/albums/{self.pk}"
t.append(tag)
return set(t)
@classmethod @classmethod
def get_or_create_from_title(cls, title, **kwargs): def get_or_create_from_title(cls, title, **kwargs):
kwargs.update({"title": title}) kwargs.update({"title": title})
return cls.objects.get_or_create(title__iexact=title, defaults=kwargs) return cls.objects.get_or_create(title__iexact=title, defaults=kwargs)
@property
def get_artist_credit_string(self):
return utils.get_artist_credit_string(self)
def get_artists_list(self):
return [ac.artist for ac in self.artist_credit.all()]
def import_tags(instance, cleaned_data, raw_data): def import_tags(instance, cleaned_data, raw_data):
MINIMUM_COUNT = 2 MINIMUM_COUNT = 2
...@@ -215,7 +474,8 @@ def import_tags(instance, cleaned_data, raw_data): ...@@ -215,7 +474,8 @@ def import_tags(instance, cleaned_data, raw_data):
except ValueError: except ValueError:
continue continue
tags_to_add.append(tag_data["name"]) tags_to_add.append(tag_data["name"])
instance.tags.add(*tags_to_add)
tags_models.add_tags(instance, *tags_to_add)
def import_album(v): def import_album(v):
...@@ -223,76 +483,75 @@ def import_album(v): ...@@ -223,76 +483,75 @@ def import_album(v):
return a return a
def link_recordings(instance, cleaned_data, raw_data): class TrackQuerySet(common_models.LocalFromFidQuerySet, models.QuerySet):
tracks = [r["target"] for r in raw_data["recording-relation-list"]] def for_nested_serialization(self):
Track.objects.filter(mbid__in=tracks).update(work=instance) return self.prefetch_related(
"artist_credit",
Prefetch(
def import_lyrics(instance, cleaned_data, raw_data): "album",
try: queryset=Album.objects.prefetch_related(
url = [ "artist_credit", "attachment_cover"
url_data ).annotate(_prefetched_tracks_count=Count("tracks")),
for url_data in raw_data["url-relation-list"] ),
if url_data["type"] == "lyrics" )
][0]["target"]
except (IndexError, KeyError):
return
l, _ = Lyrics.objects.get_or_create(work=instance, url=url)
return l
def annotate_playable_by_actor(self, actor):
files = (
Upload.objects.playable_by(actor)
.filter(track=models.OuterRef("id"))
.order_by("id")
.values("id")[:1]
)
subquery = models.Subquery(files)
return self.annotate(is_playable_by_actor=subquery)
class Work(APIModelMixin): def playable_by(self, actor, include=True):
language = models.CharField(max_length=20) if settings.MUSIC_USE_DENORMALIZATION:
nature = models.CharField(max_length=50) if actor is not None:
title = models.CharField(max_length=255) query = models.Q(actor=None) | models.Q(actor=actor)
else:
query = models.Q(actor=None, internal=False)
if not include:
query = ~query
return self.filter(pk__in=TrackActor.objects.filter(query).values("track"))
files = Upload.objects.playable_by(actor, include)
matches = self.filter(uploads__in=files).values_list("pk")
if include:
return self.filter(pk__in=matches)
else:
return self.exclude(pk__in=matches)
api = musicbrainz.api.works def with_playable_uploads(self, actor):
api_includes = ["url-rels", "recording-rels"] uploads = Upload.objects.playable_by(actor)
musicbrainz_model = "work" return self.prefetch_related(
musicbrainz_mapping = { models.Prefetch("uploads", queryset=uploads, to_attr="playable_uploads")
"mbid": {"musicbrainz_field_name": "id"}, )
"title": {"musicbrainz_field_name": "title"},
"language": {"musicbrainz_field_name": "language"},
"nature": {"musicbrainz_field_name": "type", "converter": lambda v: v.lower()},
}
import_hooks = [import_lyrics, link_recordings]
def fetch_lyrics(self): def order_for_album(self):
lyric = self.lyrics.first() """
if lyric: Order by disc number then position
return lyric """
data = self.api.get(self.mbid, includes=["url-rels"])["work"] return self.order_by("disc_number", "position", "title")
lyric = import_lyrics(self, {}, data)
return lyric def random(self, batch_size):
bounds = self.aggregate(min_id=Min("id"), max_id=Max("id"))
min_id, max_id = bounds["min_id"], bounds["max_id"]
if min_id is None or max_id is None:
return self.none()
class Lyrics(models.Model): tries = 0
uuid = models.UUIDField(unique=True, db_index=True, default=uuid.uuid4) max_tries = 10
work = models.ForeignKey( found_ids = set()
Work, related_name="lyrics", null=True, blank=True, on_delete=models.CASCADE
)
url = models.URLField(unique=True)
content = models.TextField(null=True, blank=True)
@property while len(found_ids) < batch_size and tries < max_tries:
def content_rendered(self): candidate_ids = [randint(min_id, max_id) for _ in range(batch_size * 2)]
return markdown.markdown( found_ids.update(
self.content, self.filter(id__in=candidate_ids).values_list("id", flat=True)
safe_mode=True,
enable_attributes=False,
extensions=["markdown.extensions.nl2br"],
) )
tries += 1
return self.filter(id__in=list(found_ids)[:batch_size]).order_by("?")
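
A reviewer's note on the random() method above (commentary added here, not part of the diff): drawing candidate primary keys between the table's min and max id avoids a full ORDER BY RANDOM() scan, at the cost of some bias when ids are sparse, and the final order_by("?") only shuffles the small batch that was found. Usage looks like:

# Usage sketch: sample up to 50 playable tracks without randomizing the whole table.
random_tracks = Track.objects.playable_by(actor).random(batch_size=50)
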
class TrackQuerySet(models.QuerySet):
def for_nested_serialization(self):
return (
self.select_related()
.select_related("album__artist", "artist")
.prefetch_related("files")
)
def get_artist(release_list): def get_artist(release_list):
...@@ -302,77 +561,113 @@ def get_artist(release_list): ...@@ -302,77 +561,113 @@ def get_artist(release_list):
class Track(APIModelMixin): class Track(APIModelMixin):
title = models.CharField(max_length=255) mbid = models.UUIDField(db_index=True, null=True, blank=True)
artist = models.ForeignKey(Artist, related_name="tracks", on_delete=models.CASCADE) title = models.TextField()
artist_credit = models.ManyToManyField(ArtistCredit, related_name="tracks")
disc_number = models.PositiveIntegerField(null=True, blank=True)
position = models.PositiveIntegerField(null=True, blank=True) position = models.PositiveIntegerField(null=True, blank=True)
album = models.ForeignKey( album = models.ForeignKey(
Album, related_name="tracks", null=True, blank=True, on_delete=models.CASCADE Album, related_name="tracks", null=True, blank=True, on_delete=models.CASCADE
) )
work = models.ForeignKey( license = models.ForeignKey(
Work, related_name="tracks", null=True, blank=True, on_delete=models.CASCADE License,
null=True,
blank=True,
on_delete=models.DO_NOTHING,
related_name="tracks",
) )
# Music entities are attributed to actors, to validate that updates occur
# from an authorized account. On top of that, we consider the instance actor
# can update anything under its own domain
attributed_to = models.ForeignKey(
"federation.Actor",
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name="attributed_tracks",
)
copyright = models.TextField(null=True, blank=True)
description = models.ForeignKey(
"common.Content", null=True, blank=True, on_delete=models.SET_NULL
)
attachment_cover = models.ForeignKey(
"common.Attachment",
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name="covered_track",
)
downloads_count = models.PositiveIntegerField(default=0)
federation_namespace = "tracks"
musicbrainz_model = "recording" musicbrainz_model = "recording"
api = musicbrainz.api.recordings api = musicbrainz.api.recordings
api_includes = ["artist-credits", "releases", "media", "tags", "work-rels"] api_includes = ["artist-credits", "releases", "media", "tags"]
musicbrainz_mapping = { musicbrainz_mapping = {
"mbid": {"musicbrainz_field_name": "id"}, "mbid": {"musicbrainz_field_name": "id"},
"title": {"musicbrainz_field_name": "title"}, "title": {"musicbrainz_field_name": "title"},
"artist": { "artist_credit": {
# we use the artist from the release to avoid #237 "musicbrainz_field_name": "artist-credit",
"musicbrainz_field_name": "release-list", "converter": import_artist_credit,
"converter": get_artist,
}, },
"album": {"musicbrainz_field_name": "release-list", "converter": import_album}, "album": {"musicbrainz_field_name": "release-list", "converter": import_album},
} }
import_hooks = [import_tags] import_hooks = [import_tags]
objects = TrackQuerySet.as_manager() objects = TrackQuerySet.as_manager()
tags = TaggableManager(blank=True) tagged_items = GenericRelation(tags_models.TaggedItem)
fetches = GenericRelation(
"federation.Fetch",
content_type_field="object_content_type",
object_id_field="object_id",
)
class Meta: class Meta:
ordering = ["album", "position"] ordering = ["album", "disc_number", "position"]
indexes = [
GinIndex(fields=["body_text"]),
]
def __str__(self): def __str__(self):
return self.title return self.title
def save(self, **kwargs): def get_absolute_url(self):
try: return f"/library/tracks/{self.pk}"
self.artist
except Artist.DoesNotExist: def get_moderation_url(self):
self.artist = self.album.artist return f"/manage/library/tracks/{self.pk}"
super().save(**kwargs)
@property
def get_work(self): def get_artist_credit_string(self):
if self.work: return utils.get_artist_credit_string(self)
return self.work
data = self.api.get(self.mbid, includes=["work-rels"])
try:
work_data = data["recording"]["work-relation-list"][0]["work"]
except (IndexError, KeyError):
return
work, _ = Work.get_or_create_from_api(mbid=work_data["id"])
return work
def get_lyrics_url(self): def get_artists_list(self):
return reverse("api:v1:tracks-lyrics", kwargs={"pk": self.pk}) return [ac.artist for ac in self.artist_credit.all()]
@property @property
def full_name(self): def full_name(self):
try: try:
return "{} - {} - {}".format(self.artist.name, self.album.title, self.title) return (
f"{self.get_artist_credit_string} - {self.album.title} - {self.title}"
)
except AttributeError: except AttributeError:
return "{} - {}".format(self.artist.name, self.title) return f"{self.get_artist_credit_string} - {self.title}"
@property
def cover(self):
return self.attachment_cover
def get_activity_url(self): def get_activity_url(self):
if self.mbid: if self.mbid:
return "https://musicbrainz.org/recording/{}".format(self.mbid) return f"https://musicbrainz.org/recording/{self.mbid}"
return settings.FUNKWHALE_URL + "/tracks/{}".format(self.pk) return settings.FUNKWHALE_URL + f"/tracks/{self.pk}"
@classmethod @classmethod
def get_or_create_from_title(cls, title, **kwargs): def get_or_create_from_title(cls, title, **kwargs):
kwargs.update({"title": title}) kwargs.update({"title": title})
return cls.objects.get_or_create(title__iexact=title, defaults=kwargs) return cls.objects.get_or_create(title__iexact=title, defaults=kwargs)
# Not used anymore; previously allowed a performance increase when importing tracks using MBIDs.
# In its current state it will not work, since it assumes track_data["recording"]["artist-credit"]
# can contain a joinphrase, which is not the case. Needs to be updated.
@classmethod @classmethod
def get_or_create_from_release(cls, release_mbid, mbid): def get_or_create_from_release(cls, release_mbid, mbid):
release_mbid = str(release_mbid) release_mbid = str(release_mbid)
...@@ -389,74 +684,267 @@ class Track(APIModelMixin): ...@@ -389,74 +684,267 @@ class Track(APIModelMixin):
tracks = [t for m in data["release"]["medium-list"] for t in m["track-list"]] tracks = [t for m in data["release"]["medium-list"] for t in m["track-list"]]
track_data = None track_data = None
for track in tracks: for track in tracks:
if track["recording"]["id"] == mbid: if track["recording"]["id"] == str(mbid):
track_data = track track_data = track
break break
if not track_data: if not track_data:
raise ValueError("No track found matching this ID") raise ValueError("No track found matching this ID")
return cls.objects.update_or_create( artists_credits = []
for i, ac in enumerate(track_data["recording"]["artist-credit"]):
try:
ac_mbid = ac["artist"]["id"]
except TypeError:
# it's probably a string, like "feat.".
continue
track_artist = Artist.get_or_create_from_api(ac_mbid)[0]
if "joinphrase" not in ac:
joinphrase = ""
else:
joinphrase = ac["joinphrase"]
artist_credit, create = ArtistCredit.objects.get_or_create(
artist=track_artist,
credit=ac["artist"]["name"],
joinphrase=joinphrase,
index=i,
)
artists_credits.append(artist_credit)
if album.artist_credit.all() != artist_credit:
album.artist_credit.set(artists_credits)
track = cls.objects.update_or_create(
mbid=mbid, mbid=mbid,
defaults={ defaults={
"position": int(track["position"]), "position": int(track["position"]),
"title": track["recording"]["title"], "title": track["recording"]["title"],
"album": album, "album": album,
"artist": album.artist,
}, },
) )
track[0].artist_credit.set(artists_credits)
return track
@property
def listen_url(self) -> str:
# Not using reverse because this is slow
return f"/api/v2/listen/{self.uuid}/"
@property
def local_license(self):
"""
Since license primary keys are strings, and we can get the data
from our hardcoded licenses.LICENSES list, there is no need
for extra SQL joins / queries.
"""
from . import licenses
return licenses.LICENSES_BY_ID.get(self.license_id)
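
local_license above depends on licenses.LICENSES_BY_ID, which does not appear in this diff. A minimal sketch of how such an index is typically derived from the LICENSES list used by License.conf follows; it is illustrative and assumes each entry carries a "code" key.

# Illustrative only: in-memory index so license lookups need no SQL join.
from funkwhale_api.music.licenses import LICENSES

LICENSES_BY_ID = {row["code"]: row for row in LICENSES}
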
class UploadQuerySet(common_models.NullsLastQuerySet):
def in_place(self, include=True):
query = models.Q(source__startswith="file://") & (
models.Q(audio_file="") | models.Q(audio_file=None)
)
if not include:
query = ~query
return self.filter(query)
def playable_by(self, actor, include=True):
libraries = Library.objects.viewable_by(actor)
if include:
return self.filter(
Q(library__in=libraries) | Q(playlist_libraries__in=libraries),
import_status__in=["finished", "skipped"],
)
return self.exclude(
Q(library__in=libraries) | Q(playlist_libraries__in=libraries),
import_status__in=["finished", "skipped"],
)
def local(self, include=True):
query = models.Q(library__actor__domain_id=settings.FEDERATION_HOSTNAME)
if not include:
query = ~query
return self.filter(query)
def for_federation(self):
return self.filter(import_status="finished", mimetype__startswith="audio/")
def with_file(self):
return self.exclude(audio_file=None).exclude(audio_file="")
TRACK_FILE_IMPORT_STATUS_CHOICES = (
("draft", "Draft"),
("pending", "Pending"),
("finished", "Finished"),
("errored", "Errored"),
("skipped", "Skipped"),
)
def get_file_path(instance, filename):
# Convert unicode characters in name to ASCII characters.
filename = slugify.slugify(filename, ok=slugify.SLUG_OK + ".", only_ascii=True)
if isinstance(instance, UploadVersion):
return common_utils.ChunkedPath("transcoded")(instance, filename)
if instance.library.actor.get_user():
return common_utils.ChunkedPath("tracks")(instance, filename)
elif instance.third_party_provider:
return common_utils.ChunkedPath("third_party_tracks")(instance, filename)
else:
# we cache remote tracks in a different directory
return common_utils.ChunkedPath("federation_cache/tracks")(instance, filename)
def get_import_reference():
return str(uuid.uuid4())
quality_choices = [(0, "low"), (1, "medium"), (2, "high"), (3, "very_high")]
class TrackFile(models.Model): class Upload(models.Model):
fid = models.URLField(unique=True, max_length=500, null=True, blank=True)
uuid = models.UUIDField(unique=True, db_index=True, default=uuid.uuid4) uuid = models.UUIDField(unique=True, db_index=True, default=uuid.uuid4)
track = models.ForeignKey(Track, related_name="files", on_delete=models.CASCADE) track = models.ForeignKey(
audio_file = models.FileField(upload_to="tracks/%Y/%m/%d", max_length=255) Track, related_name="uploads", on_delete=models.CASCADE, null=True, blank=True
source = models.URLField(null=True, blank=True, max_length=500) )
creation_date = models.DateTimeField(default=timezone.now) audio_file = models.FileField(upload_to=get_file_path, max_length=255)
modification_date = models.DateTimeField(auto_now=True) source = models.CharField(
# URL validators are not flexible enough for our file:// and upload:// schemes
null=True,
blank=True,
max_length=500,
)
creation_date = models.DateTimeField(default=timezone.now, db_index=True)
modification_date = models.DateTimeField(default=timezone.now, null=True)
accessed_date = models.DateTimeField(null=True, blank=True) accessed_date = models.DateTimeField(null=True, blank=True)
duration = models.IntegerField(null=True, blank=True) duration = models.IntegerField(null=True, blank=True)
size = models.IntegerField(null=True, blank=True) size = models.IntegerField(null=True, blank=True)
bitrate = models.IntegerField(null=True, blank=True) bitrate = models.IntegerField(null=True, blank=True)
acoustid_track_id = models.UUIDField(null=True, blank=True) acoustid_track_id = models.UUIDField(null=True, blank=True)
mimetype = models.CharField(null=True, blank=True, max_length=200) mimetype = models.CharField(null=True, blank=True, max_length=200)
library = models.ForeignKey(
library_track = models.OneToOneField( "library",
"federation.LibraryTrack",
related_name="local_track_file",
on_delete=models.CASCADE,
null=True, null=True,
blank=True, blank=True,
related_name="uploads",
on_delete=models.CASCADE,
)
playlist_libraries = models.ManyToManyField(
"library",
blank=True,
related_name="playlist_uploads",
) )
def download_file(self): # metadata from federation
# import the track file, since there is not any metadata = JSONField(
# we create a tmp dir for the download default=empty_dict, max_length=50000, encoder=DjangoJSONEncoder, blank=True
tmp_dir = tempfile.mkdtemp()
data = downloader.download(self.source, target_directory=tmp_dir)
self.duration = data.get("duration", None)
self.audio_file.save(
os.path.basename(data["audio_file_path"]),
File(open(data["audio_file_path"], "rb")),
) )
shutil.rmtree(tmp_dir) import_date = models.DateTimeField(null=True, blank=True)
return self.audio_file # optional metadata provided during import
import_metadata = JSONField(
default=empty_dict, max_length=50000, encoder=DjangoJSONEncoder, blank=True
)
# status / error details for the import
import_status = models.CharField(
default="pending", choices=TRACK_FILE_IMPORT_STATUS_CHOICES, max_length=25
)
# a short reference provided by the client to group multiple files
# in the same import
import_reference = models.CharField(max_length=50, default=get_import_reference)
# optional metadata about import results (error messages, etc.)
import_details = JSONField(
default=empty_dict, max_length=50000, encoder=DjangoJSONEncoder, blank=True
)
from_activity = models.ForeignKey(
"federation.Activity", null=True, on_delete=models.SET_NULL, blank=True
)
downloads_count = models.PositiveIntegerField(default=0)
# stores checksums such as `sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`
checksum = models.CharField(max_length=100, db_index=True, null=True, blank=True)
def get_federation_url(self): quality = models.IntegerField(choices=quality_choices, default=1)
return federation_utils.full_url("/federation/music/file/{}".format(self.uuid))
third_party_provider = models.CharField(max_length=100, null=True, blank=True)
objects = UploadQuerySet.as_manager()
@property
def is_local(self) -> bool:
return federation_utils.is_local(self.fid)
@property @property
def path(self): def domain_name(self):
return reverse("api:v1:trackfiles-serve", kwargs={"pk": self.pk}) if not self.fid:
return
parsed = urllib.parse.urlparse(self.fid)
return parsed.hostname
def download_audio_from_remote(self, actor):
from funkwhale_api.federation import signing
if actor:
auth = signing.get_auth(actor.private_key, actor.private_key_id)
else:
auth = None
remote_response = session.get_session().get(
self.source,
auth=auth,
stream=True,
timeout=20,
headers={"Content-Type": "application/octet-stream"},
)
with remote_response as r:
remote_response.raise_for_status()
extension = utils.get_ext_from_type(self.mimetype)
title_parts = []
title_parts.append(self.track.title)
if self.track.album:
title_parts.append(self.track.album.title)
title_parts.append(self.track.get_artist_credit_string)
title = " - ".join(title_parts)
filename = f"{title}.{extension}"
tmp_file = tempfile.TemporaryFile()
for chunk in r.iter_content(chunk_size=512):
tmp_file.write(chunk)
self.audio_file.save(filename, tmp_file, save=False)
self.save(update_fields=["audio_file"])
def get_federation_id(self):
if self.fid:
return self.fid
return federation_utils.full_url(
reverse("federation:music:uploads-detail", kwargs={"uuid": self.uuid})
)
@property
def filename(self) -> str:
return f"{self.track.full_name}.{self.extension}"
@property
def extension(self):
try:
return utils.MIMETYPE_TO_EXTENSION[self.mimetype]
except KeyError:
pass
if self.audio_file:
return os.path.splitext(self.audio_file.name)[-1].replace(".", "", 1)
if self.in_place_path:
return os.path.splitext(self.in_place_path)[-1].replace(".", "", 1)
def get_file_size(self):
if self.audio_file:
@@ -465,37 +953,95 @@ class TrackFile(models.Model):
if self.source.startswith("file://"):
return os.path.getsize(self.source.replace("file://", "", 1))
def get_audio_file(self):
if self.audio_file:
return self.audio_file.open()
if self.source and self.source.startswith("file://"):
return open(self.source.replace("file://", "", 1), "rb")
def get_audio_file_path(self):
if self.audio_file:
return self.audio_file.path
if self.source and self.source.startswith("file://"):
return self.source.replace("file://", "", 1)
def get_audio_data(self):
audio_file = self.get_audio_file()
if not audio_file:
return
audio_data = utils.get_audio_file_data(audio_file)
if not audio_data:
return
return {
"duration": int(audio_data["length"]),
"bitrate": audio_data["bitrate"],
"size": self.get_file_size(),
}
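# get_quality() below maps mimetype + bitrate to a coarse 0-3 score: lossless
# formats (flac/aiff/aif) always score 3, lossy formats are bucketed against the
# per-format limits in bitrate_limits (which appear to be expressed in kbps), and
# anything above the highest bucket scores 3. Uploads with no bitrate and a lossy
# mimetype default to 1.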
def get_quality(self):
extension_to_mimetypes = utils.get_extension_to_mimetype_dict()
if not self.bitrate and self.mimetype not in list(
itertools.chain(
extension_to_mimetypes["aiff"],
extension_to_mimetypes["aif"],
extension_to_mimetypes["flac"],
)
):
return 1
bitrate_limits = {
"mp3": {192: 0, 256: 1, 320: 2},
"ogg": {96: 0, 192: 1, 256: 2},
"aac": {96: 0, 128: 1, 288: 2},
"m4a": {96: 0, 128: 1, 288: 2},
"opus": {
96: 0,
128: 1,
160: 2,
},
}
for ext in bitrate_limits:
if self.mimetype in extension_to_mimetypes[ext]:
for limit, quality in sorted(bitrate_limits[ext].items()):
if int(self.bitrate) <= limit:
return quality
# opus higher than 160
return 3
if self.mimetype in list(
itertools.chain(
extension_to_mimetypes["aiff"],
extension_to_mimetypes["aif"],
extension_to_mimetypes["flac"],
)
):
return 3
else:
return 1
def save(self, **kwargs):
if not self.mimetype:
if self.audio_file:
self.mimetype = utils.guess_mimetype(self.audio_file)
elif self.source and self.source.startswith("file://"):
self.mimetype = utils.guess_mimetype_from_name(self.source)
if not self.size and self.audio_file:
self.size = self.audio_file.size
if not self.checksum:
try:
audio_file = self.get_audio_file()
except FileNotFoundError:
pass
else:
if audio_file:
self.checksum = common_utils.get_file_hash(audio_file)
if not self.pk and not self.fid and self.library.actor.get_user():
self.fid = self.get_federation_id()
self.quality = self.get_quality()
return super().save(**kwargs)
def get_metadata(self):
@@ -504,6 +1050,148 @@ class TrackFile(models.Model):
return
return metadata.Metadata(audio_file)
@property
def listen_url(self) -> str:
return self.track.listen_url + f"?upload={self.uuid}"
def get_listen_url(self, to=None, download=True) -> str:
url = self.listen_url
if to:
url += f"&to={to}"
if not download:
url += "&download=false"
return url
@property
def listen_url_no_download(self):
# Not using reverse because this is slow
return self.listen_url + "&download=false"
def get_transcoded_version(self, format, max_bitrate=None):
if format:
mimetype = utils.EXTENSION_TO_MIMETYPE[format]
else:
mimetype = self.mimetype or "audio/mpeg"
format = utils.MIMETYPE_TO_EXTENSION[mimetype]
existing_versions = self.versions.filter(mimetype=mimetype)
if max_bitrate is not None:
# we don't want to transcode if a 320kbps version is available
# and we're requesting 300kbps
acceptable_max_bitrate = max_bitrate * 1.2
acceptable_min_bitrate = max_bitrate * 0.8
existing_versions = existing_versions.filter(
bitrate__gte=acceptable_min_bitrate, bitrate__lte=acceptable_max_bitrate
).order_by("-bitrate")
if existing_versions:
# we found an existing version, no need to transcode again
return existing_versions[0]
return self.create_transcoded_version(mimetype, format, bitrate=max_bitrate)
@transaction.atomic
def create_transcoded_version(self, mimetype, format, bitrate):
# we create the version with an empty file, then
# we'll write to it
f = ContentFile(b"")
bitrate = min(bitrate or 320000, self.bitrate or 320000)
version = self.versions.create(mimetype=mimetype, bitrate=bitrate, size=0)
# we keep the same name, but we update the extension
new_name = (
os.path.splitext(os.path.basename(self.audio_file.name))[0] + f".{format}"
)
version.audio_file.save(new_name, f)
utils.transcode_audio(
audio_file_path=self.get_audio_file_path(),
output_path=version.audio_file.path,
output_format=utils.MIMETYPE_TO_EXTENSION[mimetype],
bitrate=str(bitrate),
)
version.size = version.audio_file.size
version.save(update_fields=["size"])
return version
@property
def in_place_path(self):
if not self.source or not self.source.startswith("file://"):
return
return self.source.replace("file://", "", 1)
@property
def audio_file_path(self):
if not self.audio_file:
return None
try:
return self.audio_file.path
except NotImplementedError:
# external storage
return self.audio_file.name
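# get_all_tagged_items() below merges the tags attached to the track, its album
# and every credited artist into a single list sorted by tag name; empty artist
# querysets are filtered out before union() so the combined queryset stays valid.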
def get_all_tagged_items(self):
track_tags = self.track.tagged_items.all()
album_tags = (
self.track.album.tagged_items.all()
if self.track.album
else tags_models.TaggedItem.objects.none()
)
artist_tags = [
ac.artist.tagged_items.all() for ac in self.track.artist_credit.all()
]
non_empty_artist_tags = []
for qs in artist_tags:
if qs.exists():
non_empty_artist_tags.append(qs)
if non_empty_artist_tags:
final_qs = (track_tags | album_tags).union(*non_empty_artist_tags)
else:
final_qs = track_tags | album_tags
# this is needed to avoid a RuntimeError: generator raised StopIteration
final_list = [obj for obj in final_qs]
items = sorted(final_list, key=lambda x: x.tag.name if x.tag else "")
return items
MIMETYPE_CHOICES = [(mt, ext) for ext, mt in utils.AUDIO_EXTENSIONS_AND_MIMETYPE]
class UploadVersion(models.Model):
upload = models.ForeignKey(
Upload, related_name="versions", on_delete=models.CASCADE
)
mimetype = models.CharField(max_length=50, choices=MIMETYPE_CHOICES)
creation_date = models.DateTimeField(default=timezone.now)
accessed_date = models.DateTimeField(null=True, blank=True)
audio_file = models.FileField(upload_to=get_file_path, max_length=255)
bitrate = models.PositiveIntegerField()
size = models.IntegerField()
class Meta:
unique_together = ("upload", "mimetype", "bitrate")
@property
def filename(self) -> str:
try:
return (
self.upload.track.full_name
+ "."
+ utils.MIMETYPE_TO_EXTENSION[self.mimetype]
)
except KeyError:
return self.upload.filename
@property
def audio_file_path(self):
if not self.audio_file:
return None
try:
return self.audio_file.path
except NotImplementedError:
# external storage
return self.audio_file.name
IMPORT_STATUS_CHOICES = (
("pending", "Pending"),
@@ -541,6 +1229,13 @@ class ImportBatch(models.Model):
blank=True,
on_delete=models.SET_NULL,
)
library = models.ForeignKey(
"Library",
related_name="import_batches",
null=True,
blank=True,
on_delete=models.CASCADE,
)
class Meta:
ordering = ["-creation_date"]
@@ -559,10 +1254,8 @@ class ImportBatch(models.Model):
tasks.import_batch_notify_followers.delay(import_batch_id=self.pk)
def get_federation_id(self):
return federation_utils.full_url(f"/federation/music/import/batch/{self.uuid}")
class ImportJob(models.Model):
@@ -571,8 +1264,8 @@ class ImportJob(models.Model):
batch = models.ForeignKey(
ImportBatch, related_name="jobs", on_delete=models.CASCADE
)
upload = models.ForeignKey(
Upload, related_name="jobs", null=True, blank=True, on_delete=models.CASCADE
)
source = models.CharField(max_length=500)
mbid = models.UUIDField(editable=False, null=True, blank=True)
@@ -591,16 +1284,331 @@ class ImportJob(models.Model):
null=True,
blank=True,
)
audio_file_size = models.IntegerField(null=True, blank=True)
class Meta:
ordering = ("id",)
def save(self, **kwargs):
if self.audio_file and not self.audio_file_size:
self.audio_file_size = self.audio_file.size
return super().save(**kwargs)
class LibraryQuerySet(models.QuerySet):
def local(self, include=True):
query = models.Q(actor__domain_id=settings.FEDERATION_HOSTNAME)
if not include:
query = ~query
return self.filter(query)
def with_follows(self, actor):
return self.prefetch_related(
models.Prefetch(
"received_follows",
queryset=federation_models.LibraryFollow.objects.filter(actor=actor),
to_attr="_follows",
)
)
def viewable_by(self, actor):
from funkwhale_api.federation.models import Follow, LibraryFollow
if actor is None:
return self.filter(privacy_level="everyone")
me_query = models.Q(privacy_level__in=["me", "followers"], actor=actor)
instance_query = models.Q(privacy_level="instance", actor__domain=actor.domain)
followed_libraries = LibraryFollow.objects.filter(
actor=actor, approved=True
).values_list("target", flat=True)
followed_channels_libraries = (
Follow.objects.exclude(target__channel=None)
.filter(
actor=actor,
approved=True,
)
.values_list("target__channel__library", flat=True)
)
domains_reachable = federation_models.Domain.objects.filter(
reachable=True
) | federation_models.Domain.objects.filter(name=settings.FUNKWHALE_HOSTNAME)
# User follow
followed_actors = Follow.objects.filter(actor=actor, approved=True).values_list(
"target", flat=True
)
# service actors can access libraries if there are approved followers on their managed domains
if actor.managed_domains.exists():
remote_service_actors = Q(
privacy_level="followers",
received_follows__approved=True,
received_follows__actor__domain__in=actor.managed_domains.all(),
) | Q(
privacy_level="followers",
actor__received_follows__approved=True,
actor__received_follows__actor__domain__in=actor.managed_domains.all(),
)
else:
remote_service_actors = Q()
return self.filter(
me_query
| instance_query
| remote_service_actors
| models.Q(privacy_level="everyone")
| models.Q(pk__in=followed_libraries)
| models.Q(pk__in=followed_channels_libraries)
| models.Q(actor__in=followed_actors, privacy_level="followers")
& models.Q(actor__domain__in=domains_reachable)
)
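# In short, viewable_by(actor) returns: the actor's own "me"/"followers" libraries,
# "instance" libraries from the actor's domain, libraries (and channel libraries)
# the actor follows with an approved follow, everything marked "everyone", plus
# "followers" libraries of actors the actor follows, restricted to reachable
# domains. Service actors managing a domain also see "followers" libraries that
# have approved followers on that domain.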
class Library(federation_models.FederationMixin):
uuid = models.UUIDField(unique=True, db_index=True, default=uuid.uuid4)
actor = models.ForeignKey(
"federation.Actor", related_name="libraries", on_delete=models.CASCADE
)
creation_date = models.DateTimeField(default=timezone.now)
name = models.CharField(max_length=100)
privacy_level = models.CharField(
choices=fields.PRIVACY_LEVEL_CHOICES, default="me", max_length=25
)
uploads_count = models.PositiveIntegerField(default=0)
objects = LibraryQuerySet.as_manager()
def __str__(self):
return self.name
def get_moderation_url(self) -> str:
return f"/manage/library/libraries/{self.uuid}"
def get_federation_id(self) -> str:
return federation_utils.full_url(
reverse("federation:music:libraries-detail", kwargs={"uuid": self.uuid})
)
def get_absolute_url(self) -> str:
return f"/library/{self.uuid}"
def save(self, **kwargs):
if not self.pk and not self.fid and self.actor.is_local:
self.fid = self.get_federation_id()
self.followers_url = self.fid + "/followers"
return super().save(**kwargs)
def should_autoapprove_follow(self, actor) -> bool:
if self.privacy_level == "everyone":
return True
if self.privacy_level == "instance" and actor.get_user():
return True
return False
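# schedule_scan() below throttles scans: unless force=True, a new scan is only
# created when the latest non-errored scan is older than 24 hours, and the actual
# work is dispatched to the start_library_scan task once the transaction commits.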
def schedule_scan(self, actor, force=False):
latest_scan = (
self.scans.exclude(status="errored").order_by("-creation_date").first()
)
delay_between_scans = datetime.timedelta(seconds=3600 * 24)
now = timezone.now()
if (
not force
and latest_scan
and latest_scan.creation_date + delay_between_scans > now
):
return
scan = self.scans.create(total_files=self.uploads_count, actor=actor)
from . import tasks
common_utils.on_commit(tasks.start_library_scan.delay, library_scan_id=scan.pk)
return scan
def get_channel(self):
try:
return self.channel
except ObjectDoesNotExist:
return None
def latest_scan(self):
return self.scans.order_by("-creation_date").first()
SCAN_STATUS = [
("pending", "pending"),
("scanning", "scanning"),
("errored", "errored"),
("finished", "finished"),
]
class LibraryScan(models.Model):
actor = models.ForeignKey(
"federation.Actor", null=True, blank=True, on_delete=models.CASCADE
)
library = models.ForeignKey(Library, related_name="scans", on_delete=models.CASCADE)
total_files = models.PositiveIntegerField(default=0)
processed_files = models.PositiveIntegerField(default=0)
errored_files = models.PositiveIntegerField(default=0)
status = models.CharField(default="pending", max_length=25)
creation_date = models.DateTimeField(default=timezone.now)
modification_date = models.DateTimeField(null=True, blank=True)
class TrackActor(models.Model):
"""
Denormalization table to store all playable tracks for a given user
Empty user means the track is public or internal (cf internal flag too)
"""
id = models.BigAutoField(primary_key=True)
actor = models.ForeignKey(
"federation.Actor",
on_delete=models.CASCADE,
related_name="track_actor_items",
blank=True,
null=True,
)
track = models.ForeignKey(
Track, on_delete=models.CASCADE, related_name="track_actor_items"
)
upload = models.ForeignKey(
Upload, on_delete=models.CASCADE, related_name="track_actor_items"
)
internal = models.BooleanField(default=False, db_index=True)
class Meta:
unique_together = ("track", "actor", "internal", "upload")
@classmethod
def get_objs(cls, library, actor_ids, upload_and_track_ids):
upload_and_track_ids = upload_and_track_ids or library.uploads.filter(
import_status="finished", track__isnull=False
).values_list("id", "track")
objs = []
if library.privacy_level == "me":
if library.get_channel():
follow_queryset = library.channel.actor.received_follows
else:
follow_queryset = library.received_follows
follow_queryset = follow_queryset.filter(approved=True).exclude(
actor__user__isnull=True
)
if actor_ids:
follow_queryset = follow_queryset.filter(actor__pk__in=actor_ids)
final_actor_ids = list(follow_queryset.values_list("actor", flat=True))
owner = library.actor if library.actor.is_local else None
if owner and (not actor_ids or owner in final_actor_ids):
final_actor_ids.append(owner.pk)
for actor_id in final_actor_ids:
for upload_id, track_id in upload_and_track_ids:
objs.append(
cls(actor_id=actor_id, track_id=track_id, upload_id=upload_id)
)
elif library.privacy_level == "followers":
follow_queryset = library.actor.received_follows
follow_queryset = follow_queryset.filter(approved=True).exclude(
actor__user__isnull=True
)
if actor_ids:
follow_queryset = follow_queryset.filter(actor__pk__in=actor_ids)
owner = library.actor if library.actor.is_local else None
final_actor_ids = list(follow_queryset.values_list("actor", flat=True))
if owner and (not actor_ids or owner in final_actor_ids):
final_actor_ids.append(owner.pk)
for actor_id in final_actor_ids:
for upload_id, track_id in upload_and_track_ids:
objs.append(
cls(actor_id=actor_id, track_id=track_id, upload_id=upload_id)
)
elif library.privacy_level == "instance":
for upload_id, track_id in upload_and_track_ids:
objs.append(
cls(
actor_id=None,
track_id=track_id,
upload_id=upload_id,
internal=True,
)
)
elif library.privacy_level == "everyone":
for upload_id, track_id in upload_and_track_ids:
objs.append(cls(actor_id=None, track_id=track_id, upload_id=upload_id))
return objs
@classmethod
def create_entries(
cls, library, delete_existing=True, actor_ids=None, upload_and_track_ids=None
):
if not settings.MUSIC_USE_DENORMALIZATION:
# skip
return
if delete_existing:
to_delete = cls.objects.filter(upload__library=library)
if actor_ids:
to_delete = to_delete.filter(actor__pk__in=actor_ids)
# we don't use .delete() here because we don't want signals to fire
to_delete._raw_delete(to_delete.db)
objs = cls.get_objs(
library, actor_ids=actor_ids, upload_and_track_ids=upload_and_track_ids
)
return cls.objects.bulk_create(objs, ignore_conflicts=True, batch_size=5000)
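# A hypothetical maintenance call, e.g. after changing a library's privacy level
# (the signal receivers below already handle the common cases automatically):
#
#     TrackActor.create_entries(library, delete_existing=True)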
@receiver(post_save, sender=ImportJob)
def update_batch_status(sender, instance, **kwargs):
instance.batch.update_status()
@receiver(post_save, sender=Upload)
def update_denormalization_track_actor(sender, instance, created, **kwargs):
if (
(
created
or (
kwargs.get("update_fields", None)
and "library" in kwargs.get("update_fields")
)
)
and settings.MUSIC_USE_DENORMALIZATION
and instance.track_id
and instance.import_status == "finished"
):
TrackActor.create_entries(
instance.library,
delete_existing=False,
upload_and_track_ids=[(instance.pk, instance.track_id)],
)
@receiver(pre_save, sender=Library)
def set_privacy_level_updated(sender, instance, update_fields, **kwargs):
if not instance.pk:
return
if update_fields is not None and "privacy_level" not in update_fields:
return
db_value = instance.__class__.objects.filter(pk=instance.pk).values_list(
"privacy_level", flat=True
)[0]
if db_value != instance.privacy_level:
# Needed to update denormalized permissions
setattr(instance, "_privacy_level_updated", True)
@receiver(post_save, sender=Library)
def update_denormalization_track_user_library_privacy_level(
sender, instance, created, **kwargs
):
if created:
return
updated = getattr(instance, "_privacy_level_updated", False)
if updated:
TrackActor.create_entries(instance)
@receiver(post_save, sender=ImportBatch)
def update_request_status(sender, instance, created, **kwargs):
update_fields = kwargs.get("update_fields", []) or []
...
from funkwhale_api.common import models as common_models
from funkwhale_api.common import mutations
from funkwhale_api.common import serializers as common_serializers
from funkwhale_api.common import utils as common_utils
from funkwhale_api.federation import routes
from funkwhale_api.tags import models as tags_models
from funkwhale_api.tags import serializers as tags_serializers
from . import models
NOOP = object()
def can_suggest(obj, actor):
return obj.is_local
def can_approve(obj, actor):
if not obj.is_local or not actor.user:
return False
return (
actor.id is not None and actor.id == obj.attributed_to_id
) or actor.user.get_permissions()["library"]
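# The mutation serializers below plug into the common mutations registry: anyone
# can suggest an edit on a local object, while approving requires either being the
# object's attributed actor or holding the "library" permission.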
class TagMutation(mutations.UpdateMutationSerializer):
tags = tags_serializers.TagsListField()
def get_previous_state_handlers(self):
handlers = super().get_previous_state_handlers()
handlers["tags"] = lambda obj: list(
sorted(obj.tagged_items.values_list("tag__name", flat=True))
)
return handlers
def update(self, instance, validated_data):
tags = validated_data.pop("tags", NOOP)
r = super().update(instance, validated_data)
if tags != NOOP:
tags_models.set_tags(instance, *tags)
return r
class DescriptionMutation(mutations.UpdateMutationSerializer):
description = common_serializers.ContentSerializer()
def get_previous_state_handlers(self):
handlers = super().get_previous_state_handlers()
handlers["description"] = (
lambda obj: common_serializers.ContentSerializer(obj.description).data
if obj.description_id
else None
)
return handlers
def update(self, instance, validated_data):
description = validated_data.pop("description", NOOP)
r = super().update(instance, validated_data)
if description != NOOP:
common_utils.attach_content(instance, "description", description)
return r
class CoverMutation(mutations.UpdateMutationSerializer):
cover = common_serializers.RelatedField(
"uuid",
queryset=common_models.Attachment.objects.all().local(),
serializer=None,
)
def get_serialized_relations(self):
serialized_relations = super().get_serialized_relations()
serialized_relations["cover"] = "uuid"
return serialized_relations
def get_previous_state_handlers(self):
handlers = super().get_previous_state_handlers()
handlers["cover"] = (
lambda obj: str(obj.attachment_cover.uuid) if obj.attachment_cover else None
)
return handlers
def update(self, instance, validated_data):
if "cover" in validated_data:
validated_data["attachment_cover"] = validated_data.pop("cover")
return super().update(instance, validated_data)
def mutation_post_init(self, mutation):
# link cover_attachment (if any) to mutation
if "cover" not in mutation.payload:
return
try:
attachment = common_models.Attachment.objects.get(
uuid=mutation.payload["cover"]
)
except common_models.Attachment.DoesNotExist:
return
common_models.MutationAttachment.objects.create(
attachment=attachment, mutation=mutation
)
@mutations.registry.connect(
"update",
models.Track,
perm_checkers={"suggest": can_suggest, "approve": can_approve},
)
class TrackMutationSerializer(CoverMutation, TagMutation, DescriptionMutation):
class Meta:
model = models.Track
fields = [
"license",
"title",
"position",
"copyright",
"tags",
"description",
"cover",
]
def get_serialized_relations(self):
serialized_relations = super().get_serialized_relations()
serialized_relations["license"] = "code"
return serialized_relations
def post_apply(self, obj, validated_data):
channel = (
obj.artist_credit.all()[0].artist.get_channel()
if len(obj.artist_credit.all()) == 1
else None
)
if channel:
upload = channel.library.uploads.filter(track=obj).first()
if upload:
routes.outbox.dispatch(
{"type": "Update", "object": {"type": "Audio"}},
context={"upload": upload},
)
else:
routes.outbox.dispatch(
{"type": "Update", "object": {"type": "Track"}}, context={"track": obj}
)
@mutations.registry.connect(
"update",
models.Artist,
perm_checkers={"suggest": can_suggest, "approve": can_approve},
)
class ArtistMutationSerializer(CoverMutation, TagMutation, DescriptionMutation):
class Meta:
model = models.Artist
fields = ["name", "tags", "description", "cover"]
def post_apply(self, obj, validated_data):
routes.outbox.dispatch(
{"type": "Update", "object": {"type": "Artist"}}, context={"artist": obj}
)
@mutations.registry.connect(
"update",
models.Album,
perm_checkers={"suggest": can_suggest, "approve": can_approve},
)
class AlbumMutationSerializer(CoverMutation, TagMutation, DescriptionMutation):
class Meta:
model = models.Album
fields = ["title", "release_date", "tags", "cover", "description"]
def post_apply(self, obj, validated_data):
routes.outbox.dispatch(
{"type": "Update", "object": {"type": "Album"}}, context={"album": obj}
)
from rest_framework.permissions import BasePermission
from funkwhale_api.common import preferences
from funkwhale_api.federation import actors, models
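# Listen grants access when authentication is not required instance-wide, when the
# request comes from a logged-in user, or when the requesting federation actor has
# an approved follow on the instance's "library" system actor.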
class Listen(BasePermission):
def has_permission(self, request, view):
if not preferences.get("common__api_authentication_required"):
return True
user = getattr(request, "user", None)
if user and user.is_authenticated:
return True
actor = getattr(request, "actor", None)
if actor is None:
return False
library = actors.SYSTEM_ACTORS["library"].get_actor_instance()
return models.Follow.objects.filter(
target=library, actor=actor, approved=True
).exists()
import os
import pathlib
import urllib.parse
from django import urls
from django.conf import settings
from django.db import transaction
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers
from funkwhale_api.activity import serializers as activity_serializers
from funkwhale_api.common import fields
from funkwhale_api.common import models as common_models
from funkwhale_api.common import serializers as common_serializers
from funkwhale_api.common import utils as common_utils
from funkwhale_api.federation import routes
from funkwhale_api.federation import utils as federation_utils
from funkwhale_api.federation.serializers import APIActorSerializer
from funkwhale_api.playlists import models as playlists_models
from funkwhale_api.tags import models as tag_models
from funkwhale_api.tags import serializers as tags_serializers
from . import filters, models, tasks, utils
NOOP = object()
COVER_WRITE_FIELD = common_serializers.RelatedField(
"uuid",
queryset=common_models.Attachment.objects.all().local(),
serializer=None,
allow_null=True,
required=False,
queryset_filter=lambda qs, context: qs.filter(actor=context["request"].user.actor),
write_only=True,
)
class CoverField(common_serializers.AttachmentSerializer):
pass
cover_field = CoverField()
class OptionalDescriptionMixin:
def to_representation(self, obj):
repr = super().to_representation(obj)
if self.context.get("description", False):
description = obj.description
repr["description"] = (
common_serializers.ContentSerializer(description).data
if description
else None
)
return repr
class LicenseSerializer(serializers.Serializer):
id = serializers.SerializerMethodField()
url = serializers.URLField()
code = serializers.CharField()
name = serializers.CharField()
redistribute = serializers.BooleanField()
derivative = serializers.BooleanField()
commercial = serializers.BooleanField()
attribution = serializers.BooleanField()
copyleft = serializers.BooleanField()
def get_id(self, obj) -> str:
return obj["identifiers"][0]
class Meta:
model = models.License
class ArtistAlbumSerializer(serializers.Serializer):
tracks_count = serializers.SerializerMethodField()
cover = CoverField(allow_null=True)
is_playable = serializers.SerializerMethodField()
is_local = serializers.BooleanField()
id = serializers.IntegerField()
fid = serializers.URLField()
mbid = serializers.UUIDField()
title = serializers.CharField()
artist_credit = serializers.SerializerMethodField()
release_date = serializers.DateField()
creation_date = serializers.DateTimeField()
def get_artist_credit(self, o) -> int:
return [ac.id for ac in o.artist_credit.all()]
def get_tracks_count(self, o) -> int:
return len(o.tracks.all())
def get_is_playable(self, obj) -> bool:
try:
return bool(obj.is_playable_by_actor)
except AttributeError:
return None
DATETIME_FIELD = serializers.DateTimeField()
class InlineActorSerializer(serializers.Serializer):
full_username = serializers.CharField()
preferred_username = serializers.CharField()
domain = serializers.CharField(source="domain_id")
class ArtistWithAlbumsInlineChannelSerializer(serializers.Serializer):
uuid = serializers.CharField()
actor = InlineActorSerializer()
class ArtistWithAlbumsSerializer(OptionalDescriptionMixin, serializers.Serializer):
albums = serializers.SerializerMethodField()
tags = serializers.SerializerMethodField()
attributed_to = APIActorSerializer(allow_null=True)
channel = ArtistWithAlbumsInlineChannelSerializer(allow_null=True)
tracks_count = serializers.SerializerMethodField()
id = serializers.IntegerField()
fid = serializers.URLField()
mbid = serializers.UUIDField()
name = serializers.CharField()
content_category = serializers.CharField()
creation_date = serializers.DateTimeField()
is_local = serializers.BooleanField()
cover = CoverField(allow_null=True)
def get_albums(self, artist):
albums = artist.artist_credit.albums()
return ArtistAlbumSerializer(albums, many=True).data
@extend_schema_field({"type": "array", "items": {"type": "string"}})
def get_tags(self, obj):
tagged_items = getattr(obj, "_prefetched_tagged_items", [])
return [ti.tag.name for ti in tagged_items]
def get_tracks_count(self, o) -> int:
return getattr(o, "_tracks_count", 0)
class ArtistSerializer(serializers.ModelSerializer):
cover = CoverField(allow_null=True, required=False)
description = common_serializers.ContentSerializer(allow_null=True, required=False)
channel = serializers.UUIDField(allow_null=True, required=False)
tags = serializers.SerializerMethodField()
class Meta:
model = models.Artist
fields = (
"id",
"fid",
"mbid",
"name",
"creation_date",
"modification_date",
"is_local",
"content_category",
"description",
"cover",
"channel",
"attributed_to",
"tags",
)
@extend_schema_field({"type": "array", "items": {"type": "string"}})
def get_tags(self, obj):
tagged_items = getattr(obj, "_prefetched_tagged_items", [])
return [ti.tag.name for ti in tagged_items]
class ArtistCreditSerializer(serializers.ModelSerializer):
artist = ArtistSerializer()
class Meta:
model = models.ArtistCredit
fields = ["artist", "credit", "joinphrase", "index"]
class TrackAlbumSerializer(serializers.ModelSerializer):
artist_credit = ArtistCreditSerializer(many=True)
cover = CoverField(allow_null=True)
tracks_count = serializers.SerializerMethodField()
def get_tracks_count(self, o) -> int:
return getattr(o, "_prefetched_tracks_count", len(o.tracks.all()))
class Meta:
model = models.Album
fields = (
"id",
"fid",
"mbid",
"title",
"artist_credit",
"release_date",
"cover",
"creation_date",
"is_local",
"tracks_count",
)
class TrackUploadSerializer(serializers.Serializer):
uuid = serializers.UUIDField()
listen_url = serializers.URLField()
size = serializers.IntegerField()
duration = serializers.IntegerField()
bitrate = serializers.IntegerField()
mimetype = serializers.CharField()
extension = serializers.CharField()
is_local = serializers.SerializerMethodField()
def get_is_local(self, upload) -> bool:
return federation_utils.is_local(upload.fid)
def sort_uploads_for_listen(uploads):
"""
Given a list of uploads, return a sorted list of uploads, with local or locally
cached ones first, and older first
"""
score = {upload: 0 for upload in uploads}
for upload in uploads:
if upload.is_local:
score[upload] = 3
elif upload.audio_file:
score[upload] = 2
sorted_tuples = sorted(score.items(), key=lambda t: (t[1], -t[0].pk), reverse=True)
return [t[0] for t in sorted_tuples]
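# Example of the ordering above, assuming three uploads of the same track: a local
# upload (score 3) comes before a remote one with a cached audio_file (score 2),
# which comes before a purely remote one (score 0); uploads with the same score
# are ordered oldest (lowest pk) first.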
class TrackSerializer(OptionalDescriptionMixin, serializers.Serializer):
artist_credit = ArtistCreditSerializer(many=True)
album = TrackAlbumSerializer(read_only=True)
uploads = serializers.SerializerMethodField()
listen_url = serializers.SerializerMethodField()
tags = serializers.SerializerMethodField()
attributed_to = APIActorSerializer(allow_null=True)
description = common_serializers.ContentSerializer(allow_null=True, required=False)
id = serializers.IntegerField()
fid = serializers.URLField()
mbid = serializers.UUIDField()
title = serializers.CharField()
creation_date = serializers.DateTimeField()
is_local = serializers.BooleanField()
position = serializers.IntegerField()
disc_number = serializers.IntegerField()
downloads_count = serializers.IntegerField()
copyright = serializers.CharField()
license = serializers.SerializerMethodField()
cover = CoverField(allow_null=True)
is_playable = serializers.SerializerMethodField()
@extend_schema_field(OpenApiTypes.URI)
def get_listen_url(self, obj):
return obj.listen_url
# @extend_schema_field({"type": "array", "items": {"type": "object"}})
@extend_schema_field(TrackUploadSerializer(many=True))
def get_uploads(self, obj):
uploads = getattr(obj, "playable_uploads", [])
# we put local uploads first
uploads = [
TrackUploadSerializer(u).data for u in sort_uploads_for_listen(uploads)
]
uploads = sorted(uploads, key=lambda u: u["is_local"], reverse=True)
return list(uploads)
@extend_schema_field({"type": "array", "items": {"type": "string"}})
def get_tags(self, obj):
tagged_items = getattr(obj, "_prefetched_tagged_items", [])
return [ti.tag.name for ti in tagged_items]
def get_license(self, o) -> str:
return o.license_id
def get_is_playable(self, obj) -> bool:
return bool(getattr(obj, "playable_uploads", []))
class AlbumSerializer(OptionalDescriptionMixin, serializers.Serializer):
artist_credit = ArtistCreditSerializer(many=True)
cover = CoverField(allow_null=True)
is_playable = serializers.SerializerMethodField()
tags = serializers.SerializerMethodField()
tracks_count = serializers.SerializerMethodField()
attributed_to = APIActorSerializer()
id = serializers.IntegerField()
fid = serializers.URLField()
mbid = serializers.UUIDField()
title = serializers.CharField()
release_date = serializers.DateField()
creation_date = serializers.DateTimeField()
is_local = serializers.BooleanField()
duration = serializers.SerializerMethodField(read_only=True)
tracks = TrackSerializer(many=True, allow_null=True)
description = common_serializers.ContentSerializer(allow_null=True, required=False)
def get_tracks_count(self, o) -> int:
return len(o.tracks.all())
def get_is_playable(self, obj) -> bool:
try:
return any(
[
bool(getattr(t, "is_playable_by_actor", None))
for t in obj.tracks.all()
]
)
except AttributeError:
return None
@extend_schema_field({"type": "array", "items": {"type": "string"}})
def get_tags(self, obj):
tagged_items = getattr(obj, "_prefetched_tagged_items", [])
return [ti.tag.name for ti in tagged_items]
def get_duration(self, obj) -> int:
try:
return obj.duration
except AttributeError:
# no annotation?
return 0
@common_serializers.track_fields_for_update("name", "description", "privacy_level")
class LibraryForOwnerSerializer(serializers.ModelSerializer):
uploads_count = serializers.SerializerMethodField()
size = serializers.SerializerMethodField()
actor = serializers.SerializerMethodField()
class Meta:
model = models.Library
fields = [
"uuid",
"fid",
"name",
"privacy_level",
"uploads_count",
"size",
"creation_date",
"actor",
]
read_only_fields = ["fid", "uuid", "creation_date", "actor"]
def get_uploads_count(self, o) -> int:
return getattr(o, "_uploads_count", int(o.uploads_count))
def get_size(self, o) -> int:
return getattr(o, "_size", 0)
def on_updated_fields(self, obj, before, after):
routes.outbox.dispatch(
{"type": "Update", "object": {"type": "Library"}}, context={"library": obj}
)
@extend_schema_field(APIActorSerializer)
def get_actor(self, o):
return APIActorSerializer(o.actor).data
class UploadSerializer(serializers.ModelSerializer):
from funkwhale_api.audio.serializers import ChannelSerializer
track = TrackSerializer(required=False, allow_null=True)
library = common_serializers.RelatedField(
"uuid",
LibraryForOwnerSerializer(),
required=False,
filters=lambda context: {"actor": context["user"].actor},
)
privacy_level = serializers.ChoiceField(
choices=fields.PRIVACY_LEVEL_CHOICES, required=False
)
channel = common_serializers.RelatedField(
"uuid",
ChannelSerializer(),
required=False,
filters=lambda context: {"attributed_to": context["user"].actor},
)
class Meta:
model = models.Upload
fields = [
"uuid",
"filename",
"creation_date",
"mimetype",
"track",
"library",
"channel",
"duration",
"mimetype",
"bitrate",
"size",
"import_date",
"import_status",
"privacy_level",
]
read_only_fields = [
"uuid",
"creation_date", "creation_date",
"files", "duration",
"position", "mimetype",
"lyrics", "bitrate",
"size",
"track",
"import_date",
]
def validate(self, data):
validated_data = super().validate(data)
if "audio_file" in validated_data:
audio_data = utils.get_audio_file_data(validated_data["audio_file"])
if audio_data:
validated_data["duration"] = audio_data["length"]
validated_data["bitrate"] = audio_data["bitrate"]
return validated_data
def filter_album(qs, context):
if "channel" in context:
return qs.filter(artist_credit__artist__channel=context["channel"])
if "actor" in context:
return qs.filter(artist_credit__artist__attributed_to=context["actor"])
return qs.none()
class ImportMetadataSerializer(serializers.Serializer):
title = serializers.CharField(max_length=500, required=True)
description = serializers.CharField(
max_length=5000, required=False, allow_null=True
)
mbid = serializers.UUIDField(required=False, allow_null=True)
copyright = serializers.CharField(max_length=500, required=False, allow_null=True)
position = serializers.IntegerField(min_value=1, required=False, allow_null=True)
tags = tags_serializers.TagsListField(required=False)
license = common_serializers.RelatedField(
"code", LicenseSerializer(), required=False, allow_null=True
)
cover = common_serializers.RelatedField(
"uuid",
queryset=common_models.Attachment.objects.all().local(),
serializer=None,
queryset_filter=lambda qs, context: qs.filter(actor=context["actor"]),
write_only=True,
required=False,
allow_null=True,
)
album = common_serializers.RelatedField(
"id",
queryset=models.Album.objects.all(),
serializer=None,
queryset_filter=filter_album,
write_only=True,
required=False,
allow_null=True,
)
@extend_schema_field(ImportMetadataSerializer)
class ImportMetadataField(serializers.JSONField):
def to_internal_value(self, v):
v = super().to_internal_value(v)
s = ImportMetadataSerializer(
data=v, context={"actor": self.context["user"].actor}
)
s.is_valid(raise_exception=True)
return v
class UploadForOwnerSerializer(UploadSerializer):
import_status = serializers.ChoiceField(
choices=models.TRACK_FILE_IMPORT_STATUS_CHOICES, default="pending"
)
import_metadata = ImportMetadataField(required=False)
filename = serializers.CharField(required=False)
class Meta(UploadSerializer.Meta):
fields = UploadSerializer.Meta.fields + [
"import_details",
"import_metadata",
"import_reference",
"metadata",
"source",
"audio_file",
]
extra_kwargs = {"audio_file": {"write_only": True}}
read_only_fields = UploadSerializer.Meta.read_only_fields + [
"import_details",
"metadata",
]
def to_representation(self, obj):
r = super().to_representation(obj)
if "audio_file" in r:
del r["audio_file"]
r["privacy_level"] = obj.library.privacy_level
return r
def validate(self, validated_data):
if (
not self.instance
and "library" not in validated_data
and "channel" not in validated_data
):
raise serializers.ValidationError(
"You need to specify a channel or a library"
)
if (
not self.instance
and "library" in validated_data
and "channel" in validated_data
):
raise serializers.ValidationError(
"You may specify a channel or a library, not both"
)
if "audio_file" in validated_data:
self.validate_upload_quota(validated_data["audio_file"])
if "channel" in validated_data:
validated_data["library"] = validated_data.pop("channel").library
if "import_status" in validated_data and validated_data[
"import_status"
] not in ["draft", "pending"]:
raise serializers.ValidationError(
"Newly created Uploads need to have import_status of draft or pending"
)
return super().validate(validated_data)
def validate_upload_quota(self, f):
quota_status = self.context["user"].get_quota_status()
if (f.size / 1000 / 1000) > quota_status["remaining"]:
raise serializers.ValidationError("upload_quota_reached")
return f
class UploadBulkUpdateListSerializer(serializers.ListSerializer):
def create(self, validated_data):
privacy_levels = ["me", "instance", "followers", "everyone"]
privacy_level_map = {
privacy_level: self.context["actor"]
.libraries.filter(privacy_level=privacy_level, name=privacy_level)
.exclude(playlist__isnull=False)
.first()
for privacy_level in privacy_levels
}
if None in privacy_level_map.values():
raise federation_utils.BuiltInLibException(
{"details": "Built-in library not found or too many"}
)
objs = []
for data in validated_data:
try:
uuid = data.get("uuid", None)
upload = models.Upload.objects.select_related("track").get(uuid=uuid)
except models.Upload.DoesNotExist:
raise serializers.ValidationError(
f"Upload with uuid {uuid} does not exist"
)
upload.library = privacy_level_map[data["privacy_level"]]
# bulk_update skips the post-save signal (raw SQL query), so we need to update the denormalization table
# could optimize and work on a bulk denormalization table update. In the meantime we do it one by one
upload.save(update_fields=["library"])
objs.append(upload)
return objs
class UploadBulkUpdateSerializer(serializers.Serializer):
uuid = serializers.UUIDField()
privacy_level = serializers.ChoiceField(choices=fields.PRIVACY_LEVEL_CHOICES)
class Meta:
list_serializer_class = UploadBulkUpdateListSerializer
class UploadActionSerializer(common_serializers.ActionSerializer):
actions = [
common_serializers.Action("delete", allow_all=True),
common_serializers.Action("relaunch_import", allow_all=True),
common_serializers.Action("publish", allow_all=False),
]
filterset_class = filters.UploadFilter
pk_field = "uuid"
@transaction.atomic
def handle_delete(self, objects):
libraries = sorted(set(objects.values_list("library", flat=True)))
for id in libraries:
# we group deletes by library for easier federation
uploads = objects.filter(library__pk=id).select_related("library__actor")
for chunk in common_utils.chunk_queryset(uploads, 100):
routes.outbox.dispatch(
{"type": "Delete", "object": {"type": "Audio"}},
context={"uploads": chunk},
)
return objects.delete()
@transaction.atomic
def handle_relaunch_import(self, objects):
qs = objects.filter(import_status__in=["pending", "skipped", "errored"])
pks = list(qs.values_list("id", flat=True))
qs.update(import_status="pending")
for pk in pks:
common_utils.on_commit(tasks.process_upload.delay, upload_id=pk)
@transaction.atomic
def handle_publish(self, objects):
qs = objects.filter(import_status="draft")
pks = list(qs.values_list("id", flat=True))
qs.update(import_status="pending")
for pk in pks:
common_utils.on_commit(tasks.process_upload.delay, upload_id=pk)
class SimpleAlbumSerializer(serializers.ModelSerializer):
cover = CoverField(allow_null=True)
class Meta:
model = models.Album
fields = ("id", "mbid", "title", "release_date", "cover")
class TrackActivitySerializer(activity_serializers.ModelSerializer):
type = serializers.SerializerMethodField()
name = serializers.CharField(source="title")
artist_credit = serializers.CharField(source="get_artist_credit_string")
album = serializers.SerializerMethodField()
class Meta:
model = models.Track
fields = ["id", "local_id", "name", "type", "artist_credit", "album"]
def get_type(self, obj):
return "Audio"
def get_album(self, o):
if o.album:
return o.album.title
def get_embed_url(type, id):
return settings.FUNKWHALE_EMBED_URL + f"?type={type}&id={id}"
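# OembedSerializer below resolves a public frontend URL (track, album, artist,
# channel or playlist) and returns an oEmbed-style payload. A minimal sketch of
# the dict it produces for a track, with illustrative values only:
#
#     {
#         "version": "1.0",
#         "type": "rich",
#         "provider_name": "Funkwhale",
#         "height": 150,
#         "width": 600,
#         "title": "Track title by Artist",
#         "html": '<iframe ... src=".../embed.html?type=track&id=42"></iframe>',
#     }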
class OembedSerializer(serializers.Serializer):
format = serializers.ChoiceField(choices=["json"])
url = serializers.URLField()
maxheight = serializers.IntegerField(required=False)
maxwidth = serializers.IntegerField(required=False)
def validate(self, validated_data):
try:
match = common_utils.spa_resolve(
urllib.parse.urlparse(validated_data["url"]).path
)
except urls.exceptions.Resolver404:
raise serializers.ValidationError(
"Invalid URL {}".format(validated_data["url"])
)
data = {
"version": "1.0",
"type": "rich",
"provider_name": settings.APP_NAME,
"provider_url": settings.FUNKWHALE_URL,
"height": validated_data.get("maxheight") or 400,
"width": validated_data.get("maxwidth") or 600,
}
embed_id = None
embed_type = None
if match.url_name == "library_track":
qs = models.Track.objects.prefetch_related(
"artist_credit", "album__artist_credit"
).filter(pk=int(match.kwargs["pk"]))
try:
track = qs.get()
except models.Track.DoesNotExist:
raise serializers.ValidationError(
"No track matching id {}".format(match.kwargs["pk"])
)
embed_type = "track"
embed_id = track.pk
data["title"] = f"{track.title} by {track.get_artist_credit_string}"
if track.attachment_cover:
data[
"thumbnail_url"
] = track.attachment_cover.download_url_medium_square_crop
data["thumbnail_width"] = 200
data["thumbnail_height"] = 200
elif track.album and track.album.attachment_cover:
data[
"thumbnail_url"
] = track.album.attachment_cover.download_url_medium_square_crop
data["thumbnail_width"] = 200
data["thumbnail_height"] = 200
data["description"] = track.full_name
data["author_name"] = track.get_artist_credit_string
data["height"] = 150
# here we take the first artist since the oembed standard does not allow a list of urls
data["author_url"] = federation_utils.full_url(
common_utils.spa_reverse(
"library_artist",
kwargs={"pk": track.artist_credit.all()[0].artist.pk},
)
)
elif match.url_name == "library_album":
qs = models.Album.objects.prefetch_related("artist_credit").filter(
pk=int(match.kwargs["pk"])
)
try:
album = qs.get()
except models.Album.DoesNotExist:
raise serializers.ValidationError(
"No album matching id {}".format(match.kwargs["pk"])
)
embed_type = "album"
embed_id = album.pk
if album.attachment_cover:
data[
"thumbnail_url"
] = album.attachment_cover.download_url_medium_square_crop
data["thumbnail_width"] = 200
data["thumbnail_height"] = 200
data["title"] = f"{album.title} by {album.get_artist_credit_string}"
data["description"] = f"{album.title} by {album.get_artist_credit_string}"
data["author_name"] = album.get_artist_credit_string
data["height"] = 400
data["author_url"] = federation_utils.full_url(
common_utils.spa_reverse(
"library_artist",
kwargs={"pk": album.artist_credit.all()[0].artist.pk},
)
)
elif match.url_name == "library_artist":
qs = models.Artist.objects.filter(pk=int(match.kwargs["pk"]))
try:
artist = qs.get()
except models.Artist.DoesNotExist:
raise serializers.ValidationError(
"No artist matching id {}".format(match.kwargs["pk"])
)
embed_type = "artist"
embed_id = artist.pk
album_ids = (
artist.artist_credit.all()
.prefetch_related("albums")
.values_list("albums", flat=True)
)
album = (
models.Album.objects.exclude(attachment_cover=None)
.filter(pk__in=album_ids)
.order_by("-id")
.first()
)
if album and album.attachment_cover:
data[
"thumbnail_url"
] = album.attachment_cover.download_url_medium_square_crop
data["thumbnail_width"] = 200
data["thumbnail_height"] = 200
data["title"] = artist.name
data["description"] = artist.name
data["author_name"] = artist.name
data["height"] = 400
data["author_url"] = federation_utils.full_url(
common_utils.spa_reverse("library_artist", kwargs={"pk": artist.pk})
)
elif match.url_name == "channel_detail":
from funkwhale_api.audio.models import Channel
kwargs = {}
if "uuid" in match.kwargs:
kwargs["uuid"] = match.kwargs["uuid"]
else:
username_data = federation_utils.get_actor_data_from_username(
match.kwargs["username"]
)
kwargs["actor__domain"] = username_data["domain"]
kwargs["actor__preferred_username__iexact"] = username_data["username"]
qs = Channel.objects.filter(**kwargs).select_related(
"artist__attachment_cover"
)
try:
channel = qs.get()
except Channel.DoesNotExist:
raise serializers.ValidationError(
"No channel matching id {}".format(match.kwargs["uuid"])
)
embed_type = "channel"
embed_id = channel.uuid
if channel.artist.attachment_cover:
data[
"thumbnail_url"
] = channel.artist.attachment_cover.download_url_medium_square_crop
data["thumbnail_width"] = 200
data["thumbnail_height"] = 200
data["title"] = channel.artist.name
data["description"] = channel.artist.name
data["author_name"] = channel.artist.name
data["height"] = 400
data["author_url"] = federation_utils.full_url(
common_utils.spa_reverse(
"channel_detail", kwargs={"uuid": channel.uuid}
)
)
elif match.url_name == "library_playlist":
qs = playlists_models.Playlist.objects.filter(
pk=int(match.kwargs["pk"]), privacy_level="everyone"
)
try:
obj = qs.get()
except playlists_models.Playlist.DoesNotExist:
raise serializers.ValidationError(
"No artist matching id {}".format(match.kwargs["pk"])
)
embed_type = "playlist"
embed_id = obj.pk
playlist_tracks = obj.playlist_tracks.exclude(
track__album__attachment_cover=None
)
playlist_tracks = playlist_tracks.select_related(
"track__album__attachment_cover"
).order_by("index")
first_playlist_track = playlist_tracks.first()
if first_playlist_track:
data[
"thumbnail_url"
] = (
first_playlist_track.track.album.attachment_cover.download_url_medium_square_crop
)
data["thumbnail_width"] = 200
data["thumbnail_height"] = 200
data["title"] = obj.name
data["description"] = obj.name
data["author_name"] = obj.name
data["height"] = 400
data["author_url"] = federation_utils.full_url(
common_utils.spa_reverse("library_playlist", kwargs={"pk": obj.pk})
)
else:
raise serializers.ValidationError(
"Unsupported url: {}".format(validated_data["url"])
)
data[
"html"
] = '<iframe width="{}" height="{}" scrolling="no" frameborder="no" src="{}"></iframe>'.format(
data["width"], data["height"], get_embed_url(embed_type, embed_id)
)
return data
def create(self, data):
return data
class AlbumCreateSerializer(serializers.Serializer):
title = serializers.CharField(required=True, max_length=255)
cover = COVER_WRITE_FIELD
release_date = serializers.DateField(required=False, allow_null=True)
tags = tags_serializers.TagsListField(required=False)
description = common_serializers.ContentSerializer(allow_null=True, required=False)
# only used in album channel creation, so this is not a list
artist_credit = common_serializers.RelatedField(
"id",
queryset=models.ArtistCredit.objects.exclude(artist__channel__isnull=True),
required=True,
serializer=None,
many=True,
filters=lambda context: {"artist__attributed_to": context["user"].actor},
) )
def validate(self, validated_data):
duplicates = models.Album.objects.none()
for ac in validated_data["artist_credit"]:
duplicates = duplicates | ac.albums.filter(
title__iexact=validated_data["title"]
)
if duplicates.exists():
raise serializers.ValidationError("An album with this title already exists")
return super().validate(validated_data)
def to_representation(self, obj):
return AlbumSerializer(obj, context=self.context).data
@transaction.atomic
def create(self, validated_data):
instance = models.Album.objects.create(
attributed_to=self.context["user"].actor,
release_date=validated_data.get("release_date"),
title=validated_data["title"],
attachment_cover=validated_data.get("cover"),
)
common_utils.attach_content(
instance, "description", validated_data.get("description")
)
tag_models.set_tags(instance, *(validated_data.get("tags", []) or []))
instance.artist_credit.set(validated_data["artist_credit"])
return instance
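# FSImportSerializer below validates an in-place import request against
# MUSIC_DIRECTORY_PATH. A hypothetical payload (field values are illustrative):
#
#     {
#         "path": "Artist/Album",
#         "library": "<uuid of a library owned by the requesting actor>",
#         "import_reference": "fs-import-2024",
#         "prune": True,
#     }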
class FSImportSerializer(serializers.Serializer):
path = serializers.CharField(allow_blank=True)
library = serializers.UUIDField()
import_reference = serializers.CharField()
prune = serializers.BooleanField(required=False, default=True)
outbox = serializers.BooleanField(required=False, default=False)
broadcast = serializers.BooleanField(required=False, default=False)
replace = serializers.BooleanField(required=False, default=False)
batch_size = serializers.IntegerField(required=False, default=1000)
verbosity = serializers.IntegerField(required=False, default=1)
def validate_path(self, value):
try:
utils.browse_dir(settings.MUSIC_DIRECTORY_PATH, value)
except NotADirectoryError:
if not os.path.isfile(pathlib.Path(settings.MUSIC_DIRECTORY_PATH) / value):
raise serializers.ValidationError("Invalid path")
except (FileNotFoundError, ValueError):
raise serializers.ValidationError("Invalid path")
return value
def validate_library(self, value):
try:
return self.context["user"].actor.libraries.get(uuid=value)
except models.Library.DoesNotExist:
raise serializers.ValidationError("Invalid library")
class SearchResultSerializer(serializers.Serializer):
artists = ArtistWithAlbumsSerializer(many=True)
tracks = TrackSerializer(many=True)
albums = AlbumSerializer(many=True)
tags = tags_serializers.TagSerializer(many=True)
import django.dispatch
""" Required args: old_status, new_status, upload """
upload_import_status_updated = django.dispatch.Signal()
import urllib.parse
from django.conf import settings
from django.db.models import Q
from django.urls import reverse
from funkwhale_api.common import middleware, preferences, utils
from funkwhale_api.playlists import models as playlists_models
from . import models, serializers
def get_twitter_card_metas(type, id):
return [
{"tag": "meta", "property": "twitter:card", "content": "player"},
{
"tag": "meta",
"property": "twitter:player",
"content": serializers.get_embed_url(type, id),
},
{"tag": "meta", "property": "twitter:player:width", "content": "600"},
{"tag": "meta", "property": "twitter:player:height", "content": "400"},
]
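# Illustrative output (values assumed): get_twitter_card_metas(type="track", id=42) returns
# descriptors that are later rendered as HTML tags, e.g. a "twitter:player" meta pointing at
# the URL built by get_embed_url("track", 42), plus the fixed 600x400 player dimensions above.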
def library_track(request, pk, redirect_to_ap):
queryset = models.Track.objects.filter(pk=pk).prefetch_related(
"album", "artist_credit__artist"
)
try:
obj = queryset.get()
except models.Track.DoesNotExist:
return []
playable_uploads = obj.uploads.playable_by(None).order_by("id")
upload = playable_uploads.first()
if redirect_to_ap:
redirect_url = upload.fid if upload else obj.fid
raise middleware.ApiRedirect(redirect_url)
track_url = utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_track", kwargs={"pk": obj.pk}),
)
metas = [
{"tag": "meta", "property": "og:url", "content": track_url},
{"tag": "meta", "property": "og:title", "content": obj.title},
{"tag": "meta", "property": "og:type", "content": "music.song"},
{"tag": "meta", "property": "music:album:disc", "content": obj.disc_number},
{"tag": "meta", "property": "music:album:track", "content": obj.position},
]
# following https://ogp.me/#array
for ac in obj.artist_credit.all():
metas.append(
{
"tag": "meta",
"property": "music:musician",
"content": utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_artist", kwargs={"pk": ac.artist.pk}),
),
}
)
if obj.album:
metas.append(
{
"tag": "meta",
"property": "music:album",
"content": utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_album", kwargs={"pk": obj.album.pk}),
),
},
)
if obj.attachment_cover:
metas.append(
{
"tag": "meta",
"property": "og:image",
"content": obj.attachment_cover.download_url_medium_square_crop,
}
)
elif obj.album and obj.album.attachment_cover:
metas.append(
{
"tag": "meta",
"property": "og:image",
"content": obj.album.attachment_cover.download_url_medium_square_crop,
}
)
if upload:
metas.append(
{
"tag": "meta",
"property": "og:audio",
"content": utils.join_url(settings.FUNKWHALE_URL, obj.listen_url),
}
)
if preferences.get("federation__enabled"):
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/activity+json",
"href": upload.fid,
}
)
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/json+oembed",
"href": (
utils.join_url(settings.FUNKWHALE_URL, reverse("api:v1:oembed"))
+ f"?format=json&url={urllib.parse.quote_plus(track_url)}"
),
}
)
# twitter player is also supported in various software
metas += get_twitter_card_metas(type="track", id=obj.pk)
return metas
def library_album(request, pk, redirect_to_ap):
queryset = models.Album.objects.filter(pk=pk).prefetch_related("artist_credit")
try:
obj = queryset.get()
except models.Album.DoesNotExist:
return []
if redirect_to_ap:
raise middleware.ApiRedirect(obj.fid)
album_url = utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_album", kwargs={"pk": obj.pk}),
)
metas = [
{"tag": "meta", "property": "og:url", "content": album_url},
{"tag": "meta", "property": "og:title", "content": obj.title},
{"tag": "meta", "property": "og:type", "content": "music.album"},
]
# following https://ogp.me/#array
for ac in obj.artist_credit.all():
metas.append(
{
"tag": "meta",
"property": "music:musician",
"content": utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_artist", kwargs={"pk": ac.artist.pk}),
),
}
)
if obj.release_date:
metas.append(
{
"tag": "meta",
"property": "music:release_date",
"content": str(obj.release_date),
}
)
if obj.attachment_cover:
metas.append(
{
"tag": "meta",
"property": "og:image",
"content": obj.attachment_cover.download_url_medium_square_crop,
}
)
if preferences.get("federation__enabled"):
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/activity+json",
"href": obj.fid,
}
)
if models.Upload.objects.filter(track__album=obj).playable_by(None).exists():
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/json+oembed",
"href": (
utils.join_url(settings.FUNKWHALE_URL, reverse("api:v1:oembed"))
+ f"?format=json&url={urllib.parse.quote_plus(album_url)}"
),
}
)
# twitter player is also supported in various software
metas += get_twitter_card_metas(type="album", id=obj.pk)
return metas
def library_artist(request, pk, redirect_to_ap):
queryset = models.Artist.objects.filter(pk=pk)
try:
obj = queryset.get()
except models.Artist.DoesNotExist:
return []
if redirect_to_ap:
raise middleware.ApiRedirect(obj.fid)
artist_url = utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_artist", kwargs={"pk": obj.pk}),
)
# we use latest album's cover as artist image
latest_album = (
obj.artist_credit.albums()
.exclude(attachment_cover=None)
.order_by("release_date")
.last()
)
metas = [
{"tag": "meta", "property": "og:url", "content": artist_url},
{"tag": "meta", "property": "og:title", "content": obj.name},
{"tag": "meta", "property": "og:type", "content": "profile"},
]
if latest_album and latest_album.attachment_cover:
metas.append(
{
"tag": "meta",
"property": "og:image",
"content": latest_album.attachment_cover.download_url_medium_square_crop,
}
)
if preferences.get("federation__enabled"):
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/activity+json",
"href": obj.fid,
}
)
if (
models.Upload.objects.filter(
Q(track__artist_credit__artist=obj)
| Q(track__album__artist_credit__artist=obj)
)
.playable_by(None)
.exists()
):
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/json+oembed",
"href": (
utils.join_url(settings.FUNKWHALE_URL, reverse("api:v1:oembed"))
+ f"?format=json&url={urllib.parse.quote_plus(artist_url)}"
),
}
)
# twitter player is also supported in various software
metas += get_twitter_card_metas(type="artist", id=obj.pk)
return metas
def library_playlist(request, pk, redirect_to_ap):
queryset = playlists_models.Playlist.objects.filter(pk=pk, privacy_level="everyone")
try:
obj = queryset.get()
except playlists_models.Playlist.DoesNotExist:
return []
obj_url = utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_playlist", kwargs={"pk": obj.pk}),
)
# we use the first playlist track's album's cover as image
playlist_tracks = obj.playlist_tracks.exclude(track__album__attachment_cover=None)
playlist_tracks = playlist_tracks.select_related("track__album").order_by("index")
first_playlist_track = playlist_tracks.first()
metas = [
{"tag": "meta", "property": "og:url", "content": obj_url},
{"tag": "meta", "property": "og:title", "content": obj.name},
{"tag": "meta", "property": "og:type", "content": "music.playlist"},
]
if first_playlist_track:
metas.append(
{
"tag": "meta",
"property": "og:image",
"content": first_playlist_track.track.album.attachment_cover.download_url_medium_square_crop,
}
)
if (
models.Upload.objects.filter(
track__pk__in=[obj.playlist_tracks.values("track")]
)
.playable_by(None)
.exists()
):
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/json+oembed",
"href": (
utils.join_url(settings.FUNKWHALE_URL, reverse("api:v1:oembed"))
+ f"?format=json&url={urllib.parse.quote_plus(obj_url)}"
),
}
)
# twitter player is also supported in various software
metas += get_twitter_card_metas(type="playlist", id=obj.pk)
return metas
def library_library(request, uuid, redirect_to_ap):
queryset = models.Library.objects.filter(uuid=uuid)
try:
obj = queryset.get()
except models.Library.DoesNotExist:
return []
if redirect_to_ap:
raise middleware.ApiRedirect(obj.fid)
library_url = utils.join_url(
settings.FUNKWHALE_URL,
utils.spa_reverse("library_library", kwargs={"uuid": obj.uuid}),
)
metas = [
{"tag": "meta", "property": "og:url", "content": library_url},
{"tag": "meta", "property": "og:type", "content": "website"},
{"tag": "meta", "property": "og:title", "content": obj.name},
]
if preferences.get("federation__enabled"):
metas.append(
{
"tag": "link",
"rel": "alternate",
"type": "application/activity+json",
"href": obj.fid,
}
)
return metas
import collections
import datetime
import logging
import os
import re
from django.conf import settings
from django.core.cache import cache
from django.core.files.base import ContentFile
from django.db import transaction
from django.db.models import F, Q
from django.dispatch import receiver
from django.utils import timezone
from musicbrainzngs import NetworkError, ResponseError
from requests.exceptions import RequestException
from funkwhale_api import musicbrainz
from funkwhale_api.common import channels, preferences
from funkwhale_api.common import utils as common_utils
from funkwhale_api.federation import activity, actors
from funkwhale_api.federation import library as lb
from funkwhale_api.federation import routes
from funkwhale_api.federation import serializers as federation_serializers
from funkwhale_api.federation import utils as federation_utils
from funkwhale_api.music.management.commands import import_files
from funkwhale_api.providers.acoustid import get_acoustid_client
from funkwhale_api.providers.audiofile import tasks as audiofile_tasks
from funkwhale_api.tags import models as tags_models
from funkwhale_api.tags import tasks as tags_tasks
from funkwhale_api.taskapp import celery
from . import licenses, metadata, models, signals
from . import lyrics as lyrics_utils
from . import utils as music_utils
logger = logging.getLogger(__name__)
@celery.app.task(name="acoustid.set_on_track_file") def populate_album_cover(album, source=None, replace=False):
@celery.require_instance(models.TrackFile, "track_file") if album.attachment_cover and not replace:
def set_acoustid_on_track_file(track_file):
client = get_acoustid_client()
result = client.get_best_match(track_file.audio_file.path)
def update(id):
track_file.acoustid_track_id = id
track_file.save(update_fields=["acoustid_track_id"])
return id
if result:
return update(result["id"])
def import_track_from_remote(library_track):
metadata = library_track.metadata
try:
track_mbid = metadata["recording"]["musicbrainz_id"]
assert track_mbid # for null/empty values
except (KeyError, AssertionError):
pass
else:
return models.Track.get_or_create_from_api(mbid=track_mbid)[0]
try:
album_mbid = metadata["release"]["musicbrainz_id"]
assert album_mbid # for null/empty values
except (KeyError, AssertionError):
pass
else:
album, _ = models.Album.get_or_create_from_api(mbid=album_mbid)
return models.Track.get_or_create_from_title(
library_track.title, artist=album.artist, album=album
)[0]
try:
artist_mbid = metadata["artist"]["musicbrainz_id"]
assert artist_mbid # for null/empty values
except (KeyError, AssertionError):
pass
else:
artist, _ = models.Artist.get_or_create_from_api(mbid=artist_mbid)
album, _ = models.Album.get_or_create_from_title(
library_track.album_title, artist=artist
)
return models.Track.get_or_create_from_title(
library_track.title, artist=artist, album=album
)[0]
# worst case scenario, we have absolutely no way to link to a
# musicbrainz resource, we rely on the name/titles
artist, _ = models.Artist.get_or_create_from_name(library_track.artist_name)
album, _ = models.Album.get_or_create_from_title(
library_track.album_title, artist=artist
)
return models.Track.get_or_create_from_title(
library_track.title, artist=artist, album=album
)[0]
def _do_import(import_job, use_acoustid=False):
logger.info("[Import Job %s] starting job", import_job.pk)
from_file = bool(import_job.audio_file)
mbid = import_job.mbid
replace = import_job.replace_if_duplicate
acoustid_track_id = None
duration = None
track = None
# use_acoustid = use_acoustid and preferences.get('providers_acoustid__api_key')
# Acoustid is not reliable, we disable it for now.
use_acoustid = False
if not mbid and use_acoustid and from_file:
# we try to deduce mbid from acoustid
client = get_acoustid_client()
match = client.get_best_match(import_job.audio_file.path)
if match:
duration = match["recordings"][0]["duration"]
mbid = match["recordings"][0]["id"]
acoustid_track_id = match["id"]
if mbid:
logger.info(
"[Import Job %s] importing track from musicbrainz recording %s",
import_job.pk,
str(mbid),
)
track, _ = models.Track.get_or_create_from_api(mbid=mbid)
elif import_job.audio_file:
logger.info(
"[Import Job %s] importing track from uploaded track data at %s",
import_job.pk,
import_job.audio_file.path,
)
track = audiofile_tasks.import_track_data_from_path(import_job.audio_file.path)
elif import_job.library_track:
logger.info(
"[Import Job %s] importing track from federated library track %s",
import_job.pk,
import_job.library_track.pk,
)
track = import_track_from_remote(import_job.library_track)
elif import_job.source.startswith("file://"):
tf_path = import_job.source.replace("file://", "", 1)
logger.info(
"[Import Job %s] importing track from local track data at %s",
import_job.pk,
tf_path,
)
track = audiofile_tasks.import_track_data_from_path(tf_path)
else:
raise ValueError(
"Not enough data to process import, "
"add a mbid, an audio file or a library track"
)
track_file = None
if replace:
logger.info("[Import Job %s] deleting existing audio file", import_job.pk)
track.files.all().delete()
elif track.files.count() > 0:
logger.info(
"[Import Job %s] skipping, we already have a file for this track",
import_job.pk,
)
if import_job.audio_file:
import_job.audio_file.delete()
import_job.status = "skipped"
import_job.save()
return
track_file = track_file or models.TrackFile(track=track, source=import_job.source)
track_file.acoustid_track_id = acoustid_track_id
if from_file:
track_file.audio_file = ContentFile(import_job.audio_file.read())
track_file.audio_file.name = import_job.audio_file.name
track_file.duration = duration
elif import_job.library_track:
track_file.library_track = import_job.library_track
track_file.mimetype = import_job.library_track.audio_mimetype
if import_job.library_track.library.download_files:
raise NotImplementedError()
else:
# no downloading, we hotlink
pass
elif not import_job.audio_file and not import_job.source.startswith("file://"):
# not an inplace import, and we have a source, so let's download it
logger.info("[Import Job %s] downloading audio file from remote", import_job.pk)
track_file.download_file()
elif not import_job.audio_file and import_job.source.startswith("file://"):
# in place import, we set mimetype from extension
path, ext = os.path.splitext(import_job.source)
track_file.mimetype = music_utils.get_type_from_ext(ext)
track_file.set_audio_data()
track_file.save()
# if no cover is set on track album, we try to update it as well:
if not track.album.cover:
logger.info("[Import Job %s] retrieving album cover", import_job.pk)
update_album_cover(track.album, track_file)
import_job.status = "finished"
import_job.track_file = track_file
if import_job.audio_file:
# it's imported on the track, we don't need it anymore
import_job.audio_file.delete()
import_job.save()
logger.info("[Import Job %s] job finished", import_job.pk)
return track_file
def update_album_cover(album, track_file, replace=False):
if album.cover and not replace:
return
if track_file:
# maybe the file has a cover embedded?
try:
metadata = track_file.get_metadata()
except FileNotFoundError:
metadata = None
if metadata:
cover = metadata.get_picture("cover_front")
if cover:
# best case scenario, cover is embedded in the track
logger.info("[Album %s] Using cover embedded in file", album.pk)
return album.get_image(data=cover)
if track_file.source and track_file.source.startswith("file://"):
# let's look for a cover in the same directory
path = os.path.dirname(track_file.source.replace("file://", "", 1))
logger.info("[Album %s] scanning covers from %s", album.pk, path)
cover = get_cover_from_fs(path)
if cover:
return album.get_image(data=cover)
if not album.mbid:
return
try:
logger.info(
"[Album %s] Fetching cover from musicbrainz release %s",
album.pk,
str(album.mbid),
)
return album.get_image()
except ResponseError as exc:
logger.warning(
"[Album %s] cannot fetch cover from musicbrainz: %s", album.pk, str(exc)
)
def populate_album_cover(album, source=None, replace=False):
if album.attachment_cover and not replace:
return
if source and source.startswith("file://"):
# let's look for a cover in the same directory
path = os.path.dirname(source.replace("file://", "", 1))
logger.info("[Album %s] scanning covers from %s", album.pk, path)
cover = get_cover_from_fs(path)
return common_utils.attach_file(album, "attachment_cover", cover)
if album.mbid:
logger.info(
"[Album %s] Fetching cover from musicbrainz release %s",
album.pk,
str(album.mbid),
)
try:
image_data = musicbrainz.api.images.get_front(str(album.mbid))
except ResponseError as exc:
logger.warning(
"[Album %s] cannot fetch cover from musicbrainz: %s", album.pk, str(exc)
)
else:
return common_utils.attach_file(
album,
"attachment_cover",
{"content": image_data, "mimetype": "image/jpeg"},
fetch=True,
)
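# Summary sketch of the resolution order implemented above: for in-place imports
# ("file://" sources) populate_album_cover() first looks for an image file next to the
# audio file (see FOLDER_IMAGE_NAMES and IMAGE_TYPES below), and only falls back to the
# MusicBrainz cover art archive when the album has an mbid. The path below is hypothetical:
#
# populate_album_cover(album, source="file:///music/Artist/Album/01 - Intro.mp3")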
IMAGE_TYPES = [("jpg", "image/jpeg"), ("jpeg", "image/jpeg"), ("png", "image/png")]
FOLDER_IMAGE_NAMES = ["cover", "folder"]
def get_cover_from_fs(dir_path):
if os.path.exists(dir_path):
for name in FOLDER_IMAGE_NAMES:
for e, m in IMAGE_TYPES:
cover_path = os.path.join(dir_path, f"{name}.{e}")
if not os.path.exists(cover_path):
logger.debug("Cover %s does not exist", cover_path)
continue
with open(cover_path, "rb") as c:
return {"mimetype": m, "content": c.read()}
@celery.app.task(name="ImportJob.run", bind=True) @celery.app.task(name="music.library.schedule_remote_scan")
def schedule_scan_for_all_remote_libraries():
from funkwhale_api.federation import actors
libraries = models.Library.objects.all().prefetch_related()
actor = actors.get_service_actor()
for library in libraries:
if library.actor.is_local:
continue
library.schedule_scan(actor)
@celery.app.task(name="music.start_library_scan")
@celery.require_instance(
models.LibraryScan.objects.select_related().filter(status="pending"), "library_scan"
)
def start_library_scan(library_scan):
try:
data = lb.get_library_data(library_scan.library.fid, actor=library_scan.actor)
except Exception:
library_scan.status = "errored"
library_scan.save(update_fields=["status", "modification_date"])
raise
if "errors" in data.keys():
library_scan.status = "errored"
library_scan.save(update_fields=["status", "modification_date"])
raise Exception("Error from remote server : " + str(data))
library_scan.modification_date = timezone.now()
library_scan.status = "scanning"
library_scan.total_files = data["totalItems"]
library_scan.save(update_fields=["status", "modification_date", "total_files"])
scan_library_page.delay(library_scan_id=library_scan.pk, page_url=data["first"])
@celery.app.task(
name="music.scan_library_page",
retry_backoff=60,
max_retries=5,
autoretry_for=[RequestException],
)
@celery.require_instance(
models.LibraryScan.objects.select_related().filter(status="scanning"),
"library_scan",
)
def scan_library_page(library_scan, page_url):
data = lb.get_library_page(library_scan.library, page_url, library_scan.actor)
uploads = []
for item_serializer in data["items"]:
upload = item_serializer.save(library=library_scan.library)
uploads.append(upload)
library_scan.processed_files = F("processed_files") + len(uploads)
library_scan.modification_date = timezone.now()
update_fields = ["modification_date", "processed_files"]
next_page = data.get("next")
fetch_next = next_page and next_page != page_url
if not fetch_next:
update_fields.append("status")
library_scan.status = "finished"
library_scan.save(update_fields=update_fields)
if fetch_next:
scan_library_page.delay(library_scan_id=library_scan.pk, page_url=next_page)
def getter(data, *keys, default=None):
if not data:
return default
v = data
for k in keys:
try:
v = v[k]
except KeyError:
return default
return v
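# Doctest-style sketch of getter() with a hypothetical payload:
# >>> getter({"funkwhale": {"config": {"broadcast": False}}}, "funkwhale", "config", "broadcast", default=True)
# False
# >>> getter({}, "funkwhale", "config", "broadcast", default=True)
# True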
class UploadImportError(ValueError):
def __init__(self, code):
self.code = code
super().__init__(code)
def fail_import(upload, error_code, detail=None, **fields):
old_status = upload.import_status
upload.import_status = "errored"
upload.import_details = {"error_code": error_code, "detail": detail}
upload.import_details.update(fields)
upload.import_date = timezone.now()
upload.save(update_fields=["import_details", "import_status", "import_date"])
broadcast = getter(
upload.import_metadata, "funkwhale", "config", "broadcast", default=True
)
if broadcast:
signals.upload_import_status_updated.send_robust(
old_status=old_status,
new_status=upload.import_status,
upload=upload,
sender=None,
)
@celery.app.task(name="music.process_upload")
@celery.require_instance(
models.Upload.objects.filter(import_status="pending").select_related(
"library__actor__user",
"library__channel__artist",
),
"upload",
)
def process_upload(upload, update_denormalization=True):
"""
Main handler to process uploads submitted by user and create the corresponding
metadata (tracks/artists/albums) in our DB.
"""
from . import serializers
channel = upload.library.get_channel()
# When upload is linked to a channel instead of a library
# we willingly ignore the metadata embedded in the file itself
# and rely on user metadata only
use_file_metadata = channel is None
import_metadata = upload.import_metadata or {}
internal_config = {"funkwhale": import_metadata.get("funkwhale", {})}
forced_values_serializer = serializers.ImportMetadataSerializer(
data=import_metadata,
context={"actor": upload.library.actor, "channel": channel},
)
if forced_values_serializer.is_valid():
forced_values = forced_values_serializer.validated_data
else:
forced_values = {}
if not use_file_metadata:
detail = forced_values_serializer.errors
metadata_dump = import_metadata
return fail_import(
upload, "invalid_metadata", detail=detail, file_metadata=metadata_dump
)
if channel:
# ensure the upload is associated with the channel artist
forced_values["artist"] = upload.library.channel.artist
old_status = upload.import_status
upload_source = {"upload_source": upload.source}
if use_file_metadata:
audio_file = upload.get_audio_file()
m = metadata.Metadata(audio_file)
try:
serializer = metadata.TrackMetadataSerializer(data=m)
serializer.is_valid()
except Exception:
fail_import(upload, "unknown_error")
raise
if not serializer.is_valid():
detail = serializer.errors
try:
metadata_dump = m.all()
except Exception as e:
logger.warn("Cannot dump metadata for file %s: %s", audio_file, str(e))
return fail_import(
upload, "invalid_metadata", detail=detail, file_metadata=metadata_dump
)
check_mbid = preferences.get("music__only_allow_musicbrainz_tagged_files")
if check_mbid and not serializer.validated_data.get("mbid"):
return fail_import(
upload,
"Only content tagged with a MusicBrainz ID is permitted on this pod.",
detail="You can tag your files with MusicBrainz Picard",
)
final_metadata = collections.ChainMap(
upload_source, serializer.validated_data, internal_config
)
else:
final_metadata = collections.ChainMap(
upload_source,
forced_values,
internal_config,
)
try:
track = get_track_from_import_metadata(
final_metadata, attributed_to=upload.library.actor, **forced_values
)
except UploadImportError as e:
return fail_import(upload, e.code)
except Exception as e:
fail_import(upload, "unknown_error", e)
raise
broadcast = getter(
internal_config, "funkwhale", "config", "broadcast", default=True
)
@celery.app.task(name="ImportBatch.run") # under some situations, we want to skip the import (
@celery.require_instance(models.ImportBatch, "import_batch") # for instance if the user already owns the files)
def import_batch_run(import_batch): owned_duplicates = get_owned_duplicates(upload, track)
for job_id in import_batch.jobs.order_by("id").values_list("id", flat=True): upload.track = track
import_job_run.delay(import_job_id=job_id)
if owned_duplicates:
upload.import_status = "skipped"
upload.import_details = {
"code": "already_imported_in_owned_libraries",
# In order to avoid exponential growth of the database, we only
# reference the first known upload which gets duplicated
"duplicates": owned_duplicates[0],
}
upload.import_date = timezone.now()
upload.save(
update_fields=["import_details", "import_status", "import_date", "track"]
)
if broadcast:
signals.upload_import_status_updated.send_robust(
old_status=old_status,
new_status=upload.import_status,
upload=upload,
sender=None,
)
return
@celery.app.task(name="Lyrics.fetch_content") # all is good, let's finalize the import
@celery.require_instance(models.Lyrics, "lyrics") audio_data = upload.get_audio_data()
def fetch_content(lyrics): if audio_data:
html = lyrics_utils._get_html(lyrics.url) upload.duration = audio_data["duration"]
content = lyrics_utils.extract_content(html) upload.size = audio_data["size"]
cleaned_content = lyrics_utils.clean_content(content) upload.bitrate = audio_data["bitrate"]
lyrics.content = cleaned_content upload.import_status = "finished"
lyrics.save(update_fields=["content"]) upload.import_date = timezone.now()
upload.save(
update_fields=[
"track",
"import_status",
"import_date",
"size",
"duration",
"bitrate",
]
)
if channel:
common_utils.update_modification_date(channel.artist)
if update_denormalization:
models.TrackActor.create_entries(
library=upload.library,
upload_and_track_ids=[(upload.pk, upload.track_id)],
delete_existing=False,
)
@celery.app.task(name="music.import_batch_notify_followers") # update album cover, if needed
@celery.require_instance( if track.album and not track.album.attachment_cover:
models.ImportBatch.objects.filter(status="finished"), "import_batch" populate_album_cover(
track.album,
source=final_metadata.get("upload_source"),
) )
def import_batch_notify_followers(import_batch):
if not preferences.get("federation__enabled"):
return
if import_batch.source == "federation": if broadcast:
return signals.upload_import_status_updated.send_robust(
old_status=old_status,
new_status=upload.import_status,
upload=upload,
sender=None,
)
dispatch_outbox = (
False
if upload.library.privacy_level in ["me", "instance"]
else getter(
internal_config, "funkwhale", "config", "dispatch_outbox", default=True
)
)
if dispatch_outbox:
routes.outbox.dispatch(
{"type": "Create", "object": {"type": "Audio"}}, context={"upload": upload}
)
def get_cover(obj, field):
cover = obj.get(field)
if cover:
try:
url = cover["url"]
except KeyError:
url = cover["href"]
return {"mimetype": cover["mediaType"], "url": url}
def federation_audio_track_to_metadata(payload, references):
"""
Given a valid payload as returned by federation.serializers.TrackSerializer.validated_data,
returns a correct metadata payload for use with get_track_from_import_metadata.
"""
new_data = {
"title": payload["name"],
"position": payload.get("position") or 1,
"disc_number": payload.get("disc"),
"license": payload.get("license"),
"copyright": payload.get("copyright"),
"description": payload.get("description"),
"attributed_to": references.get(payload.get("attributedTo")),
"mbid": (
str(payload.get("musicbrainzId")) if payload.get("musicbrainzId") else None
),
"cover_data": get_cover(payload, "image"),
"album": {
"title": payload["album"]["name"],
"fdate": payload["album"]["published"],
"fid": payload["album"]["id"],
"description": payload["album"].get("description"),
"attributed_to": references.get(payload["album"].get("attributedTo")),
"mbid": (
str(payload["album"]["musicbrainzId"])
if payload["album"].get("musicbrainzId")
else None
),
"cover_data": get_cover(payload["album"], "image"),
"release_date": payload["album"].get("released"),
"tags": [t["name"] for t in payload["album"].get("tags", []) or []],
"artist_credit": [
{
"artist": {
"fid": a["artist"]["id"],
"name": a["artist"]["name"],
"fdate": a["artist"]["published"],
"cover_data": get_cover(a["artist"], "image"),
"description": a["artist"].get("description"),
"attributed_to": references.get(
a["artist"].get("attributedTo")
),
"mbid": (
str(a["artist"]["musicbrainzId"])
if a["artist"].get("musicbrainzId")
else None
),
"tags": [t["name"] for t in a["artist"].get("tags", []) or []],
},
"joinphrase": (a["joinphrase"] if "joinphrase" in a else ""),
"credit": a["credit"],
}
for a in payload["album"]["artist_credit"]
],
},
"artist_credit": [
{
"artist": {
"fid": a["artist"]["id"],
"name": a["artist"]["name"],
"fdate": a["artist"]["published"],
"description": a["artist"].get("description"),
"attributed_to": references.get(a["artist"].get("attributedTo")),
"mbid": (
str(a["artist"]["musicbrainzId"])
if a["artist"].get("musicbrainzId")
else None
),
"tags": [t["name"] for t in a["artist"].get("tags", []) or []],
"cover_data": get_cover(a["artist"], "image"),
},
"joinphrase": (a["joinphrase"] if "joinphrase" in a else ""),
"credit": a["credit"],
}
for a in payload["artist_credit"]
],
# federation
"fid": payload["id"],
"fdate": payload["published"],
"tags": [t["name"] for t in payload.get("tags", []) or []],
}
return new_data
def get_owned_duplicates(upload, track):
"""
Ensure we skip duplicate tracks to avoid wasting user/instance storage
"""
owned_libraries = upload.library.actor.libraries.all()
return (
models.Upload.objects.filter(
track__isnull=False, library__in=owned_libraries, track=track
)
.exclude(pk=upload.pk)
.values_list("uuid", flat=True)
.order_by("creation_date")
)
def get_best_candidate_or_create(model, query, defaults, sort_fields):
"""
Like queryset.get_or_create() but does not crash if multiple objects
are returned on the get() call
"""
candidates = model.objects.filter(query)
if candidates:
return sort_candidates(candidates, sort_fields)[0], False
return model.objects.create(**defaults), True
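# Usage sketch (hypothetical values): fetch or create an artist by name, preferring
# candidates that already carry an mbid or federation id when several rows match the query.
#
# artist, created = get_best_candidate_or_create(
#     models.Artist,
#     Q(name__iexact="Nina"),
#     defaults={"name": "Nina"},
#     sort_fields=["mbid", "fid"],
# )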
def sort_candidates(candidates, important_fields):
"""
Given a list of objects and a list of fields,
will return a sorted list of those objects by score.
Score is higher for objects that have a non-empty attribute
that is also present in important fields::
artist1 = Artist(mbid=None, fid=None)
artist2 = Artist(mbid="something", fid=None)
# artist2 has a mbid, so is sorted first
assert sort_candidates([artist1, artist2], ['mbid'])[0] == artist2
Only supports string fields.
"""
# map each field to its score, giving a higher score to the first fields
fields_scores = {f: i + 1 for i, f in enumerate(sorted(important_fields))}
candidates_with_scores = []
for candidate in candidates:
current_score = 0
for field, score in fields_scores.items():
v = getattr(candidate, field, "")
if v:
current_score += score
candidates_with_scores.append((candidate, current_score))
return [c for c, s in reversed(sorted(candidates_with_scores, key=lambda v: v[1]))]
@transaction.atomic
def get_track_from_import_metadata(
data, update_cover=False, attributed_to=None, query_mb=True, **forced_values
):
track = _get_track(
data, attributed_to=attributed_to, query_mb=query_mb, **forced_values
)
if update_cover and track and not track.album.attachment_cover:
populate_album_cover(track.album, source=data.get("upload_source"))
return track
def truncate(v, length):
if v is None:
return v
return v[:length]
def _get_track(data, attributed_to=None, query_mb=True, **forced_values):
sync_mb_tag = preferences.get("music__sync_musicbrainz_tags")
track_uuid = getter(data, "funkwhale", "track", "uuid")
logger.debug(f"Getting track from import metadata: {data}")
if track_uuid:
# easy case, we have a reference to a uuid of a track that
# already exists in our database
try:
track = models.Track.objects.get(uuid=track_uuid)
except models.Track.DoesNotExist:
raise UploadImportError(code="track_uuid_not_found")
return track
from_activity_id = data.get("from_activity_id", None)
track_mbid = (
forced_values["mbid"] if "mbid" in forced_values else data.get("mbid", None)
)
try:
album_mbid = getter(data, "album", "mbid")
except TypeError:
# album is forced
album_mbid = None
track_fid = getter(data, "fid")
query = None
if album_mbid and track_mbid:
query = Q(mbid=track_mbid, album__mbid=album_mbid)
if track_fid:
query = query | Q(fid=track_fid) if query else Q(fid=track_fid)
if query:
# second easy case: we have a (track_mbid, album_mbid) pair or
# a federation uuid we can check on
try:
return sort_candidates(models.Track.objects.filter(query), ["mbid", "fid"])[
0
]
except IndexError:
pass
# get / create artist, artist_credit
album_artists_credits = None
artist_credit_data = getter(data, "artist_credit", default=[])
if "artist" in forced_values:
artist = forced_values["artist"]
query = Q(artist=artist)
defaults = {
"artist": artist,
"joinphrase": "",
"credit": artist.name,
}
track_artist_credit, created = get_best_candidate_or_create(
models.ArtistCredit, query, defaults=defaults, sort_fields=["mbid", "fid"]
)
track_artists_credits = [track_artist_credit]
else:
mbid = query_mb and (data.get("musicbrainz_id", None) or data.get("mbid", None))
try:
track_artists_credits = get_or_create_artists_credits_from_musicbrainz(
"recording",
mbid,
attributed_to=attributed_to,
from_activity_id=from_activity_id,
)
except (NoMbid, ResponseError, NetworkError):
track_artists_credits = (
get_or_create_artists_credits_from_artist_credit_metadata(
artist_credit_data,
attributed_to=attributed_to,
from_activity_id=from_activity_id,
)
)
# get / create album artist, album artist_credit
if "album" in forced_values:
album = forced_values["album"]
album_artists_credits = track_artists_credits
else:
if album_artists_credits:
pass
mbid = query_mb and (data.get("musicbrainz_albumid", None) or album_mbid)
try:
album_artists_credits = get_or_create_artists_credits_from_musicbrainz(
"release",
mbid,
attributed_to=attributed_to,
from_activity_id=from_activity_id,
)
except (NoMbid, ResponseError, NetworkError):
if album_artists := getter(data, "album", "artist_credit", default=None):
album_artists_credits = (
get_or_create_artists_credits_from_artist_credit_metadata(
album_artists,
attributed_to=attributed_to,
from_activity_id=from_activity_id,
)
)
else:
album_artists_credits = track_artists_credits
# get / create album
if "album" in data:
album_data = data["album"]
album_title = album_data["title"]
album_fid = album_data.get("fid", None)
if album_mbid:
query = Q(mbid=album_mbid)
else:
query = Q(
title__iexact=album_title, artist_credit__in=album_artists_credits
)
if album_fid:
query |= Q(fid=album_fid)
defaults = {
"title": album_title,
"mbid": album_mbid,
"release_date": album_data.get("release_date"),
"fid": album_fid,
"from_activity_id": from_activity_id,
"attributed_to": album_data.get("attributed_to", attributed_to),
}
if album_data.get("fdate"):
defaults["creation_date"] = album_data.get("fdate")
album, created = get_best_candidate_or_create(
models.Album, query, defaults=defaults, sort_fields=["mbid", "fid"]
)
album.artist_credit.set(album_artists_credits)
if created:
tags_models.add_tags(album, *album_data.get("tags", []))
common_utils.attach_content(
album, "description", album_data.get("description")
)
common_utils.attach_file(
album, "attachment_cover", album_data.get("cover_data")
)
if sync_mb_tag and album_mbid:
tags_tasks.sync_fw_item_tag_with_musicbrainz_tags(album)
else:
album = None
# get / create track
track_title = forced_values["title"] if "title" in forced_values else data["title"]
position = (
forced_values["position"]
if "position" in forced_values
else data.get("position", 1)
)
disc_number = (
forced_values["disc_number"]
if "disc_number" in forced_values
else data.get("disc_number")
)
license = (
forced_values["license"]
if "license" in forced_values
else licenses.match(data.get("license"), data.get("copyright"))
)
copyright = (
forced_values["copyright"]
if "copyright" in forced_values
else data.get("copyright")
)
description = (
{"text": forced_values["description"], "content_type": "text/markdown"}
if "description" in forced_values
else data.get("description")
)
cover_data = (
forced_values["cover"] if "cover" in forced_values else data.get("cover_data")
)
query = Q(
title__iexact=track_title,
artist_credit__in=track_artists_credits,
album=album,
position=position,
disc_number=disc_number,
)
if track_mbid:
if album_mbid:
query |= Q(mbid=track_mbid, album__mbid=album_mbid)
else:
query |= Q(mbid=track_mbid)
if track_fid:
query |= Q(fid=track_fid)
defaults = {
"title": track_title,
"album": album,
"mbid": track_mbid,
"position": position,
"disc_number": disc_number,
"fid": track_fid,
"from_activity_id": from_activity_id,
"attributed_to": data.get("attributed_to", attributed_to),
"license": license,
"copyright": copyright,
}
if data.get("fdate"):
defaults["creation_date"] = data.get("fdate")
track, created = get_best_candidate_or_create(
models.Track, query, defaults=defaults, sort_fields=["mbid", "fid"]
)
if created:
tags = (
forced_values["tags"] if "tags" in forced_values else data.get("tags", [])
)
tags_models.add_tags(track, *tags)
common_utils.attach_content(track, "description", description)
common_utils.attach_file(track, "attachment_cover", cover_data)
if sync_mb_tag and track_mbid:
tags_tasks.sync_fw_item_tag_with_musicbrainz_tags(track)
track.artist_credit.set(track_artists_credits)
return track
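# Minimal metadata sketch accepted by get_track_from_import_metadata() (hypothetical values;
# real payloads come from metadata.TrackMetadataSerializer or
# federation_audio_track_to_metadata above):
#
# {
#     "title": "Intro",
#     "position": 1,
#     "album": {"title": "Demo tapes"},
#     "artist_credit": [{"artist": {"name": "Nina"}, "joinphrase": "", "credit": "Nina"}],
# }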
def get_or_create_artist_from_ac(ac_data, attributed_to, from_activity_id):
sync_mb_tag = preferences.get("music__sync_musicbrainz_tags")
mbid = ac_data.get("artist", {}).get("mbid", None)
fid = ac_data.get("artist", {}).get("fid", None)
name = ac_data.get("artist", {}).get("name", ac_data.get("credit", None))
creation_date = ac_data.get("artist", {}).get("fdate", timezone.now())
description = ac_data.get("artist", {}).get("description", None)
attributed_to = ac_data.get("artist", {}).get("attributed_to", attributed_to)
tags = ac_data.get("artist", {}).get("tags", [])
cover = ac_data.get("artist", {}).get("cover_data", None)
if mbid:
query = Q(mbid=mbid)
else:
query = Q(name__iexact=name)
if fid:
query |= Q(fid=fid)
defaults = {
"name": name,
"mbid": mbid,
"fid": fid,
"from_activity_id": from_activity_id,
"attributed_to": attributed_to,
"creation_date": creation_date,
}
if ac_data.get("fdate"):
defaults["creation_date"] = ac_data.get("fdate")
artist, created = get_best_candidate_or_create(
models.Artist, query, defaults=defaults, sort_fields=["mbid", "fid"]
)
if created:
tags_models.add_tags(artist, *tags)
common_utils.attach_content(artist, "description", description)
common_utils.attach_file(artist, "attachment_cover", cover)
if sync_mb_tag and mbid:
tags_tasks.sync_fw_item_tag_with_musicbrainz_tags(artist)
return artist
class NoMbid(Exception):
pass
def get_or_create_artists_credits_from_musicbrainz(
mb_obj_type, mbid, attributed_to, from_activity_id
):
if not mbid:
raise NoMbid
try:
if mb_obj_type == "release":
mb_obj = musicbrainz.api.releases.get(mbid, includes=["artists"])
elif mb_obj_type == "recording":
mb_obj = musicbrainz.api.recordings.get(mbid, includes=["artists"])
except (ResponseError, NetworkError) as e:
logger.warning(
f"Couldn't get Musicbrainz information for {mb_obj_type} with {mbid} mbid \
because of the following exception : {e}"
)
raise e
artists_credits = []
acs = mb_obj.get("recording", mb_obj)["artist-credit"]
logger.debug(f"MusicBrainz responded with : {mb_obj}")
for i, ac in enumerate(acs):
if isinstance(ac, str):
continue
artist_name = ac["artist"]["name"]
joinphrase = ac["joinphrase"]
# MusicBrainz uses "name" instead of "credit" and "id" instead of "mbid"
credit = ac.get("name", ac.get("credit", artist_name))
ac["credit"] = credit
ac["artist"]["mbid"] = ac["artist"]["id"]
artist = get_or_create_artist_from_ac(ac, attributed_to, from_activity_id)
# artist_credit creation
defaults = {
"artist": artist,
"joinphrase": joinphrase,
"credit": credit,
"index": i,
}
query = (
Q(artist=artist.pk)
& Q(joinphrase=joinphrase)
& Q(credit=credit)
& Q(index=i)
)
artist_credit, created = get_best_candidate_or_create(
models.ArtistCredit, query, defaults=defaults, sort_fields=["mbid", "fid"]
)
artists_credits.append(artist_credit)
return artists_credits
def parse_credits(artist_string, forced_joinphrase, forced_index, forced_artist=None):
"""
Return a list of parsed artist_credit information from a string like:
LoveDiversity featuring Hatingprisons
"""
if not artist_string:
return []
join_phrase = preferences.get("music__join_phrases")
join_phrase_regex = re.compile(rf"({join_phrase})", re.IGNORECASE)
split = re.split(join_phrase_regex, artist_string)
raw_artists_credits = tuple(zip(split[0::2], split[1::2]))
artists_credits_tuple = []
for index, raw_artist_credit in enumerate(raw_artists_credits):
credit = raw_artist_credit[0].strip()
if forced_joinphrase:
join_phrase = forced_joinphrase
else:
join_phrase = raw_artist_credit[1]
if join_phrase == "( " or join_phrase == ") ":
join_phrase = join_phrase.strip()
artists_credits_tuple.append(
(
credit,
join_phrase,
(index if not forced_index else forced_index),
forced_artist,
)
)
# odd-length split: handle a trailing leftover credit
if len(split) % 2 != 0 and split[len(split) - 1] != "" and len(split) > 1:
artists_credits_tuple.append(
(
str(split[len(split) - 1]).rstrip(),
("" if not forced_joinphrase else forced_joinphrase),
(len(artists_credits_tuple) if not forced_index else forced_index),
forced_artist,
)
)
# if "name" is empty or didn't split
if not raw_artists_credits:
credit = forced_artist.name if forced_artist else artist_string
artists_credits_tuple.append(
(
credit,
("" if not forced_joinphrase else forced_joinphrase),
(0 if not forced_index else forced_index),
forced_artist,
)
)
return artists_credits_tuple
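# Illustrative behaviour, assuming "featuring" is covered by the music__join_phrases
# preference: parse_credits("LoveDiversity featuring Hatingprisons", None, None) yields two
# credit tuples, roughly ("LoveDiversity", "featuring", 0, None) followed by
# ("Hatingprisons", "", 1, None).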
def get_or_create_artists_credits_from_artist_credit_metadata(
artists_credits_data, attributed_to, from_activity_id
):
artists_credits = []
for i, ac in enumerate(artists_credits_data):
ac["artist"] = get_or_create_artist_from_ac(ac, attributed_to, from_activity_id)
ac["index"] = ac.get("index", i)
credit = ac.get("credit", ac["artist"].name)
query = (
Q(artist=ac["artist"])
& Q(credit=credit)
& Q(joinphrase=ac["joinphrase"])
& Q(index=ac.get("index", i))
)
artist_credit, created = get_best_candidate_or_create(
models.ArtistCredit, query, ac, ["artist", "credit", "joinphrase"]
)
artists_credits.append(artist_credit)
return artists_credits
@receiver(signals.upload_import_status_updated)
def broadcast_import_status_update_to_owner(old_status, new_status, upload, **kwargs):
user = upload.library.actor.get_user()
if not user:
return
from . import serializers
group = f"user.{user.pk}.imports"
channels.group_send(
group,
{
"type": "event.send",
"text": "",
"data": {
"type": "import.status_updated",
"upload": serializers.UploadForOwnerSerializer(upload).data,
"old_status": old_status,
"new_status": new_status,
},
},
)
@celery.app.task(name="music.clean_transcoding_cache")
def clean_transcoding_cache():
delay = preferences.get("music__transcoding_cache_duration")
if delay < 1:
return # cache clearing disabled
limit = timezone.now() - datetime.timedelta(minutes=delay)
candidates = (
models.UploadVersion.objects.filter(
Q(accessed_date__lt=limit) | Q(accessed_date=None)
)
.only("audio_file", "id")
.order_by("id")
)
return candidates.delete()
@celery.app.task(name="music.albums_set_tags_from_tracks")
@transaction.atomic
def albums_set_tags_from_tracks(ids=None, dry_run=False):
qs = models.Album.objects.filter(tagged_items__isnull=True).order_by("id")
qs = federation_utils.local_qs(qs)
qs = qs.values_list("id", flat=True)
if ids is not None:
qs = qs.filter(pk__in=ids)
data = tags_tasks.get_tags_from_foreign_key(
ids=qs,
foreign_key_model=models.Track,
foreign_key_attr="albums",
)
logger.info("Found automatic tags for %s albums…", len(data))
if dry_run:
logger.info("Running in dry-run mode, not committing")
return
tags_tasks.add_tags_batch(
data,
model=models.Album,
)
return data
@celery.app.task(name="music.artists_set_tags_from_tracks")
@transaction.atomic
def artists_set_tags_from_tracks(ids=None, dry_run=False):
qs = models.Artist.objects.filter(tagged_items__isnull=True).order_by("id")
qs = federation_utils.local_qs(qs)
qs = qs.values_list("id", flat=True)
if ids is not None:
qs = qs.filter(pk__in=ids)
data = tags_tasks.get_tags_from_foreign_key(
ids=qs,
foreign_key_model=models.Track,
foreign_key_attr="artist",
)
logger.info("Found automatic tags for %s artists…", len(data))
if dry_run:
logger.info("Running in dry-run mode, not committing")
return
tags_tasks.add_tags_batch(
data,
model=models.Artist,
)
return data
def get_prunable_tracks(
exclude_favorites=True, exclude_playlists=True, exclude_listenings=True
):
"""
Returns a list of tracks with no associated uploads,
excluding the ones that were listened to, favorited, or included in playlists.
"""
purgeable_tracks_with_upload = (
models.Upload.objects.exclude(track=None)
.filter(import_status="skipped")
.values("track")
)
queryset = models.Track.objects.all()
queryset = queryset.filter(
Q(uploads__isnull=True) | Q(pk__in=purgeable_tracks_with_upload)
)
if exclude_favorites:
queryset = queryset.filter(track_favorites__isnull=True)
if exclude_playlists:
queryset = queryset.filter(playlist_tracks__isnull=True)
if exclude_listenings:
queryset = queryset.filter(listenings__isnull=True)
return queryset
def get_prunable_albums():
return models.Album.objects.filter(tracks__isnull=True)
def get_prunable_artists():
return models.Artist.objects.filter(artist_credit__isnull=True)
def update_library_entity(obj, data):
"""
Given an obj and some updated fields, persist the changes on the obj
and also check whether the entity needs to be aliased with existing objs (i.e.
if an mbid was added on the obj and matches another entity with the same mbid)
"""
for key, value in data.items():
setattr(obj, key, value)
# Todo: handle integrity error on unique fields (such as MBID)
obj.save(update_fields=list(data.keys()))
return obj
UPDATE_CONFIG = {
"track": {
"position": {},
"title": {},
"mbid": {},
"disc_number": {},
"copyright": {},
"license": {
"getter": lambda data, field: licenses.match(
data.get("license"), data.get("copyright")
)
},
},
"artists": {},
"album": {"title": {}, "mbid": {}, "release_date": {}},
"album_artist": {"name": {}, "mbid": {}},
}
@transaction.atomic
def update_track_metadata(audio_metadata, track):
serializer = metadata.TrackMetadataSerializer(data=audio_metadata)
serializer.is_valid(raise_exception=True)
new_data = serializer.validated_data
to_update = [
("track", track, lambda data: data),
("album", track.album, lambda data: data["album"]),
(
"artist_credit",
track.artist_credit.all(),
lambda data: data["artist_credit"],
),
(
"album_artist",
track.album.artist_credit.all() if track.album else None,
lambda data: data["album"]["artist_credit"],
),
]
for id, obj, data_getter in to_update:
if not obj:
continue
obj_updated_fields = []
try:
obj_data = data_getter(new_data)
except IndexError:
continue
if id == "artist_credit":
if new_data.get("mbid", False):
logger.warning(
"If a track mbid is provided, it will be use to generate artist_credit \
information. If you want to set a custom artist_credit you nee to remove the track mbid"
)
track_artists_credits = get_or_create_artists_credits_from_musicbrainz(
"recording", new_data.get("mbid"), None, None
)
else:
track_artists_credits = (
get_or_create_artists_credits_from_artist_credit_metadata(
obj_data,
None,
None,
)
)
if track_artists_credits == obj:
continue
track.artist_credit.set(track_artists_credits)
continue
if id == "album_artist":
if new_data["album"].get("mbid", False):
logger.warning(
"If a album mbid is provided, it will be use to generate album artist_credit \
information. If you want to set a custom artist_credit you nee to remove the track mbid"
)
album_artists_credits = get_or_create_artists_credits_from_musicbrainz(
"release", new_data["album"].get("mbid"), None, None
)
else:
album_artists_credits = (
get_or_create_artists_credits_from_artist_credit_metadata(
obj_data,
None,
None,
)
)
if album_artists_credits == obj:
continue
track.album.artist_credit.set(album_artists_credits)
continue
for field, config in UPDATE_CONFIG[id].items():
getter = config.get(
"getter", lambda data, field: data[config.get("field", field)]
)
try:
new_value = getter(obj_data, field)
except KeyError:
continue
old_value = getattr(obj, field)
if new_value == old_value:
continue
obj_updated_fields.append(field)
setattr(obj, field, new_value)
if obj_updated_fields:
obj.save(update_fields=obj_updated_fields)
tags_models.set_tags(track, *new_data.get("tags", []))
if track.album and "album" in new_data and new_data["album"].get("cover_data"):
common_utils.attach_file(
track.album, "attachment_cover", new_data["album"].get("cover_data")
)
@celery.app.task(name="music.fs_import")
@celery.require_instance(models.Library.objects.all(), "library")
def fs_import(
library,
path,
import_reference,
prune=True,
outbox=False,
broadcast=False,
replace=False,
batch_size=1000,
verbosity=1,
):
if cache.get("fs-import:status") != "pending":
raise ValueError("Invalid import status")
command = import_files.Command()
options = {
"recursive": True,
"library_id": str(library.uuid),
"path": [os.path.join(settings.MUSIC_DIRECTORY_PATH, path)],
"update_cache": True,
"in_place": True,
"reference": import_reference,
"watch": False,
"interactive": False,
"batch_size": batch_size,
"async_": False,
"prune": prune,
"replace": replace,
"verbosity": verbosity,
"exit_on_failure": False,
"outbox": outbox,
"broadcast": broadcast,
}
command.handle(**options)
import mimetypes
import os
import pathlib
import magic
import mutagen
from django.conf import settings
from django.core.cache import cache
from django.db.models import F
from ffmpeg import FFmpeg
from funkwhale_api.common import throttling
from funkwhale_api.common.search import get_fts_query  # noqa
from funkwhale_api.common.search import get_query  # noqa
from funkwhale_api.common.search import normalize_query  # noqa
def guess_mimetype(f):
b = min(1000000, f.size)
t = magic.from_buffer(f.read(b), mime=True)
if not t.startswith("audio/"):
t = guess_mimetype_from_name(f.name)
return t
def guess_mimetype_from_name(name):
# failure, we try guessing by extension
mt, _ = mimetypes.guess_type(name)
if mt:
t = mt
else:
t = EXTENSION_TO_MIMETYPE.get(name.split(".")[-1])
return t
AUDIO_EXTENSIONS_AND_MIMETYPE = [
# keep the most correct mimetype for each extension at the bottom
("mp3", "audio/mp3"),
("mp3", "audio/mpeg3"),
("mp3", "audio/x-mp3"),
("mp3", "audio/mpeg"),
("ogg", "video/ogg"),
("ogg", "audio/ogg"),
("opus", "audio/opus"),
("aac", "audio/x-m4a"),
("m4a", "audio/x-m4a"),
("m4a", "audio/m4a"),
("flac", "audio/x-flac"),
("flac", "audio/flac"),
("aif", "audio/aiff"),
("aif", "audio/x-aiff"),
("aiff", "audio/aiff"),
("aiff", "audio/x-aiff"),
]
EXTENSION_TO_MIMETYPE = {ext: mt for ext, mt in AUDIO_EXTENSIONS_AND_MIMETYPE}
MIMETYPE_TO_EXTENSION = {mt: ext for ext, mt in AUDIO_EXTENSIONS_AND_MIMETYPE}
SUPPORTED_EXTENSIONS = list(sorted({ext for ext, _ in AUDIO_EXTENSIONS_AND_MIMETYPE}))
def get_extension_to_mimetype_dict():
extension_dict = {}
for ext, mimetype in AUDIO_EXTENSIONS_AND_MIMETYPE:
if ext not in extension_dict:
extension_dict[ext] = []
extension_dict[ext].append(mimetype)
return extension_dict
def get_ext_from_type(mimetype):
return MIMETYPE_TO_EXTENSION.get(mimetype)
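# Quick sanity sketch for the mappings above: since later entries win when the dicts are
# built, EXTENSION_TO_MIMETYPE["mp3"] resolves to "audio/mpeg" (the "most correct" mimetype
# listed last) and MIMETYPE_TO_EXTENSION["audio/x-m4a"] resolves to "m4a".
# >>> get_ext_from_type("audio/x-flac")
# 'flac'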
if not data:
return
d = {}
d["bitrate"] = getattr(data.info, "bitrate", 0)
d["length"] = data.info.length
return d
def get_actor_from_request(request):
actor = None
if hasattr(request, "actor"):
actor = request.actor
elif request.user.is_authenticated:
actor = request.user.actor
return actor
def transcode_audio(audio_file_path, output_path, output_format="mp3", **kwargs):
FFmpeg().input(audio_file_path).output(
output_path, format=output_format, **kwargs
).option("y").execute()
def increment_downloads_count(upload, user, wsgi_request):
ident = throttling.get_ident(user=user, request=wsgi_request)
cache_key = "downloads_count:upload-{}:{}-{}".format(
upload.pk, ident["type"], ident["id"]
)
value = cache.get(cache_key)
if value:
# download already tracked
return
upload.downloads_count = F("downloads_count") + 1
upload.track.downloads_count = F("downloads_count") + 1
upload.save(update_fields=["downloads_count"])
upload.track.save(update_fields=["downloads_count"])
duration = max(upload.duration or 0, settings.MIN_DELAY_BETWEEN_DOWNLOADS_COUNT)
cache.set(cache_key, 1, duration)
def browse_dir(root, path):
if ".." in path:
raise ValueError("Relative browsing is not allowed")
root = pathlib.Path(root)
real_path = root / path
dirs = []
files = []
for el in sorted(os.listdir(real_path)):
if os.path.isdir(real_path / el):
dirs.append({"name": el, "dir": True})
else:
files.append({"name": el, "dir": False})
return dirs + files
def get_artist_credit_string(obj):
final_credit = ""
for ac in obj.artist_credit.all():
final_credit = final_credit + ac.credit + ac.joinphrase
return final_credit
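# Example sketch (hypothetical data): with two ArtistCredit rows credit="Nina",
# joinphrase=" feat. " and credit="Jun", joinphrase="", get_artist_credit_string(track)
# returns "Nina feat. Jun".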