Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 655 additions and 150 deletions
@@ -9,15 +9,13 @@ from funkwhale_api.users.models import User, create_actor
def main(command, **kwargs):
qs = User.objects.filter(actor__isnull=True).order_by("username")
total = len(qs)
command.stdout.write("{} users found without actors".format(total))
command.stdout.write(f"{total} users found without actors")
for i, user in enumerate(qs):
command.stdout.write(
"{}/{} creating actor for {}".format(i + 1, total, user.username)
)
command.stdout.write(f"{i + 1}/{total} creating actor for {user.username}")
try:
user.actor = create_actor(user)
except IntegrityError as e:
# somehow, an actor with the same url already exists in the database
command.stderr.write("Error while creating actor: {}".format(str(e)))
command.stderr.write(f"Error while creating actor: {str(e)}")
continue
user.save(update_fields=["actor"])
@@ -6,7 +6,6 @@ from versatileimagefield.image_warmer import VersatileImageFieldWarmer
from funkwhale_api.common.models import Attachment
MODELS = [
(Attachment, "file", "attachment_square"),
]
@@ -14,7 +13,7 @@ MODELS = [
def main(command, **kwargs):
for model, attribute, key_set in MODELS:
qs = model.objects.exclude(**{"{}__isnull".format(attribute): True})
qs = model.objects.exclude(**{f"{attribute}__isnull": True})
qs = qs.exclude(**{attribute: ""})
warmer = VersatileImageFieldWarmer(
instance_or_queryset=qs,
@@ -22,10 +21,8 @@ def main(command, **kwargs):
image_attr=attribute,
verbose=True,
)
command.stdout.write(
"Creating images for {} / {}".format(model.__name__, attribute)
)
command.stdout.write(f"Creating images for {model.__name__} / {attribute}")
num_created, failed_to_create = warmer.warm()
command.stdout.write(
" {} created, {} in error".format(num_created, len(failed_to_create))
f" {num_created} created, {len(failed_to_create)} in error"
)
@@ -10,5 +10,5 @@ def main(command, **kwargs):
source__startswith="http", source__contains="/federation/music/file/"
).exclude(source__contains="youtube")
total = queryset.count()
command.stdout.write("{} uploads found".format(total))
command.stdout.write(f"{total} uploads found")
queryset.delete()
@@ -23,6 +23,6 @@ def main(command, **kwargs):
total = users.count()
command.stdout.write(
"Updating {} users with {} permission...".format(total, user_permission)
f"Updating {total} users with {user_permission} permission..."
)
users.update(**{"permission_{}".format(user_permission): True})
users.update(**{f"permission_{user_permission}": True})
@@ -12,12 +12,12 @@ This command will also generate federation ids for existing resources.
"""
from django.conf import settings
from django.db.models import functions, CharField, F, Value
from django.db.models import CharField, F, Value, functions
from funkwhale_api.common import preferences
from funkwhale_api.federation import models as federation_models
from funkwhale_api.music import models
from funkwhale_api.users.models import User
from funkwhale_api.federation import models as federation_models
from funkwhale_api.common import preferences
def create_libraries(open_api, stdout):
@@ -36,9 +36,7 @@ def create_libraries(open_api, stdout):
)
libraries_by_user[library.actor.user.pk] = library.pk
if created:
stdout.write(
" * Created library {} for user {}".format(library.pk, a.user.pk)
)
stdout.write(f" * Created library {library.pk} for user {a.user.pk}")
else:
stdout.write(
" * Found existing library {} for user {}".format(
@@ -60,13 +58,9 @@ def update_uploads(libraries_by_user, stdout):
)
total = candidates.update(library=library_id, import_status="finished")
if total:
stdout.write(
" * Assigned {} uploads to user {}'s library".format(total, user_id)
)
stdout.write(f" * Assigned {total} uploads to user {user_id}'s library")
else:
stdout.write(
" * No uploads to assign to user {}'s library".format(user_id)
)
stdout.write(f" * No uploads to assign to user {user_id}'s library")
def update_orphan_uploads(open_api, stdout):
@@ -105,14 +99,12 @@ def update_orphan_uploads(open_api, stdout):
def set_fid(queryset, path, stdout):
model = queryset.model._meta.label
qs = queryset.filter(fid=None)
base_url = "{}{}".format(settings.FUNKWHALE_URL, path)
stdout.write(
"* Assigning federation ids to {} entries (path: {})".format(model, base_url)
)
base_url = f"{settings.FUNKWHALE_URL}{path}"
stdout.write(f"* Assigning federation ids to {model} entries (path: {base_url})")
new_fid = functions.Concat(Value(base_url), F("uuid"), output_field=CharField())
total = qs.update(fid=new_fid)
stdout.write(" * {} entries updated".format(total))
stdout.write(f" * {total} entries updated")
def update_shared_inbox_url(stdout):
@@ -123,16 +115,16 @@ def update_shared_inbox_url(stdout):
def generate_actor_urls(part, stdout):
field = "{}_url".format(part)
stdout.write("* Update {} for local actors...".format(field))
field = f"{part}_url"
stdout.write(f"* Update {field} for local actors...")
queryset = federation_models.Actor.objects.local().filter(**{field: None})
base_url = "{}/federation/actors/".format(settings.FUNKWHALE_URL)
base_url = f"{settings.FUNKWHALE_URL}/federation/actors/"
new_field = functions.Concat(
Value(base_url),
F("preferred_username"),
Value("/{}".format(part)),
Value(f"/{part}"),
output_field=CharField(),
)
@@ -5,14 +5,13 @@ from django.db.models import Q
from . import utils
QUERY_REGEX = re.compile(r'(((?P<key>\w+):)?(?P<value>"[^"]+"|[\S]+))')
def parse_query(query):
"""
Given a search query such as "hello is:issue status:opened",
returns a list of dictionnaries discribing each query token
returns a list of dictionaries describing each query token
"""
matches = [m.groupdict() for m in QUERY_REGEX.finditer(query.lower())]
for m in matches:
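For illustration, a hedged sketch of what the regex yields per token, before the cleanup applied inside the loop (which is elided in this hunk):

parse_query('hello is:issue status:opened')
# groupdicts from QUERY_REGEX:
# [{'key': None, 'value': 'hello'},
#  {'key': 'is', 'value': 'issue'},
#  {'key': 'status', 'value': 'opened'}]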
@@ -26,7 +25,7 @@ def normalize_query(
findterms=re.compile(r'"([^"]+)"|(\S+)').findall,
normspace=re.compile(r"\s{2,}").sub,
):
""" Splits the query string in invidual keywords, getting rid of unecessary spaces
"""Splits the query string in individual keywords, getting rid of unnecessary spaces
and grouping quoted words together.
Example:
@@ -59,13 +58,21 @@ def get_query(query_string, search_fields):
return query
def remove_chars(string, chars):
for char in chars:
string = string.replace(char, "")
return string
def get_fts_query(query_string, fts_fields=["body_text"], model=None):
search_type = "raw"
if query_string.startswith('"') and query_string.endswith('"'):
# we pass the query directly to the FTS engine
query_string = query_string[1:-1]
else:
query_string = remove_chars(query_string, ['"', "&", "(", ")", "!", "'"])
parts = query_string.replace(":", "").split(" ")
parts = ["{}:*".format(p) for p in parts if p]
parts = [f"{p}:*" for p in parts if p]
if not parts:
return Q(pk=None)
@@ -86,16 +93,16 @@ def get_fts_query(query_string, fts_fields=["body_text"], model=None):
subquery = related_model.objects.filter(
**{
lookup: SearchQuery(
query_string, search_type="raw", config="english_nostop"
query_string, search_type=search_type, config="english_nostop"
)
}
).values_list("pk", flat=True)
new_query = Q(**{"{}__in".format(fk_field_name): list(subquery)})
new_query = Q(**{f"{fk_field_name}__in": list(subquery)})
else:
new_query = Q(
**{
field: SearchQuery(
query_string, search_type="raw", config="english_nostop"
query_string, search_type=search_type, config="english_nostop"
)
}
)
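For illustration, a minimal trace of the sanitising branch above (names from this file; a fully quoted query instead skips this and is passed to the FTS engine as-is):

query_string = remove_chars('fun(k) whale!', ['"', "&", "(", ")", "!", "'"])  # 'funk whale'
parts = [f"{p}:*" for p in query_string.replace(":", "").split(" ") if p]     # ['funk:*', 'whale:*']
# the ':*' suffix makes each term prefix-match in PostgreSQL full-text search,
# so 'funk:*' also matches 'funkwhale'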
@@ -173,7 +180,7 @@ class SearchConfig:
except KeyError:
# no cleaning to apply
value = token["value"]
q = Q(**{"{}__icontains".format(to): value})
q = Q(**{f"{to}__icontains": value})
if not specific_field_query:
specific_field_query = q
else:
import collections
import io
import PIL
import os
from rest_framework import serializers
import PIL
from django.core.exceptions import ObjectDoesNotExist
from django.core.files.uploadedfile import SimpleUploadedFile
from django.utils.encoding import smart_text
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_str
from django.utils.translation import gettext_lazy as _
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers
from . import models
from . import utils
from . import models, utils
class RelatedField(serializers.RelatedField):
@@ -52,7 +52,7 @@ class RelatedField(serializers.RelatedField):
self.fail(
"does_not_exist",
related_field_name=self.related_field_name,
value=smart_text(data),
value=smart_str(data),
)
except (TypeError, ValueError):
self.fail("invalid")
@@ -82,14 +82,14 @@ class RelatedField(serializers.RelatedField):
)
class Action(object):
class Action:
def __init__(self, name, allow_all=False, qs_filter=None):
self.name = name
self.allow_all = allow_all
self.qs_filter = qs_filter
def __repr__(self):
return "<Action {}>".format(self.name)
return f"<Action {self.name}>"
class ActionSerializer(serializers.Serializer):
@@ -113,7 +113,7 @@ class ActionSerializer(serializers.Serializer):
)
for action in self.actions_by_name.keys():
handler_name = "handle_{}".format(action)
handler_name = f"handle_{action}"
assert hasattr(self, handler_name), "{} is missing a {} method".format(
self.__class__.__name__, handler_name
)
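A minimal sketch of the convention this assert enforces, assuming subclasses declare their actions up front (serializer and action names here are hypothetical):

class NoteActionSerializer(ActionSerializer):
    actions = [Action("delete", allow_all=True)]

    # one handle_<name> method is required per declared action
    def handle_delete(self, objects):
        return objects.delete()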
@@ -133,9 +133,9 @@ class ActionSerializer(serializers.Serializer):
if value == "all":
return self.queryset.all().order_by("id")
if type(value) in [list, tuple]:
return self.queryset.filter(
**{"{}__in".format(self.pk_field): value}
).order_by(self.pk_field)
return self.queryset.filter(**{f"{self.pk_field}__in": value}).order_by(
self.pk_field
)
raise serializers.ValidationError(
"{} is not a valid value for objects. You must provide either a "
@@ -270,6 +270,7 @@ class APIMutationSerializer(serializers.ModelSerializer):
"previous_state",
]
@extend_schema_field(OpenApiTypes.OBJECT)
def get_target(self, obj):
target = obj.target
if not target:
@@ -280,7 +281,7 @@
def validate_type(self, value):
if value not in self.context["registry"]:
raise serializers.ValidationError("Invalid mutation type {}".format(value))
raise serializers.ValidationError(f"Invalid mutation type {value}")
return value
@@ -292,27 +293,26 @@ class AttachmentSerializer(serializers.Serializer):
file = StripExifImageField(write_only=True)
urls = serializers.SerializerMethodField()
@extend_schema_field(
{
"type": "object",
"properties": {
"original": {"type": "string"},
"small_square_crop": {"type": "string"},
"medium_square_crop": {"type": "string"},
"large_square_crop": {"type": "string"},
},
}
)
def get_urls(self, o):
urls = {}
urls["source"] = o.url
urls["original"] = o.download_url_original
urls["small_square_crop"] = o.download_url_small_square_crop
urls["medium_square_crop"] = o.download_url_medium_square_crop
urls["large_square_crop"] = o.download_url_large_square_crop
return urls
def to_representation(self, o):
repr = super().to_representation(o)
# XXX: BACKWARD COMPATIBILITY
# having the attachment urls in a nested JSON obj is better,
# but we can't do this without breaking clients
# So we extract the urls and include these in the parent payload
repr.update({k: v for k, v in repr["urls"].items() if k != "source"})
# also, our legacy images had lots of variations (400x400, 200x200, 50x50)
# but we removed some of these, so we emulate these by hand (by redirecting)
# to actual, existing attachment variations
repr["square_crop"] = repr["medium_square_crop"]
repr["small_square_crop"] = repr["medium_square_crop"]
return repr
def create(self, validated_data):
return models.Attachment.objects.create(
file=validated_data["file"], actor=validated_data["actor"]
@@ -320,15 +320,21 @@
class ContentSerializer(serializers.Serializer):
text = serializers.CharField(max_length=models.CONTENT_TEXT_MAX_LENGTH)
content_type = serializers.ChoiceField(choices=models.CONTENT_TEXT_SUPPORTED_TYPES,)
text = serializers.CharField(
max_length=models.CONTENT_TEXT_MAX_LENGTH,
allow_null=True,
allow_blank=True,
)
content_type = serializers.ChoiceField(
choices=models.CONTENT_TEXT_SUPPORTED_TYPES,
)
html = serializers.SerializerMethodField()
def get_html(self, o):
def get_html(self, o) -> str:
return utils.render_html(o.text, o.content_type)
class NullToEmptDict(object):
class NullToEmptDict:
def get_attribute(self, o):
attr = super().get_attribute(o)
if attr is None:
@@ -339,3 +345,36 @@ class NullToEmptDict(object):
if not v:
return v
return super().to_representation(v)
class ScopesSerializer(serializers.Serializer):
id = serializers.CharField()
rate = serializers.CharField()
description = serializers.CharField()
limit = serializers.IntegerField()
duration = serializers.IntegerField()
remaining = serializers.IntegerField()
available = serializers.IntegerField()
available_seconds = serializers.IntegerField()
reset = serializers.IntegerField()
reset_seconds = serializers.IntegerField()
class IdentSerializer(serializers.Serializer):
type = serializers.CharField()
id = serializers.CharField()
class RateLimitSerializer(serializers.Serializer):
enabled = serializers.BooleanField()
ident = IdentSerializer()
scopes = serializers.ListField(child=ScopesSerializer())
class ErrorDetailSerializer(serializers.Serializer):
detail = serializers.CharField(source="*")
class TextPreviewSerializer(serializers.Serializer):
rendered = serializers.CharField(read_only=True, source="*")
text = serializers.CharField(write_only=True)
import django.dispatch
mutation_created = django.dispatch.Signal(providing_args=["mutation"])
mutation_updated = django.dispatch.Signal(
providing_args=["mutation", "old_is_approved", "new_is_approved"]
)
""" Required args: mutation """
mutation_created = django.dispatch.Signal()
""" Required args: mutation, old_is_approved, new_is_approved """
mutation_updated = django.dispatch.Signal()
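Django 4.0 removed the providing_args argument, so the expected kwargs are now documented in the docstrings above instead; a minimal usage sketch (receiver name hypothetical):

def on_mutation_created(sender, mutation, **kwargs):
    ...  # react to the newly created mutation

mutation_created.connect(on_mutation_created)
mutation_created.send(sender=None, mutation=some_mutation)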
import slugify
import os
import shutil
import slugify
from django.core.files.storage import FileSystemStorage
from storages.backends.s3boto3 import S3Boto3Storage
@@ -15,6 +17,16 @@ class ASCIIFileSystemStorage(FileSystemStorage):
def get_valid_name(self, name):
return super().get_valid_name(asciionly(name))
def force_delete(self, name):
path = self.path(name)
try:
if os.path.isdir(path):
shutil.rmtree(path)
else:
return super().delete(name)
except FileNotFoundError:
pass
class ASCIIS3Boto3Storage(S3Boto3Storage):
def get_valid_name(self, name):
@@ -11,10 +11,7 @@ from django.utils import timezone
from funkwhale_api.common import channels
from funkwhale_api.taskapp import celery
from . import models
from . import serializers
from . import session
from . import signals
from . import models, serializers, session, signals
logger = logging.getLogger(__name__)
import collections
from django.conf import settings
from django.core.cache import cache
from rest_framework import throttling as rest_throttling
from django.conf import settings
def get_ident(user, request):
if user and user.is_authenticated:
return {"type": "authenticated", "id": user.pk}
return {"type": "authenticated", "id": f"{user.pk}"}
ident = rest_throttling.BaseThrottle().get_ident(request)
return {"type": "anonymous", "id": ident}
import datetime
import hashlib
from django.core.files.base import ContentFile
from django.http import request
from django.utils.deconstruct import deconstructible
import bleach.sanitizer
import logging
import markdown
import os
import shutil
import uuid
import xml.etree.ElementTree as ET
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit
from django.conf import settings
import bleach.sanitizer
import markdown
from django import urls
from django.conf import settings
from django.core.files.base import ContentFile
from django.db import models, transaction
from django.http import request
from django.utils import timezone
from django.utils.deconstruct import deconstructible
logger = logging.getLogger(__name__)
@@ -39,7 +36,7 @@ def rename_file(instance, field_name, new_name, allow_missing_file=False):
field = getattr(instance, field_name)
current_name, extension = os.path.splitext(field.name)
new_name_with_extension = "{}{}".format(new_name, extension)
new_name_with_extension = f"{new_name}{extension}"
try:
shutil.move(field.path, new_name_with_extension)
except FileNotFoundError:
@@ -74,12 +71,16 @@ def set_query_parameter(url, **kwargs):
@deconstructible
class ChunkedPath(object):
class ChunkedPath:
def sanitize_filename(self, filename):
return filename.replace("/", "-")
def __init__(self, root, preserve_file_name=True):
self.root = root
self.preserve_file_name = preserve_file_name
def __call__(self, instance, filename):
filename = self.sanitize_filename(filename)  # keep the sanitised name ("/" replaced by "-")
uid = str(uuid.uuid4())
chunk_size = 2
chunks = [uid[i : i + chunk_size] for i in range(0, len(uid), chunk_size)]
@@ -87,7 +88,7 @@ class ChunkedPath(object):
parts = chunks[:3] + [filename]
else:
ext = os.path.splitext(filename)[1][1:].lower()
new_filename = "".join(chunks[3:]) + ".{}".format(ext)
new_filename = "".join(chunks[3:]) + f".{ext}"
parts = chunks[:3] + [new_filename]
return os.path.join(self.root, *parts)
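A hedged trace of the chunking (the uuid is random in practice; this one is made up):

cp = ChunkedPath("attachments")  # preserve_file_name=True by default
cp(None, "cover.jpg")
# uid "2b6f01c4-..." → 2-char chunks ["2b", "6f", "01", "c4", ...]
# → "attachments/2b/6f/01/cover.jpg"
# with preserve_file_name=False the original name is dropped: the remaining
# chunks plus the lowercased extension become the file name instead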
@@ -137,7 +138,7 @@ def chunk_queryset(source_qs, chunk_size):
def join_url(start, end):
if end.startswith("http://") or end.startswith("https://"):
# alread a full URL, joining makes no sense
# already a full URL, joining makes no sense
return end
if start.endswith("/") and end.startswith("/"):
return start + end[1:]
@@ -183,7 +184,7 @@ def order_for_search(qs, field):
When searching, it's often more useful to have short results first,
this function will order the given qs based on the length of the given field
"""
return qs.annotate(__size=models.functions.Length(field)).order_by("__size")
return qs.annotate(__size=models.functions.Length(field)).order_by("__size", "pk")
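For instance (model name hypothetical):

order_for_search(Artist.objects.all(), "name")
# annotates __size = Length("name") and sorts ascending, so a short exact hit
# like "Muse" ranks before "Museum of Sound"; the "pk" tiebreaker added here
# makes the ordering deterministic across identical-length results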
def recursive_getattr(obj, key, permissive=False):
@@ -226,7 +227,7 @@ def replace_prefix(queryset, field, old, new):
on a whole table with a single query.
"""
qs = queryset.filter(**{"{}__startswith".format(field): old})
qs = queryset.filter(**{f"{field}__startswith": old})
# we extract the part after the old prefix, and Concat it with our new prefix
update = models.functions.Concat(
models.Value(new),
@@ -293,6 +294,7 @@ SAFE_TAGS = [
"acronym",
"b",
"blockquote",
"br",
"code",
"em",
"i",
@@ -306,7 +308,7 @@ HTMl_CLEANER = bleach.sanitizer.Cleaner(strip=True, tags=SAFE_TAGS)
HTML_PERMISSIVE_CLEANER = bleach.sanitizer.Cleaner(
strip=True,
tags=SAFE_TAGS + ["h1", "h2", "h3", "h4", "h5", "h6", "div", "section", "article"],
attributes=["class", "rel", "alt", "title"],
attributes=["class", "rel", "alt", "title", "href"],
)
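A sketch of what allowing "href" changes, assuming "a" is among the SAFE_TAGS elided above this hunk (bleach.sanitizer.Cleaner strips everything else):

HTML_PERMISSIVE_CLEANER.clean('<a href="https://example.org" onclick="evil()">link</a>')
# → '<a href="https://example.org">link</a>'; href now survives, event handlers still do not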
# support for additional tlds
@@ -351,7 +353,7 @@ def attach_content(obj, field, content_data):
from . import models
content_data = content_data or {}
existing = getattr(obj, "{}_id".format(field))
existing = getattr(obj, f"{field}_id")
if existing:
if same_content(getattr(obj, field), **content_data):
@@ -374,10 +376,9 @@ def attach_content(obj, field, content_data):
@transaction.atomic
def attach_file(obj, field, file_data, fetch=False):
from . import models
from . import tasks
from . import models, tasks
existing = getattr(obj, "{}_id".format(field))
existing = getattr(obj, f"{field}_id")
if existing:
getattr(obj, field).delete()
@@ -394,7 +395,7 @@ def attach_file(obj, field, file_data, fetch=False):
name = [
getattr(obj, field) for field in name_fields if getattr(obj, field, None)
][0]
filename = "{}-{}.{}".format(field, name, extension)
filename = f"{field}-{name}.{extension}"
if "url" in file_data:
attachment.url = file_data["url"]
else:
@@ -476,14 +477,13 @@ def monkey_patch_request_build_absolute_uri():
def get_file_hash(file, algo=None, chunk_size=None, full_read=False):
algo = algo or settings.HASHING_ALGORITHM
chunk_size = chunk_size or settings.HASHING_CHUNK_SIZE
handler = getattr(hashlib, algo)
hash = handler()
hasher = hashlib.new(algo)
file.seek(0)
if full_read:
for byte_block in iter(lambda: file.read(chunk_size), b""):
hash.update(byte_block)
hasher.update(byte_block)
else:
# sometimes, it's useful to only hash the beginning of the file, e.g
# to avoid a lot of I/O when crawling large libraries
hash.update(file.read(chunk_size))
return "{}:{}".format(algo, hash.hexdigest())
hasher.update(file.read(chunk_size))
return f"{algo}:{hasher.hexdigest()}"
@@ -6,7 +6,7 @@ from django.core.exceptions import ValidationError
from django.core.files.images import get_image_dimensions
from django.template.defaultfilters import filesizeformat
from django.utils.deconstruct import deconstructible
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import gettext_lazy as _
@deconstructible
@@ -72,7 +72,7 @@ class ImageDimensionsValidator:
@deconstructible
class FileValidator(object):
class FileValidator:
"""
Taken from https://gist.github.com/jrosebr1/2140738
Validator for files, checking the size, extension and mimetype.
@@ -97,7 +97,7 @@ class FileValidator(object):
"MIME type '%(mimetype)s' is not valid. Allowed types are: %(allowed_mimetypes)s."
)
min_size_message = _(
"The current file %(size)s, which is too small. The minumum file size is %(allowed_size)s."
"The current file %(size)s, which is too small. The minimum file size is %(allowed_size)s."
)
max_size_message = _(
"The current file %(size)s, which is too large. The maximum file size is %(allowed_size)s."
@@ -163,5 +163,5 @@ class DomainValidator(validators.URLValidator):
If it fails, we know the domain is not valid.
"""
super().__call__("http://{}".format(value))
super().__call__(f"http://{value}")
return value
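For illustration:

DomainValidator()("funkwhale.audio")  # returns "funkwhale.audio"
DomainValidator()("not a domain")     # raises ValidationError: "http://not a domain" is not a valid URL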
@@ -3,26 +3,26 @@ import time
from django.conf import settings
from django.db import transaction
from drf_spectacular.utils import extend_schema
from rest_framework import (
exceptions,
generics,
mixins,
permissions,
response,
views,
viewsets,
)
from rest_framework.decorators import action
from rest_framework import exceptions
from rest_framework import mixins
from rest_framework import permissions
from rest_framework import response
from rest_framework import views
from rest_framework import viewsets
from config import plugins
from funkwhale_api.common.serializers import (
ErrorDetailSerializer,
TextPreviewSerializer,
)
from funkwhale_api.users.oauth import permissions as oauth_permissions
from . import filters
from . import models
from . import mutations
from . import serializers
from . import signals
from . import tasks
from . import throttling
from . import utils
from . import filters, models, mutations, serializers, signals, tasks, throttling, utils
logger = logging.getLogger(__name__)
@@ -76,6 +76,7 @@ class MutationViewSet(
return super().perform_destroy(instance)
@extend_schema(operation_id="approve_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def approve(self, request, *args, **kwargs):
@@ -105,6 +106,7 @@
)
return response.Response({}, status=200)
@extend_schema(operation_id="reject_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def reject(self, request, *args, **kwargs):
@@ -137,6 +139,7 @@
class RateLimitView(views.APIView):
permission_classes = []
throttle_classes = []
serializer_class = serializers.RateLimitSerializer
def get(self, request, *args, **kwargs):
ident = throttling.get_ident(getattr(request, "user", None), request)
@@ -145,7 +148,7 @@ class RateLimitView(views.APIView):
"ident": ident,
"scopes": throttling.get_status(ident, time.time()),
}
return response.Response(data, status=200)
return response.Response(serializers.RateLimitSerializer(data).data, status=200)
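A hedged sketch of the serialized response (scope values made up; fields follow RateLimitSerializer above):

{
    "enabled": true,
    "ident": {"type": "anonymous", "id": "203.0.113.7"},
    "scopes": [
        {"id": "login", "rate": "30/h", "description": "...", "limit": 30,
         "duration": 3600, "remaining": 29, "available": 1,
         "available_seconds": 0, "reset": 1700003600, "reset_seconds": 3540}
    ]
}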
class AttachmentViewSet(
@@ -173,7 +176,12 @@ class AttachmentViewSet(
return r
size = request.GET.get("next", "original").lower()
if size not in ["original", "medium_square_crop"]:
if size not in [
"original",
"small_square_crop",
"medium_square_crop",
"large_square_crop",
]:
size = "original"
try:
@@ -195,18 +203,124 @@
instance.delete()
class TextPreviewView(views.APIView):
class TextPreviewView(generics.GenericAPIView):
permission_classes = []
serializer_class = TextPreviewSerializer
@extend_schema(
operation_id="preview_text",
responses={200: TextPreviewSerializer, 400: ErrorDetailSerializer},
)
def post(self, request, *args, **kwargs):
payload = request.data
if "text" not in payload:
return response.Response({"detail": "Invalid input"}, status=400)
return response.Response(
ErrorDetailSerializer("Invalid input").data, status=400
)
permissive = payload.get("permissive", False)
data = {
"rendered": utils.render_html(
payload["text"], "text/markdown", permissive=permissive
)
}
data = TextPreviewSerializer(
utils.render_html(payload["text"], "text/markdown", permissive=permissive)
).data
return response.Response(data, status=200)
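A hedged sketch of the endpoint's behaviour (URL path assumed):

# POST /api/v1/text-preview/  {"text": "**hello**"}
#  → 200 {"rendered": "<p><strong>hello</strong></p>"}
# POST /api/v1/text-preview/  {}
#  → 400 {"detail": "Invalid input"}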
class PluginViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):
required_scope = "plugins"
serializer_class = serializers.serializers.Serializer
queryset = models.PluginConfiguration.objects.none()
def list(self, request, *args, **kwargs):
user = request.user
user_plugins = [p for p in plugins._plugins.values() if p["user"] is True]
return response.Response(
[
plugins.serialize_plugin(p, confs=plugins.get_confs(user=user))
for p in user_plugins
]
)
def retrieve(self, request, *args, **kwargs):
user = request.user
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if not user_plugin:
return response.Response(status=404)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def post(self, request, *args, **kwargs):
return self.create(request, *args, **kwargs)
def create(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if kwargs["pk"] not in confs:
return response.Response(status=404)
plugins.set_conf(kwargs["pk"], request.data, user)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def delete(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
if kwargs["pk"] not in confs:
return response.Response(status=404)
user.plugins.filter(code=kwargs["pk"]).delete()
return response.Response(status=204)
@extend_schema(operation_id="enable_plugin")
@action(detail=True, methods=["post"])
def enable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], True, user)
return response.Response({}, status=200)
@extend_schema(operation_id="disable_plugin")
@action(detail=True, methods=["post"])
def disable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], False, user)
return response.Response({}, status=200)
@action(detail=True, methods=["post"])
def scan(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
conf = plugins.get_conf(kwargs["pk"], user=user)
if not conf["enabled"]:
return response.Response(status=405)
library = request.user.actor.libraries.get(uuid=conf["conf"]["library"])
hook = [
hook
for p, hook in plugins._hooks.get(plugins.SCAN, [])
if p == kwargs["pk"]
]
if not hook:
return response.Response(status=405)
hook[0](library=library, conf=conf["conf"])
return response.Response({}, status=200)
# -*- coding: utf-8 -*-
import logging
from config import plugins
from funkwhale_api.contrib.archivedl import tasks
from .funkwhale_startup import PLUGIN
logger = logging.getLogger(__name__)
@plugins.register_hook(plugins.TRIGGER_THIRD_PARTY_UPLOAD, PLUGIN)
def launch_download(track, conf=None):
tasks.archive_download.delay(track_id=track.pk, conf=conf or {})
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="archivedl",
label="Archive-dl",
description="",
version="0.1",
user=False,
conf=[],
)
import hashlib
import logging
import os
import tempfile
import time
import urllib.parse
from datetime import timedelta
import requests
from django.core.files import File
from django.utils import timezone
from funkwhale_api.federation import actors
from funkwhale_api.music import models, utils
from funkwhale_api.taskapp import celery
logger = logging.getLogger(__name__)
class TooManyQueriesError(Exception):
pass
def check_existing_download_task(track):
if models.Upload.objects.filter(
track=track,
import_status__in=["pending", "finished"],
third_party_provider="archive-dl",
).exists():
raise TooManyQueriesError(
"Upload for this track already exist or is pending. Stopping task."
)
def check_last_third_party_queries(track, count):
# 15 per minute according to their docs = one every 4 seconds
time_threshold = timezone.now() - timedelta(seconds=5)
if models.Upload.objects.filter(
third_party_provider="archive-dl",
import_status__in=["pending", "finished"],
creation_date__gte=time_threshold,
).exists():
logger.info(
"Last archive.org query was too recent. Trying to wait 2 seconds..."
)
time.sleep(2)
count += 1
if count > 3:
raise TooManyQueriesError(
"Probably too many archivedl tasks are queue, stopping this task"
)
check_last_third_party_queries(track, count)
def create_upload(url, track, files_data):
mimetype = f"audio/{files_data.get('format', 'unknown')}"
duration = files_data.get("mtime", 0)
filesize = files_data.get("size", 0)
bitrate = files_data.get("bitrate", 0)
service_library = models.Library.objects.create(
privacy_level="everyone",
actor=actors.get_service_actor(),
)
return models.Upload.objects.create(
mimetype=mimetype,
source=url,
third_party_provider="archive-dl",
creation_date=timezone.now(),
track=track,
duration=duration,
size=filesize,
bitrate=bitrate,
library=service_library,
from_activity=None,
import_status="pending",
)
@celery.app.task(name="archivedl.archive_download")
@celery.require_instance(models.Track.objects.select_related(), "track")
def archive_download(track, conf):
try:
check_existing_download_task(track)
check_last_third_party_queries(track, 0)
except TooManyQueriesError as e:
logger.error(e)
return
artist_name = utils.get_artist_credit_string(track)
query = f"mediatype:audio AND title:{track.title} AND creator:{artist_name}"
with requests.Session() as session:
url = get_search_url(query, page_size=1, page=1)
page_data = fetch_json(url, session)
for obj in page_data["response"]["docs"]:
logger.info(f"launching download item for {str(obj)}")
download_item(
item_data=obj,
session=session,
allowed_extensions=utils.SUPPORTED_EXTENSIONS,
track=track,
)
def fetch_json(url, session):
logger.info(f"Fetching {url}...")
with session.get(url) as response:
return response.json()
def download_item(
item_data,
session,
allowed_extensions,
track,
):
files_data = get_files_data(item_data["identifier"], session)
to_download = list(
filter_files(
files_data["result"],
allowed_extensions=allowed_extensions,
)
)
url = f"https://archive.org/download/{item_data['identifier']}/{to_download[0]['name']}"
upload = create_upload(url, track, to_download[0])
try:
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, to_download[0]["name"])
download_file(
path,
url=url,
session=session,
checksum=to_download[0]["sha1"],
upload=upload,
to_download=to_download,
)
logger.info(f"Finished to download item {item_data['identifier']}...")
except Exception as e:
upload.delete()
raise e
def check_integrity(path, expected_checksum):
with open(path, mode="rb") as f:
hasher = hashlib.sha1()
hasher.update(f.read())
return expected_checksum == hasher.hexdigest()
def get_files_data(identifier, session):
url = f"https://archive.org/metadata/{identifier}/files"
logger.info(f"Fetching files data at {url}...")
with session.get(url) as response:
return response.json()
def download_file(path, url, session, checksum, upload, to_download):
if os.path.exists(path) and check_integrity(path, checksum):
logger.info(f"Skipping already downloaded file at {path}")
return
logger.info(f"Downloading file {url}...")
with open(path, mode="wb") as f:
try:
with session.get(url) as response:
f.write(response.content)
except requests.exceptions.Timeout as e:  # requests raises its own Timeout, not asyncio.TimeoutError
logger.error(f"Timeout error while downloading {url}: {e}")
with open(path, "rb") as f:
upload.audio_file.save(f"{to_download['name']}", File(f))
upload.import_status = "finished"
upload.url = url
upload.save()
return upload
def filter_files(files, allowed_extensions):
for f in files:
if allowed_extensions:
extension = os.path.splitext(f["name"])[-1][1:]
if extension not in allowed_extensions:
continue
yield f
def get_search_url(query, page_size, page):
q = urllib.parse.urlencode({"q": query})
return f"https://archive.org/advancedsearch.php?{q}&sort[]=addeddate+desc&rows={page_size}\
&page={page}&output=json"
import liblistenbrainz
import funkwhale_api
from config import plugins
from funkwhale_api.favorites import models as favorites_models
from funkwhale_api.history import models as history_models
from . import tasks
from .funkwhale_startup import PLUGIN
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def submit_listen(listening, conf, **kwargs):
user_token = conf["user_token"]
if not user_token and not conf["submit_listenings"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting listen to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
client.set_auth_token(user_token)
listen = get_lb_listen(listening)
client.submit_single_listen(listen)
def get_lb_listen(listening):
track = listening.track
release_name = None  # default when the track has no album title; avoids a NameError below
additional_info = {
"media_player": "Funkwhale",
"media_player_version": funkwhale_api.__version__,
"submission_client": "Funkwhale ListenBrainz plugin",
"submission_client_version": PLUGIN["version"],
"tracknumber": track.position,
"discnumber": track.disc_number,
}
if track.mbid:
additional_info["recording_mbid"] = str(track.mbid)
if track.album:
if track.album.title:
release_name = track.album.title
if track.album.mbid:
additional_info["release_mbid"] = str(track.album.mbid)
mbids = [ac.artist.mbid for ac in track.artist_credit.all() if ac.artist.mbid]
if mbids:
additional_info["artist_mbids"] = mbids
upload = track.uploads.filter(duration__gte=0).first()
if upload:
additional_info["duration"] = upload.duration
return liblistenbrainz.Listen(
track_name=track.title,
listened_at=listening.creation_date.timestamp(),
artist_name=track.get_artist_credit_string,
release_name=release_name,
additional_info=additional_info,
)
@plugins.register_hook(plugins.FAVORITE_CREATED, PLUGIN)
def submit_favorite_creation(track_favorite, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_favorites"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting favorite to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
track = track_favorite.track
if not track.mbid:
logger.warning(
"This tracks doesn't have a mbid. Feedback will not be submitted to Listenbrainz"
)
return
client.submit_user_feedback(1, track.mbid)
@plugins.register_hook(plugins.FAVORITE_DELETED, PLUGIN)
def submit_favorite_deletion(track_favorite, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_favorites"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting favorite deletion to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
track = track_favorite.track
if not track.mbid:
logger.warning(
"This tracks doesn't have a mbid. Feedback will not be submitted to Listenbrainz"
)
return
client.submit_user_feedback(0, track.mbid)
@plugins.register_hook(plugins.LISTENING_SYNC, PLUGIN)
def sync_listenings_from_listenbrainz(user, conf):
user_name = conf["user_name"]
if not user_name or not conf["sync_listenings"]:
return
logger = PLUGIN["logger"]
logger.info("Getting listenings from ListenBrainz")
try:
last_ts = (
history_models.Listening.objects.filter(actor=user.actor)
.filter(source="Listenbrainz")
.latest("creation_date")
.creation_date.timestamp()  # .latest() returns an instance, as the favorites hook below assumes
)
except history_models.Listening.DoesNotExist:
tasks.import_listenbrainz_listenings(user, user_name, 0)
return
tasks.import_listenbrainz_listenings(user, user_name, last_ts)
@plugins.register_hook(plugins.FAVORITE_SYNC, PLUGIN)
def sync_favorites_from_listenbrainz(user, conf):
user_name = conf["user_name"]
if not user_name or not conf["sync_favorites"]:
return
try:
last_ts = (
favorites_models.TrackFavorite.objects.filter(actor=user.actor)
.filter(source="Listenbrainz")
.latest("creation_date")
.creation_date.timestamp()
)
except favorites_models.TrackFavorite.DoesNotExist:
tasks.import_listenbrainz_favorites(user, user_name, 0)
return
tasks.import_listenbrainz_favorites(user, user_name, last_ts)