Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 1223 additions and 113 deletions
......@@ -9,15 +9,13 @@ from funkwhale_api.users.models import User, create_actor
def main(command, **kwargs):
qs = User.objects.filter(actor__isnull=True).order_by("username")
total = len(qs)
command.stdout.write("{} users found without actors".format(total))
command.stdout.write(f"{total} users found without actors")
for i, user in enumerate(qs):
command.stdout.write(
"{}/{} creating actor for {}".format(i + 1, total, user.username)
)
command.stdout.write(f"{i + 1}/{total} creating actor for {user.username}")
try:
user.actor = create_actor(user)
except IntegrityError as e:
# somehow, an actor with the url exists in the database
command.stderr.write("Error while creating actor: {}".format(str(e)))
command.stderr.write(f"Error while creating actor: {str(e)}")
continue
user.save(update_fields=["actor"])
......@@ -4,16 +4,16 @@ Compute different sizes of image used for Album covers and User avatars
from versatileimagefield.image_warmer import VersatileImageFieldWarmer
from funkwhale_api.music.models import Album
from funkwhale_api.users.models import User
from funkwhale_api.common.models import Attachment
MODELS = [(Album, "cover", "square"), (User, "avatar", "square")]
MODELS = [
(Attachment, "file", "attachment_square"),
]
def main(command, **kwargs):
for model, attribute, key_set in MODELS:
qs = model.objects.exclude(**{"{}__isnull".format(attribute): True})
qs = model.objects.exclude(**{f"{attribute}__isnull": True})
qs = qs.exclude(**{attribute: ""})
warmer = VersatileImageFieldWarmer(
instance_or_queryset=qs,
......@@ -21,10 +21,8 @@ def main(command, **kwargs):
image_attr=attribute,
verbose=True,
)
command.stdout.write(
"Creating images for {} / {}".format(model.__name__, attribute)
)
command.stdout.write(f"Creating images for {model.__name__} / {attribute}")
num_created, failed_to_create = warmer.warm()
command.stdout.write(
" {} created, {} in error".format(num_created, len(failed_to_create))
f" {num_created} created, {len(failed_to_create)} in error"
)
......@@ -10,5 +10,5 @@ def main(command, **kwargs):
source__startswith="http", source__contains="/federation/music/file/"
).exclude(source__contains="youtube")
total = queryset.count()
command.stdout.write("{} uploads found".format(total))
command.stdout.write(f"{total} uploads found")
queryset.delete()
......@@ -23,6 +23,6 @@ def main(command, **kwargs):
total = users.count()
command.stdout.write(
"Updating {} users with {} permission...".format(total, user_permission)
f"Updating {total} users with {user_permission} permission..."
)
users.update(**{"permission_{}".format(user_permission): True})
users.update(**{f"permission_{user_permission}": True})
......@@ -12,12 +12,12 @@ This command will also generate federation ids for existing resources.
"""
from django.conf import settings
from django.db.models import functions, CharField, F, Value
from django.db.models import CharField, F, Value, functions
from funkwhale_api.common import preferences
from funkwhale_api.federation import models as federation_models
from funkwhale_api.music import models
from funkwhale_api.users.models import User
from funkwhale_api.federation import models as federation_models
from funkwhale_api.common import preferences
def create_libraries(open_api, stdout):
......@@ -36,9 +36,7 @@ def create_libraries(open_api, stdout):
)
libraries_by_user[library.actor.user.pk] = library.pk
if created:
stdout.write(
" * Created library {} for user {}".format(library.pk, a.user.pk)
)
stdout.write(f" * Created library {library.pk} for user {a.user.pk}")
else:
stdout.write(
" * Found existing library {} for user {}".format(
......@@ -60,13 +58,9 @@ def update_uploads(libraries_by_user, stdout):
)
total = candidates.update(library=library_id, import_status="finished")
if total:
stdout.write(
" * Assigned {} uploads to user {}'s library".format(total, user_id)
)
stdout.write(f" * Assigned {total} uploads to user {user_id}'s library")
else:
stdout.write(
" * No uploads to assign to user {}'s library".format(user_id)
)
stdout.write(f" * No uploads to assign to user {user_id}'s library")
def update_orphan_uploads(open_api, stdout):
......@@ -105,14 +99,12 @@ def update_orphan_uploads(open_api, stdout):
def set_fid(queryset, path, stdout):
model = queryset.model._meta.label
qs = queryset.filter(fid=None)
base_url = "{}{}".format(settings.FUNKWHALE_URL, path)
stdout.write(
"* Assigning federation ids to {} entries (path: {})".format(model, base_url)
)
base_url = f"{settings.FUNKWHALE_URL}{path}"
stdout.write(f"* Assigning federation ids to {model} entries (path: {base_url})")
new_fid = functions.Concat(Value(base_url), F("uuid"), output_field=CharField())
total = qs.update(fid=new_fid)
stdout.write(" * {} entries updated".format(total))
stdout.write(f" * {total} entries updated")
def update_shared_inbox_url(stdout):
......@@ -123,16 +115,16 @@ def update_shared_inbox_url(stdout):
def generate_actor_urls(part, stdout):
field = "{}_url".format(part)
stdout.write("* Update {} for local actors...".format(field))
field = f"{part}_url"
stdout.write(f"* Update {field} for local actors...")
queryset = federation_models.Actor.objects.local().filter(**{field: None})
base_url = "{}/federation/actors/".format(settings.FUNKWHALE_URL)
base_url = f"{settings.FUNKWHALE_URL}/federation/actors/"
new_field = functions.Concat(
Value(base_url),
F("preferred_username"),
Value("/{}".format(part)),
Value(f"/{part}"),
output_field=CharField(),
)
......
import re
from django.contrib.postgres.search import SearchQuery
from django.db.models import Q
from . import utils
QUERY_REGEX = re.compile(r'(((?P<key>\w+):)?(?P<value>"[^"]+"|[\S]+))')
......@@ -9,7 +11,7 @@ QUERY_REGEX = re.compile(r'(((?P<key>\w+):)?(?P<value>"[^"]+"|[\S]+))')
def parse_query(query):
"""
Given a search query such as "hello is:issue status:opened",
returns a list of dictionnaries discribing each query token
returns a list of dictionaries describing each query token
"""
matches = [m.groupdict() for m in QUERY_REGEX.finditer(query.lower())]
for m in matches:
......@@ -23,7 +25,7 @@ def normalize_query(
findterms=re.compile(r'"([^"]+)"|(\S+)').findall,
normspace=re.compile(r"\s{2,}").sub,
):
""" Splits the query string in invidual keywords, getting rid of unecessary spaces
"""Splits the query string in individual keywords, getting rid of unnecessary spaces
and grouping quoted words together.
Example:
......@@ -56,6 +58,59 @@ def get_query(query_string, search_fields):
return query
def remove_chars(string, chars):
for char in chars:
string = string.replace(char, "")
return string
def get_fts_query(query_string, fts_fields=["body_text"], model=None):
search_type = "raw"
if query_string.startswith('"') and query_string.endswith('"'):
# we pass the query directly to the FTS engine
query_string = query_string[1:-1]
else:
query_string = remove_chars(query_string, ['"', "&", "(", ")", "!", "'"])
parts = query_string.replace(":", "").split(" ")
parts = [f"{p}:*" for p in parts if p]
if not parts:
return Q(pk=None)
query_string = "&".join(parts)
if not fts_fields or not query_string.strip():
return Q(pk=None)
query = None
for field in fts_fields:
if "__" in field and model:
# When we have a nested lookup, we switch to a subquery for enhanced performance
fk_field_name, lookup = (
field.split("__")[0],
"__".join(field.split("__")[1:]),
)
fk_field = model._meta.get_field(fk_field_name)
related_model = fk_field.related_model
subquery = related_model.objects.filter(
**{
lookup: SearchQuery(
query_string, search_type=search_type, config="english_nostop"
)
}
).values_list("pk", flat=True)
new_query = Q(**{f"{fk_field_name}__in": list(subquery)})
else:
new_query = Q(
**{
field: SearchQuery(
query_string, search_type=search_type, config="english_nostop"
)
}
)
query = utils.join_queries_or(query, new_query)
return query
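# A sketch of what get_fts_query produces (illustrative, assuming the
# "english_nostop" FTS config referenced above and the default body_text field):
#   get_fts_query('"exact phrase"')
#     -> Q(body_text=SearchQuery("exact phrase", search_type="raw", config="english_nostop"))
#   get_fts_query("hello world")
#     -> Q(body_text=SearchQuery("hello:*&world:*", search_type="raw", config="english_nostop"))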
def filter_tokens(tokens, valid):
return [t for t in tokens if t["key"] in valid]
......@@ -65,6 +120,9 @@ def apply(qs, config_data):
q = config_data.get(k)
if q:
qs = qs.filter(q)
distinct = config_data.get("distinct", False)
if distinct:
qs = qs.distinct()
return qs
......@@ -77,13 +135,28 @@ class SearchConfig:
def clean(self, query):
tokens = parse_query(query)
cleaned_data = {}
cleaned_data["types"] = self.clean_types(filter_tokens(tokens, ["is"]))
cleaned_data["search_query"] = self.clean_search_query(
filter_tokens(tokens, [None, "in"])
filter_tokens(tokens, [None, "in"] + list(self.search_fields.keys()))
)
unhandled_tokens = [
t
for t in tokens
if t["key"] not in [None, "is", "in"] + list(self.search_fields.keys())
]
cleaned_data["filter_query"], matching_filters = self.clean_filter_query(
unhandled_tokens
)
if matching_filters:
cleaned_data["distinct"] = any(
[
self.filter_fields[k].get("distinct", False)
for k in matching_filters
if k in self.filter_fields
]
)
unhandled_tokens = [t for t in tokens if t["key"] not in [None, "is", "in"]]
cleaned_data["filter_query"] = self.clean_filter_query(unhandled_tokens)
else:
cleaned_data["distinct"] = False
return cleaned_data
def clean_search_query(self, tokens):
......@@ -95,12 +168,37 @@ class SearchConfig:
} or set(self.search_fields.keys())
fields_subset = set(self.search_fields.keys()) & fields_subset
to_fields = [self.search_fields[k]["to"] for k in fields_subset]
specific_field_query = None
for token in tokens:
if token["key"] not in self.search_fields:
continue
to = self.search_fields[token["key"]]["to"]
try:
field = token["field"]
value = field.clean(token["value"])
except KeyError:
# no cleaning to apply
value = token["value"]
q = Q(**{f"{to}__icontains": value})
if not specific_field_query:
specific_field_query = q
else:
specific_field_query &= q
query_string = " ".join([t["value"] for t in filter_tokens(tokens, [None])])
return get_query(query_string, sorted(to_fields))
unhandled_tokens_query = get_query(query_string, sorted(to_fields))
if specific_field_query and unhandled_tokens_query:
return unhandled_tokens_query & specific_field_query
elif specific_field_query:
return specific_field_query
elif unhandled_tokens_query:
return unhandled_tokens_query
return None
def clean_filter_query(self, tokens):
if not self.filter_fields or not tokens:
return
return None, []
matching = [t for t in tokens if t["key"] in self.filter_fields]
queries = [self.get_filter_query(token) for token in matching]
......@@ -110,7 +208,7 @@ class SearchConfig:
query = q
else:
query = query & q
return query
return query, [m["key"] for m in matching]
def get_filter_query(self, token):
raw_value = token["value"]
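# Token shape produced by the search mini-language above (illustrative, per
# the parse_query docstring; the elided loop may further normalize tokens):
#   parse_query("hello is:issue status:opened")
#     -> [{"key": None, "value": "hello"},
#         {"key": "is", "value": "issue"},
#         {"key": "status", "value": "opened"}]
# SearchConfig.clean() then routes the tokens: "is" picks types, "in" and the
# configured search_fields feed clean_search_query, and the remaining keys go
# to clean_filter_query, which now also reports whether a matching filter
# requires distinct() on the queryset.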
......
import collections
import io
import PIL
import os
from rest_framework import serializers
import PIL
from django.core.exceptions import ObjectDoesNotExist
from django.core.files.uploadedfile import SimpleUploadedFile
from django.utils.encoding import smart_text
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_str
from django.utils.translation import gettext_lazy as _
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers
from . import models
from . import models, utils
class RelatedField(serializers.RelatedField):
......@@ -23,9 +24,11 @@ class RelatedField(serializers.RelatedField):
self.related_field_name = related_field_name
self.serializer = serializer
self.filters = kwargs.pop("filters", None)
kwargs["queryset"] = kwargs.pop(
"queryset", self.serializer.Meta.model.objects.all()
)
self.queryset_filter = kwargs.pop("queryset_filter", None)
try:
kwargs["queryset"] = kwargs.pop("queryset")
except KeyError:
kwargs["queryset"] = self.serializer.Meta.model.objects.all()
super().__init__(**kwargs)
def get_filters(self, data):
......@@ -34,16 +37,22 @@ class RelatedField(serializers.RelatedField):
filters.update(self.filters(self.context))
return filters
def filter_queryset(self, queryset):
if self.queryset_filter:
queryset = self.queryset_filter(queryset, self.context)
return queryset
def to_internal_value(self, data):
try:
queryset = self.get_queryset()
filters = self.get_filters(data)
queryset = self.filter_queryset(queryset)
return queryset.get(**filters)
except ObjectDoesNotExist:
self.fail(
"does_not_exist",
related_field_name=self.related_field_name,
value=smart_text(data),
value=smart_str(data),
)
except (TypeError, ValueError):
self.fail("invalid")
......@@ -68,18 +77,19 @@ class RelatedField(serializers.RelatedField):
self.display_value(item),
)
for item in queryset
if self.serializer
]
)
class Action(object):
class Action:
def __init__(self, name, allow_all=False, qs_filter=None):
self.name = name
self.allow_all = allow_all
self.qs_filter = qs_filter
def __repr__(self):
return "<Action {}>".format(self.name)
return f"<Action {self.name}>"
class ActionSerializer(serializers.Serializer):
......@@ -103,7 +113,7 @@ class ActionSerializer(serializers.Serializer):
)
for action in self.actions_by_name.keys():
handler_name = "handle_{}".format(action)
handler_name = f"handle_{action}"
assert hasattr(self, handler_name), "{} misses a {} method".format(
self.__class__.__name__, handler_name
)
......@@ -123,9 +133,9 @@ class ActionSerializer(serializers.Serializer):
if value == "all":
return self.queryset.all().order_by("id")
if type(value) in [list, tuple]:
return self.queryset.filter(
**{"{}__in".format(self.pk_field): value}
).order_by(self.pk_field)
return self.queryset.filter(**{f"{self.pk_field}__in": value}).order_by(
self.pk_field
)
raise serializers.ValidationError(
"{} is not a valid value for objects. You must provide either a "
......@@ -210,7 +220,7 @@ class StripExifImageField(serializers.ImageField):
with io.BytesIO() as output:
image_without_exif.save(
output,
format=PIL.Image.EXTENSION[os.path.splitext(file_obj.name)[-1]],
format=PIL.Image.EXTENSION[os.path.splitext(file_obj.name)[-1].lower()],
quality=100,
)
content = output.getvalue()
......@@ -260,6 +270,7 @@ class APIMutationSerializer(serializers.ModelSerializer):
"previous_state",
]
@extend_schema_field(OpenApiTypes.OBJECT)
def get_target(self, obj):
target = obj.target
if not target:
......@@ -270,5 +281,98 @@ class APIMutationSerializer(serializers.ModelSerializer):
def validate_type(self, value):
if value not in self.context["registry"]:
raise serializers.ValidationError("Invalid mutation type {}".format(value))
raise serializers.ValidationError(f"Invalid mutation type {value}")
return value
class AttachmentSerializer(serializers.Serializer):
uuid = serializers.UUIDField(read_only=True)
size = serializers.IntegerField(read_only=True)
mimetype = serializers.CharField(read_only=True)
creation_date = serializers.DateTimeField(read_only=True)
file = StripExifImageField(write_only=True)
urls = serializers.SerializerMethodField()
@extend_schema_field(
{
"type": "object",
"properties": {
"original": {"type": "string"},
"small_square_crop": {"type": "string"},
"medium_square_crop": {"type": "string"},
"large_square_crop": {"type": "string"},
},
}
)
def get_urls(self, o):
urls = {}
urls["source"] = o.url
urls["original"] = o.download_url_original
urls["small_square_crop"] = o.download_url_small_square_crop
urls["medium_square_crop"] = o.download_url_medium_square_crop
urls["large_square_crop"] = o.download_url_large_square_crop
return urls
def create(self, validated_data):
return models.Attachment.objects.create(
file=validated_data["file"], actor=validated_data["actor"]
)
class ContentSerializer(serializers.Serializer):
text = serializers.CharField(
max_length=models.CONTENT_TEXT_MAX_LENGTH, allow_null=True
)
content_type = serializers.ChoiceField(
choices=models.CONTENT_TEXT_SUPPORTED_TYPES,
)
html = serializers.SerializerMethodField()
def get_html(self, o) -> str:
return utils.render_html(o.text, o.content_type)
class NullToEmptDict:
def get_attribute(self, o):
attr = super().get_attribute(o)
if attr is None:
return {}
return attr
def to_representation(self, v):
if not v:
return v
return super().to_representation(v)
class ScopesSerializer(serializers.Serializer):
id = serializers.CharField()
rate = serializers.CharField()
description = serializers.CharField()
limit = serializers.IntegerField()
duration = serializers.IntegerField()
remaining = serializers.IntegerField()
available = serializers.IntegerField()
available_seconds = serializers.IntegerField()
reset = serializers.IntegerField()
reset_seconds = serializers.IntegerField()
class IdentSerializer(serializers.Serializer):
type = serializers.CharField()
id = serializers.CharField()
class RateLimitSerializer(serializers.Serializer):
enabled = serializers.BooleanField()
ident = IdentSerializer()
scopes = serializers.ListField(child=ScopesSerializer())
class ErrorDetailSerializer(serializers.Serializer):
detail = serializers.CharField(source="*")
class TextPreviewSerializer(serializers.Serializer):
rendered = serializers.CharField(read_only=True, source="*")
text = serializers.CharField(write_only=True)
......@@ -4,6 +4,13 @@ from django.conf import settings
import funkwhale_api
class FunkwhaleSession(requests.Session):
def request(self, *args, **kwargs):
kwargs.setdefault("verify", settings.EXTERNAL_REQUESTS_VERIFY_SSL)
kwargs.setdefault("timeout", settings.EXTERNAL_REQUESTS_TIMEOUT)
return super().request(*args, **kwargs)
def get_user_agent():
return "python-requests (funkwhale/{}; +{})".format(
funkwhale_api.__version__, settings.FUNKWHALE_URL
......@@ -11,6 +18,6 @@ def get_user_agent():
def get_session():
s = requests.Session()
s = FunkwhaleSession()
s.headers["User-Agent"] = get_user_agent()
return s
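# Usage sketch (illustrative; the URL is hypothetical). Every request made
# through the returned session now defaults to the instance-wide SSL
# verification and timeout settings, and both remain overridable per call:
#   s = get_session()
#   r = s.get("https://example.com/feed.xml", timeout=10)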
import django.dispatch
mutation_created = django.dispatch.Signal(providing_args=["mutation"])
mutation_updated = django.dispatch.Signal(
providing_args=["mutation", "old_is_approved", "new_is_approved"]
)
""" Required args: mutation """
mutation_created = django.dispatch.Signal()
""" Required args: mutation, old_is_approved, new_is_approved """
mutation_updated = django.dispatch.Signal()
import unicodedata
import os
import shutil
import slugify
from django.core.files.storage import FileSystemStorage
from storages.backends.s3boto3 import S3Boto3Storage
class ASCIIFileSystemStorage(FileSystemStorage):
def asciionly(name):
"""
Convert unicode characters in name to ASCII characters.
"""
return slugify.slugify(name, ok=slugify.SLUG_OK + ".", only_ascii=True)
class ASCIIFileSystemStorage(FileSystemStorage):
def get_valid_name(self, name):
return super().get_valid_name(asciionly(name))
def force_delete(self, name):
path = self.path(name)
try:
if os.path.isdir(path):
shutil.rmtree(path)
else:
return super().delete(name)
except FileNotFoundError:
pass
class ASCIIS3Boto3Storage(S3Boto3Storage):
def get_valid_name(self, name):
name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore")
return super().get_valid_name(name)
return super().get_valid_name(asciionly(name))
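# Illustrative sketch of asciionly, assuming awesome-slugify's defaults
# (exact output may differ):
#   asciionly("Motörhead - Ace of Spades.mp3")
#     -> roughly "Motorhead-Ace-of-Spades.mp3"; accents are transliterated
#        and characters outside SLUG_OK plus "." are normalized away.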
import datetime
import logging
import tempfile
from django.conf import settings
from django.core.files import File
from django.db import transaction
from django.dispatch import receiver
from django.utils import timezone
from funkwhale_api.common import channels
from funkwhale_api.taskapp import celery
from . import models
from . import serializers
from . import signals
from . import models, serializers, session, signals
logger = logging.getLogger(__name__)
@celery.app.task(name="common.apply_mutation")
......@@ -57,3 +63,36 @@ def broadcast_mutation_update(mutation, old_is_approved, new_is_approved, **kwar
},
},
)
def fetch_remote_attachment(attachment, filename=None, save=True):
if attachment.file:
# already there, no need to fetch
return
s = session.get_session()
attachment.last_fetch_date = timezone.now()
with tempfile.TemporaryFile() as tf:
with s.get(attachment.url, timeout=5, stream=True) as r:
for chunk in r.iter_content(chunk_size=1024 * 100):
tf.write(chunk)
tf.seek(0)
if not filename:
filename = attachment.url.split("/")[-1]
filename = filename[-50:]
attachment.file.save(filename, File(tf), save=save)
@celery.app.task(name="common.prune_unattached_attachments")
def prune_unattached_attachments():
limit = timezone.now() - datetime.timedelta(
seconds=settings.ATTACHMENTS_UNATTACHED_PRUNE_DELAY
)
candidates = models.Attachment.objects.attached(False).filter(
creation_date__lte=limit
)
total = candidates.count()
logger.info("Deleting %s unattached attachments…", total)
result = candidates.delete()
logger.info("Deletion done: %s", result)
import collections
from django.conf import settings
from django.core.cache import cache
from rest_framework import throttling as rest_throttling
def get_ident(user, request):
if user and user.is_authenticated:
return {"type": "authenticated", "id": f"{user.pk}"}
ident = rest_throttling.BaseThrottle().get_ident(request)
return {"type": "anonymous", "id": ident}
def get_cache_key(scope, ident):
parts = ["throttling", scope, ident["type"], str(ident["id"])]
return ":".join(parts)
def get_scope_for_action_and_ident_type(action, ident_type, view_conf={}):
config = collections.ChainMap(view_conf, settings.THROTTLING_SCOPES)
try:
action_config = config[action]
except KeyError:
action_config = config.get("*", {})
try:
return action_config[ident_type]
except KeyError:
return
def get_status(ident, now):
data = []
throttle = FunkwhaleThrottle()
for key in sorted(settings.THROTTLING_RATES.keys()):
conf = settings.THROTTLING_RATES[key]
row_data = {"id": key, "rate": conf["rate"], "description": conf["description"]}
if conf["rate"]:
num_requests, duration = throttle.parse_rate(conf["rate"])
history = cache.get(get_cache_key(key, ident)) or []
relevant_history = [h for h in history if h > now - duration]
row_data["limit"] = num_requests
row_data["duration"] = duration
row_data["remaining"] = num_requests - len(relevant_history)
if relevant_history and len(relevant_history) >= num_requests:
# At this point, the endpoint becomes available again
now_request = relevant_history[-1]
remaining = duration - (now - int(now_request))
row_data["available"] = int(now + remaining) or None
row_data["available_seconds"] = int(remaining) or None
else:
row_data["available"] = None
row_data["available_seconds"] = None
if relevant_history:
# At this point, all Rate Limit is reset to 0
latest_request = relevant_history[0]
remaining = duration - (now - int(latest_request))
row_data["reset"] = int(now + remaining)
row_data["reset_seconds"] = int(remaining)
else:
row_data["reset"] = None
row_data["reset_seconds"] = None
else:
row_data["limit"] = None
row_data["duration"] = None
row_data["remaining"] = None
row_data["available"] = None
row_data["available_seconds"] = None
row_data["reset"] = None
row_data["reset_seconds"] = None
data.append(row_data)
return data
class FunkwhaleThrottle(rest_throttling.SimpleRateThrottle):
def __init__(self):
pass
def get_cache_key(self, request, view):
return get_cache_key(self.scope, self.ident)
def allow_request(self, request, view):
self.request = request
self.ident = get_ident(getattr(request, "user", None), request)
action = getattr(view, "action", "*")
view_scopes = getattr(view, "throttling_scopes", {})
if view_scopes is None:
return True
self.scope = get_scope_for_action_and_ident_type(
action=action, ident_type=self.ident["type"], view_conf=view_scopes
)
if not self.scope or self.scope not in settings.THROTTLING_RATES:
return True
self.rate = settings.THROTTLING_RATES[self.scope].get("rate")
self.num_requests, self.duration = self.parse_rate(self.rate)
self.request = request
return super().allow_request(request, view)
def attach_info(self):
info = {
"num_requests": self.num_requests,
"duration": self.duration,
"scope": self.scope,
"history": self.history or [],
"wait": self.wait(),
}
setattr(self.request, "_throttle_status", info)
def throttle_success(self):
self.attach_info()
return super().throttle_success()
def throttle_failure(self):
self.attach_info()
return super().throttle_failure()
class TooManyRequests(Exception):
pass
DummyView = collections.namedtuple("DummyView", "action throttling_scopes")
def check_request(request, scope):
"""
A simple wrapper around FunkwhaleThrottle for views that aren't API views
or cannot use rest_framework automatic throttling.
Raise TooManyRequests if limit is reached.
"""
if not settings.THROTTLING_ENABLED:
return True
view = DummyView(
action=scope,
throttling_scopes={scope: {"anonymous": scope, "authenticated": scope}},
)
throttle = FunkwhaleThrottle()
if not throttle.allow_request(request, view):
raise TooManyRequests()
return True
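# Usage sketch for non-DRF views (illustrative; the "password-reset" scope is
# hypothetical and would need an entry in THROTTLING_RATES):
#   def password_reset_view(request):
#       # raises TooManyRequests when the rate is exhausted; returns True
#       # immediately when THROTTLING_ENABLED is False
#       check_request(request, "password-reset")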
from django.utils.deconstruct import deconstructible
import datetime
import hashlib
import logging
import os
import shutil
import uuid
import xml.etree.ElementTree as ET
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit
from django.conf import settings
import bleach.sanitizer
import markdown
from django import urls
from django.conf import settings
from django.core.files.base import ContentFile
from django.db import models, transaction
from django.http import request
from django.utils import timezone
from django.utils.deconstruct import deconstructible
logger = logging.getLogger(__name__)
def batch(iterable, n=1):
has_entries = True
while has_entries:
current = []
for i in range(0, n):
try:
current.append(next(iterable))
except StopIteration:
has_entries = False
yield current
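# Illustrative behaviour of batch():
#   list(batch(iter(range(5)), n=2)) -> [[0, 1], [2, 3], [4]]
# When the iterator's length is an exact multiple of n, the final yielded
# chunk is an empty list, since exhaustion is only detected on the next read.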
def rename_file(instance, field_name, new_name, allow_missing_file=False):
field = getattr(instance, field_name)
current_name, extension = os.path.splitext(field.name)
new_name_with_extension = "{}{}".format(new_name, extension)
new_name_with_extension = f"{new_name}{extension}"
try:
shutil.move(field.path, new_name_with_extension)
except FileNotFoundError:
......@@ -51,12 +71,16 @@ def set_query_parameter(url, **kwargs):
@deconstructible
class ChunkedPath(object):
class ChunkedPath:
def sanitize_filename(self, filename):
return filename.replace("/", "-")
def __init__(self, root, preserve_file_name=True):
self.root = root
self.preserve_file_name = preserve_file_name
def __call__(self, instance, filename):
filename = self.sanitize_filename(filename)
uid = str(uuid.uuid4())
chunk_size = 2
chunks = [uid[i : i + chunk_size] for i in range(0, len(uid), chunk_size)]
......@@ -64,7 +88,7 @@ class ChunkedPath(object):
parts = chunks[:3] + [filename]
else:
ext = os.path.splitext(filename)[1][1:].lower()
new_filename = "".join(chunks[3:]) + ".{}".format(ext)
new_filename = "".join(chunks[3:]) + f".{ext}"
parts = chunks[:3] + [new_filename]
return os.path.join(self.root, *parts)
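# Illustrative result (uuid shortened): ChunkedPath("attachments") spreads
# files across nested two-character directories derived from a uuid4:
#   preserve_file_name=True  -> attachments/a1/b2/c3/original-name.mp3
#   preserve_file_name=False -> attachments/a1/b2/c3/<rest-of-uuid>.mp3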
......@@ -113,6 +137,9 @@ def chunk_queryset(source_qs, chunk_size):
def join_url(start, end):
if end.startswith("http://") or end.startswith("https://"):
# already a full URL, joining makes no sense
return end
if start.endswith("/") and end.startswith("/"):
return start + end[1:]
......@@ -122,6 +149,17 @@ def join_url(start, end):
return start + end
def media_url(path):
if settings.MEDIA_URL.startswith("http://") or settings.MEDIA_URL.startswith(
"https://"
):
return join_url(settings.MEDIA_URL, path)
from funkwhale_api.federation import utils as federation_utils
return federation_utils.full_url(path)
def spa_reverse(name, args=[], kwargs={}):
return urls.reverse(name, urlconf=settings.SPA_URLCONF, args=args, kwargs=kwargs)
......@@ -146,12 +184,12 @@ def order_for_search(qs, field):
When searching, it's often more useful to have short results first,
this function will order the given qs based on the length of the given field
"""
return qs.annotate(__size=models.functions.Length(field)).order_by("__size")
return qs.annotate(__size=models.functions.Length(field)).order_by("__size", "pk")
def recursive_getattr(obj, key, permissive=False):
"""
Given a dictionary such as {'user': {'name': 'Bob'}} and
Given a dictionary such as {'user': {'name': 'Bob'}} or an object and
a dotted string such as user.name, returns 'Bob'.
If the value is not present, returns None
......@@ -159,7 +197,10 @@ def recursive_getattr(obj, key, permissive=False):
v = obj
for k in key.split("."):
try:
if hasattr(v, "get"):
v = v.get(k)
else:
v = getattr(v, k)
except (TypeError, AttributeError):
if not permissive:
raise
......@@ -186,7 +227,7 @@ def replace_prefix(queryset, field, old, new):
on a whole table with a single query.
"""
qs = queryset.filter(**{"{}__startswith".format(field): old})
qs = queryset.filter(**{f"{field}__startswith": old})
# we extract the part after the old prefix, and Concat it with our new prefix
update = models.functions.Concat(
models.Value(new),
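# Usage sketch (illustrative; the model and values are hypothetical): rewrite
# every matching prefix with a single UPDATE, e.g. after switching to HTTPS:
#   replace_prefix(Actor.objects.all(), "fid",
#                  old="http://example.com", new="https://example.com")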
......@@ -201,3 +242,248 @@ def concat_dicts(*dicts):
n.update(d)
return n
def get_updated_fields(conf, data, obj):
"""
Given a list of fields, a dict and an object, will return the dict keys/values
that differ from the corresponding fields on the object.
"""
final_conf = []
for c in conf:
if isinstance(c, str):
final_conf.append((c, c))
else:
final_conf.append(c)
final_data = {}
for data_field, obj_field in final_conf:
try:
data_value = data[data_field]
except KeyError:
continue
if obj.pk:
obj_value = getattr(obj, obj_field)
if obj_value != data_value:
final_data[obj_field] = data_value
else:
final_data[obj_field] = data_value
return final_data
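# Illustrative behaviour (hypothetical model): for a saved obj with
# title="Old" and position=1,
#   get_updated_fields(["title", "position"], {"title": "New", "position": 1}, obj)
#     -> {"title": "New"}
# For unsaved objects (obj.pk is None), every provided key is returned.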
def join_queries_or(left, right):
if left:
return left | right
else:
return right
MARKDOWN_RENDERER = markdown.Markdown(extensions=settings.MARKDOWN_EXTENSIONS)
def render_markdown(text):
return MARKDOWN_RENDERER.convert(text)
SAFE_TAGS = [
"p",
"a",
"abbr",
"acronym",
"b",
"blockquote",
"br",
"code",
"em",
"i",
"li",
"ol",
"strong",
"ul",
]
HTMl_CLEANER = bleach.sanitizer.Cleaner(strip=True, tags=SAFE_TAGS)
HTML_PERMISSIVE_CLEANER = bleach.sanitizer.Cleaner(
strip=True,
tags=SAFE_TAGS + ["h1", "h2", "h3", "h4", "h5", "h6", "div", "section", "article"],
attributes=["class", "rel", "alt", "title", "href"],
)
# support for additional tlds
# cf https://github.com/mozilla/bleach/issues/367#issuecomment-384631867
ALL_TLDS = set(settings.LINKIFIER_SUPPORTED_TLDS + bleach.linkifier.TLDS)
URL_RE = bleach.linkifier.build_url_re(tlds=sorted(ALL_TLDS, reverse=True))
HTML_LINKER = bleach.linkifier.Linker(url_re=URL_RE)
def clean_html(html, permissive=False):
return (
HTML_PERMISSIVE_CLEANER.clean(html) if permissive else HTMl_CLEANER.clean(html)
)
def render_html(text, content_type, permissive=False):
if not text:
return ""
rendered = render_markdown(text)
if content_type == "text/html":
rendered = text
elif content_type == "text/markdown":
rendered = render_markdown(text)
else:
rendered = render_markdown(text)
rendered = HTML_LINKER.linkify(rendered)
return clean_html(rendered, permissive=permissive).strip().replace("\n", "")
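# Illustrative behaviour: markdown is rendered, URLs are linkified, the
# result is sanitized against SAFE_TAGS, then stripped of newlines:
#   render_html("**hi**", "text/markdown") -> "<p><strong>hi</strong></p>"
# "text/html" input skips the markdown step but is still linkified and cleaned.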
def render_plain_text(html):
if not html:
return ""
return bleach.clean(html, tags=[], strip=True)
def same_content(old, text=None, content_type=None):
return old.text == text and old.content_type == content_type
@transaction.atomic
def attach_content(obj, field, content_data):
from . import models
content_data = content_data or {}
existing = getattr(obj, f"{field}_id")
if existing:
if same_content(getattr(obj, field), **content_data):
# optimization to avoid a delete/save if possible
return getattr(obj, field)
getattr(obj, field).delete()
setattr(obj, field, None)
if not content_data:
return
content_obj = models.Content.objects.create(
text=content_data["text"][: models.CONTENT_TEXT_MAX_LENGTH],
content_type=content_data["content_type"],
)
setattr(obj, field, content_obj)
obj.save(update_fields=[field])
return content_obj
@transaction.atomic
def attach_file(obj, field, file_data, fetch=False):
from . import models, tasks
existing = getattr(obj, f"{field}_id")
if existing:
getattr(obj, field).delete()
if not file_data:
return
if isinstance(file_data, models.Attachment):
attachment = file_data
else:
extensions = {"image/jpeg": "jpg", "image/png": "png", "image/gif": "gif"}
extension = extensions.get(file_data["mimetype"], "jpg")
attachment = models.Attachment(mimetype=file_data["mimetype"])
name_fields = ["uuid", "full_username", "pk"]
name = [
getattr(obj, field) for field in name_fields if getattr(obj, field, None)
][0]
filename = f"{field}-{name}.{extension}"
if "url" in file_data:
attachment.url = file_data["url"]
else:
f = ContentFile(file_data["content"])
attachment.file.save(filename, f, save=False)
if not attachment.file and fetch:
try:
tasks.fetch_remote_attachment(attachment, filename=filename, save=False)
except Exception as e:
logger.warn(
"Cannot download attachment at url %s: %s", attachment.url, e
)
attachment = None
if attachment:
attachment.save()
setattr(obj, field, attachment)
obj.save(update_fields=[field])
return attachment
def get_mimetype_from_ext(path):
parts = path.lower().split(".")
ext = parts[-1]
match = {
"jpeg": "image/jpeg",
"jpg": "image/jpeg",
"png": "image/png",
"gif": "image/gif",
}
return match.get(ext)
def get_audio_mimetype(mt):
aliases = {"audio/x-mp3": "audio/mpeg", "audio/mpeg3": "audio/mpeg"}
return aliases.get(mt, mt)
def update_modification_date(obj, field="modification_date", date=None):
IGNORE_DELAY = 60
current_value = getattr(obj, field)
date = date or timezone.now()
ignore = current_value is not None and current_value < date - datetime.timedelta(
seconds=IGNORE_DELAY
)
if ignore:
setattr(obj, field, date)
obj.__class__.objects.filter(pk=obj.pk).update(**{field: date})
return date
def monkey_patch_request_build_absolute_uri():
"""
Since we have FUNKWHALE_HOSTNAME and PROTOCOL hardcoded in settings, we can
override django's multisite logic, which can break when reverse proxies aren't configured
properly.
"""
builtin_scheme = request.HttpRequest.scheme
def scheme(self):
if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
return settings.FUNKWHALE_PROTOCOL
return builtin_scheme.fget(self)
builtin_get_host = request.HttpRequest.get_host
def get_host(self):
if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
return settings.FUNKWHALE_HOSTNAME
return builtin_get_host(self)
request.HttpRequest.scheme = property(scheme)
request.HttpRequest.get_host = get_host
def get_file_hash(file, algo=None, chunk_size=None, full_read=False):
algo = algo or settings.HASHING_ALGORITHM
chunk_size = chunk_size or settings.HASHING_CHUNK_SIZE
hasher = hashlib.new(algo)
file.seek(0)
if full_read:
for byte_block in iter(lambda: file.read(chunk_size), b""):
hasher.update(byte_block)
else:
# sometimes, it's useful to only hash the beginning of the file, e.g
# to avoid a lot of I/O when crawling large libraries
hasher.update(file.read(chunk_size))
return f"{algo}:{hasher.hexdigest()}"
......@@ -6,7 +6,7 @@ from django.core.exceptions import ValidationError
from django.core.files.images import get_image_dimensions
from django.template.defaultfilters import filesizeformat
from django.utils.deconstruct import deconstructible
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import gettext_lazy as _
@deconstructible
......@@ -72,7 +72,7 @@ class ImageDimensionsValidator:
@deconstructible
class FileValidator(object):
class FileValidator:
"""
Taken from https://gist.github.com/jrosebr1/2140738
Validator for files, checking the size, extension and mimetype.
......@@ -97,7 +97,7 @@ class FileValidator(object):
"MIME type '%(mimetype)s' is not valid. Allowed types are: %(allowed_mimetypes)s."
)
min_size_message = _(
"The current file %(size)s, which is too small. The minumum file size is %(allowed_size)s."
"The current file %(size)s, which is too small. The minimum file size is %(allowed_size)s."
)
max_size_message = _(
"The current file %(size)s, which is too large. The maximum file size is %(allowed_size)s."
......@@ -163,5 +163,5 @@ class DomainValidator(validators.URLValidator):
If it fails, we know the domain is not valid.
"""
super().__call__("http://{}".format(value))
super().__call__(f"http://{value}")
return value
from django.db import transaction
import logging
import time
from django.conf import settings
from django.db import transaction
from drf_spectacular.utils import extend_schema
from rest_framework import (
exceptions,
generics,
mixins,
permissions,
response,
views,
viewsets,
)
from rest_framework.decorators import action
from rest_framework import exceptions
from rest_framework import mixins
from rest_framework import permissions
from rest_framework import response
from rest_framework import viewsets
from . import filters
from . import models
from . import mutations
from . import serializers
from . import signals
from . import tasks
from . import utils
from config import plugins
from funkwhale_api.common.serializers import (
ErrorDetailSerializer,
TextPreviewSerializer,
)
from funkwhale_api.users.oauth import permissions as oauth_permissions
from . import filters, models, mutations, serializers, signals, tasks, throttling, utils
logger = logging.getLogger(__name__)
class SkipFilterForGetObject:
......@@ -36,6 +47,7 @@ class MutationViewSet(
lookup_field = "uuid"
queryset = (
models.Mutation.objects.all()
.exclude(target_id=None)
.order_by("-creation_date")
.select_related("created_by", "approved_by")
.prefetch_related("target")
......@@ -64,6 +76,7 @@ class MutationViewSet(
return super().perform_destroy(instance)
@extend_schema(operation_id="approve_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def approve(self, request, *args, **kwargs):
......@@ -93,6 +106,7 @@ class MutationViewSet(
)
return response.Response({}, status=200)
@extend_schema(operation_id="reject_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def reject(self, request, *args, **kwargs):
......@@ -120,3 +134,193 @@ class MutationViewSet(
new_is_approved=instance.is_approved,
)
return response.Response({}, status=200)
class RateLimitView(views.APIView):
permission_classes = []
throttle_classes = []
serializer_class = serializers.RateLimitSerializer
def get(self, request, *args, **kwargs):
ident = throttling.get_ident(getattr(request, "user", None), request)
data = {
"enabled": settings.THROTTLING_ENABLED,
"ident": ident,
"scopes": throttling.get_status(ident, time.time()),
}
return response.Response(serializers.RateLimitSerializer(data).data, status=200)
class AttachmentViewSet(
mixins.RetrieveModelMixin,
mixins.CreateModelMixin,
mixins.DestroyModelMixin,
viewsets.GenericViewSet,
):
lookup_field = "uuid"
queryset = models.Attachment.objects.all()
serializer_class = serializers.AttachmentSerializer
permission_classes = [oauth_permissions.ScopePermission]
required_scope = "libraries"
anonymous_policy = "setting"
@action(
detail=True, methods=["get"], permission_classes=[], authentication_classes=[]
)
@transaction.atomic
def proxy(self, request, *args, **kwargs):
instance = self.get_object()
if not settings.EXTERNAL_MEDIA_PROXY_ENABLED:
r = response.Response(status=302)
r["Location"] = instance.url
return r
size = request.GET.get("next", "original").lower()
if size not in [
"original",
"small_square_crop",
"medium_square_crop",
"large_square_crop",
]:
size = "original"
try:
tasks.fetch_remote_attachment(instance)
except Exception:
logger.exception("Error while fetching attachment %s", instance.url)
return response.Response(status=500)
data = self.serializer_class(instance).data
redirect = response.Response(status=302)
redirect["Location"] = data["urls"][size]
return redirect
def perform_create(self, serializer):
return serializer.save(actor=self.request.user.actor)
def perform_destroy(self, instance):
if instance.actor is None or instance.actor != self.request.user.actor:
raise exceptions.PermissionDenied()
instance.delete()
class TextPreviewView(generics.GenericAPIView):
permission_classes = []
serializer_class = TextPreviewSerializer
@extend_schema(
operation_id="preview_text",
responses={200: TextPreviewSerializer, 400: ErrorDetailSerializer},
)
def post(self, request, *args, **kwargs):
payload = request.data
if "text" not in payload:
return response.Response(
ErrorDetailSerializer("Invalid input").data, status=400
)
permissive = payload.get("permissive", False)
data = TextPreviewSerializer(
utils.render_html(payload["text"], "text/markdown", permissive=permissive)
).data
return response.Response(data, status=200)
class PluginViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):
required_scope = "plugins"
serializer_class = serializers.serializers.Serializer
queryset = models.PluginConfiguration.objects.none()
def list(self, request, *args, **kwargs):
user = request.user
user_plugins = [p for p in plugins._plugins.values() if p["user"] is True]
return response.Response(
[
plugins.serialize_plugin(p, confs=plugins.get_confs(user=user))
for p in user_plugins
]
)
def retrieve(self, request, *args, **kwargs):
user = request.user
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if not user_plugin:
return response.Response(status=404)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def post(self, request, *args, **kwargs):
return self.create(request, *args, **kwargs)
def create(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if kwargs["pk"] not in confs:
return response.Response(status=404)
plugins.set_conf(kwargs["pk"], request.data, user)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def delete(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
if kwargs["pk"] not in confs:
return response.Response(status=404)
user.plugins.filter(code=kwargs["pk"]).delete()
return response.Response(status=204)
@extend_schema(operation_id="enable_plugin")
@action(detail=True, methods=["post"])
def enable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], True, user)
return response.Response({}, status=200)
@extend_schema(operation_id="disable_plugin")
@action(detail=True, methods=["post"])
def disable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], False, user)
return response.Response({}, status=200)
@action(detail=True, methods=["post"])
def scan(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
conf = plugins.get_conf(kwargs["pk"], user=user)
if not conf["enabled"]:
return response.Response(status=405)
library = request.user.actor.libraries.get(uuid=conf["conf"]["library"])
hook = [
hook
for p, hook in plugins._hooks.get(plugins.SCAN, [])
if p == kwargs["pk"]
]
if not hook:
return response.Response(status=405)
hook[0](library=library, conf=conf["conf"])
return response.Response({}, status=200)
# -*- coding: utf-8 -*-
import logging
from config import plugins
from funkwhale_api.contrib.archivedl import tasks
from .funkwhale_startup import PLUGIN
logger = logging.getLogger(__name__)
@plugins.register_hook(plugins.TRIGGER_THIRD_PARTY_UPLOAD, PLUGIN)
def lauch_download(track, conf={}):
tasks.archive_download.delay(track_id=track.pk, conf=conf)
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="archivedl",
label="Archive-dl",
description="",
version="0.1",
user=False,
conf=[],
)
import asyncio
import hashlib
import logging
import os
import tempfile
import time
import urllib.parse
from datetime import timedelta
import requests
from django.core.files import File
from django.utils import timezone
from funkwhale_api.federation import actors
from funkwhale_api.music import models, utils
from funkwhale_api.taskapp import celery
logger = logging.getLogger(__name__)
class TooManyQueriesError(Exception):
pass
def check_existing_download_task(track):
if models.Upload.objects.filter(
track=track,
import_status__in=["pending", "finished"],
third_party_provider="archive-dl",
).exists():
raise TooManyQueriesError(
"Upload for this track already exist or is pending. Stopping task."
)
def check_last_third_party_queries(track, count):
# 15 per minute according to their docs = one every 4 seconds
time_threshold = timezone.now() - timedelta(seconds=5)
if models.Upload.objects.filter(
third_party_provider="archive-dl",
import_status__in=["pending", "finished"],
creation_date__gte=time_threshold,
).exists():
logger.info(
"Last archive.org query was too recent. Trying to wait 2 seconds..."
)
time.sleep(2)
count += 1
if count > 3:
raise TooManyQueriesError(
"Probably too many archivedl tasks are queue, stopping this task"
)
check_last_third_party_queries(track, count)
def create_upload(url, track, files_data):
mimetype = f"audio/{files_data.get('format', 'unknown')}"
duration = files_data.get("mtime", 0)
filesize = files_data.get("size", 0)
bitrate = files_data.get("bitrate", 0)
service_library = models.Library.objects.create(
privacy_level="everyone",
actor=actors.get_service_actor(),
)
return models.Upload.objects.create(
mimetype=mimetype,
source=url,
third_party_provider="archive-dl",
creation_date=timezone.now(),
track=track,
duration=duration,
size=filesize,
bitrate=bitrate,
library=service_library,
from_activity=None,
import_status="pending",
)
@celery.app.task(name="archivedl.archive_download")
@celery.require_instance(models.Track.objects.select_related(), "track")
def archive_download(track, conf):
try:
check_existing_download_task(track)
check_last_third_party_queries(track, 0)
except TooManyQueriesError as e:
logger.error(e)
return
artist_name = utils.get_artist_credit_string(track)
query = f"mediatype:audio AND title:{track.title} AND creator:{artist_name}"
with requests.Session() as session:
url = get_search_url(query, page_size=1, page=1)
page_data = fetch_json(url, session)
for obj in page_data["response"]["docs"]:
logger.info(f"launching download item for {str(obj)}")
download_item(
item_data=obj,
session=session,
allowed_extensions=utils.SUPPORTED_EXTENSIONS,
track=track,
)
def fetch_json(url, session):
logger.info(f"Fetching {url}...")
with session.get(url) as response:
return response.json()
def download_item(
item_data,
session,
allowed_extensions,
track,
):
files_data = get_files_data(item_data["identifier"], session)
to_download = list(
filter_files(
files_data["result"],
allowed_extensions=allowed_extensions,
)
)
url = f"https://archive.org/download/{item_data['identifier']}/{to_download[0]['name']}"
upload = create_upload(url, track, to_download[0])
try:
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, to_download[0]["name"])
download_file(
path,
url=url,
session=session,
checksum=to_download[0]["sha1"],
upload=upload,
to_download=to_download,
)
logger.info(f"Finished to download item {item_data['identifier']}...")
except Exception as e:
upload.delete()
raise e
def check_integrity(path, expected_checksum):
with open(path, mode="rb") as f:
hash = hashlib.sha1()
hash.update(f.read())
return expected_checksum == hash.hexdigest()
def get_files_data(identifier, session):
url = f"https://archive.org/metadata/{identifier}/files"
logger.info(f"Fetching files data at {url}...")
with session.get(url) as response:
return response.json()
def download_file(path, url, session, checksum, upload, to_download):
if os.path.exists(path) and check_integrity(path, checksum):
logger.info(f"Skipping already downloaded file at {path}")
return
logger.info(f"Downloading file {url}...")
with open(path, mode="wb") as f:
try:
with session.get(url) as response:
f.write(response.content)
except asyncio.TimeoutError as e:
logger.error(f"Timeout error while downloading {url}: {e}")
with open(path, "rb") as f:
upload.audio_file.save(f"{to_download['name']}", File(f))
upload.import_status = "finished"
upload.url = url
upload.save()
return upload
def filter_files(files, allowed_extensions):
for f in files:
if allowed_extensions:
extension = os.path.splitext(f["name"])[-1][1:]
if extension not in allowed_extensions:
continue
yield f
def get_search_url(query, page_size, page):
q = urllib.parse.urlencode({"q": query})
return f"https://archive.org/advancedsearch.php?{q}&sort[]=addeddate+desc&rows={page_size}\
&page={page}&output=json"