Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing with 27854 additions and 68 deletions
import operator
from django.core.exceptions import ObjectDoesNotExist
from django.http import Http404
from rest_framework.permissions import BasePermission
@@ -9,7 +10,9 @@ from funkwhale_api.common import preferences
class ConditionalAuthentication(BasePermission):
    def has_permission(self, request, view):
        if preferences.get("common__api_authentication_required"):
-            return request.user and request.user.is_authenticated
+            return (request.user and request.user.is_authenticated) or (
+                hasattr(request, "actor") and request.actor
+            )
        return True
@@ -44,7 +47,68 @@ class OwnerPermission(BasePermission):
            return True

        owner_field = getattr(view, "owner_field", "user")
        owner_exception = getattr(view, "owner_exception", Http404)
        try:
            owner = operator.attrgetter(owner_field)(obj)
-            if owner != request.user:
-                raise Http404
        except ObjectDoesNotExist:
            raise owner_exception
+        if not owner or not request.user.is_authenticated or owner != request.user:
+            raise owner_exception
        return True
class PrivacyLevelPermission(BasePermission):
    """
    Ensure the request actor has access to the object, according to the
    privacy_level configuration of the object's owner.

    request.user is None for actor-authenticated requests; otherwise it is an
    AnonymousUser instance when the user is not authenticated.
    """

    def has_object_permission(self, request, view, obj):
        if (
            not hasattr(obj, "user")
            and hasattr(obj, "actor")
            and not obj.actor.is_local
        ):
            # it's a remote actor object. It should be public.
            # But we could trigger an update of the remote actor data
            # to avoid leaking data (#2326)
            return True
        if hasattr(obj, "privacy_level"):
            privacy_level = obj.privacy_level
        elif hasattr(obj, "actor") and obj.actor.user:
            privacy_level = obj.actor.user.privacy_level
        else:
            privacy_level = obj.user.privacy_level

        obj_actor = obj.actor if hasattr(obj, "actor") else obj.user.actor

        if privacy_level == "everyone":
            return True

        # user is anonymous
        if hasattr(request, "actor"):
            request_actor = request.actor
        elif request.user and request.user.is_authenticated:
            request_actor = request.user.actor
        else:
            return False

        if privacy_level == "instance":
            # user is local
            if request.user and hasattr(request.user, "actor"):
                return True
            elif hasattr(request, "actor") and request.actor and request.actor.is_local:
                return True
            else:
                return False
        elif privacy_level == "me" and obj_actor == request_actor:
            return True
        elif request_actor in obj_actor.get_approved_followers():
            return True
        else:
            return False
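As an aside, a minimal sketch of how this new permission class could be wired into a viewset; the viewset name and import path are illustrative, not part of this changeset:

# Hypothetical usage of PrivacyLevelPermission (names are illustrative).
from rest_framework import viewsets

from funkwhale_api.common import permissions


class TrackFavoriteViewSet(viewsets.ReadOnlyModelViewSet):
    permission_classes = [permissions.PrivacyLevelPermission]
    # On detail routes, has_object_permission() decides: "everyone" is always
    # visible, "instance" requires a local user or local actor, "me" requires
    # the owning actor, and anything else falls back to the owner's approved
    # followers.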
import json
from django import forms
from django.conf import settings
from django.forms import JSONField
from dynamic_preferences import serializers, types
from dynamic_preferences.registries import global_preferences_registry
-class DefaultFromSettingMixin(object):
+class DefaultFromSettingMixin:
    def get_default(self):
        return getattr(settings, self.setting)
@@ -14,6 +17,11 @@ def get(pref):
    return manager[pref]
def all():
    manager = global_preferences_registry.manager()
    return manager.all()


def set(pref, value):
    manager = global_preferences_registry.manager()
    manager[pref] = value
@@ -30,7 +38,7 @@ class StringListSerializer(serializers.BaseSerializer):
        if type(value) not in [list, tuple]:
            raise cls.exception(
-                "Cannot serialize, value {} is not a list or a tuple".format(value)
+                f"Cannot serialize, value {value} is not a list or a tuple"
            )
        if cls.sort:
@@ -49,6 +57,50 @@ class StringListPreference(types.BasePreferenceType):
    field_class = forms.MultipleChoiceField

    def get_api_additional_data(self):
-        d = super(StringListPreference, self).get_api_additional_data()
+        d = super().get_api_additional_data()
        d["choices"] = self.get("choices")
        return d
class JSONSerializer(serializers.BaseSerializer):
    required = True

    @classmethod
    def to_db(cls, value, **kwargs):
        if not cls.required and value is None:
            return json.dumps(value)
        data_serializer = cls.data_serializer_class(data=value)
        if not data_serializer.is_valid():
            raise cls.exception(
                f"{value} is not a valid value: {data_serializer.errors}"
            )
        value = data_serializer.validated_data
        try:
            return json.dumps(value, sort_keys=True)
        except TypeError:
            raise cls.exception(
                f"Cannot serialize, value {value} is not JSON serializable"
            )

    @classmethod
    def to_python(cls, value, **kwargs):
        return json.loads(value)
class SerializedPreference(types.BasePreferenceType):
    """
    A preference that stores arbitrary JSON and validates it using a
    rest_framework serializer.
    """

    data_serializer_class = None
    field_class = JSONField
    widget = forms.Textarea

    @property
    def serializer(self):
        class _internal(JSONSerializer):
            data_serializer_class = self.data_serializer_class
            required = self.get("required")

        return _internal
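For illustration, a concrete preference could build on SerializedPreference like this; the serializer, registration details, and names are invented, not taken from this diff:

# Illustrative only: validate a JSON preference with a DRF serializer.
from rest_framework import serializers as rf_serializers


class ScannerConfigSerializer(rf_serializers.Serializer):
    hostname = rf_serializers.CharField()
    interval = rf_serializers.IntegerField(min_value=0)


class ScannerConfig(SerializedPreference):
    # registration in a dynamic_preferences registry/section omitted here
    name = "scanner_config"
    data_serializer_class = ScannerConfigSerializer
    required = True
    default = {"hostname": "example.com", "interval": 60}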
from rest_framework.renderers import JSONRenderer
class ActivityStreamRenderer(JSONRenderer):
    media_type = "application/activity+json"
from rest_framework.routers import DefaultRouter
class OptionalSlashRouter(DefaultRouter):
    def __init__(self):
        super().__init__()
        self.trailing_slash = "/?"
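With trailing_slash set to the regex fragment "/?", every route generated by the router matches both the slashed and unslashed URL. A hedged sketch (TrackViewSet is an illustrative viewset, not defined here):

# Illustrative: /api/v1/tracks and /api/v1/tracks/ both resolve.
router = OptionalSlashRouter()
router.register(r"tracks", TrackViewSet, basename="track")
urlpatterns = router.urls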
[Source diff for this file not displayed: too large.]
-from . import create_actors
-from . import create_image_variations
-from . import django_permissions_to_user_permissions
-from . import test
+from . import (
+    create_actors,
+    delete_pre_017_federated_uploads,
+    django_permissions_to_user_permissions,
+    migrate_to_user_libraries,
+    test,
+)
+
+__all__ = [
+    "create_actors",
+    "create_image_variations",
+    "django_permissions_to_user_permissions",
+    "migrate_to_user_libraries",
+    "delete_pre_017_federated_uploads",
+    "test",
+]
@@ -9,15 +9,13 @@ from funkwhale_api.users.models import User, create_actor
def main(command, **kwargs):
    qs = User.objects.filter(actor__isnull=True).order_by("username")
    total = len(qs)
-    command.stdout.write("{} users found without actors".format(total))
+    command.stdout.write(f"{total} users found without actors")
    for i, user in enumerate(qs):
-        command.stdout.write(
-            "{}/{} creating actor for {}".format(i + 1, total, user.username)
-        )
+        command.stdout.write(f"{i + 1}/{total} creating actor for {user.username}")
        try:
            user.actor = create_actor(user)
        except IntegrityError as e:
            # somehow, an actor with the same url already exists in the database
-            command.stderr.write("Error while creating actor: {}".format(str(e)))
+            command.stderr.write(f"Error while creating actor: {str(e)}")
            continue
        user.save(update_fields=["actor"])
@@ -4,16 +4,16 @@ Compute different sizes of image used for Album covers and User avatars
from versatileimagefield.image_warmer import VersatileImageFieldWarmer

-from funkwhale_api.music.models import Album
-from funkwhale_api.users.models import User
+from funkwhale_api.common.models import Attachment

-MODELS = [(Album, "cover", "square"), (User, "avatar", "square")]
+MODELS = [
+    (Attachment, "file", "attachment_square"),
+]


def main(command, **kwargs):
    for model, attribute, key_set in MODELS:
-        qs = model.objects.exclude(**{"{}__isnull".format(attribute): True})
+        qs = model.objects.exclude(**{f"{attribute}__isnull": True})
        qs = qs.exclude(**{attribute: ""})
        warmer = VersatileImageFieldWarmer(
            instance_or_queryset=qs,
@@ -21,10 +21,8 @@ def main(command, **kwargs):
            image_attr=attribute,
            verbose=True,
        )
-        command.stdout.write(
-            "Creating images for {} / {}".format(model.__name__, attribute)
-        )
+        command.stdout.write(f"Creating images for {model.__name__} / {attribute}")
        num_created, failed_to_create = warmer.warm()
        command.stdout.write(
-            " {} created, {} in error".format(num_created, len(failed_to_create))
+            f" {num_created} created, {len(failed_to_create)} in error"
        )
"""
Compute different sizes of image used for Album covers and User avatars
"""
from funkwhale_api.music.models import Upload
def main(command, **kwargs):
queryset = Upload.objects.filter(
source__startswith="http", source__contains="/federation/music/file/"
).exclude(source__contains="youtube")
total = queryset.count()
command.stdout.write(f"{total} uploads found")
queryset.delete()
@@ -10,7 +10,6 @@ from funkwhale_api.users import models
mapping = {
    "dynamic_preferences.change_globalpreferencemodel": "settings",
    "music.add_importbatch": "library",
-    "federation.change_library": "federation",
}
@@ -24,6 +23,6 @@ def main(command, **kwargs):
    total = users.count()
    command.stdout.write(
-        "Updating {} users with {} permission...".format(total, user_permission)
+        f"Updating {total} users with {user_permission} permission..."
    )
-    users.update(**{"permission_{}".format(user_permission): True})
+    users.update(**{f"permission_{user_permission}": True})
"""
Mirate instance files to a library #463. For each user that imported music on an
instance, we will create a "default" library with related files and an instance-level
visibility (unless instance has common__api_authentication_required set to False,
in which case the libraries will be public).
Files without any import job will be bounded to a "default" library on the first
superuser account found. This should now happen though.
This command will also generate federation ids for existing resources.
"""
from django.conf import settings
from django.db.models import CharField, F, Value, functions
from funkwhale_api.common import preferences
from funkwhale_api.federation import models as federation_models
from funkwhale_api.music import models
from funkwhale_api.users.models import User
def create_libraries(open_api, stdout):
local_actors = federation_models.Actor.objects.exclude(user=None).only("pk", "user")
privacy_level = "everyone" if open_api else "instance"
stdout.write(
"* Creating {} libraries with {} visibility".format(
len(local_actors), privacy_level
)
)
libraries_by_user = {}
for a in local_actors:
library, created = models.Library.objects.get_or_create(
name="default", actor=a, defaults={"privacy_level": privacy_level}
)
libraries_by_user[library.actor.user.pk] = library.pk
if created:
stdout.write(f" * Created library {library.pk} for user {a.user.pk}")
else:
stdout.write(
" * Found existing library {} for user {}".format(
library.pk, a.user.pk
)
)
return libraries_by_user
def update_uploads(libraries_by_user, stdout):
stdout.write("* Updating uploads with proper libraries...")
for user_id, library_id in libraries_by_user.items():
jobs = models.ImportJob.objects.filter(
upload__library=None, batch__submitted_by=user_id
)
candidates = models.Upload.objects.filter(
pk__in=jobs.values_list("upload", flat=True)
)
total = candidates.update(library=library_id, import_status="finished")
if total:
stdout.write(f" * Assigned {total} uploads to user {user_id}'s library")
else:
stdout.write(f" * No uploads to assign to user {user_id}'s library")
def update_orphan_uploads(open_api, stdout):
privacy_level = "everyone" if open_api else "instance"
first_superuser = (
User.objects.filter(is_superuser=True)
.exclude(actor=None)
.order_by("pk")
.first()
)
if not first_superuser:
stdout.write("* No superuser found, skipping update orphan uploads")
return
library, _ = models.Library.objects.get_or_create(
name="default",
actor=first_superuser.actor,
defaults={"privacy_level": privacy_level},
)
candidates = (
models.Upload.objects.filter(library=None, jobs__isnull=True)
.exclude(audio_file=None)
.exclude(audio_file="")
)
total = candidates.update(library=library, import_status="finished")
if total:
stdout.write(
"* Assigned {} orphaned uploads to superuser {}".format(
total, first_superuser.pk
)
)
else:
stdout.write("* No orphaned uploads found")
def set_fid(queryset, path, stdout):
model = queryset.model._meta.label
qs = queryset.filter(fid=None)
base_url = f"{settings.FUNKWHALE_URL}{path}"
stdout.write(f"* Assigning federation ids to {model} entries (path: {base_url})")
new_fid = functions.Concat(Value(base_url), F("uuid"), output_field=CharField())
total = qs.update(fid=new_fid)
stdout.write(f" * {total} entries updated")
def update_shared_inbox_url(stdout):
stdout.write("* Update shared inbox url for local actors...")
candidates = federation_models.Actor.objects.local()
url = federation_models.get_shared_inbox_url()
candidates.update(shared_inbox_url=url)
def generate_actor_urls(part, stdout):
field = f"{part}_url"
stdout.write(f"* Update {field} for local actors...")
queryset = federation_models.Actor.objects.local().filter(**{field: None})
base_url = f"{settings.FUNKWHALE_URL}/federation/actors/"
new_field = functions.Concat(
Value(base_url),
F("preferred_username"),
Value(f"/{part}"),
output_field=CharField(),
)
queryset.update(**{field: new_field})
def main(command, **kwargs):
open_api = not preferences.get("common__api_authentication_required")
libraries_by_user = create_libraries(open_api, command.stdout)
update_uploads(libraries_by_user, command.stdout)
update_orphan_uploads(open_api, command.stdout)
set_fid_params = [
(
models.Upload.objects.exclude(library__actor__user=None),
"/federation/music/uploads/",
),
(models.Artist.objects.all(), "/federation/music/artists/"),
(models.Album.objects.all(), "/federation/music/albums/"),
(models.Track.objects.all(), "/federation/music/tracks/"),
]
for qs, path in set_fid_params:
set_fid(qs, path, command.stdout)
update_shared_inbox_url(command.stdout)
for part in ["followers", "following"]:
generate_actor_urls(part, command.stdout)
import re
from django.contrib.postgres.search import SearchQuery
from django.db.models import Q
from . import utils
-QUERY_REGEX = re.compile('(((?P<key>\w+):)?(?P<value>"[^"]+"|[\S]+))')
+QUERY_REGEX = re.compile(r'(((?P<key>\w+):)?(?P<value>"[^"]+"|[\S]+))')
def parse_query(query):
    """
    Given a search query such as "hello is:issue status:opened",
-    returns a list of dictionnaries discribing each query token
+    returns a list of dictionaries describing each query token
    """
    matches = [m.groupdict() for m in QUERY_REGEX.finditer(query.lower())]
    for m in matches:
@@ -23,7 +25,7 @@ def normalize_query(
    findterms=re.compile(r'"([^"]+)"|(\S+)').findall,
    normspace=re.compile(r"\s{2,}").sub,
):
-    """ Splits the query string in invidual keywords, getting rid of unecessary spaces
+    """Splits the query string in individual keywords, getting rid of unnecessary spaces
    and grouping quoted words together.
    Example:
@@ -56,6 +58,59 @@ def get_query(query_string, search_fields):
    return query


def remove_chars(string, chars):
    for char in chars:
        string = string.replace(char, "")
    return string


def get_fts_query(query_string, fts_fields=["body_text"], model=None):
    search_type = "raw"
    if query_string.startswith('"') and query_string.endswith('"'):
        # we pass the query directly to the FTS engine
        query_string = query_string[1:-1]
    else:
        query_string = remove_chars(query_string, ['"', "&", "(", ")", "!", "'"])
        parts = query_string.replace(":", "").split(" ")
        parts = [f"{p}:*" for p in parts if p]
        if not parts:
            return Q(pk=None)

        query_string = "&".join(parts)

    if not fts_fields or not query_string.strip():
        return Q(pk=None)

    query = None
    for field in fts_fields:
        if "__" in field and model:
            # When we have a nested lookup, we switch to a subquery for enhanced performance
            fk_field_name, lookup = (
                field.split("__")[0],
                "__".join(field.split("__")[1:]),
            )
            fk_field = model._meta.get_field(fk_field_name)
            related_model = fk_field.related_model
            subquery = related_model.objects.filter(
                **{
                    lookup: SearchQuery(
                        query_string, search_type=search_type, config="english_nostop"
                    )
                }
            ).values_list("pk", flat=True)
            new_query = Q(**{f"{fk_field_name}__in": list(subquery)})
        else:
            new_query = Q(
                **{
                    field: SearchQuery(
                        query_string, search_type=search_type, config="english_nostop"
                    )
                }
            )
        query = utils.join_queries_or(query, new_query)

    return query
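To make the behaviour concrete, a hedged sketch of a call site; the model and field names are illustrative, and "english_nostop" is assumed to be the custom PostgreSQL text-search configuration the project installs elsewhere:

# Illustrative: plain terms become the raw tsquery "electro:*&swing:*",
# so every prefix must match; a fully double-quoted query is passed
# through to the FTS engine unchanged.
q = get_fts_query(
    "electro swing",
    fts_fields=["body_text", "album__body_text"],  # hypothetical lookups
    model=Track,  # hypothetical model
)
tracks = Track.objects.filter(q)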
def filter_tokens(tokens, valid):
    return [t for t in tokens if t["key"] in valid]
@@ -65,6 +120,9 @@ def apply(qs, config_data):
        q = config_data.get(k)
        if q:
            qs = qs.filter(q)
+    distinct = config_data.get("distinct", False)
+    if distinct:
+        qs = qs.distinct()
    return qs
@@ -77,13 +135,28 @@ class SearchConfig:
    def clean(self, query):
        tokens = parse_query(query)
        cleaned_data = {}
        cleaned_data["types"] = self.clean_types(filter_tokens(tokens, ["is"]))
        cleaned_data["search_query"] = self.clean_search_query(
-            filter_tokens(tokens, [None, "in"])
+            filter_tokens(tokens, [None, "in"] + list(self.search_fields.keys()))
        )
-        unhandled_tokens = [t for t in tokens if t["key"] not in [None, "is", "in"]]
-        cleaned_data["filter_query"] = self.clean_filter_query(unhandled_tokens)
+        unhandled_tokens = [
+            t
+            for t in tokens
+            if t["key"] not in [None, "is", "in"] + list(self.search_fields.keys())
+        ]
+        cleaned_data["filter_query"], matching_filters = self.clean_filter_query(
+            unhandled_tokens
+        )
+        if matching_filters:
+            cleaned_data["distinct"] = any(
+                [
+                    self.filter_fields[k].get("distinct", False)
+                    for k in matching_filters
+                    if k in self.filter_fields
+                ]
+            )
+        else:
+            cleaned_data["distinct"] = False
        return cleaned_data
    def clean_search_query(self, tokens):
@@ -95,24 +168,67 @@ class SearchConfig:
        } or set(self.search_fields.keys())
        fields_subset = set(self.search_fields.keys()) & fields_subset
        to_fields = [self.search_fields[k]["to"] for k in fields_subset]
+        specific_field_query = None
+        for token in tokens:
+            if token["key"] not in self.search_fields:
+                continue
+            to = self.search_fields[token["key"]]["to"]
+            try:
+                field = token["field"]
+                value = field.clean(token["value"])
+            except KeyError:
+                # no cleaning to apply
+                value = token["value"]
+            q = Q(**{f"{to}__icontains": value})
+            if not specific_field_query:
+                specific_field_query = q
+            else:
+                specific_field_query &= q
        query_string = " ".join([t["value"] for t in filter_tokens(tokens, [None])])
-        return get_query(query_string, sorted(to_fields))
+        unhandled_tokens_query = get_query(query_string, sorted(to_fields))
+        if specific_field_query and unhandled_tokens_query:
+            return unhandled_tokens_query & specific_field_query
+        elif specific_field_query:
+            return specific_field_query
+        elif unhandled_tokens_query:
+            return unhandled_tokens_query
+        return None
    def clean_filter_query(self, tokens):
        if not self.filter_fields or not tokens:
-            return
+            return None, []
        matching = [t for t in tokens if t["key"] in self.filter_fields]
-        queries = [
-            Q(**{self.filter_fields[t["key"]]["to"]: t["value"]}) for t in matching
-        ]
+        queries = [self.get_filter_query(token) for token in matching]
        query = None
        for q in queries:
            if not query:
                query = q
            else:
                query = query & q
-        return query
+        return query, [m["key"] for m in matching]
    def get_filter_query(self, token):
        raw_value = token["value"]
        try:
            field = self.filter_fields[token["key"]]["field"]
            value = field.clean(raw_value)
        except KeyError:
            # no cleaning to apply
            value = raw_value
        try:
            query_field = self.filter_fields[token["key"]]["to"]
            return Q(**{query_field: value})
        except KeyError:
            pass

        # we don't have a basic filter -> field mapping, this likely means we
        # have a dynamic handler in the config
        handler = self.filter_fields[token["key"]]["handler"]
        value = handler(value)
        return value
    def clean_types(self, tokens):
        if not self.types:
import collections
import io
import os
import PIL
from django.core.exceptions import ObjectDoesNotExist
from django.core.files.uploadedfile import SimpleUploadedFile
from django.utils.encoding import smart_str
from django.utils.translation import gettext_lazy as _
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers
from . import models, utils
class RelatedField(serializers.RelatedField):
    default_error_messages = {
        "does_not_exist": _("Object with {related_field_name}={value} does not exist."),
        "invalid": _("Invalid value."),
    }

    def __init__(self, related_field_name, serializer, **kwargs):
        self.related_field_name = related_field_name
        self.serializer = serializer
        self.filters = kwargs.pop("filters", None)
        self.queryset_filter = kwargs.pop("queryset_filter", None)
        try:
            kwargs["queryset"] = kwargs.pop("queryset")
        except KeyError:
            kwargs["queryset"] = self.serializer.Meta.model.objects.all()
        super().__init__(**kwargs)

    def get_filters(self, data):
        filters = {self.related_field_name: data}
        if self.filters:
            filters.update(self.filters(self.context))
        return filters

    def filter_queryset(self, queryset):
        if self.queryset_filter:
            queryset = self.queryset_filter(queryset, self.context)
        return queryset

    def to_internal_value(self, data):
        try:
            queryset = self.get_queryset()
            filters = self.get_filters(data)
            queryset = self.filter_queryset(queryset)
            return queryset.get(**filters)
        except ObjectDoesNotExist:
            self.fail(
                "does_not_exist",
                related_field_name=self.related_field_name,
                value=smart_str(data),
            )
        except (TypeError, ValueError):
            self.fail("invalid")

    def to_representation(self, obj):
        return self.serializer.to_representation(obj)

    def get_choices(self, cutoff=None):
        queryset = self.get_queryset()
        if queryset is None:
            # Ensure that field.choices returns something sensible
            # even when accessed with a read-only field.
            return {}

        if cutoff is not None:
            queryset = queryset[:cutoff]

        return collections.OrderedDict(
            [
                (
                    self.to_representation(item)[self.related_field_name],
                    self.display_value(item),
                )
                for item in queryset
                if self.serializer
            ]
        )
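A sketch of how this RelatedField is meant to be declared; the serializer, model, and filter below are invented for illustration:

# Illustrative: resolve a related Library by uuid, scoped to the request actor.
class UploadSerializer(serializers.ModelSerializer):
    library = RelatedField(
        "uuid",
        LibrarySerializer(),  # hypothetical nested serializer
        queryset_filter=lambda qs, context: qs.filter(
            actor=context["request"].user.actor
        ),
    )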
-class Action(object):
+class Action:
    def __init__(self, name, allow_all=False, qs_filter=None):
        self.name = name
        self.allow_all = allow_all
        self.qs_filter = qs_filter

    def __repr__(self):
-        return "<Action {}>".format(self.name)
+        return f"<Action {self.name}>"
class ActionSerializer(serializers.Serializer):
@@ -21,6 +102,7 @@ class ActionSerializer(serializers.Serializer):
    objects = serializers.JSONField(required=True)
    filters = serializers.DictField(required=False)
    actions = None
+    pk_field = "pk"

    def __init__(self, *args, **kwargs):
        self.actions_by_name = {a.name: a for a in self.actions}
@@ -31,7 +113,7 @@
        )
        for action in self.actions_by_name.keys():
-            handler_name = "handle_{}".format(action)
+            handler_name = f"handle_{action}"
            assert hasattr(self, handler_name), "{} misses a {} method".format(
                self.__class__.__name__, handler_name
            )
@@ -51,7 +133,9 @@
        if value == "all":
            return self.queryset.all().order_by("id")
        if type(value) in [list, tuple]:
-            return self.queryset.filter(pk__in=value).order_by("id")
+            return self.queryset.filter(**{f"{self.pk_field}__in": value}).order_by(
+                self.pk_field
+            )
        raise serializers.ValidationError(
            "{} is not a valid value for objects. You must provide either a "
@@ -91,3 +175,206 @@
"result": result,
}
return payload
def track_fields_for_update(*fields):
    """
    Apply this decorator to a serializer to call a function when specific values
    are updated on an object:

    .. code-block:: python

        @track_fields_for_update('privacy_level')
        class LibrarySerializer(serializers.ModelSerializer):
            def on_updated_fields(self, obj, before, after):
                print('Do something')
    """

    def decorator(serializer_class):
        original_update = serializer_class.update

        def new_update(self, obj, validated_data):
            tracked_fields_before = {f: getattr(obj, f) for f in fields}
            obj = original_update(self, obj, validated_data)
            tracked_fields_after = {f: getattr(obj, f) for f in fields}
            if tracked_fields_before != tracked_fields_after:
                self.on_updated_fields(obj, tracked_fields_before, tracked_fields_after)
            return obj

        serializer_class.update = new_update
        return serializer_class

    return decorator
class StripExifImageField(serializers.ImageField):
    def to_internal_value(self, data):
        file_obj = super().to_internal_value(data)
        image = PIL.Image.open(file_obj)
        data = list(image.getdata())
        image_without_exif = PIL.Image.new(image.mode, image.size)
        image_without_exif.putdata(data)

        with io.BytesIO() as output:
            image_without_exif.save(
                output,
                format=PIL.Image.EXTENSION[os.path.splitext(file_obj.name)[-1].lower()],
                quality=100,
            )
            content = output.getvalue()
        return SimpleUploadedFile(
            file_obj.name, content, content_type=file_obj.content_type
        )
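The field strips metadata by copying only pixel data into a freshly created image before upload, so EXIF blocks (GPS position, camera model, ...) never reach storage. A sketch with an invented serializer name:

# Illustrative: drop-in replacement for serializers.ImageField.
class AvatarSerializer(serializers.Serializer):
    file = StripExifImageField(write_only=True)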
from funkwhale_api.federation import serializers as federation_serializers # noqa
TARGET_ID_TYPE_MAPPING = {
    "music.Track": ("id", "track"),
    "music.Artist": ("id", "artist"),
    "music.Album": ("id", "album"),
}
class APIMutationSerializer(serializers.ModelSerializer):
    created_by = federation_serializers.APIActorSerializer(read_only=True)
    target = serializers.SerializerMethodField()

    class Meta:
        model = models.Mutation
        fields = [
            "fid",
            "uuid",
            "type",
            "creation_date",
            "applied_date",
            "is_approved",
            "is_applied",
            "created_by",
            "approved_by",
            "summary",
            "payload",
            "previous_state",
            "target",
        ]
        read_only_fields = [
            "uuid",
            "creation_date",
            "fid",
            "is_applied",
            "created_by",
            "approved_by",
            "previous_state",
        ]

    @extend_schema_field(OpenApiTypes.OBJECT)
    def get_target(self, obj):
        target = obj.target
        if not target:
            return
        id_field, type = TARGET_ID_TYPE_MAPPING[target._meta.label]
        return {"type": type, "id": getattr(target, id_field), "repr": str(target)}

    def validate_type(self, value):
        if value not in self.context["registry"]:
            raise serializers.ValidationError(f"Invalid mutation type {value}")
        return value
class AttachmentSerializer(serializers.Serializer):
    uuid = serializers.UUIDField(read_only=True)
    size = serializers.IntegerField(read_only=True)
    mimetype = serializers.CharField(read_only=True)
    creation_date = serializers.DateTimeField(read_only=True)
    file = StripExifImageField(write_only=True)
    urls = serializers.SerializerMethodField()

    @extend_schema_field(
        {
            "type": "object",
            "properties": {
                "original": {"type": "string"},
                "small_square_crop": {"type": "string"},
                "medium_square_crop": {"type": "string"},
                "large_square_crop": {"type": "string"},
            },
        }
    )
    def get_urls(self, o):
        urls = {}
        urls["source"] = o.url
        urls["original"] = o.download_url_original
        urls["small_square_crop"] = o.download_url_small_square_crop
        urls["medium_square_crop"] = o.download_url_medium_square_crop
        urls["large_square_crop"] = o.download_url_large_square_crop
        return urls

    def create(self, validated_data):
        return models.Attachment.objects.create(
            file=validated_data["file"], actor=validated_data["actor"]
        )
class ContentSerializer(serializers.Serializer):
    text = serializers.CharField(
        max_length=models.CONTENT_TEXT_MAX_LENGTH,
        allow_null=True,
        allow_blank=True,
    )
    content_type = serializers.ChoiceField(
        choices=models.CONTENT_TEXT_SUPPORTED_TYPES,
    )
    html = serializers.SerializerMethodField()

    def get_html(self, o) -> str:
        return utils.render_html(o.text, o.content_type)
class NullToEmptDict:
    def get_attribute(self, o):
        attr = super().get_attribute(o)
        if attr is None:
            return {}
        return attr

    def to_representation(self, v):
        if not v:
            return v
        return super().to_representation(v)
class ScopesSerializer(serializers.Serializer):
    id = serializers.CharField()
    rate = serializers.CharField()
    description = serializers.CharField()
    limit = serializers.IntegerField()
    duration = serializers.IntegerField()
    remaining = serializers.IntegerField()
    available = serializers.IntegerField()
    available_seconds = serializers.IntegerField()
    reset = serializers.IntegerField()
    reset_seconds = serializers.IntegerField()


class IdentSerializer(serializers.Serializer):
    type = serializers.CharField()
    id = serializers.CharField()


class RateLimitSerializer(serializers.Serializer):
    enabled = serializers.BooleanField()
    ident = IdentSerializer()
    scopes = serializers.ListField(child=ScopesSerializer())


class ErrorDetailSerializer(serializers.Serializer):
    detail = serializers.CharField(source="*")


class TextPreviewSerializer(serializers.Serializer):
    rendered = serializers.CharField(read_only=True, source="*")
    text = serializers.CharField(write_only=True)
@@ -4,6 +4,13 @@ from django.conf import settings
import funkwhale_api


class FunkwhaleSession(requests.Session):
    def request(self, *args, **kwargs):
        kwargs.setdefault("verify", settings.EXTERNAL_REQUESTS_VERIFY_SSL)
        kwargs.setdefault("timeout", settings.EXTERNAL_REQUESTS_TIMEOUT)
        return super().request(*args, **kwargs)


def get_user_agent():
    return "python-requests (funkwhale/{}; +{})".format(
        funkwhale_api.__version__, settings.FUNKWHALE_URL

@@ -11,6 +18,6 @@ def get_user_agent():


def get_session():
-    s = requests.Session()
+    s = FunkwhaleSession()
    s.headers["User-Agent"] = get_user_agent()
    return s
import django.dispatch
""" Required args: mutation """
mutation_created = django.dispatch.Signal()
""" Required args: mutation, old_is_approved, new_is_approved """
mutation_updated = django.dispatch.Signal()
-import unicodedata
+import os
+import shutil
+
+import slugify
from django.core.files.storage import FileSystemStorage
from storages.backends.s3boto3 import S3Boto3Storage


+def asciionly(name):
+    """
+    Convert unicode characters in name to ASCII characters.
+    """
+    return slugify.slugify(name, ok=slugify.SLUG_OK + ".", only_ascii=True)


class ASCIIFileSystemStorage(FileSystemStorage):
    def get_valid_name(self, name):
+        return super().get_valid_name(asciionly(name))

+    def force_delete(self, name):
+        path = self.path(name)
+        try:
+            if os.path.isdir(path):
+                shutil.rmtree(path)
+            else:
+                return super().delete(name)
+        except FileNotFoundError:
+            pass


class ASCIIS3Boto3Storage(S3Boto3Storage):
    def get_valid_name(self, name):
-        name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore")
-        return super().get_valid_name(name)
+        return super().get_valid_name(asciionly(name))
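A quick sketch of what asciionly produces, assuming the awesome-slugify package (which exposes slugify.SLUG_OK); the exact output depends on the library version:

asciionly("Motörhead - Ace of Spades.mp3")
# -> roughly "Motorhead-Ace-of-Spades.mp3"; non-ASCII characters are
#    transliterated and "." stays legal, so file extensions survive.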
import datetime
import logging
import tempfile
from django.conf import settings
from django.core.files import File
from django.db import transaction
from django.dispatch import receiver
from django.utils import timezone
from funkwhale_api.common import channels
from funkwhale_api.taskapp import celery
from . import models, serializers, session, signals
logger = logging.getLogger(__name__)
@celery.app.task(name="common.apply_mutation")
@transaction.atomic
@celery.require_instance(
    models.Mutation.objects.exclude(is_applied=True).select_for_update(), "mutation"
)
def apply_mutation(mutation):
    mutation.apply()
@receiver(signals.mutation_created)
def broadcast_mutation_created(mutation, **kwargs):
    group = "instance_activity"
    channels.group_send(
        group,
        {
            "type": "event.send",
            "text": "",
            "data": {
                "type": "mutation.created",
                "mutation": serializers.APIMutationSerializer(mutation).data,
                "pending_review_count": models.Mutation.objects.filter(
                    is_approved=None
                ).count(),
            },
        },
    )


@receiver(signals.mutation_updated)
def broadcast_mutation_update(mutation, old_is_approved, new_is_approved, **kwargs):
    group = "instance_activity"
    channels.group_send(
        group,
        {
            "type": "event.send",
            "text": "",
            "data": {
                "type": "mutation.updated",
                "mutation": serializers.APIMutationSerializer(mutation).data,
                "pending_review_count": models.Mutation.objects.filter(
                    is_approved=None
                ).count(),
                "old_is_approved": old_is_approved,
                "new_is_approved": new_is_approved,
            },
        },
    )
def fetch_remote_attachment(attachment, filename=None, save=True):
    if attachment.file:
        # already there, no need to fetch
        return

    s = session.get_session()
    attachment.last_fetch_date = timezone.now()
    with tempfile.TemporaryFile() as tf:
        with s.get(attachment.url, timeout=5, stream=True) as r:
            for chunk in r.iter_content(chunk_size=1024 * 100):
                tf.write(chunk)
            tf.seek(0)
            if not filename:
                filename = attachment.url.split("/")[-1]
                filename = filename[-50:]
            attachment.file.save(filename, File(tf), save=save)
@celery.app.task(name="common.prune_unattached_attachments")
def prune_unattached_attachments():
    limit = timezone.now() - datetime.timedelta(
        seconds=settings.ATTACHMENTS_UNATTACHED_PRUNE_DELAY
    )
    candidates = models.Attachment.objects.attached(False).filter(
        creation_date__lte=limit
    )
    total = candidates.count()
    logger.info("Deleting %s unattached attachments…", total)
    result = candidates.delete()
    logger.info("Deletion done: %s", result)
import collections
from django.conf import settings
from django.core.cache import cache
from rest_framework import throttling as rest_throttling
def get_ident(user, request):
    if user and user.is_authenticated:
        return {"type": "authenticated", "id": f"{user.pk}"}
    ident = rest_throttling.BaseThrottle().get_ident(request)

    return {"type": "anonymous", "id": ident}
def get_cache_key(scope, ident):
    parts = ["throttling", scope, ident["type"], str(ident["id"])]
    return ":".join(parts)
def get_scope_for_action_and_ident_type(action, ident_type, view_conf={}):
    config = collections.ChainMap(view_conf, settings.THROTTLING_SCOPES)
    try:
        action_config = config[action]
    except KeyError:
        action_config = config.get("*", {})
    try:
        return action_config[ident_type]
    except KeyError:
        return
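To make the lookup order concrete, a small invented example; the scope names are not taken from the project settings:

# Illustrative: view-level conf shadows settings.THROTTLING_SCOPES (ChainMap).
conf = {
    "*": {"anonymous": "anonymous-wildcard", "authenticated": "authenticated-wildcard"},
    "create": {"authenticated": "creation"},
}
get_scope_for_action_and_ident_type("create", "authenticated", view_conf=conf)
# -> "creation"
get_scope_for_action_and_ident_type("destroy", "anonymous", view_conf=conf)
# -> "anonymous-wildcard", via the "*" fallback; an unknown ident type
#    would return None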
def get_status(ident, now):
    data = []
    throttle = FunkwhaleThrottle()
    for key in sorted(settings.THROTTLING_RATES.keys()):
        conf = settings.THROTTLING_RATES[key]
        row_data = {"id": key, "rate": conf["rate"], "description": conf["description"]}
        if conf["rate"]:
            num_requests, duration = throttle.parse_rate(conf["rate"])
            history = cache.get(get_cache_key(key, ident)) or []
            relevant_history = [h for h in history if h > now - duration]
            row_data["limit"] = num_requests
            row_data["duration"] = duration
            row_data["remaining"] = num_requests - len(relevant_history)
            if relevant_history and len(relevant_history) >= num_requests:
                # At this point, the endpoint becomes available again
                now_request = relevant_history[-1]
                remaining = duration - (now - int(now_request))
                row_data["available"] = int(now + remaining) or None
                row_data["available_seconds"] = int(remaining) or None
            else:
                row_data["available"] = None
                row_data["available_seconds"] = None
            if relevant_history:
                # At this point, all Rate Limit is reset to 0
                latest_request = relevant_history[0]
                remaining = duration - (now - int(latest_request))
                row_data["reset"] = int(now + remaining)
                row_data["reset_seconds"] = int(remaining)
            else:
                row_data["reset"] = None
                row_data["reset_seconds"] = None
        else:
            row_data["limit"] = None
            row_data["duration"] = None
            row_data["remaining"] = None
            row_data["available"] = None
            row_data["available_seconds"] = None
            row_data["reset"] = None
            row_data["reset_seconds"] = None
        data.append(row_data)
    return data
class FunkwhaleThrottle(rest_throttling.SimpleRateThrottle):
    def __init__(self):
        pass

    def get_cache_key(self, request, view):
        return get_cache_key(self.scope, self.ident)

    def allow_request(self, request, view):
        self.request = request
        self.ident = get_ident(getattr(request, "user", None), request)
        action = getattr(view, "action", "*")
        view_scopes = getattr(view, "throttling_scopes", {})
        if view_scopes is None:
            return True
        self.scope = get_scope_for_action_and_ident_type(
            action=action, ident_type=self.ident["type"], view_conf=view_scopes
        )
        if not self.scope or self.scope not in settings.THROTTLING_RATES:
            return True

        self.rate = settings.THROTTLING_RATES[self.scope].get("rate")
        self.num_requests, self.duration = self.parse_rate(self.rate)
        self.request = request

        return super().allow_request(request, view)

    def attach_info(self):
        info = {
            "num_requests": self.num_requests,
            "duration": self.duration,
            "scope": self.scope,
            "history": self.history or [],
            "wait": self.wait(),
        }
        setattr(self.request, "_throttle_status", info)

    def throttle_success(self):
        self.attach_info()
        return super().throttle_success()

    def throttle_failure(self):
        self.attach_info()
        return super().throttle_failure()
class TooManyRequests(Exception):
    pass


DummyView = collections.namedtuple("DummyView", "action throttling_scopes")


def check_request(request, scope):
    """
    A simple wrapper around FunkwhaleThrottle for views that aren't API views
    or cannot use rest_framework's automatic throttling.

    Raises TooManyRequests if the limit is reached.
    """
    if not settings.THROTTLING_ENABLED:
        return True

    view = DummyView(
        action=scope,
        throttling_scopes={scope: {"anonymous": scope, "authenticated": scope}},
    )
    throttle = FunkwhaleThrottle()
    if not throttle.allow_request(request, view):
        raise TooManyRequests()
    return True
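A sketch of the intended call-site pattern in a plain Django view; the scope name is hypothetical and only throttles if it exists in settings.THROTTLING_RATES:

# Illustrative usage outside DRF.
from django.http import HttpResponse


def password_reset_view(request):
    try:
        check_request(request, "password-reset")  # hypothetical scope name
    except TooManyRequests:
        return HttpResponse(status=429)
    ...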
-from django.utils.deconstruct import deconstructible
+import datetime
+import hashlib
+import logging
import os
import shutil
import uuid
+import xml.etree.ElementTree as ET
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

-from django.db import transaction
+import bleach.sanitizer
+import markdown
+from django import urls
from django.conf import settings
+from django.core.files.base import ContentFile
+from django.db import models, transaction
+from django.http import request
+from django.utils import timezone
from django.utils.deconstruct import deconstructible

logger = logging.getLogger(__name__)
def batch(iterable, n=1):
    has_entries = True
    while has_entries:
        current = []
        for i in range(0, n):
            try:
                current.append(next(iterable))
            except StopIteration:
                has_entries = False

        yield current
def rename_file(instance, field_name, new_name, allow_missing_file=False):
    field = getattr(instance, field_name)
    current_name, extension = os.path.splitext(field.name)

-    new_name_with_extension = "{}{}".format(new_name, extension)
+    new_name_with_extension = f"{new_name}{extension}"
    try:
        shutil.move(field.path, new_name_with_extension)
    except FileNotFoundError:
@@ -48,12 +71,16 @@ def set_query_parameter(url, **kwargs):
@deconstructible
-class ChunkedPath(object):
+class ChunkedPath:
+    def sanitize_filename(self, filename):
+        return filename.replace("/", "-")

    def __init__(self, root, preserve_file_name=True):
        self.root = root
        self.preserve_file_name = preserve_file_name

    def __call__(self, instance, filename):
+        filename = self.sanitize_filename(filename)
        uid = str(uuid.uuid4())
        chunk_size = 2
        chunks = [uid[i : i + chunk_size] for i in range(0, len(uid), chunk_size)]
@@ -61,6 +88,402 @@ class ChunkedPath(object):
        if self.preserve_file_name:
            parts = chunks[:3] + [filename]
        else:
            ext = os.path.splitext(filename)[1][1:].lower()
-            new_filename = "".join(chunks[3:]) + ".{}".format(ext)
+            new_filename = "".join(chunks[3:]) + f".{ext}"
            parts = chunks[:3] + [new_filename]
        return os.path.join(self.root, *parts)
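To illustrate the resulting layout (the uuid below is fabricated): the random uuid is cut into two-character chunks, the first three chunks become directories, and the original name is kept when preserve_file_name is True:

# Illustrative: assuming uuid4() yields "cf138b0b-...", chunks start with
# ["cf", "13", "8b", ...].
upload_to = ChunkedPath("tracks", preserve_file_name=True)
upload_to(None, "song.flac")
# -> "tracks/cf/13/8b/song.flac" (directories differ on every call)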
def chunk_queryset(source_qs, chunk_size):
    """
    From https://github.com/peopledoc/django-chunkator/blob/master/chunkator/__init__.py
    """
    pk = None
    # In django 1.9, _fields is always present and `None` if 'values()' is used
    # In Django 1.8 and below, _fields will only be present if using `values()`
    has_fields = hasattr(source_qs, "_fields") and source_qs._fields
    if has_fields:
        if "pk" not in source_qs._fields:
            raise ValueError("The values() call must include the `pk` field")

    field = source_qs.model._meta.pk
    # set the correct field name:
    # for ForeignKeys, we want to use `model_id` field, and not `model`,
    # to bypass default ordering on related model
    order_by_field = field.attname
    source_qs = source_qs.order_by(order_by_field)
    queryset = source_qs
    while True:
        if pk:
            queryset = source_qs.filter(pk__gt=pk)
        page = queryset[:chunk_size]
        page = list(page)
        nb_items = len(page)

        if nb_items == 0:
            return

        last_item = page[-1]
        # source_qs._fields exists *and* is not none when using "values()"
        if has_fields:
            pk = last_item["pk"]
        else:
            pk = last_item.pk

        yield page

        if nb_items < chunk_size:
            return
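The generator orders by primary key and filters each page with pk__gt=<last pk>, i.e. keyset pagination instead of OFFSET scans. A sketch with an illustrative model:

# Illustrative: process a big table in pages of 500 rows.
for page in chunk_queryset(Track.objects.all(), 500):
    for track in page:
        do_something(track)  # hypothetical processing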
def join_url(start, end):
    if end.startswith("http://") or end.startswith("https://"):
        # already a full URL, joining makes no sense
        return end
    if start.endswith("/") and end.startswith("/"):
        return start + end[1:]
    if not start.endswith("/") and not end.startswith("/"):
        return start + "/" + end
    return start + end
def media_url(path):
    if settings.MEDIA_URL.startswith("http://") or settings.MEDIA_URL.startswith(
        "https://"
    ):
        return join_url(settings.MEDIA_URL, path)

    from funkwhale_api.federation import utils as federation_utils

    return federation_utils.full_url(path)
return urls.reverse(name, urlconf=settings.SPA_URLCONF, args=args, kwargs=kwargs)
def spa_resolve(path):
return urls.resolve(path, urlconf=settings.SPA_URLCONF)
def parse_meta(html):
    # Dirty, but this is only used for testing so we don't really care:
    # we prepend an XML declaration so the html string can be parsed as XML.
    html = '<?xml version="1.0"?>' + html
    tree = ET.fromstring(html)
    meta = [elem for elem in tree.iter() if elem.tag in ["meta", "link"]]
    return [dict([("tag", elem.tag)] + list(elem.items())) for elem in meta]
def order_for_search(qs, field):
    """
    When searching, it's often more useful to have short results first;
    this function orders the given qs based on the length of the given field.
    """
    return qs.annotate(__size=models.functions.Length(field)).order_by("__size", "pk")
def recursive_getattr(obj, key, permissive=False):
    """
    Given a dictionary such as {'user': {'name': 'Bob'}} or an object and
    a dotted string such as user.name, returns 'Bob'.

    If the value is not present, returns None.
    """
    v = obj
    for k in key.split("."):
        try:
            if hasattr(v, "get"):
                v = v.get(k)
            else:
                v = getattr(v, k)
        except (TypeError, AttributeError):
            if not permissive:
                raise
            return
        if v is None:
            return

    return v
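Two invented examples of the traversal:

recursive_getattr({"user": {"name": "Bob"}}, "user.name")   # -> "Bob"
recursive_getattr(object(), "user.name", permissive=True)   # -> None
recursive_getattr(object(), "user.name")                    # raises AttributeError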
def replace_prefix(queryset, field, old, new):
    """
    Given a queryset of objects and a field name, will find objects
    for which the field has the given value, and replace the old prefix by
    the new one.

    This is especially useful to find/update bad federation ids, to replace:

        http://wrongprotocolanddomain/path

    by

        https://goodprotocolanddomain/path

    on a whole table with a single query.
    """
    qs = queryset.filter(**{f"{field}__startswith": old})
    # we extract the part after the old prefix, and Concat it with our new prefix
    update = models.functions.Concat(
        models.Value(new),
        models.functions.Substr(field, len(old) + 1, output_field=models.CharField()),
    )
    return qs.update(**{field: update})
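An invented usage, e.g. after a protocol or domain misconfiguration; the model and URLs are illustrative:

replace_prefix(
    Actor.objects.all(),
    "fid",
    old="http://wrong.example/",
    new="https://right.example/",
)
# returns the number of rows rewritten, in a single UPDATE query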
def concat_dicts(*dicts):
    n = {}
    for d in dicts:
        n.update(d)

    return n
def get_updated_fields(conf, data, obj):
    """
    Given a list of fields, a dict and an object, will return the dict keys/values
    that differ from the corresponding fields on the object.
    """
    final_conf = []
    for c in conf:
        if isinstance(c, str):
            final_conf.append((c, c))
        else:
            final_conf.append(c)

    final_data = {}

    for data_field, obj_field in final_conf:
        try:
            data_value = data[data_field]
        except KeyError:
            continue
        if obj.pk:
            obj_value = getattr(obj, obj_field)
            if obj_value != data_value:
                final_data[obj_field] = data_value
        else:
            final_data[obj_field] = data_value

    return final_data
def join_queries_or(left, right):
    if left:
        return left | right
    else:
        return right
MARKDOWN_RENDERER = markdown.Markdown(extensions=settings.MARKDOWN_EXTENSIONS)


def render_markdown(text):
    return MARKDOWN_RENDERER.convert(text)


SAFE_TAGS = [
    "p",
    "a",
    "abbr",
    "acronym",
    "b",
    "blockquote",
    "br",
    "code",
    "em",
    "i",
    "li",
    "ol",
    "strong",
    "ul",
]
HTMl_CLEANER = bleach.sanitizer.Cleaner(strip=True, tags=SAFE_TAGS)

HTML_PERMISSIVE_CLEANER = bleach.sanitizer.Cleaner(
    strip=True,
    tags=SAFE_TAGS + ["h1", "h2", "h3", "h4", "h5", "h6", "div", "section", "article"],
    attributes=["class", "rel", "alt", "title", "href"],
)

# support for additional TLDs
# cf https://github.com/mozilla/bleach/issues/367#issuecomment-384631867
ALL_TLDS = set(settings.LINKIFIER_SUPPORTED_TLDS + bleach.linkifier.TLDS)
URL_RE = bleach.linkifier.build_url_re(tlds=sorted(ALL_TLDS, reverse=True))
HTML_LINKER = bleach.linkifier.Linker(url_re=URL_RE)
def clean_html(html, permissive=False):
    return (
        HTML_PERMISSIVE_CLEANER.clean(html) if permissive else HTMl_CLEANER.clean(html)
    )


def render_html(text, content_type, permissive=False):
    if not text:
        return ""
    rendered = render_markdown(text)
    if content_type == "text/html":
        rendered = text
    elif content_type == "text/markdown":
        rendered = render_markdown(text)
    else:
        rendered = render_markdown(text)
    rendered = HTML_LINKER.linkify(rendered)
    return clean_html(rendered, permissive=permissive).strip().replace("\n", "")
def render_plain_text(html):
    if not html:
        return ""
    return bleach.clean(html, tags=[], strip=True)
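A hedged sketch of the pipeline in action; the exact output depends on the configured markdown extensions and the bleach version:

render_html("**hi** <script>alert(1)</script>", "text/markdown")
# -> roughly '<p><strong>hi</strong> alert(1)</p>': markdown is rendered,
#    the script tag is stripped by the cleaner, and bare URLs would have
#    been wrapped in <a> tags by the linkifier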
def same_content(old, text=None, content_type=None):
    return old.text == text and old.content_type == content_type
@transaction.atomic
def attach_content(obj, field, content_data):
    from . import models

    content_data = content_data or {}
    existing = getattr(obj, f"{field}_id")

    if existing:
        if same_content(getattr(obj, field), **content_data):
            # optimization to avoid a delete/save if possible
            return getattr(obj, field)
        getattr(obj, field).delete()
        setattr(obj, field, None)

    if not content_data:
        return

    content_obj = models.Content.objects.create(
        text=content_data["text"][: models.CONTENT_TEXT_MAX_LENGTH],
        content_type=content_data["content_type"],
    )
    setattr(obj, field, content_obj)
    obj.save(update_fields=[field])
    return content_obj
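An invented call site: set or replace a description on an object exposing a Content foreign key:

attach_content(
    track,  # hypothetical object with a "description" field
    "description",
    {"text": "An *acoustic* session", "content_type": "text/markdown"},
)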
@transaction.atomic
def attach_file(obj, field, file_data, fetch=False):
    from . import models, tasks

    existing = getattr(obj, f"{field}_id")
    if existing:
        getattr(obj, field).delete()

    if not file_data:
        return

    if isinstance(file_data, models.Attachment):
        attachment = file_data
    else:
        extensions = {"image/jpeg": "jpg", "image/png": "png", "image/gif": "gif"}
        extension = extensions.get(file_data["mimetype"], "jpg")
        attachment = models.Attachment(mimetype=file_data["mimetype"])
        name_fields = ["uuid", "full_username", "pk"]
        name = [
            getattr(obj, field) for field in name_fields if getattr(obj, field, None)
        ][0]
        filename = f"{field}-{name}.{extension}"
        if "url" in file_data:
            attachment.url = file_data["url"]
        else:
            f = ContentFile(file_data["content"])
            attachment.file.save(filename, f, save=False)

    if not attachment.file and fetch:
        try:
            tasks.fetch_remote_attachment(attachment, filename=filename, save=False)
        except Exception as e:
            logger.warn(
                "Cannot download attachment at url %s: %s", attachment.url, e
            )
            attachment = None

    if attachment:
        attachment.save()

    setattr(obj, field, attachment)
    obj.save(update_fields=[field])
    return attachment
def get_mimetype_from_ext(path):
    parts = path.lower().split(".")
    ext = parts[-1]
    match = {
        "jpeg": "image/jpeg",
        "jpg": "image/jpeg",
        "png": "image/png",
        "gif": "image/gif",
    }
    return match.get(ext)
def get_audio_mimetype(mt):
    aliases = {"audio/x-mp3": "audio/mpeg", "audio/mpeg3": "audio/mpeg"}
    return aliases.get(mt, mt)
def update_modification_date(obj, field="modification_date", date=None):
    IGNORE_DELAY = 60
    current_value = getattr(obj, field)
    date = date or timezone.now()
    ignore = current_value is not None and current_value < date - datetime.timedelta(
        seconds=IGNORE_DELAY
    )
    if ignore:
        setattr(obj, field, date)
        obj.__class__.objects.filter(pk=obj.pk).update(**{field: date})

    return date
def monkey_patch_request_build_absolute_uri():
    """
    Since we have FUNKWHALE_HOSTNAME and PROTOCOL hardcoded in settings, we can
    override django's multisite logic, which can break when reverse proxies aren't
    configured properly.
    """
    builtin_scheme = request.HttpRequest.scheme

    def scheme(self):
        if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
            return settings.FUNKWHALE_PROTOCOL
        return builtin_scheme.fget(self)

    builtin_get_host = request.HttpRequest.get_host

    def get_host(self):
        if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
            return settings.FUNKWHALE_HOSTNAME
        return builtin_get_host(self)

    request.HttpRequest.scheme = property(scheme)
    request.HttpRequest.get_host = get_host
def get_file_hash(file, algo=None, chunk_size=None, full_read=False):
    algo = algo or settings.HASHING_ALGORITHM
    chunk_size = chunk_size or settings.HASHING_CHUNK_SIZE
    hasher = hashlib.new(algo)
    file.seek(0)
    if full_read:
        for byte_block in iter(lambda: file.read(chunk_size), b""):
            hasher.update(byte_block)
    else:
        # sometimes, it's useful to only hash the beginning of the file, e.g
        # to avoid a lot of I/O when crawling large libraries
        hasher.update(file.read(chunk_size))
    return f"{algo}:{hasher.hexdigest()}"
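A sketch of hashing only the beginning of a file; the path and parameters are illustrative:

with open("/srv/music/track.flac", "rb") as f:  # hypothetical path
    get_file_hash(f, algo="sha256", chunk_size=1024 * 1024)
# -> "sha256:<hex digest of the first MiB>"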
import mimetypes
from os.path import splitext
from django.core import validators
from django.core.exceptions import ValidationError
from django.core.files.images import get_image_dimensions
from django.template.defaultfilters import filesizeformat
from django.utils.deconstruct import deconstructible
-from django.utils.translation import ugettext_lazy as _
+from django.utils.translation import gettext_lazy as _
@deconstructible
@@ -71,7 +72,7 @@ class ImageDimensionsValidator:
@deconstructible
-class FileValidator(object):
+class FileValidator:
    """
    Taken from https://gist.github.com/jrosebr1/2140738
    Validator for files, checking the size, extension and mimetype.
@@ -96,7 +97,7 @@ class FileValidator(object):
"MIME type '%(mimetype)s' is not valid. Allowed types are: %(allowed_mimetypes)s."
)
min_size_message = _(
"The current file %(size)s, which is too small. The minumum file size is %(allowed_size)s."
"The current file %(size)s, which is too small. The minimum file size is %(allowed_size)s."
)
max_size_message = _(
"The current file %(size)s, which is too large. The maximum file size is %(allowed_size)s."
@@ -150,3 +151,17 @@ class FileValidator(object):
        }
        raise ValidationError(message)


class DomainValidator(validators.URLValidator):
    message = "Enter a valid domain name."

    def __call__(self, value):
        """
        This is a bit hackish, but since we don't have any built-in domain
        validator, we use the URL one and prepend "http://" to the value.
        If it fails, we know the domain is not valid.
        """
        super().__call__(f"http://{value}")
        return value
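Invented examples of the validator's behaviour:

DomainValidator()("funkwhale.audio")  # returns "funkwhale.audio"
DomainValidator()("not a domain")     # raises ValidationError, since
                                      # "http://not a domain" is not a valid URL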