Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 1956 additions and 3 deletions
import datetime
import logging
import tempfile
from django.conf import settings
from django.core.files import File
from django.db import transaction
from django.dispatch import receiver
from django.utils import timezone
from funkwhale_api.common import channels
from funkwhale_api.taskapp import celery
from . import models, serializers, session, signals
logger = logging.getLogger(__name__)
@celery.app.task(name="common.apply_mutation")
@transaction.atomic
@celery.require_instance(
models.Mutation.objects.exclude(is_applied=True).select_for_update(), "mutation"
)
def apply_mutation(mutation):
mutation.apply()
@receiver(signals.mutation_created)
def broadcast_mutation_created(mutation, **kwargs):
group = "instance_activity"
channels.group_send(
group,
{
"type": "event.send",
"text": "",
"data": {
"type": "mutation.created",
"mutation": serializers.APIMutationSerializer(mutation).data,
"pending_review_count": models.Mutation.objects.filter(
is_approved=None
).count(),
},
},
)
@receiver(signals.mutation_updated)
def broadcast_mutation_update(mutation, old_is_approved, new_is_approved, **kwargs):
group = "instance_activity"
channels.group_send(
group,
{
"type": "event.send",
"text": "",
"data": {
"type": "mutation.updated",
"mutation": serializers.APIMutationSerializer(mutation).data,
"pending_review_count": models.Mutation.objects.filter(
is_approved=None
).count(),
"old_is_approved": old_is_approved,
"new_is_approved": new_is_approved,
},
},
)
def fetch_remote_attachment(attachment, filename=None, save=True):
if attachment.file:
# already there, no need to fetch
return
s = session.get_session()
attachment.last_fetch_date = timezone.now()
with tempfile.TemporaryFile() as tf:
with s.get(attachment.url, timeout=5, stream=True) as r:
for chunk in r.iter_content(chunk_size=1024 * 100):
tf.write(chunk)
tf.seek(0)
if not filename:
filename = attachment.url.split("/")[-1]
filename = filename[-50:]
attachment.file.save(filename, File(tf), save=save)
@celery.app.task(name="common.prune_unattached_attachments")
def prune_unattached_attachments():
limit = timezone.now() - datetime.timedelta(
seconds=settings.ATTACHMENTS_UNATTACHED_PRUNE_DELAY
)
candidates = models.Attachment.objects.attached(False).filter(
creation_date__lte=limit
)
total = candidates.count()
logger.info("Deleting %s unattached attachments…", total)
result = candidates.delete()
logger.info("Deletion done: %s", result)
import collections
from django.conf import settings
from django.core.cache import cache
from rest_framework import throttling as rest_throttling
def get_ident(user, request):
if user and user.is_authenticated:
return {"type": "authenticated", "id": f"{user.pk}"}
ident = rest_throttling.BaseThrottle().get_ident(request)
return {"type": "anonymous", "id": ident}
def get_cache_key(scope, ident):
parts = ["throttling", scope, ident["type"], str(ident["id"])]
return ":".join(parts)
def get_scope_for_action_and_ident_type(action, ident_type, view_conf={}):
config = collections.ChainMap(view_conf, settings.THROTTLING_SCOPES)
try:
action_config = config[action]
except KeyError:
action_config = config.get("*", {})
try:
return action_config[ident_type]
except KeyError:
return
def get_status(ident, now):
data = []
throttle = FunkwhaleThrottle()
for key in sorted(settings.THROTTLING_RATES.keys()):
conf = settings.THROTTLING_RATES[key]
row_data = {"id": key, "rate": conf["rate"], "description": conf["description"]}
if conf["rate"]:
num_requests, duration = throttle.parse_rate(conf["rate"])
history = cache.get(get_cache_key(key, ident)) or []
relevant_history = [h for h in history if h > now - duration]
row_data["limit"] = num_requests
row_data["duration"] = duration
row_data["remaining"] = num_requests - len(relevant_history)
if relevant_history and len(relevant_history) >= num_requests:
# the oldest request in the window determines when the endpoint becomes available again
oldest_request = relevant_history[-1]
remaining = duration - (now - int(oldest_request))
row_data["available"] = int(now + remaining) or None
row_data["available_seconds"] = int(remaining) or None
else:
row_data["available"] = None
row_data["available_seconds"] = None
if relevant_history:
# the rate limit fully resets once the most recent request ages out of the window
latest_request = relevant_history[0]
remaining = duration - (now - int(latest_request))
row_data["reset"] = int(now + remaining)
row_data["reset_seconds"] = int(remaining)
else:
row_data["reset"] = None
row_data["reset_seconds"] = None
else:
row_data["limit"] = None
row_data["duration"] = None
row_data["remaining"] = None
row_data["available"] = None
row_data["available_seconds"] = None
row_data["reset"] = None
row_data["reset_seconds"] = None
data.append(row_data)
return data
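For reference, a single row returned by get_status for a configured scope looks roughly like this (scope name and values below are hypothetical):
# {
#     "id": "authenticated-create",   # hypothetical scope name
#     "rate": "1000/h",
#     "description": "…",
#     "limit": 1000,        # requests allowed per window
#     "duration": 3600,     # window length in seconds
#     "remaining": 998,
#     "available": None, "available_seconds": None,  # only set while throttled
#     "reset": 1700003600, "reset_seconds": 3599,
# }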
class FunkwhaleThrottle(rest_throttling.SimpleRateThrottle):
def __init__(self):
pass
def get_cache_key(self, request, view):
return get_cache_key(self.scope, self.ident)
def allow_request(self, request, view):
self.request = request
self.ident = get_ident(getattr(request, "user", None), request)
action = getattr(view, "action", "*")
view_scopes = getattr(view, "throttling_scopes", {})
if view_scopes is None:
return True
self.scope = get_scope_for_action_and_ident_type(
action=action, ident_type=self.ident["type"], view_conf=view_scopes
)
if not self.scope or self.scope not in settings.THROTTLING_RATES:
return True
self.rate = settings.THROTTLING_RATES[self.scope].get("rate")
self.num_requests, self.duration = self.parse_rate(self.rate)
self.request = request
return super().allow_request(request, view)
def attach_info(self):
info = {
"num_requests": self.num_requests,
"duration": self.duration,
"scope": self.scope,
"history": self.history or [],
"wait": self.wait(),
}
setattr(self.request, "_throttle_status", info)
def throttle_success(self):
self.attach_info()
return super().throttle_success()
def throttle_failure(self):
self.attach_info()
return super().throttle_failure()
class TooManyRequests(Exception):
pass
DummyView = collections.namedtuple("DummyView", "action throttling_scopes")
def check_request(request, scope):
"""
A simple wrapper around FunkwhaleThrottle for views that aren't API views
or cannot use rest_framework automatic throttling.
Raises TooManyRequests if the limit is reached.
"""
if not settings.THROTTLING_ENABLED:
return True
view = DummyView(
action=scope,
throttling_scopes={scope: {"anonymous": scope, "authenticated": scope}},
)
throttle = FunkwhaleThrottle()
if not throttle.allow_request(request, view):
raise TooManyRequests()
return True
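A minimal usage sketch for a plain Django view, assuming a hypothetical "password-reset" scope is declared in settings.THROTTLING_RATES:
from django.http import HttpResponse

def password_reset_view(request):  # illustrative view
    try:
        check_request(request, "password-reset")  # hypothetical scope name
    except TooManyRequests:
        return HttpResponse(status=429)
    # … normal request handling continues here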
import datetime
import hashlib
import logging
import os
import shutil
import uuid
import xml.etree.ElementTree as ET
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit
import bleach.sanitizer
import markdown
from django import urls
from django.conf import settings
from django.core.files.base import ContentFile
from django.db import models, transaction
from django.http import request
from django.utils import timezone
from django.utils.deconstruct import deconstructible
logger = logging.getLogger(__name__)
def batch(iterable, n=1):
has_entries = True
while has_entries:
current = []
for i in range(0, n):
try:
current.append(next(iterable))
except StopIteration:
has_entries = False
yield current
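Note that batch calls next() directly, so it must be given an iterator, not a bare list; for illustration:
# list(batch(iter(range(5)), n=2)) -> [[0, 1], [2, 3], [4]]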
def rename_file(instance, field_name, new_name, allow_missing_file=False):
field = getattr(instance, field_name)
current_name, extension = os.path.splitext(field.name)
new_name_with_extension = f"{new_name}{extension}"
try:
shutil.move(field.path, new_name_with_extension)
except FileNotFoundError:
...
@@ -41,3 +68,422 @@ def set_query_parameter(url, **kwargs):
new_query_string = urlencode(query_params, doseq=True)
return urlunsplit((scheme, netloc, path, new_query_string, fragment))
@deconstructible
class ChunkedPath:
def sanitize_filename(self, filename):
return filename.replace("/", "-")
def __init__(self, root, preserve_file_name=True):
self.root = root
self.preserve_file_name = preserve_file_name
def __call__(self, instance, filename):
filename = self.sanitize_filename(filename)
uid = str(uuid.uuid4())
chunk_size = 2
chunks = [uid[i : i + chunk_size] for i in range(0, len(uid), chunk_size)]
if self.preserve_file_name:
parts = chunks[:3] + [filename]
else:
ext = os.path.splitext(filename)[1][1:].lower()
new_filename = "".join(chunks[3:]) + f".{ext}"
parts = chunks[:3] + [new_filename]
return os.path.join(self.root, *parts)
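A sketch of how ChunkedPath plugs into a file field (the model below is illustrative, not part of this changeset):
class ExampleAttachment(models.Model):  # illustrative only
    # stored as attachments/<aa>/<bb>/<cc>/<original filename>, where the
    # two-character chunks come from a random UUID
    file = models.FileField(upload_to=ChunkedPath("attachments"))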
def chunk_queryset(source_qs, chunk_size):
"""
From https://github.com/peopledoc/django-chunkator/blob/master/chunkator/__init__.py
"""
pk = None
# In Django 1.9+, _fields is always present, and is None unless `values()` is used
# In Django 1.8 and below, _fields is only present when using `values()`
has_fields = hasattr(source_qs, "_fields") and source_qs._fields
if has_fields:
if "pk" not in source_qs._fields:
raise ValueError("The values() call must include the `pk` field")
field = source_qs.model._meta.pk
# set the correct field name:
# for ForeignKeys, we want to use `model_id` field, and not `model`,
# to bypass default ordering on related model
order_by_field = field.attname
source_qs = source_qs.order_by(order_by_field)
queryset = source_qs
while True:
if pk:
queryset = source_qs.filter(pk__gt=pk)
page = queryset[:chunk_size]
page = list(page)
nb_items = len(page)
if nb_items == 0:
return
last_item = page[-1]
# source_qs._fields exists *and* is not none when using "values()"
if has_fields:
pk = last_item["pk"]
else:
pk = last_item.pk
yield page
if nb_items < chunk_size:
return
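A usage sketch, with a hypothetical queryset and handler:
for page in chunk_queryset(Track.objects.all(), chunk_size=500):  # hypothetical queryset
    for track in page:
        handle(track)  # hypothetical per-object work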
def join_url(start, end):
if end.startswith("http://") or end.startswith("https://"):
# already a full URL, joining makes no sense
return end
if start.endswith("/") and end.startswith("/"):
return start + end[1:]
if not start.endswith("/") and not end.startswith("/"):
return start + "/" + end
return start + end
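Expected behaviour, with illustrative values:
# join_url("https://example.com/", "/media/a.mp3") -> "https://example.com/media/a.mp3"
# join_url("https://example.com", "media/a.mp3")   -> "https://example.com/media/a.mp3"
# join_url("https://example.com/", "https://cdn.example/a.mp3") -> "https://cdn.example/a.mp3"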
def media_url(path):
if settings.MEDIA_URL.startswith("http://") or settings.MEDIA_URL.startswith(
"https://"
):
return join_url(settings.MEDIA_URL, path)
from funkwhale_api.federation import utils as federation_utils
return federation_utils.full_url(path)
def spa_reverse(name, args=[], kwargs={}):
return urls.reverse(name, urlconf=settings.SPA_URLCONF, args=args, kwargs=kwargs)
def spa_resolve(path):
return urls.resolve(path, urlconf=settings.SPA_URLCONF)
def parse_meta(html):
# dirty, but this is only used for testing, so we don't really care:
# we prepend an XML declaration so the HTML string can be parsed as XML
html = '<?xml version="1.0"?>' + html
tree = ET.fromstring(html)
meta = [elem for elem in tree.iter() if elem.tag in ["meta", "link"]]
return [dict([("tag", elem.tag)] + list(elem.items())) for elem in meta]
def order_for_search(qs, field):
"""
When searching, it's often more useful to have short results first;
this function orders the given queryset by the length of the given field.
"""
return qs.annotate(__size=models.functions.Length(field)).order_by("__size", "pk")
def recursive_getattr(obj, key, permissive=False):
"""
Given a dictionary such as {'user': {'name': 'Bob'}} or an object and
a dotted string such as user.name, returns 'Bob'.
If the value is not present, returns None.
"""
v = obj
for k in key.split("."):
try:
if hasattr(v, "get"):
v = v.get(k)
else:
v = getattr(v, k)
except (TypeError, AttributeError):
if not permissive:
raise
return
if v is None:
return
return v
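For illustration:
# recursive_getattr({"user": {"name": "Bob"}}, "user.name") -> "Bob"
# recursive_getattr(obj, "user.name", permissive=True)      -> None instead of raising
#                                                              when a key/attribute is missing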
def replace_prefix(queryset, field, old, new):
"""
Given a queryset of objects and a field name, finds objects
whose field starts with the old prefix, and replaces it with
the new one.
This is especially useful to fix bad federation ids, e.g. to replace:
http://wrongprotocolanddomain/path
by
https://goodprotocolanddomain/path
on a whole table with a single query.
"""
qs = queryset.filter(**{f"{field}__startswith": old})
# we extract the part after the old prefix, and Concat it with our new prefix
update = models.functions.Concat(
models.Value(new),
models.functions.Substr(field, len(old) + 1, output_field=models.CharField()),
)
return qs.update(**{field: update})
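A usage sketch, with a hypothetical queryset and domains:
replace_prefix(
    Actor.objects.all(),  # hypothetical queryset with a "fid" URL field
    "fid",
    old="http://old.example/",
    new="https://new.example/",
)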
def concat_dicts(*dicts):
n = {}
for d in dicts:
n.update(d)
return n
def get_updated_fields(conf, data, obj):
"""
Given a list of fields, a dict and an object, will return the dict keys/values
that differ from the corresponding fields on the object.
"""
final_conf = []
for c in conf:
if isinstance(c, str):
final_conf.append((c, c))
else:
final_conf.append(c)
final_data = {}
for data_field, obj_field in final_conf:
try:
data_value = data[data_field]
except KeyError:
continue
if obj.pk:
obj_value = getattr(obj, obj_field)
if obj_value != data_value:
final_data[obj_field] = data_value
else:
final_data[obj_field] = data_value
return final_data
def join_queries_or(left, right):
if left:
return left | right
else:
return right
MARKDOWN_RENDERER = markdown.Markdown(extensions=settings.MARKDOWN_EXTENSIONS)
def render_markdown(text):
return MARKDOWN_RENDERER.convert(text)
SAFE_TAGS = [
"p",
"a",
"abbr",
"acronym",
"b",
"blockquote",
"br",
"code",
"em",
"i",
"li",
"ol",
"strong",
"ul",
]
HTML_CLEANER = bleach.sanitizer.Cleaner(strip=True, tags=SAFE_TAGS)
HTML_PERMISSIVE_CLEANER = bleach.sanitizer.Cleaner(
strip=True,
tags=SAFE_TAGS + ["h1", "h2", "h3", "h4", "h5", "h6", "div", "section", "article"],
attributes=["class", "rel", "alt", "title", "href"],
)
# support for additional tlds
# cf https://github.com/mozilla/bleach/issues/367#issuecomment-384631867
ALL_TLDS = set(settings.LINKIFIER_SUPPORTED_TLDS + bleach.linkifier.TLDS)
URL_RE = bleach.linkifier.build_url_re(tlds=sorted(ALL_TLDS, reverse=True))
HTML_LINKER = bleach.linkifier.Linker(url_re=URL_RE)
def clean_html(html, permissive=False):
return (
HTML_PERMISSIVE_CLEANER.clean(html) if permissive else HTML_CLEANER.clean(html)
)
def render_html(text, content_type, permissive=False):
if not text:
return ""
if content_type == "text/html":
rendered = text
else:
# markdown rendering is the default for unknown content types
rendered = render_markdown(text)
rendered = HTML_LINKER.linkify(rendered)
return clean_html(rendered, permissive=permissive).strip().replace("\n", "")
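Roughly, for markdown input (exact output depends on the markdown/bleach versions in use):
# render_html("**hello** <script>x()</script>", "text/markdown")
#   -> "<p><strong>hello</strong> x()</p>"  # markdown rendered, script tag stripped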
def render_plain_text(html):
if not html:
return ""
return bleach.clean(html, tags=[], strip=True)
def same_content(old, text=None, content_type=None):
return old.text == text and old.content_type == content_type
@transaction.atomic
def attach_content(obj, field, content_data):
from . import models
content_data = content_data or {}
existing = getattr(obj, f"{field}_id")
if existing:
if same_content(getattr(obj, field), **content_data):
# optimization to avoid a delete/save if possible
return getattr(obj, field)
getattr(obj, field).delete()
setattr(obj, field, None)
if not content_data:
return
content_obj = models.Content.objects.create(
text=content_data["text"][: models.CONTENT_TEXT_MAX_LENGTH],
content_type=content_data["content_type"],
)
setattr(obj, field, content_obj)
obj.save(update_fields=[field])
return content_obj
@transaction.atomic
def attach_file(obj, field, file_data, fetch=False):
from . import models, tasks
existing = getattr(obj, f"{field}_id")
if existing:
getattr(obj, field).delete()
if not file_data:
return
if isinstance(file_data, models.Attachment):
attachment = file_data
else:
extensions = {"image/jpeg": "jpg", "image/png": "png", "image/gif": "gif"}
extension = extensions.get(file_data["mimetype"], "jpg")
attachment = models.Attachment(mimetype=file_data["mimetype"])
name_fields = ["uuid", "full_username", "pk"]
name = [
getattr(obj, field) for field in name_fields if getattr(obj, field, None)
][0]
filename = f"{field}-{name}.{extension}"
if "url" in file_data:
attachment.url = file_data["url"]
else:
f = ContentFile(file_data["content"])
attachment.file.save(filename, f, save=False)
if not attachment.file and fetch:
try:
tasks.fetch_remote_attachment(attachment, filename=filename, save=False)
except Exception as e:
logger.warning(
"Cannot download attachment at url %s: %s", attachment.url, e
)
attachment = None
if attachment:
attachment.save()
setattr(obj, field, attachment)
obj.save(update_fields=[field])
return attachment
def get_mimetype_from_ext(path):
parts = path.lower().split(".")
ext = parts[-1]
match = {
"jpeg": "image/jpeg",
"jpg": "image/jpeg",
"png": "image/png",
"gif": "image/gif",
}
return match.get(ext)
def get_audio_mimetype(mt):
aliases = {"audio/x-mp3": "audio/mpeg", "audio/mpeg3": "audio/mpeg"}
return aliases.get(mt, mt)
def update_modification_date(obj, field="modification_date", date=None):
IGNORE_DELAY = 60
current_value = getattr(obj, field)
date = date or timezone.now()
ignore = current_value is not None and current_value < date - datetime.timedelta(
seconds=IGNORE_DELAY
)
if ignore:
setattr(obj, field, date)
obj.__class__.objects.filter(pk=obj.pk).update(**{field: date})
return date
def monkey_patch_request_build_absolute_uri():
"""
Since we have FUNKWHALE_HOSTNAME and PROTOCOL hardcoded in settings, we can
override Django's multi-site logic, which can break when reverse proxies
aren't configured properly.
"""
builtin_scheme = request.HttpRequest.scheme
def scheme(self):
if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
return settings.FUNKWHALE_PROTOCOL
return builtin_scheme.fget(self)
builtin_get_host = request.HttpRequest.get_host
def get_host(self):
if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
return settings.FUNKWHALE_HOSTNAME
return builtin_get_host(self)
request.HttpRequest.scheme = property(scheme)
request.HttpRequest.get_host = get_host
def get_file_hash(file, algo=None, chunk_size=None, full_read=False):
algo = algo or settings.HASHING_ALGORITHM
chunk_size = chunk_size or settings.HASHING_CHUNK_SIZE
hasher = hashlib.new(algo)
file.seek(0)
if full_read:
for byte_block in iter(lambda: file.read(chunk_size), b""):
hasher.update(byte_block)
else:
# sometimes it's useful to only hash the beginning of the file, e.g.
# to avoid a lot of I/O when crawling large libraries
hasher.update(file.read(chunk_size))
return f"{algo}:{hasher.hexdigest()}"
import mimetypes
from os.path import splitext
from django.core import validators
from django.core.exceptions import ValidationError
from django.core.files.images import get_image_dimensions
from django.template.defaultfilters import filesizeformat
from django.utils.deconstruct import deconstructible
from django.utils.translation import gettext_lazy as _
@deconstructible
class ImageDimensionsValidator:
"""
ImageField dimensions validator.
from https://gist.github.com/emilio-rst/4f81ea2718736a6aaf9bdb64d5f2ea6c
"""
def __init__(
self,
width=None,
height=None,
min_width=None,
max_width=None,
min_height=None,
max_height=None,
):
"""
Constructor
Args:
width (int): exact width
height (int): exact height
min_width (int): minimum width
min_height (int): minimum height
max_width (int): maximum width
max_height (int): maximum height
"""
self.width = width
self.height = height
self.min_width = min_width
self.max_width = max_width
self.min_height = min_height
self.max_height = max_height
def __call__(self, image):
w, h = get_image_dimensions(image)
if self.width is not None and w != self.width:
raise ValidationError(_("Width must be %dpx.") % (self.width,))
if self.height is not None and h != self.height:
raise ValidationError(_("Height must be %dpx.") % (self.height,))
if self.min_width is not None and w < self.min_width:
raise ValidationError(_("Minimum width must be %dpx.") % (self.min_width,))
if self.min_height is not None and h < self.min_height:
raise ValidationError(
_("Minimum height must be %dpx.") % (self.min_height,)
)
if self.max_width is not None and w > self.max_width:
raise ValidationError(_("Maximum width must be %dpx.") % (self.max_width,))
if self.max_height is not None and h > self.max_height:
raise ValidationError(
_("Maximum height must be %dpx.") % (self.max_height,)
)
@deconstructible
class FileValidator:
"""
Taken from https://gist.github.com/jrosebr1/2140738
Validator for files, checking the size, extension and mimetype.
Initialization parameters:
allowed_extensions: iterable with allowed file extensions
e.g. ('txt', 'doc')
allowed_mimetypes: iterable with allowed mimetypes
e.g. ('image/png', )
min_size: minimum number of bytes allowed
e.g. 100
max_size: maximum number of bytes allowed
e.g. 24*1024*1024 for 24 MB
Usage example::
class MyModel(models.Model):
myfile = FileField(validators=[FileValidator(max_size=24*1024*1024)], ...)
"""
extension_message = _(
"Extension '%(extension)s' not allowed. Allowed extensions are: '%(allowed_extensions)s.'"
)
mime_message = _(
"MIME type '%(mimetype)s' is not valid. Allowed types are: %(allowed_mimetypes)s."
)
min_size_message = _(
"The current file %(size)s, which is too small. The minimum file size is %(allowed_size)s."
)
max_size_message = _(
"The current file %(size)s, which is too large. The maximum file size is %(allowed_size)s."
)
def __init__(self, *args, **kwargs):
self.allowed_extensions = kwargs.pop("allowed_extensions", None)
self.allowed_mimetypes = kwargs.pop("allowed_mimetypes", None)
self.min_size = kwargs.pop("min_size", 0)
self.max_size = kwargs.pop("max_size", None)
def __call__(self, value):
"""
Check the extension, content type and file size.
"""
# Check the extension
ext = splitext(value.name)[1][1:].lower()
if self.allowed_extensions and ext not in self.allowed_extensions:
message = self.extension_message % {
"extension": ext,
"allowed_extensions": ", ".join(self.allowed_extensions),
}
raise ValidationError(message)
# Check the content type
mimetype = mimetypes.guess_type(value.name)[0]
if self.allowed_mimetypes and mimetype not in self.allowed_mimetypes:
message = self.mime_message % {
"mimetype": mimetype,
"allowed_mimetypes": ", ".join(self.allowed_mimetypes),
}
raise ValidationError(message)
# Check the file size
filesize = len(value)
if self.max_size and filesize > self.max_size:
message = self.max_size_message % {
"size": filesizeformat(filesize),
"allowed_size": filesizeformat(self.max_size),
}
raise ValidationError(message)
elif filesize < self.min_size:
message = self.min_size_message % {
"size": filesizeformat(filesize),
"allowed_size": filesizeformat(self.min_size),
}
raise ValidationError(message)
class DomainValidator(validators.URLValidator):
message = "Enter a valid domain name."
def __call__(self, value):
"""
This is a bit hackish, but since we don't have any built-in domain validator,
we use the URL validator and prepend http:// to the value.
If that fails, we know the domain is not valid.
"""
super().__call__(f"http://{value}")
return value
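Usage sketch:
validate_domain = DomainValidator()
validate_domain("open.audio")    # passes, returns "open.audio"
validate_domain("not a domain")  # raises ValidationError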
import logging
import time
from django.conf import settings
from django.db import transaction
from drf_spectacular.utils import extend_schema
from rest_framework import (
exceptions,
generics,
mixins,
permissions,
response,
views,
viewsets,
)
from rest_framework.decorators import action
from config import plugins
from funkwhale_api.common.serializers import (
ErrorDetailSerializer,
TextPreviewSerializer,
)
from funkwhale_api.users.oauth import permissions as oauth_permissions
from . import filters, models, mutations, serializers, signals, tasks, throttling, utils
logger = logging.getLogger(__name__)
class SkipFilterForGetObject:
def get_object(self, *args, **kwargs):
setattr(self.request, "_skip_filters", True)
return super().get_object(*args, **kwargs)
def filter_queryset(self, queryset):
if getattr(self.request, "_skip_filters", False):
return queryset
return super().filter_queryset(queryset)
class MutationViewSet(
mixins.ListModelMixin,
mixins.RetrieveModelMixin,
mixins.DestroyModelMixin,
viewsets.GenericViewSet,
):
lookup_field = "uuid"
queryset = (
models.Mutation.objects.all()
.exclude(target_id=None)
.order_by("-creation_date")
.select_related("created_by", "approved_by")
.prefetch_related("target")
)
serializer_class = serializers.APIMutationSerializer
permission_classes = [permissions.IsAuthenticated]
ordering_fields = ("creation_date",)
filterset_class = filters.MutationFilter
def perform_destroy(self, instance):
if instance.is_applied:
raise exceptions.PermissionDenied("You cannot delete an applied mutation")
actor = self.request.user.actor
is_owner = actor == instance.created_by
if not any(
[
is_owner,
mutations.registry.has_perm(
perm="approve", type=instance.type, obj=instance.target, actor=actor
),
]
):
raise exceptions.PermissionDenied()
return super().perform_destroy(instance)
@extend_schema(operation_id="approve_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def approve(self, request, *args, **kwargs):
instance = self.get_object()
if instance.is_applied:
return response.Response(
{"error": "This mutation was already applied"}, status=403
)
actor = self.request.user.actor
can_approve = mutations.registry.has_perm(
perm="approve", type=instance.type, obj=instance.target, actor=actor
)
if not can_approve:
raise exceptions.PermissionDenied()
previous_is_approved = instance.is_approved
instance.approved_by = actor
instance.is_approved = True
instance.save(update_fields=["approved_by", "is_approved"])
utils.on_commit(tasks.apply_mutation.delay, mutation_id=instance.id)
utils.on_commit(
signals.mutation_updated.send,
sender=None,
mutation=instance,
old_is_approved=previous_is_approved,
new_is_approved=instance.is_approved,
)
return response.Response({}, status=200)
@extend_schema(operation_id="reject_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def reject(self, request, *args, **kwargs):
instance = self.get_object()
if instance.is_applied:
return response.Response(
{"error": "This mutation was already applied"}, status=403
)
actor = self.request.user.actor
can_approve = mutations.registry.has_perm(
perm="approve", type=instance.type, obj=instance.target, actor=actor
)
if not can_approve:
raise exceptions.PermissionDenied()
previous_is_approved = instance.is_approved
instance.approved_by = actor
instance.is_approved = False
instance.save(update_fields=["approved_by", "is_approved"])
utils.on_commit(
signals.mutation_updated.send,
sender=None,
mutation=instance,
old_is_approved=previous_is_approved,
new_is_approved=instance.is_approved,
)
return response.Response({}, status=200)
class RateLimitView(views.APIView):
permission_classes = []
throttle_classes = []
serializer_class = serializers.RateLimitSerializer
def get(self, request, *args, **kwargs):
ident = throttling.get_ident(getattr(request, "user", None), request)
data = {
"enabled": settings.THROTTLING_ENABLED,
"ident": ident,
"scopes": throttling.get_status(ident, time.time()),
}
return response.Response(serializers.RateLimitSerializer(data).data, status=200)
class AttachmentViewSet(
mixins.RetrieveModelMixin,
mixins.CreateModelMixin,
mixins.DestroyModelMixin,
viewsets.GenericViewSet,
):
lookup_field = "uuid"
queryset = models.Attachment.objects.all()
serializer_class = serializers.AttachmentSerializer
permission_classes = [oauth_permissions.ScopePermission]
required_scope = "libraries"
anonymous_policy = "setting"
@action(
detail=True, methods=["get"], permission_classes=[], authentication_classes=[]
)
@transaction.atomic
def proxy(self, request, *args, **kwargs):
instance = self.get_object()
if not settings.EXTERNAL_MEDIA_PROXY_ENABLED:
r = response.Response(status=302)
r["Location"] = instance.url
return r
size = request.GET.get("next", "original").lower()
if size not in [
"original",
"small_square_crop",
"medium_square_crop",
"large_square_crop",
]:
size = "original"
try:
tasks.fetch_remote_attachment(instance)
except Exception:
logger.exception("Error while fetching attachment %s", instance.url)
return response.Response(status=500)
data = self.serializer_class(instance).data
redirect = response.Response(status=302)
redirect["Location"] = data["urls"][size]
return redirect
def perform_create(self, serializer):
return serializer.save(actor=self.request.user.actor)
def perform_destroy(self, instance):
if instance.actor is None or instance.actor != self.request.user.actor:
raise exceptions.PermissionDenied()
instance.delete()
class TextPreviewView(generics.GenericAPIView):
permission_classes = []
serializer_class = TextPreviewSerializer
@extend_schema(
operation_id="preview_text",
responses={200: TextPreviewSerializer, 400: ErrorDetailSerializer},
)
def post(self, request, *args, **kwargs):
payload = request.data
if "text" not in payload:
return response.Response(
ErrorDetailSerializer("Invalid input").data, status=400
)
permissive = payload.get("permissive", False)
data = TextPreviewSerializer(
utils.render_html(payload["text"], "text/markdown", permissive=permissive)
).data
return response.Response(data, status=200)
class PluginViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):
required_scope = "plugins"
serializer_class = serializers.serializers.Serializer
queryset = models.PluginConfiguration.objects.none()
def list(self, request, *args, **kwargs):
user = request.user
user_plugins = [p for p in plugins._plugins.values() if p["user"] is True]
return response.Response(
[
plugins.serialize_plugin(p, confs=plugins.get_confs(user=user))
for p in user_plugins
]
)
def retrieve(self, request, *args, **kwargs):
user = request.user
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if not user_plugin:
return response.Response(status=404)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def post(self, request, *args, **kwargs):
return self.create(request, *args, **kwargs)
def create(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if kwargs["pk"] not in confs:
return response.Response(status=404)
plugins.set_conf(kwargs["pk"], request.data, user)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def delete(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
if kwargs["pk"] not in confs:
return response.Response(status=404)
user.plugins.filter(code=kwargs["pk"]).delete()
return response.Response(status=204)
@extend_schema(operation_id="enable_plugin")
@action(detail=True, methods=["post"])
def enable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], True, user)
return response.Response({}, status=200)
@extend_schema(operation_id="disable_plugin")
@action(detail=True, methods=["post"])
def disable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], False, user)
return response.Response({}, status=200)
@action(detail=True, methods=["post"])
def scan(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
conf = plugins.get_conf(kwargs["pk"], user=user)
if not conf["enabled"]:
return response.Response(status=405)
library = request.user.actor.libraries.get(uuid=conf["conf"]["library"])
hook = [
hook
for p, hook in plugins._hooks.get(plugins.SCAN, [])
if p == kwargs["pk"]
]
if not hook:
return response.Response(status=405)
hook[0](library=library, conf=conf["conf"])
return response.Response({}, status=200)
# -*- coding: utf-8 -*-
import logging
from config import plugins
from funkwhale_api.contrib.archivedl import tasks
from .funkwhale_startup import PLUGIN
logger = logging.getLogger(__name__)
@plugins.register_hook(plugins.TRIGGER_THIRD_PARTY_UPLOAD, PLUGIN)
def launch_download(track, conf={}):
tasks.archive_download.delay(track_id=track.pk, conf=conf)
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="archivedl",
label="Archive-dl",
description="",
version="0.1",
user=False,
conf=[],
)
import hashlib
import logging
import os
import tempfile
import time
import urllib.parse
from datetime import timedelta
import requests
from django.core.files import File
from django.utils import timezone
from funkwhale_api.federation import actors
from funkwhale_api.music import models, utils
from funkwhale_api.taskapp import celery
logger = logging.getLogger(__name__)
class TooManyQueriesError(Exception):
pass
def check_existing_download_task(track):
if models.Upload.objects.filter(
track=track,
import_status__in=["pending", "finished"],
third_party_provider="archive-dl",
).exists():
raise TooManyQueriesError(
"An upload for this track already exists or is pending. Stopping task."
)
def check_last_third_party_queries(track, count):
# 15 per minute according to their docs = one every 4 seconds
time_threshold = timezone.now() - timedelta(seconds=5)
if models.Upload.objects.filter(
third_party_provider="archive-dl",
import_status__in=["pending", "finished"],
creation_date__gte=time_threshold,
).exists():
logger.info(
"Last archive.org query was too recent. Trying to wait 2 seconds..."
)
time.sleep(2)
count += 1
if count > 3:
raise TooManyQueriesError(
"Too many archivedl tasks are probably queued; stopping this task"
)
check_last_third_party_queries(track, count)
def create_upload(url, track, files_data):
mimetype = f"audio/{files_data.get('format', 'unknown')}"
duration = files_data.get("mtime", 0)
filesize = files_data.get("size", 0)
bitrate = files_data.get("bitrate", 0)
service_library = models.Library.objects.create(
privacy_level="everyone",
actor=actors.get_service_actor(),
)
return models.Upload.objects.create(
mimetype=mimetype,
source=url,
third_party_provider="archive-dl",
creation_date=timezone.now(),
track=track,
duration=duration,
size=filesize,
bitrate=bitrate,
library=service_library,
from_activity=None,
import_status="pending",
)
@celery.app.task(name="archivedl.archive_download")
@celery.require_instance(models.Track.objects.select_related(), "track")
def archive_download(track, conf):
try:
check_existing_download_task(track)
check_last_third_party_queries(track, 0)
except TooManyQueriesError as e:
logger.error(e)
return
artist_name = utils.get_artist_credit_string(track)
query = f"mediatype:audio AND title:{track.title} AND creator:{artist_name}"
with requests.Session() as session:
url = get_search_url(query, page_size=1, page=1)
page_data = fetch_json(url, session)
for obj in page_data["response"]["docs"]:
logger.info(f"launching download item for {str(obj)}")
download_item(
item_data=obj,
session=session,
allowed_extensions=utils.SUPPORTED_EXTENSIONS,
track=track,
)
def fetch_json(url, session):
logger.info(f"Fetching {url}...")
with session.get(url) as response:
return response.json()
def download_item(
item_data,
session,
allowed_extensions,
track,
):
files_data = get_files_data(item_data["identifier"], session)
to_download = list(
filter_files(
files_data["result"],
allowed_extensions=allowed_extensions,
)
)
url = f"https://archive.org/download/{item_data['identifier']}/{to_download[0]['name']}"
upload = create_upload(url, track, to_download[0])
try:
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, to_download[0]["name"])
download_file(
path,
url=url,
session=session,
checksum=to_download[0]["sha1"],
upload=upload,
to_download=to_download,
)
logger.info(f"Finished to download item {item_data['identifier']}...")
except Exception as e:
upload.delete()
raise e
def check_integrity(path, expected_checksum):
with open(path, mode="rb") as f:
hash = hashlib.sha1()
hash.update(f.read())
return expected_checksum == hash.hexdigest()
def get_files_data(identifier, session):
url = f"https://archive.org/metadata/{identifier}/files"
logger.info(f"Fetching files data at {url}...")
with session.get(url) as response:
return response.json()
def download_file(path, url, session, checksum, upload, to_download):
if os.path.exists(path) and check_integrity(path, checksum):
logger.info(f"Skipping already downloaded file at {path}")
return
logger.info(f"Downloading file {url}...")
with open(path, mode="wb") as f:
try:
with session.get(url) as response:
f.write(response.content)
except requests.exceptions.Timeout as e:
logger.error(f"Timeout error while downloading {url}: {e}")
with open(path, "rb") as f:
upload.audio_file.save(f"{to_download['name']}", File(f))
upload.import_status = "finished"
upload.url = url
upload.save()
return upload
def filter_files(files, allowed_extensions):
for f in files:
if allowed_extensions:
extension = os.path.splitext(f["name"])[-1][1:]
if extension not in allowed_extensions:
continue
yield f
def get_search_url(query, page_size, page):
q = urllib.parse.urlencode({"q": query})
return f"https://archive.org/advancedsearch.php?{q}&sort[]=addeddate+desc&rows={page_size}\
&page={page}&output=json"
import liblistenbrainz
import funkwhale_api
from config import plugins
from funkwhale_api.favorites import models as favorites_models
from funkwhale_api.history import models as history_models
from . import tasks
from .funkwhale_startup import PLUGIN
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def submit_listen(listening, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_listenings"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting listen to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
client.set_auth_token(user_token)
listen = get_lb_listen(listening)
client.submit_single_listen(listen)
def get_lb_listen(listening):
track = listening.track
additional_info = {
"media_player": "Funkwhale",
"media_player_version": funkwhale_api.__version__,
"submission_client": "Funkwhale ListenBrainz plugin",
"submission_client_version": PLUGIN["version"],
"tracknumber": track.position,
"discnumber": track.disc_number,
}
if track.mbid:
additional_info["recording_mbid"] = str(track.mbid)
release_name = None
if track.album:
if track.album.title:
release_name = track.album.title
if track.album.mbid:
additional_info["release_mbid"] = str(track.album.mbid)
mbids = [ac.artist.mbid for ac in track.artist_credit.all() if ac.artist.mbid]
if mbids:
additional_info["artist_mbids"] = mbids
upload = track.uploads.filter(duration__gte=0).first()
if upload:
additional_info["duration"] = upload.duration
return liblistenbrainz.Listen(
track_name=track.title,
listened_at=listening.creation_date.timestamp(),
artist_name=track.get_artist_credit_string,
release_name=release_name,
additional_info=additional_info,
)
@plugins.register_hook(plugins.FAVORITE_CREATED, PLUGIN)
def submit_favorite_creation(track_favorite, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_favorites"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting favorite to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
track = track_favorite.track
if not track.mbid:
logger.warning(
"This tracks doesn't have a mbid. Feedback will not be submitted to Listenbrainz"
)
return
client.submit_user_feedback(1, track.mbid)
@plugins.register_hook(plugins.FAVORITE_DELETED, PLUGIN)
def submit_favorite_deletion(track_favorite, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_favorites"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting favorite deletion to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
track = track_favorite.track
if not track.mbid:
logger.warning(
"This tracks doesn't have a mbid. Feedback will not be submitted to Listenbrainz"
)
return
client.submit_user_feedback(0, track.mbid)
@plugins.register_hook(plugins.LISTENING_SYNC, PLUGIN)
def sync_listenings_from_listenbrainz(user, conf):
user_name = conf["user_name"]
if not user_name or not conf["sync_listenings"]:
return
logger = PLUGIN["logger"]
logger.info("Getting listenings from ListenBrainz")
try:
last_ts = (
history_models.Listening.objects.filter(actor=user.actor)
.filter(source="Listenbrainz")
.latest("creation_date")
.creation_date.timestamp()
)
except history_models.Listening.DoesNotExist:
tasks.import_listenbrainz_listenings(user, user_name, 0)
return
tasks.import_listenbrainz_listenings(user, user_name, last_ts)
@plugins.register_hook(plugins.FAVORITE_SYNC, PLUGIN)
def sync_favorites_from_listenbrainz(user, conf):
user_name = conf["user_name"]
if not user_name or not conf["sync_favorites"]:
return
try:
last_ts = (
favorites_models.TrackFavorite.objects.filter(actor=user.actor)
.filter(source="Listenbrainz")
.latest("creation_date")
.creation_date.timestamp()
)
except favorites_models.TrackFavorite.DoesNotExist:
tasks.import_listenbrainz_favorites(user, user_name, 0)
return
tasks.import_listenbrainz_favorites(user, user_name, last_ts)
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="listenbrainz",
label="ListenBrainz",
description="A plugin that allows you to submit or sync your listens and favorites to ListenBrainz.",
homepage="https://docs.funkwhale.audio/users/builtinplugins.html#listenbrainz-plugin", # noqa
version="0.3",
user=True,
conf=[
{
"name": "user_token",
"type": "text",
"label": "Your ListenBrainz user token",
"help": "You can find your user token in your ListenBrainz profile at https://listenbrainz.org/profile/",
},
{
"name": "user_name",
"type": "text",
"required": False,
"label": "Your ListenBrainz user name.",
"help": "Required for importing listenings and favorites with ListenBrainz \
but not to send activities",
},
{
"name": "submit_listenings",
"type": "boolean",
"default": True,
"label": "Enable listening submission to ListenBrainz",
"help": "If enabled, your listenings from Funkwhale will be imported into ListenBrainz.",
},
{
"name": "sync_listenings",
"type": "boolean",
"default": False,
"label": "Enable listenings sync",
"help": "If enabled, your listening from ListenBrainz will be imported into Funkwhale. This means they \
will be used along with Funkwhale listenings to filter out recently listened content or \
generate recommendations",
},
{
"name": "sync_favorites",
"type": "boolean",
"default": False,
"label": "Enable favorite sync",
"help": "If enabled, your favorites from ListenBrainz will be imported into Funkwhale. This means they \
will be used along with Funkwhale favorites (UI display, federation activity)",
},
{
"name": "submit_favorites",
"type": "boolean",
"default": False,
"label": "Enable favorite submission to ListenBrainz services",
"help": "If enabled, your favorites from Funkwhale will be submitted to ListenBrainz",
},
],
)
import datetime
import liblistenbrainz
from django.utils import timezone
from config import plugins
from funkwhale_api.favorites import models as favorites_models
from funkwhale_api.history import models as history_models
from funkwhale_api.music import models as music_models
from funkwhale_api.taskapp import celery
from funkwhale_api.users import models
from .funkwhale_startup import PLUGIN
@celery.app.task(name="listenbrainz.trigger_listening_sync_with_listenbrainz")
def trigger_listening_sync_with_listenbrainz():
now = timezone.now()
active_month = now - datetime.timedelta(days=30)
users = (
models.User.objects.filter(plugins__code="listenbrainz")
.filter(plugins__conf__sync_listenings=True)
.filter(last_activity__gte=active_month)
)
for user in users:
plugins.trigger_hook(
plugins.LISTENING_SYNC,
user=user,
confs=plugins.get_confs(user),
)
@celery.app.task(name="listenbrainz.trigger_favorite_sync_with_listenbrainz")
def trigger_favorite_sync_with_listenbrainz():
now = timezone.now()
active_month = now - datetime.timedelta(days=30)
users = (
models.User.objects.filter(plugins__code="listenbrainz")
.filter(plugins__conf__sync_favorites=True)
.filter(last_activity__gte=active_month)
)
for user in users:
plugins.trigger_hook(
plugins.FAVORITE_SYNC,
user=user,
confs=plugins.get_confs(user),
)
@celery.app.task(name="listenbrainz.import_listenbrainz_listenings")
def import_listenbrainz_listenings(user, user_name, since):
client = liblistenbrainz.ListenBrainz()
response = client.get_listens(username=user_name, min_ts=since, count=100)
listens = response["payload"]["listens"]
while listens:
add_lb_listenings_to_db(listens, user)
new_ts = max(
listens,
key=lambda obj: datetime.datetime.fromtimestamp(
obj.listened_at, datetime.timezone.utc
),
).listened_at
response = client.get_listens(username=user_name, min_ts=new_ts, count=100)
listens = response["payload"]["listens"]
def add_lb_listenings_to_db(listens, user):
logger = PLUGIN["logger"]
fw_listens = []
for listen in listens:
if (
listen.additional_info.get("submission_client")
and listen.additional_info.get("submission_client")
== "Funkwhale ListenBrainz plugin"
and history_models.Listening.objects.filter(
creation_date=datetime.datetime.fromtimestamp(
listen.listened_at, datetime.timezone.utc
)
).exists()
):
logger.info(
f"Listen with ts {listen.listened_at} skipped because already in db"
)
continue
mbid = (
listen.mbid_mapping
if hasattr(listen, "mbid_mapping")
else listen.recording_mbid
)
if not mbid:
logger.info("Received listening that doesn't have an mbid. Skipping...")
continue
try:
track = music_models.Track.objects.get(mbid=mbid)
except music_models.Track.DoesNotExist:
logger.info(
"Received listening that doesn't exist in fw database. Skipping..."
)
continue
fw_listen = history_models.Listening(
creation_date=datetime.datetime.fromtimestamp(
listen.listened_at, datetime.timezone.utc
),
track=track,
actor=user.actor,
source="Listenbrainz",
)
fw_listens.append(fw_listen)
history_models.Listening.objects.bulk_create(fw_listens)
@celery.app.task(name="listenbrainz.import_listenbrainz_favorites")
def import_listenbrainz_favorites(user, user_name, since):
client = liblistenbrainz.ListenBrainz()
response = client.get_user_feedback(username=user_name)
offset = 0
while response["feedback"]:
count = response["count"]
offset = offset + count
last_sync = min(
response["feedback"],
key=lambda obj: datetime.datetime.fromtimestamp(
obj["created"], datetime.timezone.utc
),
)["created"]
add_lb_feedback_to_db(response["feedback"], user)
if last_sync <= since or count == 0:
return
response = client.get_user_feedback(username=user_name, offset=offset)
def add_lb_feedback_to_db(feedbacks, user):
logger = PLUGIN["logger"]
for feedback in feedbacks:
try:
track = music_models.Track.objects.get(mbid=feedback["recording_mbid"])
except music_models.Track.DoesNotExist:
logger.info(
"Received feedback track that doesn't exist in fw database. Skipping..."
)
continue
if feedback["score"] == 1:
favorites_models.TrackFavorite.objects.get_or_create(
actor=user.actor,
creation_date=datetime.datetime.fromtimestamp(
feedback["created"], datetime.timezone.utc
),
track=track,
source="Listenbrainz",
)
elif feedback["score"] == 0:
try:
favorites_models.TrackFavorite.objects.get(
actor=user.actor, track=track
).delete()
except favorites_models.TrackFavorite.DoesNotExist:
continue
elif feedback["score"] == -1:
logger.info("Funkwhale doesn't support disliked tracks")
import json
from config import plugins
from .funkwhale_startup import PLUGIN
class MalojaException(Exception):
pass
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def submit_listen(listening, conf, **kwargs):
server_url = conf["server_url"]
api_key = conf["api_key"]
if not server_url or not api_key:
return
logger = PLUGIN["logger"]
logger.info("Submitting listening to Maloja at %s", server_url)
payload = get_payload(listening, api_key, conf)
logger.debug("Maloja payload: %r", payload)
url = server_url.rstrip("/") + "/apis/mlj_1/newscrobble"
session = plugins.get_session()
response = session.post(url, json=payload)
response.raise_for_status()
details = json.loads(response.text)
if details["status"] == "success":
logger.info("Maloja listening submitted successfully")
else:
raise MalojaException(response.text)
def get_payload(listening, api_key, conf):
track = listening.track
# See https://github.com/krateng/maloja/blob/master/API.md
payload = {
"key": api_key,
"artists": [artist.name for artist in track.artist_credit.get_artists_list()],
"title": track.title,
"time": int(listening.creation_date.timestamp()),
"nofix": bool(conf.get("nofix")),
}
if track.album:
if track.album.title:
payload["album"] = track.album.title
if track.album.artist_credit.all():
payload["albumartists"] = [
artist.name for artist in track.album.artist_credit.get_artists_list()
]
upload = track.uploads.filter(duration__gte=0).first()
if upload:
payload["length"] = upload.duration
return payload
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="maloja",
label="Maloja",
description="A plugin that allows you to submit your listens to your Maloja server.",
homepage="https://docs.funkwhale.audio/users/builtinplugins.html#maloja-plugin",
version="0.2",
user=True,
conf=[
{"name": "server_url", "type": "text", "label": "Maloja server URL"},
{"name": "api_key", "type": "text", "label": "Your Maloja API key"},
{
"name": "nofix",
"type": "boolean",
"label": "Skip server-side metadata fixing",
"default": False,
},
],
)
Scrobbler plugin
================
A plugin that enables scrobbling to ListenBrainz and Last.fm.
If you're scrobbling to last.fm, you will need to create an `API account <https://www.last.fm/api/account/create>`_
and add two variables to your .env file:
- ``FUNKWHALE_PLUGIN_SCROBBLER_LASTFM_API_KEY=apikey``
- ``FUNKWHALE_PLUGIN_SCROBBLER_LASTFM_API_SECRET=apisecret``
import hashlib
from config import plugins
from . import scrobbler
from .funkwhale_startup import PLUGIN
# https://listenbrainz.org/lastfm-proxy
DEFAULT_SCROBBLER_URL = "http://post.audioscrobbler.com"
LASTFM_SCROBBLER_URL = "https://ws.audioscrobbler.com/2.0/"
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def forward_to_scrobblers(listening, conf, **kwargs):
if not conf:
raise plugins.Skip()
username = conf.get("username")
password = conf.get("password")
url = conf.get("url", DEFAULT_SCROBBLER_URL) or DEFAULT_SCROBBLER_URL
if username and password:
session = plugins.get_session()
if (
PLUGIN["settings"]["lastfm_api_key"]
and PLUGIN["settings"]["lastfm_api_secret"]
and url == DEFAULT_SCROBBLER_URL
):
hashed_auth = hashlib.md5(
(username + " " + password).encode("utf-8")
).hexdigest()
cache_key = "lastfm:sessionkey:{}".format(
":".join([str(listening.actor.pk), hashed_auth])
)
PLUGIN["logger"].info("Forwarding scrobble to %s", LASTFM_SCROBBLER_URL)
session_key = PLUGIN["cache"].get(cache_key)
if not session_key:
PLUGIN["logger"].debug("Authenticating…")
session_key = scrobbler.handshake_v2(
username=username,
password=password,
scrobble_url=LASTFM_SCROBBLER_URL,
session=session,
api_key=PLUGIN["settings"]["lastfm_api_key"],
api_secret=PLUGIN["settings"]["lastfm_api_secret"],
)
PLUGIN["cache"].set(cache_key, session_key)
scrobbler.submit_scrobble_v2(
session=session,
track=listening.track,
scrobble_time=listening.creation_date,
session_key=session_key,
scrobble_url=LASTFM_SCROBBLER_URL,
api_key=PLUGIN["settings"]["lastfm_api_key"],
api_secret=PLUGIN["settings"]["lastfm_api_secret"],
)
else:
PLUGIN["logger"].info("Forwarding scrobble to %s", url)
session_key, now_playing_url, scrobble_url = scrobbler.handshake_v1(
session=session, url=url, username=username, password=password
)
scrobbler.submit_scrobble_v1(
session=session,
track=listening.track,
scrobble_time=listening.creation_date,
session_key=session_key,
scrobble_url=scrobble_url,
)
PLUGIN["logger"].info("Scrobble sent!")
else:
PLUGIN["logger"].debug("No scrobbler configuration for user, skipping")
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="scrobbler",
label="Scrobbler",
description=(
"A plugin that enables scrobbling to ListenBrainz and Last.fm. "
"It must be configured on the server if you use Last.fm."
),
homepage="https://docs.funkwhale.audio/users/builtinplugins.html#scrobbler-plugin", # noqa
version="0.1",
user=True,
conf=[
{
"name": "url",
"type": "url",
"allow_null": True,
"allow_blank": True,
"required": False,
"label": "URL of the scrobbler service",
"help": (
"Suggested choices:\n\n"
"- LastFM (default if left empty): http://post.audioscrobbler.com\n"
"- ListenBrainz: http://proxy.listenbrainz.org/\n"
"- Libre.fm: http://turtle.libre.fm/"
),
},
{"name": "username", "type": "text", "label": "Your scrobbler username"},
{"name": "password", "type": "password", "label": "Your scrobbler password"},
],
settings=[
{"name": "lastfm_api_key", "type": "text"},
{"name": "lastfm_api_secret", "type": "text"},
],
)