Compare revisions

Changes are shown as if the source revision was being merged into the target revision.


Showing 1371 additions and 38 deletions
from django.utils.deconstruct import deconstructible
import datetime
import hashlib
import logging
import os
import shutil
import uuid
import xml.etree.ElementTree as ET
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit
from django.conf import settings
import bleach.sanitizer
import markdown
from django import urls
from django.conf import settings
from django.core.files.base import ContentFile
from django.db import models, transaction
from django.http import request
from django.utils import timezone
from django.utils.deconstruct import deconstructible
logger = logging.getLogger(__name__)
def batch(iterable, n=1):
has_entries = True
while has_entries:
current = []
for i in range(0, n):
try:
current.append(next(iterable))
except StopIteration:
has_entries = False
yield current
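# Usage sketch for batch(): it expects an iterator (calling next() on a bare
# list raises TypeError), yields lists of up to n items, and the final batch
# may be shorter than n, or empty when the input length is a multiple of n.
def _batch_usage_example():
    chunks = list(batch(iter(range(5)), n=2))
    assert chunks == [[0, 1], [2, 3], [4]]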
def rename_file(instance, field_name, new_name, allow_missing_file=False):
field = getattr(instance, field_name)
current_name, extension = os.path.splitext(field.name)
new_name_with_extension = "{}{}".format(new_name, extension)
new_name_with_extension = f"{new_name}{extension}"
try:
shutil.move(field.path, new_name_with_extension)
except FileNotFoundError:
@@ -51,12 +71,16 @@ def set_query_parameter(url, **kwargs):
@deconstructible
class ChunkedPath(object):
class ChunkedPath:
def sanitize_filename(self, filename):
return filename.replace("/", "-")
def __init__(self, root, preserve_file_name=True):
self.root = root
self.preserve_file_name = preserve_file_name
def __call__(self, instance, filename):
filename = self.sanitize_filename(filename)
uid = str(uuid.uuid4())
chunk_size = 2
chunks = [uid[i : i + chunk_size] for i in range(0, len(uid), chunk_size)]
@@ -64,7 +88,7 @@ class ChunkedPath(object):
parts = chunks[:3] + [filename]
else:
ext = os.path.splitext(filename)[1][1:].lower()
new_filename = "".join(chunks[3:]) + ".{}".format(ext)
new_filename = "".join(chunks[3:]) + f".{ext}"
parts = chunks[:3] + [new_filename]
return os.path.join(self.root, *parts)
@@ -114,7 +138,7 @@ def chunk_queryset(source_qs, chunk_size):
def join_url(start, end):
if end.startswith("http://") or end.startswith("https://"):
# alread a full URL, joining makes no sense
# already a full URL, joining makes no sense
return end
if start.endswith("/") and end.startswith("/"):
return start + end[1:]
@@ -125,6 +149,17 @@ def join_url(start, end):
return start + end
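# Usage sketch for join_url(), covering the cases visible above (the middle
# of the function is collapsed in this diff):
def _join_url_example():
    # an already-absolute end URL is returned unchanged
    assert join_url("https://a.example", "https://b.example/x") == "https://b.example/x"
    # a duplicate slash at the boundary is collapsed
    assert join_url("https://a.example/", "/media/x.mp3") == "https://a.example/media/x.mp3"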
def media_url(path):
if settings.MEDIA_URL.startswith("http://") or settings.MEDIA_URL.startswith(
"https://"
):
return join_url(settings.MEDIA_URL, path)
from funkwhale_api.federation import utils as federation_utils
return federation_utils.full_url(path)
def spa_reverse(name, args=[], kwargs={}):
return urls.reverse(name, urlconf=settings.SPA_URLCONF, args=args, kwargs=kwargs)
@@ -149,7 +184,7 @@ def order_for_search(qs, field):
When searching, it's often more useful to have short results first;
this function orders the given queryset by the length of the given field.
"""
return qs.annotate(__size=models.functions.Length(field)).order_by("__size")
return qs.annotate(__size=models.functions.Length(field)).order_by("__size", "pk")
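# Usage sketch for order_for_search(); shorter values sort first, and the pk
# tie-breaker added in this diff keeps the ordering stable across identical
# lengths. The queryset and field name here are illustrative:
def _order_for_search_example(track_qs):
    return order_for_search(track_qs, "title")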
def recursive_getattr(obj, key, permissive=False):
@@ -192,7 +227,7 @@ def replace_prefix(queryset, field, old, new):
on a whole table with a single query.
"""
qs = queryset.filter(**{"{}__startswith".format(field): old})
qs = queryset.filter(**{f"{field}__startswith": old})
# we extract the part after the old prefix, and Concat it with our new prefix
update = models.functions.Concat(
models.Value(new),
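# Usage sketch for replace_prefix(); the tail of the Concat expression is
# collapsed in this diff, but the helper rewrites a common prefix in bulk
# with a single UPDATE. The queryset and values here are illustrative:
def _replace_prefix_example(upload_qs):
    replace_prefix(upload_qs, "source", old="http://old.example/", new="https://new.example/")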
@@ -228,9 +263,227 @@ def get_updated_fields(conf, data, obj):
data_value = data[data_field]
except KeyError:
continue
if obj.pk:
obj_value = getattr(obj, obj_field)
if obj_value != data_value:
final_data[obj_field] = data_value
else:
final_data[obj_field] = data_value
return final_data
def join_queries_or(left, right):
if left:
return left | right
else:
return right
MARKDOWN_RENDERER = markdown.Markdown(extensions=settings.MARKDOWN_EXTENSIONS)
def render_markdown(text):
return MARKDOWN_RENDERER.convert(text)
SAFE_TAGS = [
"p",
"a",
"abbr",
"acronym",
"b",
"blockquote",
"br",
"code",
"em",
"i",
"li",
"ol",
"strong",
"ul",
]
HTMl_CLEANER = bleach.sanitizer.Cleaner(strip=True, tags=SAFE_TAGS)
HTML_PERMISSIVE_CLEANER = bleach.sanitizer.Cleaner(
strip=True,
tags=SAFE_TAGS + ["h1", "h2", "h3", "h4", "h5", "h6", "div", "section", "article"],
attributes=["class", "rel", "alt", "title", "href"],
)
# support for additional tlds
# cf https://github.com/mozilla/bleach/issues/367#issuecomment-384631867
ALL_TLDS = set(settings.LINKIFIER_SUPPORTED_TLDS + bleach.linkifier.TLDS)
URL_RE = bleach.linkifier.build_url_re(tlds=sorted(ALL_TLDS, reverse=True))
HTML_LINKER = bleach.linkifier.Linker(url_re=URL_RE)
def clean_html(html, permissive=False):
return (
HTML_PERMISSIVE_CLEANER.clean(html) if permissive else HTMl_CLEANER.clean(html)
)
def render_html(text, content_type, permissive=False):
if not text:
return ""
if content_type == "text/html":
rendered = text
else:
# text/markdown and unknown content types are both rendered as markdown
rendered = render_markdown(text)
rendered = HTML_LINKER.linkify(rendered)
return clean_html(rendered, permissive=permissive).strip().replace("\n", "")
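# Usage sketch for render_html(): markdown is rendered (HTML passes through),
# bare domains are linkified, the result is sanitized and newlines stripped:
def _render_html_example():
    html = render_html("**hello** funkwhale.audio", "text/markdown")
    # roughly '<p><strong>hello</strong> <a href="http://funkwhale.audio">funkwhale.audio</a></p>'
    return html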
def render_plain_text(html):
if not html:
return ""
return bleach.clean(html, tags=[], strip=True)
def same_content(old, text=None, content_type=None):
return old.text == text and old.content_type == content_type
@transaction.atomic
def attach_content(obj, field, content_data):
from . import models
content_data = content_data or {}
existing = getattr(obj, f"{field}_id")
if existing:
if same_content(getattr(obj, field), **content_data):
# optimization to avoid a delete/save if possible
return getattr(obj, field)
getattr(obj, field).delete()
setattr(obj, field, None)
if not content_data:
return
content_obj = models.Content.objects.create(
text=content_data["text"][: models.CONTENT_TEXT_MAX_LENGTH],
content_type=content_data["content_type"],
)
setattr(obj, field, content_obj)
obj.save(update_fields=[field])
return content_obj
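# Usage sketch for attach_content(); it only deletes and recreates the linked
# Content row when the text or content_type actually changed. The object and
# field name here are illustrative:
def _attach_content_example(artist):
    attach_content(
        artist,
        "description",
        {"text": "A **markdown** bio", "content_type": "text/markdown"},
    )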
@transaction.atomic
def attach_file(obj, field, file_data, fetch=False):
from . import models, tasks
existing = getattr(obj, f"{field}_id")
if existing:
getattr(obj, field).delete()
if not file_data:
return
if isinstance(file_data, models.Attachment):
attachment = file_data
else:
extensions = {"image/jpeg": "jpg", "image/png": "png", "image/gif": "gif"}
extension = extensions.get(file_data["mimetype"], "jpg")
attachment = models.Attachment(mimetype=file_data["mimetype"])
name_fields = ["uuid", "full_username", "pk"]
name = [
getattr(obj, field) for field in name_fields if getattr(obj, field, None)
][0]
filename = f"{field}-{name}.{extension}"
if "url" in file_data:
attachment.url = file_data["url"]
else:
f = ContentFile(file_data["content"])
attachment.file.save(filename, f, save=False)
if not attachment.file and fetch:
try:
tasks.fetch_remote_attachment(attachment, filename=filename, save=False)
except Exception as e:
logger.warning(
"Cannot download attachment at url %s: %s", attachment.url, e
)
attachment = None
if attachment:
attachment.save()
setattr(obj, field, attachment)
obj.save(update_fields=[field])
return attachment
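# Usage sketch for attach_file(); it accepts inline bytes or a URL to fetch
# later (with fetch=True). The object and field name here are illustrative:
def _attach_file_example(album, image_bytes):
    attach_file(album, "attachment_cover", {"mimetype": "image/png", "content": image_bytes})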
def get_mimetype_from_ext(path):
parts = path.lower().split(".")
ext = parts[-1]
match = {
"jpeg": "image/jpeg",
"jpg": "image/jpeg",
"png": "image/png",
"gif": "image/gif",
}
return match.get(ext)
def get_audio_mimetype(mt):
aliases = {"audio/x-mp3": "audio/mpeg", "audio/mpeg3": "audio/mpeg"}
return aliases.get(mt, mt)
def update_modification_date(obj, field="modification_date", date=None):
IGNORE_DELAY = 60
current_value = getattr(obj, field)
date = date or timezone.now()
ignore = current_value is not None and current_value < date - datetime.timedelta(
seconds=IGNORE_DELAY
)
if ignore:
setattr(obj, field, date)
obj.__class__.objects.filter(pk=obj.pk).update(**{field: date})
return date
def monkey_patch_request_build_absolute_uri():
"""
Since we have FUNKWHALE_HOSTNAME and PROTOCOL hardcoded in settings, we can
override django's multisite logic, which can break when reverse proxies
aren't configured properly.
"""
builtin_scheme = request.HttpRequest.scheme
def scheme(self):
if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
return settings.FUNKWHALE_PROTOCOL
return builtin_scheme.fget(self)
builtin_get_host = request.HttpRequest.get_host
def get_host(self):
if settings.IGNORE_FORWARDED_HOST_AND_PROTO:
return settings.FUNKWHALE_HOSTNAME
return builtin_get_host(self)
request.HttpRequest.scheme = property(scheme)
request.HttpRequest.get_host = get_host
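# The patch above only takes effect with settings along these lines (values
# illustrative):
#
#   IGNORE_FORWARDED_HOST_AND_PROTO = True
#   FUNKWHALE_HOSTNAME = "music.example.com"
#   FUNKWHALE_PROTOCOL = "https"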
def get_file_hash(file, algo=None, chunk_size=None, full_read=False):
algo = algo or settings.HASHING_ALGORITHM
chunk_size = chunk_size or settings.HASHING_CHUNK_SIZE
hasher = hashlib.new(algo)
file.seek(0)
if full_read:
for byte_block in iter(lambda: file.read(chunk_size), b""):
hasher.update(byte_block)
else:
# sometimes, it's useful to only hash the beginning of the file, e.g.
# to avoid a lot of I/O when crawling large libraries
hasher.update(file.read(chunk_size))
return f"{algo}:{hasher.hexdigest()}"
@@ -6,7 +6,7 @@ from django.core.exceptions import ValidationError
from django.core.files.images import get_image_dimensions
from django.template.defaultfilters import filesizeformat
from django.utils.deconstruct import deconstructible
from django.utils.translation import ugettext_lazy as _
from django.utils.translation import gettext_lazy as _
@deconstructible
@@ -72,7 +72,7 @@ class ImageDimensionsValidator:
@deconstructible
class FileValidator(object):
class FileValidator:
"""
Taken from https://gist.github.com/jrosebr1/2140738
Validator for files, checking the size, extension and mimetype.
@@ -97,7 +97,7 @@ class FileValidator(object):
"MIME type '%(mimetype)s' is not valid. Allowed types are: %(allowed_mimetypes)s."
)
min_size_message = _(
"The current file %(size)s, which is too small. The minumum file size is %(allowed_size)s."
"The current file %(size)s, which is too small. The minimum file size is %(allowed_size)s."
)
max_size_message = _(
"The current file %(size)s, which is too large. The maximum file size is %(allowed_size)s."
@@ -163,5 +163,5 @@ class DomainValidator(validators.URLValidator):
If it fails, we know the domain is not valid.
"""
super().__call__("http://{}".format(value))
super().__call__(f"http://{value}")
return value
import logging
import time
from django.conf import settings
from django.db import transaction
from drf_spectacular.utils import extend_schema
from rest_framework import (
exceptions,
generics,
mixins,
permissions,
response,
views,
viewsets,
)
from rest_framework.decorators import action
from rest_framework import exceptions
from rest_framework import mixins
from rest_framework import permissions
from rest_framework import response
from rest_framework import views
from rest_framework import viewsets
from config import plugins
from funkwhale_api.common.serializers import (
ErrorDetailSerializer,
TextPreviewSerializer,
)
from funkwhale_api.users.oauth import permissions as oauth_permissions
from . import filters
from . import models
from . import mutations
from . import serializers
from . import signals
from . import tasks
from . import throttling
from . import utils
from . import filters, models, mutations, serializers, signals, tasks, throttling, utils
logger = logging.getLogger(__name__)
class SkipFilterForGetObject:
@@ -72,6 +76,7 @@ class MutationViewSet(
return super().perform_destroy(instance)
@extend_schema(operation_id="approve_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def approve(self, request, *args, **kwargs):
@@ -101,6 +106,7 @@ class MutationViewSet(
)
return response.Response({}, status=200)
@extend_schema(operation_id="reject_mutation")
@action(detail=True, methods=["post"])
@transaction.atomic
def reject(self, request, *args, **kwargs):
@@ -133,15 +139,16 @@ class RateLimitView(views.APIView):
class RateLimitView(views.APIView):
permission_classes = []
throttle_classes = []
serializer_class = serializers.RateLimitSerializer
def get(self, request, *args, **kwargs):
ident = throttling.get_ident(request)
ident = throttling.get_ident(getattr(request, "user", None), request)
data = {
"enabled": settings.THROTTLING_ENABLED,
"ident": ident,
"scopes": throttling.get_status(ident, time.time()),
}
return response.Response(data, status=200)
return response.Response(serializers.RateLimitSerializer(data).data, status=200)
class AttachmentViewSet(
@@ -163,12 +170,25 @@ class AttachmentViewSet(
@transaction.atomic
def proxy(self, request, *args, **kwargs):
instance = self.get_object()
if not settings.EXTERNAL_MEDIA_PROXY_ENABLED:
r = response.Response(status=302)
r["Location"] = instance.url
return r
size = request.GET.get("next", "original").lower()
if size not in ["original", "medium_square_crop"]:
if size not in [
"original",
"small_square_crop",
"medium_square_crop",
"large_square_crop",
]:
size = "original"
try:
tasks.fetch_remote_attachment(instance)
except Exception:
logger.exception("Error while fetching attachment %s", instance.url)
return response.Response(status=500)
data = self.serializer_class(instance).data
redirect = response.Response(status=302)
redirect["Location"] = data["urls"][size]
@@ -181,3 +201,126 @@ class AttachmentViewSet(
if instance.actor is None or instance.actor != self.request.user.actor:
raise exceptions.PermissionDenied()
instance.delete()
class TextPreviewView(generics.GenericAPIView):
permission_classes = []
serializer_class = TextPreviewSerializer
@extend_schema(
operation_id="preview_text",
responses={200: TextPreviewSerializer, 400: ErrorDetailSerializer},
)
def post(self, request, *args, **kwargs):
payload = request.data
if "text" not in payload:
return response.Response(
ErrorDetailSerializer("Invalid input").data, status=400
)
permissive = payload.get("permissive", False)
data = TextPreviewSerializer(
utils.render_html(payload["text"], "text/markdown", permissive=permissive)
).data
return response.Response(data, status=200)
class PluginViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):
required_scope = "plugins"
serializer_class = serializers.serializers.Serializer
queryset = models.PluginConfiguration.objects.none()
def list(self, request, *args, **kwargs):
user = request.user
user_plugins = [p for p in plugins._plugins.values() if p["user"] is True]
return response.Response(
[
plugins.serialize_plugin(p, confs=plugins.get_confs(user=user))
for p in user_plugins
]
)
def retrieve(self, request, *args, **kwargs):
user = request.user
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if not user_plugin:
return response.Response(status=404)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def post(self, request, *args, **kwargs):
return self.create(request, *args, **kwargs)
def create(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
user_plugin = [
p
for p in plugins._plugins.values()
if p["user"] is True and p["name"] == kwargs["pk"]
]
if kwargs["pk"] not in confs:
return response.Response(status=404)
plugins.set_conf(kwargs["pk"], request.data, user)
return response.Response(
plugins.serialize_plugin(user_plugin[0], confs=plugins.get_confs(user=user))
)
def delete(self, request, *args, **kwargs):
user = request.user
confs = plugins.get_confs(user=user)
if kwargs["pk"] not in confs:
return response.Response(status=404)
user.plugins.filter(code=kwargs["pk"]).delete()
return response.Response(status=204)
@extend_schema(operation_id="enable_plugin")
@action(detail=True, methods=["post"])
def enable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], True, user)
return response.Response({}, status=200)
@extend_schema(operation_id="disable_plugin")
@action(detail=True, methods=["post"])
def disable(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
plugins.enable_conf(kwargs["pk"], False, user)
return response.Response({}, status=200)
@action(detail=True, methods=["post"])
def scan(self, request, *args, **kwargs):
user = request.user
if kwargs["pk"] not in plugins._plugins:
return response.Response(status=404)
conf = plugins.get_conf(kwargs["pk"], user=user)
if not conf["enabled"]:
return response.Response(status=405)
library = request.user.actor.libraries.get(uuid=conf["conf"]["library"])
hook = [
hook
for p, hook in plugins._hooks.get(plugins.SCAN, [])
if p == kwargs["pk"]
]
if not hook:
return response.Response(status=405)
hook[0](library=library, conf=conf["conf"])
return response.Response({}, status=200)
# -*- coding: utf-8 -*-
import logging
from config import plugins
from funkwhale_api.contrib.archivedl import tasks
from .funkwhale_startup import PLUGIN
logger = logging.getLogger(__name__)
@plugins.register_hook(plugins.TRIGGER_THIRD_PARTY_UPLOAD, PLUGIN)
def lauch_download(track, conf={}):
tasks.archive_download.delay(track_id=track.pk, conf=conf)
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="archivedl",
label="Archive-dl",
description="",
version="0.1",
user=False,
conf=[],
)
import hashlib
import logging
import os
import tempfile
import time
import urllib.parse
from datetime import timedelta
import requests
from django.core.files import File
from django.utils import timezone
from funkwhale_api.federation import actors
from funkwhale_api.music import models, utils
from funkwhale_api.taskapp import celery
logger = logging.getLogger(__name__)
class TooManyQueriesError(Exception):
pass
def check_existing_download_task(track):
if models.Upload.objects.filter(
track=track,
import_status__in=["pending", "finished"],
third_party_provider="archive-dl",
).exists():
raise TooManyQueriesError(
"Upload for this track already exist or is pending. Stopping task."
)
def check_last_third_party_queries(track, count):
# 15 per minute according to their docs = one every 4 seconds
time_threshold = timezone.now() - timedelta(seconds=5)
if models.Upload.objects.filter(
third_party_provider="archive-dl",
import_status__in=["pending", "finished"],
creation_date__gte=time_threshold,
).exists():
logger.info(
"Last archive.org query was too recent. Trying to wait 2 seconds..."
)
time.sleep(2)
count += 1
if count > 3:
raise TooManyQueriesError(
"Probably too many archivedl tasks are queue, stopping this task"
)
check_last_third_party_queries(track, count)
def create_upload(url, track, files_data):
mimetype = f"audio/{files_data.get('format', 'unknown')}"
duration = files_data.get("mtime", 0)
filesize = files_data.get("size", 0)
bitrate = files_data.get("bitrate", 0)
service_library = models.Library.objects.create(
privacy_level="everyone",
actor=actors.get_service_actor(),
)
return models.Upload.objects.create(
mimetype=mimetype,
source=url,
third_party_provider="archive-dl",
creation_date=timezone.now(),
track=track,
duration=duration,
size=filesize,
bitrate=bitrate,
library=service_library,
from_activity=None,
import_status="pending",
)
@celery.app.task(name="archivedl.archive_download")
@celery.require_instance(models.Track.objects.select_related(), "track")
def archive_download(track, conf):
try:
check_existing_download_task(track)
check_last_third_party_queries(track, 0)
except TooManyQueriesError as e:
logger.error(e)
return
artist_name = utils.get_artist_credit_string(track)
query = f"mediatype:audio AND title:{track.title} AND creator:{artist_name}"
with requests.Session() as session:
url = get_search_url(query, page_size=1, page=1)
page_data = fetch_json(url, session)
for obj in page_data["response"]["docs"]:
logger.info(f"launching download item for {str(obj)}")
download_item(
item_data=obj,
session=session,
allowed_extensions=utils.SUPPORTED_EXTENSIONS,
track=track,
)
def fetch_json(url, session):
logger.info(f"Fetching {url}...")
with session.get(url) as response:
return response.json()
def download_item(
item_data,
session,
allowed_extensions,
track,
):
files_data = get_files_data(item_data["identifier"], session)
to_download = list(
filter_files(
files_data["result"],
allowed_extensions=allowed_extensions,
)
)
url = f"https://archive.org/download/{item_data['identifier']}/{to_download[0]['name']}"
upload = create_upload(url, track, to_download[0])
try:
with tempfile.TemporaryDirectory() as temp_dir:
path = os.path.join(temp_dir, to_download[0]["name"])
download_file(
path,
url=url,
session=session,
checksum=to_download[0]["sha1"],
upload=upload,
to_download=to_download,
)
logger.info(f"Finished to download item {item_data['identifier']}...")
except Exception as e:
upload.delete()
raise e
def check_integrity(path, expected_checksum):
with open(path, mode="rb") as f:
hash = hashlib.sha1()
hash.update(f.read())
return expected_checksum == hash.hexdigest()
def get_files_data(identifier, session):
url = f"https://archive.org/metadata/{identifier}/files"
logger.info(f"Fetching files data at {url}...")
with session.get(url) as response:
return response.json()
def download_file(path, url, session, checksum, upload, to_download):
if os.path.exists(path) and check_integrity(path, checksum):
logger.info(f"Skipping already downloaded file at {path}")
return
logger.info(f"Downloading file {url}...")
with open(path, mode="wb") as f:
try:
with session.get(url) as response:
f.write(response.content)
except requests.Timeout as e:
logger.error(f"Timeout error while downloading {url}: {e}")
with open(path, "rb") as f:
upload.audio_file.save(f"{to_download['name']}", File(f))
upload.import_status = "finished"
upload.url = url
upload.save()
return upload
def filter_files(files, allowed_extensions):
for f in files:
if allowed_extensions:
extension = os.path.splitext(f["name"])[-1][1:]
if extension not in allowed_extensions:
continue
yield f
def get_search_url(query, page_size, page):
q = urllib.parse.urlencode({"q": query})
return f"https://archive.org/advancedsearch.php?{q}&sort[]=addeddate+desc&rows={page_size}\
&page={page}&output=json"
import liblistenbrainz
import funkwhale_api
from config import plugins
from funkwhale_api.favorites import models as favorites_models
from funkwhale_api.history import models as history_models
from . import tasks
from .funkwhale_startup import PLUGIN
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def submit_listen(listening, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_listenings"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting listen to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
client.set_auth_token(user_token)
listen = get_lb_listen(listening)
client.submit_single_listen(listen)
def get_lb_listen(listening):
track = listening.track
additional_info = {
"media_player": "Funkwhale",
"media_player_version": funkwhale_api.__version__,
"submission_client": "Funkwhale ListenBrainz plugin",
"submission_client_version": PLUGIN["version"],
"tracknumber": track.position,
"discnumber": track.disc_number,
}
if track.mbid:
additional_info["recording_mbid"] = str(track.mbid)
release_name = None
if track.album:
if track.album.title:
release_name = track.album.title
if track.album.mbid:
additional_info["release_mbid"] = str(track.album.mbid)
mbids = [ac.artist.mbid for ac in track.artist_credit.all() if ac.artist.mbid]
if mbids:
additional_info["artist_mbids"] = mbids
upload = track.uploads.filter(duration__gte=0).first()
if upload:
additional_info["duration"] = upload.duration
return liblistenbrainz.Listen(
track_name=track.title,
listened_at=listening.creation_date.timestamp(),
artist_name=track.get_artist_credit_string,
release_name=release_name,
additional_info=additional_info,
)
@plugins.register_hook(plugins.FAVORITE_CREATED, PLUGIN)
def submit_favorite_creation(track_favorite, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_favorites"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting favorite to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
track = track_favorite.track
if not track.mbid:
logger.warning(
"This tracks doesn't have a mbid. Feedback will not be submitted to Listenbrainz"
)
return
client.submit_user_feedback(1, track.mbid)
@plugins.register_hook(plugins.FAVORITE_DELETED, PLUGIN)
def submit_favorite_deletion(track_favorite, conf, **kwargs):
user_token = conf["user_token"]
if not user_token or not conf["submit_favorites"]:
return
logger = PLUGIN["logger"]
logger.info("Submitting favorite deletion to ListenBrainz")
client = liblistenbrainz.ListenBrainz()
track = track_favorite.track
if not track.mbid:
logger.warning(
"This tracks doesn't have a mbid. Feedback will not be submitted to Listenbrainz"
)
return
client.submit_user_feedback(0, track.mbid)
@plugins.register_hook(plugins.LISTENING_SYNC, PLUGIN)
def sync_listenings_from_listenbrainz(user, conf):
user_name = conf["user_name"]
if not user_name or not conf["sync_listenings"]:
return
logger = PLUGIN["logger"]
logger.info("Getting listenings from ListenBrainz")
try:
last_ts = (
history_models.Listening.objects.filter(actor=user.actor)
.filter(source="Listenbrainz")
.latest("creation_date")
.creation_date.timestamp()
)
except history_models.Listening.DoesNotExist:
tasks.import_listenbrainz_listenings(user, user_name, 0)
return
tasks.import_listenbrainz_listenings(user, user_name, last_ts)
@plugins.register_hook(plugins.FAVORITE_SYNC, PLUGIN)
def sync_favorites_from_listenbrainz(user, conf):
user_name = conf["user_name"]
if not user_name or not conf["sync_favorites"]:
return
try:
last_ts = (
favorites_models.TrackFavorite.objects.filter(actor=user.actor)
.filter(source="Listenbrainz")
.latest("creation_date")
.creation_date.timestamp()
)
except favorites_models.TrackFavorite.DoesNotExist:
tasks.import_listenbrainz_favorites(user, user_name, 0)
return
tasks.import_listenbrainz_favorites(user, user_name, last_ts)
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="listenbrainz",
label="ListenBrainz",
description="A plugin that allows you to submit or sync your listens and favorites to ListenBrainz.",
homepage="https://docs.funkwhale.audio/users/builtinplugins.html#listenbrainz-plugin", # noqa
version="0.3",
user=True,
conf=[
{
"name": "user_token",
"type": "text",
"label": "Your ListenBrainz user token",
"help": "You can find your user token in your ListenBrainz profile at https://listenbrainz.org/profile/",
},
{
"name": "user_name",
"type": "text",
"required": False,
"label": "Your ListenBrainz user name.",
"help": "Required for importing listenings and favorites with ListenBrainz \
but not to send activities",
},
{
"name": "submit_listenings",
"type": "boolean",
"default": True,
"label": "Enable listening submission to ListenBrainz",
"help": "If enabled, your listenings from Funkwhale will be imported into ListenBrainz.",
},
{
"name": "sync_listenings",
"type": "boolean",
"default": False,
"label": "Enable listenings sync",
"help": "If enabled, your listening from ListenBrainz will be imported into Funkwhale. This means they \
will be used along with Funkwhale listenings to filter out recently listened content or \
generate recommendations",
},
{
"name": "sync_favorites",
"type": "boolean",
"default": False,
"label": "Enable favorite sync",
"help": "If enabled, your favorites from ListenBrainz will be imported into Funkwhale. This means they \
will be used along with Funkwhale favorites (UI display, federation activity)",
},
{
"name": "submit_favorites",
"type": "boolean",
"default": False,
"label": "Enable favorite submission to ListenBrainz services",
"help": "If enabled, your favorites from Funkwhale will be submitted to ListenBrainz",
},
],
)
import datetime
import liblistenbrainz
from django.utils import timezone
from config import plugins
from funkwhale_api.favorites import models as favorites_models
from funkwhale_api.history import models as history_models
from funkwhale_api.music import models as music_models
from funkwhale_api.taskapp import celery
from funkwhale_api.users import models
from .funkwhale_startup import PLUGIN
@celery.app.task(name="listenbrainz.trigger_listening_sync_with_listenbrainz")
def trigger_listening_sync_with_listenbrainz():
now = timezone.now()
active_month = now - datetime.timedelta(days=30)
users = (
models.User.objects.filter(plugins__code="listenbrainz")
.filter(plugins__conf__sync_listenings=True)
.filter(last_activity__gte=active_month)
)
for user in users:
plugins.trigger_hook(
plugins.LISTENING_SYNC,
user=user,
confs=plugins.get_confs(user),
)
@celery.app.task(name="listenbrainz.trigger_favorite_sync_with_listenbrainz")
def trigger_favorite_sync_with_listenbrainz():
now = timezone.now()
active_month = now - datetime.timedelta(days=30)
users = (
models.User.objects.filter(plugins__code="listenbrainz")
.filter(plugins__conf__sync_favorites=True)
.filter(last_activity__gte=active_month)
)
for user in users:
plugins.trigger_hook(
plugins.FAVORITE_SYNC,
user=user,
confs=plugins.get_confs(user),
)
@celery.app.task(name="listenbrainz.import_listenbrainz_listenings")
def import_listenbrainz_listenings(user, user_name, since):
client = liblistenbrainz.ListenBrainz()
response = client.get_listens(username=user_name, min_ts=since, count=100)
listens = response["payload"]["listens"]
while listens:
add_lb_listenings_to_db(listens, user)
new_ts = max(
listens,
key=lambda obj: obj.listened_at,
).listened_at
response = client.get_listens(username=user_name, min_ts=new_ts, count=100)
listens = response["payload"]["listens"]
def add_lb_listenings_to_db(listens, user):
logger = PLUGIN["logger"]
fw_listens = []
for listen in listens:
if (
listen.additional_info.get("submission_client")
and listen.additional_info.get("submission_client")
== "Funkwhale ListenBrainz plugin"
and history_models.Listening.objects.filter(
creation_date=datetime.datetime.fromtimestamp(
listen.listened_at, datetime.timezone.utc
)
).exists()
):
logger.info(
f"Listen with ts {listen.listened_at} skipped because already in db"
)
continue
mbid = (
listen.mbid_mapping
if hasattr(listen, "mbid_mapping")
else listen.recording_mbid
)
if not mbid:
logger.info("Received a listen that doesn't have an mbid. Skipping...")
continue
try:
track = music_models.Track.objects.get(mbid=mbid)
except music_models.Track.DoesNotExist:
logger.info(
"Received listening that doesn't exist in fw database. Skipping..."
)
continue
fw_listen = history_models.Listening(
creation_date=datetime.datetime.fromtimestamp(
listen.listened_at, datetime.timezone.utc
),
track=track,
actor=user.actor,
source="Listenbrainz",
)
fw_listens.append(fw_listen)
history_models.Listening.objects.bulk_create(fw_listens)
@celery.app.task(name="listenbrainz.import_listenbrainz_favorites")
def import_listenbrainz_favorites(user, user_name, since):
client = liblistenbrainz.ListenBrainz()
response = client.get_user_feedback(username=user_name)
offset = 0
while response["feedback"]:
count = response["count"]
offset = offset + count
last_sync = min(
response["feedback"],
key=lambda obj: datetime.datetime.fromtimestamp(
obj["created"], datetime.timezone.utc
),
)["created"]
add_lb_feedback_to_db(response["feedback"], user)
if last_sync <= since or count == 0:
return
response = client.get_user_feedback(username=user_name, offset=offset)
def add_lb_feedback_to_db(feedbacks, user):
logger = PLUGIN["logger"]
for feedback in feedbacks:
try:
track = music_models.Track.objects.get(mbid=feedback["recording_mbid"])
except music_models.Track.DoesNotExist:
logger.info(
"Received feedback track that doesn't exist in fw database. Skipping..."
)
continue
if feedback["score"] == 1:
favorites_models.TrackFavorite.objects.get_or_create(
actor=user.actor,
creation_date=datetime.datetime.fromtimestamp(
feedback["created"], datetime.timezone.utc
),
track=track,
source="Listenbrainz",
)
elif feedback["score"] == 0:
try:
favorites_models.TrackFavorite.objects.get(
actor=user.actor, track=track
).delete()
except favorites_models.TrackFavorite.DoesNotExist:
continue
elif feedback["score"] == -1:
logger.info("Funkwhale doesn't support disliked tracks")
import json
from config import plugins
from .funkwhale_startup import PLUGIN
class MalojaException(Exception):
pass
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def submit_listen(listening, conf, **kwargs):
server_url = conf["server_url"]
api_key = conf["api_key"]
if not server_url or not api_key:
return
logger = PLUGIN["logger"]
logger.info("Submitting listening to Maloja at %s", server_url)
payload = get_payload(listening, api_key, conf)
logger.debug("Maloja payload: %r", payload)
url = server_url.rstrip("/") + "/apis/mlj_1/newscrobble"
session = plugins.get_session()
response = session.post(url, json=payload)
response.raise_for_status()
details = json.loads(response.text)
if details["status"] == "success":
logger.info("Maloja listening submitted successfully")
else:
raise MalojaException(response.text)
def get_payload(listening, api_key, conf):
track = listening.track
# See https://github.com/krateng/maloja/blob/master/API.md
payload = {
"key": api_key,
"artists": [artist.name for artist in track.artist_credit.get_artists_list()],
"title": track.title,
"time": int(listening.creation_date.timestamp()),
"nofix": bool(conf.get("nofix")),
}
if track.album:
if track.album.title:
payload["album"] = track.album.title
if track.album.artist_credit.all():
payload["albumartists"] = [
artist.name for artist in track.album.artist_credit.get_artists_list()
]
upload = track.uploads.filter(duration__gte=0).first()
if upload:
payload["length"] = upload.duration
return payload
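# Shape of the resulting payload (values illustrative), matching the
# /apis/mlj_1/newscrobble endpoint used above:
#
#   {
#       "key": "<api key>",
#       "artists": ["Artist A"],
#       "title": "Track title",
#       "time": 1700000000,
#       "nofix": False,
#       "album": "Album title",        # when the track has one
#       "albumartists": ["Artist A"],  # when the album has credits
#       "length": 240,                 # when an upload with a duration exists
#   }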
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="maloja",
label="Maloja",
description="A plugin that allows you to submit your listens to your Maloja server.",
homepage="https://docs.funkwhale.audio/users/builtinplugins.html#maloja-plugin",
version="0.2",
user=True,
conf=[
{"name": "server_url", "type": "text", "label": "Maloja server URL"},
{"name": "api_key", "type": "text", "label": "Your Maloja API key"},
{
"name": "nofix",
"type": "boolean",
"label": "Skip server-side metadata fixing",
"default": False,
},
],
)
Scrobbler plugin
================
A plugin that enables scrobbling to ListenBrainz and Last.fm.
If you're scrobbling to last.fm, you will need to create an `API account <https://www.last.fm/api/account/create>`_
and add two variables to your .env file:
- ``FUNKWHALE_PLUGIN_SCROBBLER_LASTFM_API_KEY=apikey``
- ``FUNKWHALE_PLUGIN_SCROBBLER_LASTFM_API_SECRET=apisecret``
import hashlib
from config import plugins
from . import scrobbler
from .funkwhale_startup import PLUGIN
# https://listenbrainz.org/lastfm-proxy
DEFAULT_SCROBBLER_URL = "http://post.audioscrobbler.com"
LASTFM_SCROBBLER_URL = "https://ws.audioscrobbler.com/2.0/"
@plugins.register_hook(plugins.LISTENING_CREATED, PLUGIN)
def forward_to_scrobblers(listening, conf, **kwargs):
if not conf:
raise plugins.Skip()
username = conf.get("username")
password = conf.get("password")
url = conf.get("url", DEFAULT_SCROBBLER_URL) or DEFAULT_SCROBBLER_URL
if username and password:
session = plugins.get_session()
if (
PLUGIN["settings"]["lastfm_api_key"]
and PLUGIN["settings"]["lastfm_api_secret"]
and url == DEFAULT_SCROBBLER_URL
):
hashed_auth = hashlib.md5(
(username + " " + password).encode("utf-8")
).hexdigest()
cache_key = "lastfm:sessionkey:{}".format(
":".join([str(listening.actor.pk), hashed_auth])
)
PLUGIN["logger"].info("Forwarding scrobble to %s", LASTFM_SCROBBLER_URL)
session_key = PLUGIN["cache"].get(cache_key)
if not session_key:
PLUGIN["logger"].debug("Authenticating…")
session_key = scrobbler.handshake_v2(
username=username,
password=password,
scrobble_url=LASTFM_SCROBBLER_URL,
session=session,
api_key=PLUGIN["settings"]["lastfm_api_key"],
api_secret=PLUGIN["settings"]["lastfm_api_secret"],
)
PLUGIN["cache"].set(cache_key, session_key)
scrobbler.submit_scrobble_v2(
session=session,
track=listening.track,
scrobble_time=listening.creation_date,
session_key=session_key,
scrobble_url=LASTFM_SCROBBLER_URL,
api_key=PLUGIN["settings"]["lastfm_api_key"],
api_secret=PLUGIN["settings"]["lastfm_api_secret"],
)
else:
PLUGIN["logger"].info("Forwarding scrobble to %s", url)
session_key, now_playing_url, scrobble_url = scrobbler.handshake_v1(
session=session, url=url, username=username, password=password
)
scrobbler.submit_scrobble_v1(
session=session,
track=listening.track,
scrobble_time=listening.creation_date,
session_key=session_key,
scrobble_url=scrobble_url,
)
PLUGIN["logger"].info("Scrobble sent!")
else:
PLUGIN["logger"].debug("No scrobbler configuration for user, skipping")
from config import plugins
PLUGIN = plugins.get_plugin_config(
name="scrobbler",
label="Scrobbler",
description=(
"A plugin that enables scrobbling to ListenBrainz and Last.fm. "
"It must be configured on the server if you use Last.fm."
),
homepage="https://docs.funkwhale.audio/users/builtinplugins.html#scrobbler-plugin", # noqa
version="0.1",
user=True,
conf=[
{
"name": "url",
"type": "url",
"allow_null": True,
"allow_blank": True,
"required": False,
"label": "URL of the scrobbler service",
"help": (
"Suggested choices:\n\n"
"- LastFM (default if left empty): http://post.audioscrobbler.com\n"
"- ListenBrainz: http://proxy.listenbrainz.org/\n"
"- Libre.fm: http://turtle.libre.fm/"
),
},
{"name": "username", "type": "text", "label": "Your scrobbler username"},
{"name": "password", "type": "password", "label": "Your scrobbler password"},
],
settings=[
{"name": "lastfm_api_key", "type": "text"},
{"name": "lastfm_api_secret", "type": "text"},
],
)
import hashlib
import time
# https://github.com/jlieth/legacy-scrobbler
from .funkwhale_startup import PLUGIN
class ScrobblerException(Exception):
pass
def handshake_v1(session, url, username, password):
timestamp = str(int(time.time())).encode("utf-8")
password_hash = hashlib.md5(password.encode("utf-8")).hexdigest()
auth = hashlib.md5(password_hash.encode("utf-8") + timestamp).hexdigest()
params = {
"hs": "true",
"p": "1.2",
"c": "fw",
"v": PLUGIN["version"],
"u": username,
"t": timestamp,
"a": auth,
}
PLUGIN["logger"].debug(
"Performing scrobbler handshake for username %s at %s", username, url
)
handshake_response = session.get(url, params=params)
# process response
result = handshake_response.text.split("\n")
if len(result) >= 4 and result[0] == "OK":
session_key = result[1]
nowplaying_url = result[2]
scrobble_url = result[3]
elif result[0] == "BANNED":
raise ScrobblerException("BANNED")
elif result[0] == "BADAUTH":
raise ScrobblerException("BADAUTH")
elif result[0] == "BADTIME":
raise ScrobblerException("BADTIME")
else:
raise ScrobblerException(handshake_response.text)
PLUGIN["logger"].debug("Handshake successful, scrobble url: %s", scrobble_url)
return session_key, nowplaying_url, scrobble_url
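# A successful v1 handshake body, per the parsing above:
#
#   OK
#   <session key>
#   <now-playing URL>
#   <submission URL>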
def submit_scrobble_v1(session, scrobble_time, track, session_key, scrobble_url):
payload = get_scrobble_payload(track, scrobble_time)
PLUGIN["logger"].debug("Sending scrobble with payload %s", payload)
payload["s"] = session_key
response = session.post(scrobble_url, payload)
response.raise_for_status()
if response.text.startswith("OK"):
return
elif response.text.startswith("BADSESSION"):
raise ScrobblerException("Remote server says the session is invalid")
else:
raise ScrobblerException(response.text)
PLUGIN["logger"].debug("Scrobble successful!")
def submit_now_playing_v1(session, track, session_key, now_playing_url):
payload = get_scrobble_payload(track, date=None, suffix="")
PLUGIN["logger"].debug("Sending now playing with payload %s", payload)
payload["s"] = session_key
response = session.post(now_playing_url, payload)
response.raise_for_status()
if response.text.startswith("OK"):
return
elif response.text.startswith("BADSESSION"):
raise ScrobblerException("Remote server says the session is invalid")
else:
raise ScrobblerException(response.text)
PLUGIN["logger"].debug("Now playing successful!")
def get_scrobble_payload(track, date, suffix="[0]"):
"""
Documentation available at https://web.archive.org/web/20190531021725/https://www.last.fm/api/submissions
"""
upload = track.uploads.filter(duration__gte=0).first()
data = {
f"a{suffix}": track.get_artist_credit_string,
f"t{suffix}": track.title,
f"l{suffix}": upload.duration if upload else 0,
f"b{suffix}": (track.album.title if track.album else "") or "",
f"n{suffix}": track.position or "",
f"m{suffix}": str(track.mbid or ""),
f"o{suffix}": "P", # Source: P = chosen by user
}
if date:
data[f"i{suffix}"] = int(date.timestamp())
return data
def get_scrobble2_payload(track, date, suffix="[0]"):
"""
Documentation available at https://web.archive.org/web/20190531021725/https://www.last.fm/api/submissions
"""
upload = track.uploads.filter(duration__gte=0).first()
data = {
"artist": track.get_artist_credit_string,
"track": track.title,
"chosenByUser": 1,
}
if upload:
data["duration"] = upload.duration
if track.album:
data["album"] = track.album.title
if track.position:
data["trackNumber"] = track.position
if track.mbid:
data["mbid"] = str(track.mbid or "")
if date:
offset = upload.duration / 2 if upload and upload.duration else 0
data["timestamp"] = int(int(date.timestamp()) - offset)
return data
def handshake_v2(username, password, session, api_key, api_secret, scrobble_url):
params = {
"method": "auth.getMobileSession",
"username": username,
"password": password,
"api_key": api_key,
}
params["api_sig"] = hash_request(params, api_secret)
response = session.post(scrobble_url, params)
if 'status="ok"' not in response.text:
raise ScrobblerException(response.text)
session_key = response.text.split("<key>")[1].split("</key>")[0]
return session_key
def submit_scrobble_v2(
session,
track,
scrobble_time,
session_key,
scrobble_url,
api_key,
api_secret,
):
params = {
"method": "track.scrobble",
"api_key": api_key,
"sk": session_key,
}
scrobble = get_scrobble2_payload(track, scrobble_time)
PLUGIN["logger"].debug("Scrobble payload: %s", scrobble)
params.update(scrobble)
params["api_sig"] = hash_request(params, api_secret)
response = session.post(scrobble_url, params)
if 'status="ok"' not in response.text:
raise ScrobblerException(response.text)
def hash_request(data, secret_key):
string = ""
items = data.keys()
items = sorted(items)
for i in items:
string += str(i)
string += str(data[i])
string += secret_key
string_to_hash = string.encode("utf8")
return hashlib.md5(string_to_hash).hexdigest()
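# Usage sketch for hash_request(), which implements the Last.fm API signature:
# concatenate the key-sorted name/value pairs, append the shared secret, and
# md5 the result. Values here are illustrative:
def _hash_request_example():
    params = {"method": "auth.getMobileSession", "api_key": "k", "username": "alice"}
    # md5 of "api_keykmethodauth.getMobileSessionusernamealices3cret"
    return hash_request(params, "s3cret")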
# -*- coding: utf-8 -*-