I don’t know what the changes are since RFC 7807 (which this one obsoletes), but this article helped me quickly understand the original one; hopefully it’s still somewhat relevant.
https://lakitna.medium.com/understanding-problem-json-adf68e5cf1f8
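If it helps, here’s a minimal sketch of what a problem+json body looks like (Python, reusing the example values from RFC 7807; the actual response would be served with the `application/problem+json` content type):

```python
import json

# Illustrative problem+json body using the standard members from RFC 7807 / RFC 9457.
# The concrete values below are the RFC's own "out of credit" example.
problem = {
    "type": "https://example.com/probs/out-of-credit",  # URI identifying the problem type
    "title": "You do not have enough credit.",          # short, human-readable summary
    "status": 403,                                      # HTTP status for this occurrence
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",              # URI for this specific occurrence
}

body = json.dumps(problem)
# Sent with the header: Content-Type: application/problem+json
print(body)
```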
I don’t think RSS is suited for getting more than just the latest entries in a feed.
What you’re looking for is handled by the API, which includes pagination.
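Something along these lines (a sketch with Python’s `requests`; the endpoint, parameter names, and response shape are hypothetical — check the actual API docs):

```python
import requests

# Hypothetical paginated endpoint and parameter names -- check the real API docs.
BASE_URL = "https://example.com/api/v1/entries"

def fetch_all_entries(page_size: int = 50):
    """Walk the feed page by page instead of only taking the latest items."""
    page = 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "limit": page_size}, timeout=30)
        resp.raise_for_status()
        items = resp.json()  # assuming the endpoint returns a JSON list of entries
        if not items:
            break  # no more pages
        yield from items
        page += 1

for entry in fetch_all_entries():
    print(entry)
```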
Ah, then no. Last I knew, you can’t migrate accounts from one server to another, which is what you’re trying to do here.
As I mentioned, if you were able to move the keys which identify your account, it would be easy for someone to impersonate you.
Also, your public keys are shared with all the instances you’ve interacted with, so this might break your interactions there.
Do you still have the old database? You should be able to move your instance around as long as you have a dump of your DB; that’s where all the keys of each community and user in your instance are. Those are what tell other instances you’re actually you. If you lose them, I don’t know what can be done to make other instances flush your old content and treat you as a new account, but I wouldn’t count on that being a feature, since it could lead to people impersonating someone else if they get hold of the domain without the DB.
EDIT: hmm, maybe I didn’t understand correctly: are you trying to move to a new domain, or to a new server with the same domain?
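Either way, the first step is the DB dump I mentioned. A rough sketch of taking one (Python wrapping `pg_dump`; the connection details and file name are placeholders, and if you run Lemmy with Docker you’d run this inside the Postgres container instead):

```python
import subprocess

# Placeholder connection details -- adjust for your setup.
subprocess.run(
    [
        "pg_dump",
        "-h", "localhost",
        "-U", "lemmy",        # database user
        "-d", "lemmy",        # database name
        "-Fc",                # custom format, restorable later with pg_restore
        "-f", "lemmy.dump",   # output file
    ],
    check=True,
)
```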
What’s re-home?
Yeah, I just searched a bit and found this https://stackoverflow.com/questions/28348678/what-exactly-is-the-info-hash-in-a-torrent-file
The torrent file contains the hash of each piece, plus the info hash, which covers the file metadata together with those piece hashes, so clients can fully validate the content and the set of files being received.
I wonder if clients only validate when receiving data or also when sending it; that way seeding could be stopped if a file has been corrupted, instead of relying on the tracker or other clients to distrust someone who made a mistake like the OP of that post.
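As a rough sketch of what that per-piece validation looks like (Python; assumes a single-file torrent and that you’ve already parsed `piece length` and `pieces` out of the .torrent’s `info` dict — the function and variable names are mine):

```python
import hashlib
from pathlib import Path

def verify_pieces(file_path: str, piece_length: int, pieces: bytes) -> bool:
    """Check a downloaded single-file torrent against the 20-byte SHA-1
    piece hashes concatenated in the torrent's `pieces` field (BitTorrent v1)."""
    expected = [pieces[i:i + 20] for i in range(0, len(pieces), 20)]
    data = Path(file_path).read_bytes()
    for index, digest in enumerate(expected):
        piece = data[index * piece_length:(index + 1) * piece_length]
        if hashlib.sha1(piece).digest() != digest:
            print(f"piece {index} doesn't match, the file was modified or corrupted")
            return False
    return True
```

For multi-file torrents the pieces span file boundaries, so a real client hashes the files concatenated in order rather than one file at a time.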
How do torrents validate the files being served?
Recently I read a post where OP said they were transcoding torrents in place and still seeding them, so their question was whether this was even possible, since the files were actually not the same anymore.
A comment said yes, the torrent was being seeded with the new files and they were “poisoning” the torrent.
So, how could this be prevented if torrents were used as a CDN?
And in general, how is this possible? I thought torrents could only serve the original files, maybe verified with a hash, preventing any other data from being sent.
I’m just annoyed by the region issues: you get pretty biased results depending on which region you select.
If you search for something specific to one region while another one is selected, you’ll sometimes get empty results, which shows you won’t get relevant results if you don’t select the region properly.
This is probably more obvious with non-technical searches. For example, my default region is canada-en, and if I search for “instituto nacional electoral” I only get a wiki page, an international site, and some other random sites with no news; only when I change the region do I get the official page ine.mx and actual news. To me this means Kagi hides results from other regions instead of just boosting the selected region’s ones.
It’s regarding appropriate handling of user information.
I’m not sure it includes PII. Basically it’s a ticketing system.
The pointers I got are: the software must store the data securely and reliably, and it must be queryable so we can understand the updates the data has gone through.
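So the “query the updates” part basically sounds like an audit trail. A minimal sketch of the idea (Python with sqlite3, purely illustrative; the schema and field names are made up):

```python
import sqlite3
from datetime import datetime, timezone

# Made-up schema: every change to a ticket is appended to a history table,
# so you can always query what a record looked like, who changed it, and when.
conn = sqlite3.connect("tickets.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS tickets (
    id INTEGER PRIMARY KEY,
    status TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS ticket_history (
    ticket_id INTEGER NOT NULL,
    field TEXT NOT NULL,
    old_value TEXT,
    new_value TEXT,
    changed_by TEXT NOT NULL,
    changed_at TEXT NOT NULL
);
""")

def update_status(ticket_id: int, new_status: str, user: str) -> None:
    old = conn.execute("SELECT status FROM tickets WHERE id = ?", (ticket_id,)).fetchone()
    conn.execute("UPDATE tickets SET status = ? WHERE id = ?", (new_status, ticket_id))
    conn.execute(
        "INSERT INTO ticket_history VALUES (?, ?, ?, ?, ?, ?)",
        (ticket_id, "status", old[0] if old else None, new_status, user,
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
```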
It’s just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you
As if it hasn’t happened already.
IIRC most stuff can be done with vanilla JS in any modern browser.
Although I’ve been doing little front-end work, mostly for personal projects, nothing fancy nor production-ready, so someone else might have a different opinion about using jQuery.
Not OP, but I’m thinking about the example in VS Code: https://code.visualstudio.com/docs/editor/userdefinedsnippets
Some boilerplate code for libraries and frameworks I constantly use.
I’d be more interested in syncing the VS Code snippets, since they’re automatically available in a per-language file and support tab stops for autocomplete.
I want instances that block as few other users as possible so I can decide for myself what content I see.
Then you want to self-host; otherwise you’ll always be at the mercy of someone else deciding which instances they want to federate with.
Even then, you’ll still want to keep in mind that instances known for spam, bots, or shady content have been blocked for a reason.
The last time I checked, Postgres gets big because of an activity log table used for deduplication; it stores 6 months of data. The devs mentioned you can delete it up to some point (IIRC they said anything older than 3 months, but confirm first).
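In case it helps, roughly what that cleanup could look like (Python with psycopg2; the `activity` table name, the `published` column, and the 3-month cutoff are all assumptions on my side — confirm them against your actual schema and the devs’ guidance before deleting anything):

```python
import psycopg2

# ASSUMPTIONS: the deduplication table is called "activity", it has a
# "published" timestamp column, and 3 months is a safe cutoff.
# Verify all of this against your Lemmy version's schema first.
conn = psycopg2.connect("dbname=lemmy user=lemmy host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("DELETE FROM activity WHERE published < now() - interval '3 months'")
    print(f"deleted {cur.rowcount} rows")
conn.close()
```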
As for pictrs, Lemmy caches a lot of stuff, so it copies a lot of data from other instances, even though it’s advertised that only media from your instance is stored on your server.
My solution was to disable pictrs since I don’t upload media.
Another solution I’ve heard about is asking the users of your instance to upload media to any other media hosting service; images uploaded to Lemmy are just treated as URLs, so it wouldn’t be any different.
Several from here https://github.com/awesome-selfhosted/awesome-selfhosted and here https://github.com/awesome-foss/awesome-sysadmin
The most recent one was vikunja to manage to-do’s without cluttering my calendar.
That’s exactly what `1:a:0` does: from the second input (index 1), take the audio streams and select the first one.
In this case, since the audio is the second stream of that input, `1:a:0` is the same as `1:1`.
I just tried it the other way, moving the audio from the mkv to the mp4 and it works properly.
I can probably try to put the video from the mkv into an mp4 container, since Jellyfin is going to be doing that anyway when I stream to most devices.
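For reference, this is roughly what that mapping looks like as a full command (wrapped in Python here; the file names are placeholders, and I’m assuming the mp4 has the video and the mkv has the audio — swap the inputs/maps to match your case):

```python
import subprocess

# Placeholder file names.
# -map 0:v:0 -> from input 0 (the mp4), take the first video stream
# -map 1:a:0 -> from input 1 (the mkv), take the first audio stream
#               (equivalent to -map 1:1 here, since the audio is that input's second stream)
# -c copy    -> stream copy only, no re-encoding
subprocess.run(
    [
        "ffmpeg",
        "-i", "video.mp4",
        "-i", "audio.mkv",
        "-map", "0:v:0",
        "-map", "1:a:0",
        "-c", "copy",
        "output.mp4",
    ],
    check=True,
)
```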
Kind of, but just because I deployed it xP
Thanks for all the information and advice!
So in theory basic auth is enough when sent through HTTPS, right?
If this is the case, then the user would need to handle their password and my API can keep storing just the hash.
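Something like this is what I have in mind for our side (a sketch only; the hashing here is simplified — a real implementation would use bcrypt/argon2 — and the user/password values are made up):

```python
import base64
import hashlib
import hmac

# Illustrative only: use a proper password hash (bcrypt/argon2) in practice,
# and this assumes TLS terminates in front of the API so credentials are
# never sent in the clear.
STORED_HASHES = {
    "alice": hashlib.sha256(b"correct horse battery staple").hexdigest(),
}

def check_basic_auth(authorization_header: str) -> bool:
    """Validate an `Authorization: Basic <base64(user:password)>` header
    against stored password hashes, never storing the plain password."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme.lower() != "basic":
        return False
    try:
        user, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:
        return False
    expected = STORED_HASHES.get(user)
    if expected is None:
        return False
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(candidate, expected)
```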
In another comment JWT was suggested, maybe this could also be a solution?
I’m thinking the user could worry about generating and signing the token, and we’d only store the public key, which requires less strictness when handling it. That way we can validate that the token has been signed by who we expect, and the user worries about the private key.
Oh, I’ve only used JWTs with OIDC, so I didn’t think about using them directly.
It could be a good solution since the user can generate them on their own and we can validate them with the correct information (secret or public key).
About the issue of long-lived or non-expiring JWTs, maybe a custom restriction where valid tokens with lifespans of more than X minutes are rejected?
Yeah, the token could be a valid one but we could say the payload is invalid for our API.
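Roughly what I’m picturing, as a sketch with PyJWT (the RS256 assumption and the 15-minute cap are mine; the public key would be whatever the user registered with us):

```python
import jwt  # PyJWT

MAX_LIFETIME_SECONDS = 15 * 60  # assumed policy: reject tokens that live longer than 15 minutes

def validate_token(token: str, public_key_pem: str) -> dict:
    """Verify the signature with the user's public key, then apply our own
    policy on top: the token must carry iat/exp and must not live too long."""
    claims = jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],                 # pin the algorithm we expect
        options={"require": ["exp", "iat"]},  # both claims must be present
    )
    # The signature can be perfectly valid and the payload still be
    # unacceptable for our API (e.g. a token that lives for a year).
    if claims["exp"] - claims["iat"] > MAX_LIFETIME_SECONDS:
        raise ValueError("token lifetime exceeds what our API accepts")
    return claims
```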
Completely agree with you, I made that comment, but most people agreed with the client '-.-
Oh boy I feel this one.
My API is meant for scripting (i.e. it’s for developers and the errors are for developers), but the UI team uses it and they just directly display the error from their HTTP request to non-technical people, who might also not know all the parameters actually needed for the request.
And even when the error is in fact in my code, and I send all the data I need to debug and replicate it, the users can’t tell me, because the UI truncates the response, so the user only sees something like `Error in pe1uca's API: {"error":"bad request","message":"Your request has an error, please check th... (truncated)`. So the message gets truncated and the link to the documentation is also never shown .-.