this post was submitted on 12 Mar 2024
32 points (100.0% liked)
Reddthat Announcements
641 readers
Main Announcements related to Reddthat.
- For all support relating to Reddthat, please go to [email protected]
founded 1 year ago
LemmyWorld -> Reddthat
What if I told you the problems between LW -> Reddthat are due to the two servers being geographically distant, on a scale of 14,000 km?
Problem: Activities are sequential, but require external data to be validated/queried that doesn't come with the request. Server B -> A says: here is an activity. That request can be a like, a comment, or a new post. For a new post, Server A has to query the new post itself in order to show the post metadata (such as the subtitle or image).
Every one of these outbound requests that the receiving server makes blocks the processing of further activities.
Actual Problem
So every activity that results in a remote fetch delays all the activities behind it. If activities arrive at a rate of more than 1 per 0.6 s, the server physically cannot catch up and never will. As such, our decentralised solution to a problem now requires a low-latency solution. Without intervention, this will eventually force every server to exist in only one region: EU or NA or APAC (etc.) (or nothing will exist in APAC, and that will make me sad). To combat this, we need to streamline activities and how Lemmy handles them.
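The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model (function and parameter names are mine, not Lemmy's): with strictly sequential processing and a fixed ~0.6 s round trip per remote fetch, any sustained rate above 1/0.6 ≈ 1.67 activities per second makes the backlog grow without bound.

```python
# Hypothetical throughput model for sequential activity processing.
# Assumes every activity costs one full round trip (worst case).

def backlog_after(seconds: float, activities_per_sec: float, rtt: float = 0.6) -> float:
    """Activities left queued after `seconds` of steady incoming load."""
    processed_per_sec = 1.0 / rtt                 # one blocking fetch at a time
    growth = activities_per_sec - processed_per_sec
    return max(0.0, growth * seconds)             # backlog can't go negative

# At 2 activities/s against a 0.6 s RTT, the queue grows forever:
print(backlog_after(3600, 2.0))   # ~1200 activities behind after one hour
```

The point is that no amount of waiting helps: as long as the arrival rate exceeds 1/RTT, the gap only widens.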
A Possible Solution?
Batching, parallel sending, and/or making outbound connections non-blocking. Any solution here is a big enough change to touch the Lemmy application at a deep level. Whatever happens, I doubt a fix will come super fast.
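To illustrate the "parallel sending" idea (a sketch only; `deliver` and the host names are hypothetical, and this assumes nothing about Lemmy's internals): if independent deliveries are fired concurrently, one slow peer no longer blocks the rest, and five 0.6 s round trips complete in roughly 0.6 s instead of 3 s.

```python
# Minimal sketch: concurrent outbound delivery vs. sequential.
import concurrent.futures
import time

def deliver(host: str, activity: dict) -> str:
    """Stand-in for an HTTP POST with a ~0.6 s round trip."""
    time.sleep(0.6)
    return f"{host}: delivered activity {activity['id']}"

activities = [{"id": i} for i in range(5)]
hosts = ["lemmy.world"] * len(activities)

start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(deliver, hosts, activities))
elapsed = time.monotonic() - start

# Sequentially this would take ~3 s; in parallel it finishes in ~0.6 s.
print(f"{len(results)} activities in {elapsed:.1f}s")
```

The catch, and why the real change is deep, is that activities currently rely on sequential/chronological ordering, so naive parallelism would need ordering guarantees layered back on top.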
Relevant traces showing the network-related issues, for those who are interested
Trace 1:
Lemmy has to verify a user (is it valid?), so it connects to their server for information. AU -> X (0.6 s) + time for the server to respond = 2.28 s, and that is all that happened.
Trace 2:
Similar to the previous trace, but after it verified the user, it then had to make another `from_json` request to the instance itself. (No caching here?) As you can see, 0.74 s ends up being the server on the other end responding in a super fast fashion (0.14 s), but the handshake + travel time eats up the rest.
Trace 3:
Fetching external content. I've seen external servers take upwards of 10 seconds to return data, especially because whenever a fediverse link is shared, every server refreshes its own copy of the data. As such, you basically create a mini-DoS whenever you post something.
Trace 4:
Sometimes a Lemmy server takes a while to respond to comment requests.
Notes:
[1] - Metrics were gathered by applying the https://github.com/LemmyNet/lemmy/compare/main...sunaurus:lemmy:extra_logging patch and measuring the time between two logging events. These numbers may be off by 0.01 s, as I rounded them for brevity's sake.
Relevant Pictures!
How far behind we are now:
The rate at which activities are falling behind (positive) or if we are catching up (negative)
I thought Lemmy supported horizontal scaling for federation activities.
Wondering how many instances of the federation server reddthat is running and if increasing the number does anything for the issue.
Edit: It seems that 0.19 started synchronizing activities, so horizontal scaling is out of the question.
https://github.com/LemmyNet/lemmy/issues/4529
It does, but only outbound. So we have 2 containers federating outbound: the list of servers is split in 2, and each process sends out its share of the queries. This is good when you have a lot of servers federating with you.
But currently, when accepting activities from one instance, it is only one thread, processing in strictly sequential/chronological order.
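The outbound split described above can be sketched roughly like this (a guess at the shape of the logic, not Lemmy's actual code; instance names are just examples): deal the known servers round-robin across N worker processes, each of which delivers only to its own bucket.

```python
# Hypothetical sketch of partitioning outbound federation across workers.

def partition(servers: list[str], workers: int) -> list[list[str]]:
    """Deal the server list round-robin across `workers` processes."""
    buckets: list[list[str]] = [[] for _ in range(workers)]
    for i, server in enumerate(sorted(servers)):   # sort for a stable split
        buckets[i % workers].append(server)
    return buckets

servers = ["lemmy.world", "lemmy.ml", "beehaw.org", "sh.itjust.works"]
print(partition(servers, 2))
```

This helps outbound fan-out, but it does nothing for the bottleneck in the comment above: inbound activities from any single instance are still consumed by one thread, one at a time.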
Just FYI, I have suggested to the moderation team of [email protected] to consider moving the community to another instance so that you guys can still participate.
The technical issue might still be there, but I guess that can be a way to avoid overworking LW.
Thank you so much for the detailed answer!
This could be it!