Lemmy scaling, replication, it's a mess #5
RocketDerp started this conversation in General
-
Unless they implement some major SQL caching or SQL table changes rapidly, I expect defederation for performance reasons alone will make sense if users are actually able to get in and create more content. I still view the quantity of comments as 'not very high': lemmy.ml, the largest instance, reached only 1 million comments last week in the entire history of its database. It isn't scaling; it's a mess.
-
I'm at the point of giving up after looking at how poorly lemmy.ml is delivering messages.
This asklemmy@Lemmy.ml posting was the last to be delivered to my test instance, 19 hours ago. The topic is about scaling: https://lemmy.world/post/914622
All of June there were signs and symptoms of just how much trouble scaling was going to be, with federation being so chatty over HTTP: one single vote, comment, or post delivered at a time. I've raised my voice as loud as I can that this design wasn't going to scale to the way people use Reddit, with so many comments...
Bulk replication, Lemmy to Lemmy, using the front-end API is probably the only stopgap the big servers could implement in days or weeks: offload the routine federation of votes, comments, and posts to a more efficient batched process.
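To make the idea concrete, here is a minimal sketch of the pull side of such a replicator, assuming reqwest (with the blocking and json features) and serde_json. The /api/v3/post/list endpoint and its sort/limit/page/type_ parameters are Lemmy's public HTTP API; the replicator itself and the importer step are hypothetical, nothing like this exists in lemmy_server today.

```rust
// Hypothetical bulk-pull replicator: instead of receiving one ActivityPub
// activity per vote/comment/post, periodically fetch recent posts in bulk
// from a peer instance via Lemmy's front-end HTTP API and hand them to a
// local importer.
use serde_json::Value;

fn fetch_recent_posts(peer: &str, page: u32) -> Result<Value, reqwest::Error> {
    // GET /api/v3/post/list returns one page of posts; "limit" is capped by the server.
    let url = format!(
        "https://{peer}/api/v3/post/list?sort=New&limit=50&page={page}&type_=Local"
    );
    reqwest::blocking::get(url)?.json::<Value>()
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let peer = "lemmy.ml"; // example peer instance
    for page in 1..=3 {
        let body = fetch_recent_posts(peer, page)?;
        if let Some(posts) = body.get("posts").and_then(Value::as_array) {
            println!("page {page}: pulled {} posts from {peer}", posts.len());
            // A real replicator would upsert these into the local database here,
            // replacing thousands of individual ActivityPub deliveries with one
            // batched request per page.
        }
    }
    Ok(())
}
```

The same pattern would apply to /api/v3/comment/list for comments; votes are the harder part, since the front-end API exposes aggregate counts rather than individual vote events.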
Adding intelligent data caching within the lemmy_server Rust application keeps being ignored. Someone did find a drop-in cache system, ReadySet (LemmyNet/lemmy#3432); something drastic like that makes more sense than leaning on Cloudflare caching.
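For illustration, here is a std-only sketch of the kind of in-process caching meant above: keep a hot, expensive query result in memory for a few seconds instead of hitting PostgreSQL on every request. This is not code from lemmy_server; names like `load_hot_posts_from_db` are placeholders.

```rust
// Minimal TTL cache: serve repeated requests for the same key from memory
// until the entry expires, shielding the database from hot-path queries.
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

struct TtlCache<V> {
    ttl: Duration,
    entries: Mutex<HashMap<String, (Instant, V)>>,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: Mutex::new(HashMap::new()) }
    }

    // Return a cached value if it is still fresh, otherwise recompute and store it.
    fn get_or_insert_with(&self, key: &str, compute: impl FnOnce() -> V) -> V {
        let mut map = self.entries.lock().unwrap();
        if let Some((stored_at, value)) = map.get(key) {
            if stored_at.elapsed() < self.ttl {
                return value.clone();
            }
        }
        let value = compute();
        map.insert(key.to_string(), (Instant::now(), value.clone()));
        value
    }
}

fn load_hot_posts_from_db(community: &str) -> Vec<String> {
    // Placeholder for the expensive SQL query the cache is shielding.
    vec![format!("posts for {community} loaded from PostgreSQL")]
}

fn main() {
    let cache = TtlCache::new(Duration::from_secs(30));
    // First call hits the database; repeats within 30 seconds come from memory.
    for _ in 0..3 {
        let posts = cache.get_or_insert_with("asklemmy", || load_hot_posts_from_db("asklemmy"));
        println!("{posts:?}");
    }
}
```

ReadySet takes the opposite approach: it sits between the application and PostgreSQL and caches query results transparently, so lemmy_server's code would not need to change at all.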