this post was submitted on 23 Nov 2023
100 points (98.1% liked)
Fediverse
27910 readers
3 users here now
A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).
If you want help moderating your own community, head over to [email protected]!
Rules
- Posts must be on topic.
- Be respectful of others.
- Cite the sources used for graphs and other statistics.
- Follow the general Lemmy.world rules.
Learn more at these websites: Join The Fediverse Wiki, Fediverse.info, Wikipedia Page, The Federation Info (Stats), FediDB (Stats), Sub Rehab (Reddit Migration), Search Lemmy
founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
You don't.
Nothing in the spec says that a client needs to process every message received in an actor's inbox. That doesn't mean a client should support only one specific type of activity, and the same goes for servers.
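Concretely, an inbox consumer is free to skip anything it doesn't understand. A minimal sketch (the type whitelist and the sample inbox are invented for illustration, not from any spec):

```python
# Hypothetical sketch: a client draining an ActivityPub actor inbox and
# simply skipping activity types it doesn't implement.

SUPPORTED_TYPES = {"Create", "Like", "Follow"}

def process_inbox(activities):
    handled, skipped = [], []
    for activity in activities:
        if activity.get("type") in SUPPORTED_TYPES:
            handled.append(activity)
        else:
            skipped.append(activity)  # unknown types are ignored, not errors
    return handled, skipped

inbox = [
    {"type": "Create", "object": {"type": "Note", "content": "hi"}},
    {"type": "Question", "object": {}},  # e.g. a poll this client doesn't render
]
handled, skipped = process_inbox(inbox)
```

The point is that skipping is a normal outcome, not a failure mode.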
Maybe I don't understand your position then.
The Fediverse doesn't make any claims about SSO or shared user accounts between server types. Servers aren't required to interoperate with servers of other types, and clients aren't required to interoperate with multiple server types.
It's nice when servers and clients do interop between types (what I'm calling networks, for lack of a better word), but that's not really fundamental to the fediverse, and it's pretty rare. AFAICT the only requirement is that servers of the same type can interoperate with each other and that user accounts from other servers of the same type are addressable.
That is the problem. Assuming that we need different "server types" is a mistake made by Mastodon that benefitted them in the short term but screwed the developers who were looking at ActivityPub as a simple protocol for bidirectional exchange of data.
What we need is smarter clients; let the servers be completely dumb relays. Instead of thinking of a "Mastodon API" or "Pixelfed API" or "Lemmy API", we could be looking at a single browser extension that talks Activity Streams directly with the server, lets the client be responsible for signing messages, and knows when and how to present the different activity types.
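As a rough sketch of what such a smart client would do, here it composes a raw Activity Streams 2.0 payload itself instead of going through a server-specific REST API. The actor URL is a made-up example, and real delivery would also need HTTP Signatures; the digest below is only a stand-in for client-side signing:

```python
import hashlib
import json

def make_create_note(actor: str, content: str) -> dict:
    """Build a minimal Activity Streams 2.0 Create activity."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "object": {"type": "Note", "content": content},
    }

activity = make_create_note("https://example.social/users/alice", "Hello, fediverse")

# Stand-in for client-side signing: a digest over the canonical JSON bytes.
# A real client would produce an HTTP Signature with the actor's private key.
digest = hashlib.sha256(
    json.dumps(activity, sort_keys=True).encode()
).hexdigest()
```

The server's only job here would be to relay the payload; the client owns the vocabulary and the keys.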
Isn't this just the difference between an API and a protocol?
The payload of a message for one social network will differ based on the capabilities of that network type. There are API architectures that are discoverable, like HATEOAS, but that only gets you so far (and that example is based on HTTP, not ActivityPub).
I don't really see anything wrong, in the absence of a standards body, with each social network defining its own activity types, since they typically have some degree of unique capabilities anyway.
Maybe? I don't know. Is that a relevant distinction in a decentralized system where the application logic can live on either side of the network?
Because they are constrained by the "client-server" paradigm. If you spend some time working with decentralized apps that assume that data is available to any node on the network, all your "protocol" really needs to do is provide the primitives to query, pull, and push the data around. I kinda got to write about it in an old blog post.
I think it's still relevant. I mean... It's turtles all the way down, but applications on equivalent layers need to share a common API.
I don't think it's reasonable to ask volunteer instance hosts to pay for bandwidth and storage for networks they don't want to host, so mirroring all ActivityPub content on all servers doesn't seem reasonable, especially if any of the networks takes off in popularity. Imagine if every single fediverse instance of any type needed to be Twitter-scale just because some Mastodon instance took off.
I think it's correct for servers of a specific network/type to only subscribe to messages of the types they care about, as a purely practical matter. It'd be nice if there were a fediverse standard for announcing capabilities, along with standards for common capabilities and restrictions, but there is none that I'm aware of.
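To make the idea concrete, here is a purely hypothetical sketch of what such a capability announcement could look like (again, no such standard exists; the hostnames and type labels are invented): each server publishes the activity kinds it accepts, and peers filter deliveries against that.

```python
# Hypothetical capability registry: hostname -> activity kinds the server
# has announced it will accept. Everything here is illustrative.
SERVER_CAPABILITIES = {
    "example.video": {"Create/Video", "Like", "Follow"},
    "example.social": {"Create/Note", "Like", "Follow", "Announce"},
}

def should_deliver(activity_kind: str, host: str) -> bool:
    """Only deliver an activity if the destination announced support for it."""
    return activity_kind in SERVER_CAPABILITIES.get(host, set())
```

A video activity would then go to the video host but never burden the microblogging host's storage.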
In my dream world, servers are only relays. They don't store anything unless they want to keep a copy for one of their clients, like POP3.
For the same reason that ISPs don't eliminate the need for servers and server-side storage, moving all your storage to the edge is usually a bad idea. You're basically describing a serverless P2P social network, and with it come all of the pitfalls of strictly-P2P apps: mainly, searching becomes prohibitively expensive, and if your client goes offline (e.g. you need to go on an airplane or your phone runs out of battery), reliably catching up can be problematic. How would this work for PeerTube, for example? Would every client that cared about PeerTube need to keep a copy of every PeerTube video on every PeerTube server, just in case you wanted to search it? My phone would fill up instantly. Would my phone just save an address to look up the video from the original author's personal device? Not only does that sound like a security nightmare, but also RIP to the author's data usage caps if they published from their mobile device.
I think that servers are needed. IDK if we need servers to partially mirror each other like Mastodon does, but I think that hosting the content on the servers themselves is the right practical move. And given that we're more or less boxed into a federated server-client architecture, I think we're getting it as good as we're going to get, until we choose some standards body to govern how to expose capabilities.
I do think that the right approach is a discoverable API where clients can find out what capabilities a certain piece of content has, and what those capabilities mean. Just like JavaScript feature detection is far better than user-agent detection, servers could integrate with any social network that supports some minimum set of capabilities, and clients could present all supported capabilities to the user (while ignoring unsupported ones) regardless of the originating social network. But we're not there yet; we need that standard first, and major players need to agree on it.
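The feature-detection analogy can be sketched like this: the client inspects what a piece of content declares it can do and wires up only the interactions it knows how to present (the capability names here are invented for illustration):

```python
# Capability-based rendering, analogous to JavaScript feature detection:
# present the intersection of declared and known capabilities, silently
# dropping anything this client doesn't support.

KNOWN_CAPABILITIES = {"reply", "like", "boost"}

def renderable_actions(content: dict) -> list:
    declared = content.get("capabilities", [])
    return [c for c in declared if c in KNOWN_CAPABILITIES]

post = {"content": "hi", "capabilities": ["reply", "like", "quote-poll"]}
```

Here a hypothetical "quote-poll" capability from some other network is simply not rendered, instead of breaking the client.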
No, that sounds exactly like Nostr, which is a lot more practical and cheap to run than a Mastodon server and actually scales quite well.
No. You just need to move the application state to the edge. Storage itself can still be in content-addressable data servers, like IPFS, magnet links, or plain old (S)FTP servers.
When someone posts a picture on Mastodon, the picture itself is not replicated, just a link to it. Now, imagine that your "smart client" version of Mastodon (or Peertube, or Lemmy) wants to post a picture. How would it work?
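One possible answer, sketched under the "smart client + content-addressable storage" assumption from above: the client pushes the picture bytes into a content-addressed store (an in-memory dict stands in for IPFS or similar here), and the activity carries only the content address, never the bytes.

```python
import hashlib

# In-memory stand-in for a content-addressable store like IPFS.
blob_store = {}

def put_blob(data: bytes) -> str:
    """Store bytes under their own hash and return the content address."""
    address = "sha256:" + hashlib.sha256(data).hexdigest()
    blob_store[address] = data  # any node holding the bytes can serve them
    return address

picture = b"\x89PNG...fake image bytes"
activity = {
    "type": "Create",
    "object": {"type": "Image", "url": put_blob(picture)},
}
# Followers' clients later resolve the address from whichever node has the bytes.
```

Whether that resolution step is reliable enough in practice is exactly the catch-up/availability problem raised earlier in the thread.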
If by "servers" you mean "nodes in the network that are more stable and have stronger uptime/performance guarantees", I agree 100%. If by "servers" you mean "centralized nodes responsible for application logic", then I'd say you can easily be proven wrong by actual examples of distributed apps.
Looking at Nostr, I generally like the architecture, although it's very similar in broad strokes.
I like the simplification and the separation of responsibilities. I don't like using self-signing as an identification mechanism for a social network.
But crucially, it seems to have the same problem we're discussing here: different social networks built on that protocol have different message schemas and capabilities, making them incompatible.