Hello! It seems you have made it to our donation post.
Thank you!
We created Reddthat to build a community that can be run by the community. We did not want it to be something moderated, administered, and funded by one or two people.
Current Recurring Donators on OpenCollective:
Current Total Amazing People on OpenCollective:
Background
In one of our very first posts, titled "Welcome one and all", we talked about our short-term and long-term goals.
In the 7 days since starting, we have already federated with over 700 different instances, have 24 different communities, and have over 250 users who have contributed over 550 comments. So I think we've definitely achieved our short-term goals, and I thought it was going to take closer to 3 months to reach these kinds of numbers!
We would like to first off thank everyone for being here, subscribing to our communities and calling Reddthat home!
Donation Links (Updated 2024-08)
- Open Collective: https://opencollective.com/reddthat
- (best for recurring donations)
- Fees: Stripe 3% + $0.30
- Ko-Fi: https://ko-fi.com/reddthat
- (best for one-off donations)
- Fees: Stripe 3% + $0.30
- 5% fee on all recurring donations (unless we pay them $8/month for a 0% fee)
- Crypto:
- XMR Directly:
4286pC9NxdyTo5Y3Z9Yr9NN1PoRUE3aXnBMW5TeKuSzPHh6eYB6HTZk7W9wmAfPqJR1j684WNRkcHiP9TwhvMoBrUD9PSt3
- BTC Directly:
bc1q8md5gfdr55rn9zh3a30n6vtlktxgg5w4stvvas
- Crypto Mining Pool: Pool Info
Host: donate.reddthat.com, Port: 3333
Current Plans:
- Create our own production Lemmy builds with cherry-picked commits to help with the long load times.
- In April 2025 we are renewing our server hosting for the next 12 months. At that point in time we will evaluate if we can scale down to a more cost effective instance.
Annual Costings:
Our current costs are:
- Domain: 15 Euro (~$25 AUD)
- Server: $897.60 USD (~$1365 AUD)
- EU Server: 39 Euro (~$64 AUD)
- Wasabi Object Storage: $72 USD (~$111 AUD)
- Total: ~$1565 AUD per year (~$130.42/month)
That's our goal. That is the number we need to achieve in funding to keep us going for another year.
Cheers,
Tiff
PS. Thank you to our donators! Your names will forever be remembered by me:
Last updated on 2024-08-08
Current Recurring Gods
- Nankeru
- souperk
- Incognito (x3)
- ThiccBathtub
- Bryan
- Guest (x2)
- Ashley
- Alex
- MentallyExhausted
Once Off Heroes
- Guest(s) x13
- souperk
- MonsiuerPatEBrown
- Patrick x4
- Stimmed
- MrShankles
- RIPSync
- Alexander
- muffin
- Dave_r
- Ed
- djsaskdja
- hit_the_rails
A quick question related to the DB: is the data broken into many smaller tables, or is most of it in one or two tables? If it is all in one, we may run into performance issues as soon as the DB becomes too large, as queries run against whole tables unless they are indexed really well.
Great question! The data is broken up across a fair number of tables, so I think it's pretty much fine in that regard. That said, the database is the current bottleneck, and the Lemmy devs have said they need to fix it; as they are front-end people, they are not the best with databases.
Unfortunately we are at the mercy of the Lemmy devs at the moment, and I'm sure there are issues, as the big instances are really struggling.
But that isn't something we will really need to worry about until our DB grows 10x.
Thank you for the answer. I have dealt with scaling DBs with tons of data, so the alarm bells were ringing. DBs tend to be fine up to a point and then fall over as soon as there isn't enough RAM to cache the data and mask issues in the DB architecture. With the exponential growth of both users and content to cache, my gut tells me this will become a problem quickly unless some excellent coding is done on the back end to truncate remote-instance data quickly.
Sadly I am better at breaking systems in wonderful ways than building systems for use, so I can't be too helpful other than to voice concerns about issues I have run into before.
Hahahaha, are you me?
Yeah, the DB on the filesystem is double the size of what's held in memory, and Postgres's current active memory usage is about 200-300 MB. So I'm not worried about it too much.
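For anyone who wants to keep an eye on the RAM-caching concern raised above: Postgres reports its own buffer-cache hit ratio, so you can see when the hot data stops fitting in memory. This is a generic Postgres query, nothing Lemmy-specific:

```sql
-- Fraction of table block reads served from shared buffers rather than disk.
-- If this drops well below ~0.99, the working set likely no longer fits in RAM.
SELECT round(sum(heap_blks_hit)::numeric
             / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 4)
       AS cache_hit_ratio
FROM pg_statio_user_tables;
```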
There are big wins for sure in the database, and I'm definitely looking at the activity table when I get a few hours to myself. It keeps a log of every like from every user the server federates with, and I don't think it's efficient at all. Each row contains a huge amount of data when I'd expect it to be quite small.
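Finding the tables that dominate disk usage is straightforward; this is a generic Postgres sketch (the activity table mentioned above is the only Lemmy-specific name here):

```sql
-- List the ten largest tables in the current database,
-- with total size including indexes and TOAST storage.
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```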
But I digress. I haven't delved into the intricacies, and I'm sure it's not as simple as I make it out to be. There will be lots of QoL work in 0.18 for sure, so stay tuned. I'll be using this Announcement community for all of our big milestones and to keep everyone updated, as well as to crowd-source solutions!