admin

joined 1 year ago

Firstly, I wanted to apologize for the silence over the past couple of weeks. Work and life took priority over administering this instance.

Onto the good stuff.

As some of you may have noticed, we skipped 0.18.0 because of some unforeseen issues, but we're now on 0.18.1. In my (admittedly minimal) testing, the upgrade seems to have gone through smoothly! Please do let me know if you see any weirdness. (Some old themes might be borked; please update your own interface accordingly.)

I am aware that Jerboa was completely broken, hopefully it works now (I can't test it since I don't have access to Android).

Time for some stats:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           392M  1.7M  390M   1% /run
/dev/vda1        94G   21G   69G  23% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           392M  4.0K  392M   1% /run/user/1002
$ free -m
               total        used        free      shared  buff/cache   available
Mem:            3911         475         142         140        3294        3020
Swap:           2399          88        2311

Another important thing I did was upgrade the instance to a mid-tier vultr plan (we did run out of disk space on the old one). Here's the new plan:

AMD High Performance 2 vCPU, 4096 MB RAM, 100 GB NVMe, 5.00 TB Transfer

And last month's vultr stats:

This brings our yearly costs to the following (there are occasional bumps because of some vultr snapshot nonsense):

domain: $12/year
vultr: $24 (instance)  + $4.8 (backups) = $28.8/month = $345.60 / year
email: still free tier on zoho, woo!
total: $357.60 / year

Let me know about any questions/concerns, or bugs you've noticed after the upgrade.

Cheers!

[–] [email protected] 1 points 1 year ago

Done. Update post coming shortly.

[–] [email protected] 5 points 1 year ago

Gonna assume this was a test post. It worked!

[–] [email protected] 5 points 1 year ago

Oh wow, this seems like a fantastic addition. One of these days I really gotta switch from my decade-old tmux workflow to zellij!

[–] [email protected] 1 points 1 year ago (1 children)

So there are two options (assuming /r/bevy even wants to consider migrating here):

  1. We set up a dedicated Bevy community
  2. /r/bevy could potentially use the existing Rust: Game Development community we have here

I don't really have a preference either way. I do think that project-specific communities could certainly become a thing if people prefer to keep discussions laser focused.

And yeah, unfortunately I can't really allow free-for-all community creation, simply because of potential DDOS/spam attacks and things rapidly getting out of hand (which is part of the reason the instance is also application based).

[–] [email protected] 1 points 1 year ago (1 children)

I think TWiR can certainly belong in Rust: News, along with any other relevant, potentially recurring content (I'm thinking rustfmt, notable package updates, news articles, etc.).

I've noticed it's kinda challenging for folks to find communities, and it's also kinda hard to make an actual announcement of sorts. I do, however, try to "pin" posts which I believe to be particularly important for the members to know. Honestly though, I too am figuring this out along with y'all :)

[–] [email protected] 1 points 1 year ago

Their pricing and reputation are certainly good. However, it seems like they don't have simple full-server snapshot support. I think that's mandatory at this point, in case of potentially irrecoverable failures.

[–] [email protected] 9 points 1 year ago

Sigh, I tried it one more time. Same problem. Downtime this time was probably < 5 mins though so hopefully not noticeable. I am wondering whether the database is in some bad shape (no idea why/how). I guess we wait and retry with 0.18.1.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Well, the lemmy container kept running into:

lemmy  | 2023-06-24T23:28:52.716586Z  INFO lemmy_server::code_migrations: Running apub_columns_2021_02_02
lemmy  | 2023-06-24T23:28:52.760510Z  INFO lemmy_server::code_migrations: Running instance_actor_2021_09_29
lemmy  | 2023-06-24T23:28:52.763723Z  INFO lemmy_server::code_migrations: Running regenerate_public_keys_2022_07_05
lemmy  | 2023-06-24T23:28:52.801409Z  INFO lemmy_server::code_migrations: Running initialize_local_site_2022_10_10
lemmy  | 2023-06-24T23:28:52.803303Z  INFO lemmy_server::code_migrations: No Local Site found, creating it.
lemmy  | thread 'main' panicked at 'couldnt create local user: DatabaseError(UniqueViolation, "duplicate key value violates unique constraint \"local_user_person_id_key\"")', crates/db_schema/src/impls/local_user.rs:157:8

despite the fact that:

lemmy=# select id, site_id from local_site;
 id | site_id
----+---------
  1 |       1
(1 row)

So you can see that it was unconditionally trying to create a local_site and running into a DB constraint error. I further narrowed it down to this piece of code:

///
/// If a site already exists, the DB migration should generate a local_site row.
/// This will only be run for brand new sites.
async fn initialize_local_site_2022_10_10(
  pool: &DbPool,
  settings: &Settings,
) -> Result<(), LemmyError> {
  info!("Running initialize_local_site_2022_10_10");

  // Check to see if local_site exists
  if LocalSite::read(pool).await.is_ok() {
    return Ok(());
  }
  info!("No Local Site found, creating it.");

At this point I gave up because I couldn't really tell why LocalSite::read(pool).await.is_ok() was, well...not ok.
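
For anyone who wants to dig into this themselves, here's roughly how I'd inspect the relevant tables (just a sketch; it assumes the stock docker-compose setup where the database container is named postgres and both the database and the user are called lemmy, so adjust to your own setup):

$ docker exec -it postgres psql -U lemmy -d lemmy
lemmy=# select id, site_id from local_site;     -- the row LocalSite::read() apparently failed to see
lemmy=# select id, person_id from local_user;   -- the row that trips the local_user_person_id_key constraint
lemmy=# select id, name from site;              -- the site row local_site points at

If those all return sane-looking rows (the local_site one did for me, as shown above), then the read path rather than the data itself is the likely suspect.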

35
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

In case anyone was wondering: yes, we were down for about 2 hours. I apologize for the inconvenience.

We had a botched upgrade path from 0.17.4 -> 0.18.0. I spent some time debugging but eventually gave up and restored a snapshot (taken on Saturday Jun 24, 2023 @ 11:00 UTC).

We'll likely stick to 0.17.4 till I can figure out a safe path to upgrade to a bigger (and up-to-date) instance and carry over all the user data. Any help/advice welcome. Hopefully this doesn't occur again!

[–] [email protected] 2 points 1 year ago (1 children)

> Will you update to Lemmy 0.18.0 (announced today)? The new HTTP API is allegedly more lightweight.

Of course I will! I was just waiting for a few more instances to bump theirs before I update this one.

> By the way, I updated my dark theme published here for lemmy-ui 0.18.0 :)

Most excellent, I have been using your CSS with Stylebot :) You could also submit a PR to the lemmy-ui repo with your custom theme, btw, so it gets even easier for people to try.

[–] [email protected] 3 points 1 year ago (6 children)

> I think it's likely to spike again when the api goes dark

That's a fair point. I think I might bump up the specs of this instance over the weekend (will make an announcement prior).

> I haven't been back there, is any consensus forming regarding whether people will use /r/rust, [email protected], or [email protected] going forward?

I am still of the opinion that both can (and probably should) co-exist. This particular instance will always be Rust focused, federated, and home to Rust-dedicated micro-communities. Whether it continues to grow, only time will tell, I suppose.

 

Hey all,

It's been slightly over two weeks since lemmyrs started. It's been pretty fun watching the community grow!

Some instance stats for you:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            97M  1.7M   96M   2% /run
/dev/vda1        24G   15G  7.1G  68% /
tmpfs           485M     0  485M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            97M  4.0K   97M   1% /run/user/1002
$ free -mh
               total        used        free      shared  buff/cache   available
Mem:           969Mi       445Mi        77Mi       134Mi       445Mi       240Mi
Swap:          2.3Gi       571Mi       1.8Gi
lemmy=# select count(id) from local_user;
 count
-------
   294
(1 row)

We're cutting it pretty close in terms of RAM and disk usage, but the user growth rate has mostly flat-lined since r/rust came back online, so I'm not too concerned. When (if) it's time, I'll likely bump the Vultr instance up to a plan that will continue to serve us for the foreseeable future.

Previous relevant posts:

[–] [email protected] 0 points 1 year ago

I can somewhat relate. I mostly do something like this (instead of the exact dependency version):

chrono = {version = "0", features = ["serde"]}
clap = {version = "4", features = ["derive"]}
anyhow = "1"

I do, however, typically write application code rather than library code, so it's probably less critical for me. I occasionally run into dependency hell here and there, but nothing too bad so far!

[–] [email protected] 2 points 1 year ago

Right, that's certainly possible. Is it enough for us to start thinking about de-federating, maybe? Arguably this has been true since forever on any public forum. I think what's needed is better mod tools on Lemmy in general.

Personally, I think it's best for us to be patient while things settle before we consider any action. FWIW, we don't see much incoming traffic here anyway since the subreddits started coming back online after the 48 hr blackout.

If things start getting out of hand and this instance starts getting bombarded with bots/trolls, we will likely have to consider de-federation (or some other alternative).

 

Hey all,

Just thought I'd share an update. I have added a few new communities and renamed the existing communities to have slightly more consistent naming throughout this instance. Icons are primarily from Wikimedia Commons (replacements welcome as long as there are no copyright issues).

Added:

- Rust: Web Development
- Rust: Game Development
- Rust: Embedded Systems

Renamed:

- Memes to Rust: Memes
- News to Rust: News
- Support to Rust: Support
- Meta to Rust: Meta

PS: The identifiers for the renamed communities remain the same. Open to any suggestions/thoughts on this change or otherwise.

Cheers!

 

Hey everyone, thought I'd post some stats since we're one week old now!

From Vultr (instance is hosted through them):

Total applications: 116
Denied applications: 4 (one person asked to change their username, 3 others gave one-word answers to the application question)
Accepted applications: 112

docker stats (snapshot):

CONTAINER ID   NAME       CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
7f365c848236   caddy      0.19%     43.22MiB / 969.4MiB   4.46%     7.23GB / 7.65GB   631MB / 146MB     8
d9421a5d930a   lemmy-ui   0.00%     49.62MiB / 969.4MiB   5.12%     1.51GB / 3.32GB   869MB / 1.26GB    11
e8850c310380   lemmy      0.08%     52.53MiB / 969.4MiB   5.42%     5.67GB / 5.86GB   942MB / 582MB     8
7ebb13fde277   postgres   0.02%     304.2MiB / 969.4MiB   31.38%    908MB / 2.97GB    3.82GB / 14.4GB   12
9b471baacf84   pictrs     0.05%     10.32MiB / 969.4MiB   1.06%     53.5MB / 1.18GB   653MB / 360MB     14

df -h:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs            97M  1.7M   96M   2% /run
/dev/vda1        24G   12G   11G  53% /
tmpfs           485M     0  485M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            97M  4.0K   97M   1% /run/user/1002
 

Please participate in the poll. The question is whether we should migrate control, maintenance, community operations, etc. to the Nivenly (Hachyderm) foundation.

4
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I've had a few questions about trouble accessing other communities from here.

First and foremost, I ask you to be patient; lemmy is...alpha software at best, imho. There are 200+ open issues on GitHub right now and very few maintainers. No one expected things to take the turn they did within a matter of days, but here we are :)

Biggest known issues:

Websocket support is being reworked

This is out of my hands, and I can confirm that it's busted. I tested locally, and the current main lemmy backend branch is incompatible with the corresponding lemmy-ui branch. You can't even log in if you set everything up locally.

Accessing communities from other instances is flaky

The good news is that there is a shoddy workaround. Say you want to access c/gaming from beehaw.org: enter the full URL https://beehaw.org/c/gaming in your search; it won't show up. Click search ~~a couple times~~ then wait a sec, then enter just gaming, and it pops up magically.
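
In case you're curious what's actually going on: typing the full URL into search makes the backend go fetch the remote community over ActivityPub, and the follow-up search then finds the freshly fetched local copy. You can poke the same mechanism via the HTTP API directly; a rough sketch below (it assumes the v3 search endpoint with its q/type_/auth parameters, and <your-jwt> is just a placeholder for your login token):

$ curl --get "https://lemmyrs.org/api/v3/search" \
    --data-urlencode "q=https://beehaw.org/c/gaming" \
    --data-urlencode "type_=All" \
    --data-urlencode "auth=<your-jwt>"
$ curl --get "https://lemmyrs.org/api/v3/search" \
    --data-urlencode "q=gaming" \
    --data-urlencode "type_=Communities" \
    --data-urlencode "auth=<your-jwt>"

The same flakiness applies here, so you may need to repeat the first request before the community shows up in the second one.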

No high quality mobile apps

There's jerboa for Android and mlem for iOS. Both are under heavy development. Thankfully the website works fine on mobile...mostly.

PS: I'm not a lemmy maintainer, just a hobbyist self-hoster and professional Rust developer trying the fediverse as much as y'all are :)

5
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I have noticed some questions around whether lemmyrs.org will continue to be up and running for a long time. I'm hopeful that it will.

For full transparency, here's what I'm currently personally paying for:

  1. lemmyrs.org domain: $12/year, bought on Google domain
  2. lemmyrs.org vultr instance $7/month:
AMD High Performance 1 vCPU, 1024 MB RAM, 25 GB NVMe, 2.00 TB Transfer 
  3. Emails are sent using zoho mail free tier. If absolutely necessary, an additional $2/month for two users (admin and noreply) would be added.

Total cost (yearly): $96 + (some tax).

As things stand right now, ~$100/year is easily affordable, but as the number of users grows, it largely boils down to egress and storage costs. I can personally bear most of it, but if things start booming then I'll have to rethink the options.

Rest assured, we will be here for the long run!

 

I'm just one person here; if this gains traction, I'm gonna need some help with moderation and administration. Keeping this open to discuss future possibilities!

 

Welcome all Reddit refugees, rustaceans and everyone else. Let's keep it civil and check out the fediverse together!

 

Hopefully, I'm not breaking any rules by posting this here!

I thought that instead of every community being on the main lemmy.ml instance, I'd host a separate (dedicated) instance for refugee rustaceans to get the hang of the fediverse.

It's listed on join-lemmy.org/instances, and the link is lemmyrs.org. Everyone is welcome!
