sleepybear

joined 1 year ago
[–] [email protected] 2 points 6 months ago* (last edited 6 months ago) (1 children)

One way is to run Pi-hole’s admin interface on a different port. That’s configured in:

/etc/lighttpd/external.conf

Set:

server.port := 8000

Then your URL is http://IP:8000/admin
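You'll likely need to restart lighttpd afterwards for the new port to take effect (assuming a standard systemd-based Pi-hole install):

sudo systemctl restart lighttpd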

[–] [email protected] 1 points 6 months ago

I worked out this was odd behavior in my OPNsense firewall NAT rules.

For some reason some syncing worked (e.g. beehaw.org) but new connections failed. I'm not sure why; maybe established sessions were kept alive.

Those rules haven't changed in months and months, so I'll chalk it up to "weirdness".

[–] [email protected] 1 points 6 months ago

Yeah, I’ve tried that a couple of times too.

And I've run through all the federation troubleshooting steps in the docs.

[–] [email protected] 2 points 6 months ago (2 children)

I thought that too, and in the past when I've been offline for a day or two I've always caught up.

This time I haven't, and it's been a week or two since coming back online.

 

Hey all,

My personal home-hosted server ran out of disk space and so went offline while I was away and I didn't notice it for a week or two.

This meant that federation requests (or subscription requests) failed while the server was down, and now most of the servers I federate with are lagging. I'm only getting updates from a couple.

Is there a way to prod federated servers back to life so I get the subscription updates again? Federation itself does seem to be working: some servers federate fine, and this post went out via federation and worked.

[–] [email protected] 2 points 1 year ago

This is more complex than you'd think, because the USB spec has changed many times over the years, with updates to the connectors along with other sub-category changes to cables too. There are USB versions 1, 2, 3, and 4 (and sub-versions too), along with different types of connector: USB-A comes in regular and v3 (blue inside) variants, and USB-C is the latest. Newer specs can transfer much larger amounts of data.

Power Delivery (PD) is another sub-set of the specification, which currently allows up to 240W of power with USB4. That's a lot, enough to charge multiple laptops at once, and vastly more than the 4.5W a basic USB 3 port supplies (2.5W for USB 2). For more confusion there is also USB Power Delivery Programmable Power Supply (PPS), a further sub-set that helps devices negotiate charging speeds.

Another challenge - USB-C connectors can also support Thunderbolt, which gives it a whole other set of capabilities. This depends on both the cable and the port.

This explains that mess that is USB-C: https://www.androidauthority.com/state-of-usb-c-870996/

Key part:

The latest USB data speed protocols are split into several standards. There are legacy USB 1.0 and 2.0, USB 3.0, USB 3.1, USB 3.2, and the latest USB 4.0, all of which can be supported over USB-C. Confusing enough, but these have since been revised and updated to include various sub-standards, which have encompassed USB 3.1 Gen 1, USB 3.1 Gen 2, and USB 3.2 Gen 2, along with the more recent USB 3.2 Gen 1×1, USB 3.2 Gen 1×2, and USB 3.2 Gen 2×2 revisions. Good luck deciphering the differences without a handbook. Hopefully, the graph below helps.

You'd hope USB4 fixes it, but no. USB4 already boasts Gen 2×1, Gen 2×2, Gen 3×1, Gen 3×2, and Gen 4 variations, with data speeds ranging from 10 to 80 Gbps.

Cable lengths can also have an impact. The spec only allows for a specific length after which you need active cables, which include chips in them to strengthen the signal.

Several years ago a Google engineer started buying USB-C cables from Amazon and reviewing them in a lot of detail: https://www.amazon.com/gp/profile/amzn1.account.AFLICGQRF6BRJGH2RRD4VGMB47ZA

If you read some, you'll see there are plenty of manufacturers who just don't stick to the rules, so it's not always clear what you'll actually get. It doesn't help that some products don't play by the rules either and have custom sockets that need specific vendor cables. I've had keyboards, for example, that only work with their vendor's cable, not a generic USB-C one.

This means you need to stick to a reputable set of brands, or to the cables that came with the product. Decide whether you need to charge something serious with it (e.g. a laptop, vs. just a phone, watch, or small device), and whether you need data connectivity.

As another poster mentioned, just buy Anker: they're well made, come with a reputable warranty, and aren't actually that expensive. Don't buy the cables you find by the supermarket/CVS checkout, or from some ultra-cheap site. They might work, they might not.

Oh, and the Google engineer had his laptop fried by bad cables: https://www.engadget.com/2016-02-03-benson-leung-chromebook-pixel-usb-type-c-test.html

9
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Hey,

On my local lemmy I noticed that after trying out Tailscale I borked my federation connectivity (at least I think that was it).

I've rolled back changes, but noticed that most of my federation updates aren't flowing, and I can't even subscribe to a local community.

However, I can subscribe to a remote one, but only one of quite a few I was previously connected to.

No errors in the logs, and everything seems to be working otherwise.

Any ideas of where to search?

Activity updates from logs:

lemmy_1     | 2023-09-11T22:07:35.820559Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 41, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820579Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 42, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820602Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 43, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820622Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 44, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820653Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 45, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820674Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 46, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820696Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 47, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820717Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 48, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:07:35.820738Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 49, running: 0, retries: 0, dead: 0, complete: 0
lemmy_1     | 2023-09-11T22:09:26.317267Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 1, running: 0, retries: 14, dead: 0, complete: 35
lemmy_1     | 2023-09-11T22:09:27.658199Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 1, running: 0, retries: 14, dead: 0, complete: 36
lemmy_1     | 2023-09-11T22:09:29.009600Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 1, running: 0, retries: 14, dead: 0, complete: 37
lemmy_1     | 2023-09-11T22:09:29.899976Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 1, running: 0, retries: 14, dead: 0, complete: 38
lemmy_1     | 2023-09-11T22:10:00.253091Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 1, running: 0, retries: 14, dead: 0, complete: 39
lemmy_1     | 2023-09-11T22:10:02.139038Z  INFO send:send_lemmy_activity: activitypub_federation::activity_queue: Activity queue stats: pending: 1, running: 0, retries: 14, dead: 0, complete: 40

That pending: 1 never clears, and I'm not sure how to identify which activity it is.
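As far as I can tell the activity queue in this version lives in memory, so there's no obvious way to inspect the stuck item directly, but peeking at the most recently stored activities in Postgres can hint at what was being sent. This is only a sketch: the activity table and its columns are my guess at the schema, and the service/user/database names come from the standard compose file, so adjust for your setup.

# List the most recent outgoing/incoming activities stored by Lemmy
docker-compose exec postgres psql -U lemmy -d lemmy -c \
  "SELECT id, ap_id, local, published FROM activity ORDER BY published DESC LIMIT 20;"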

This was posted through federation, from my local instance - so, obviously bits and pieces are working just fine.

[–] [email protected] 4 points 1 year ago

Last time my dishwasher died I just had to take it apart and clean the pump underneath. Basically I took the connections apart underneath and just scrubbed them out. One tiny bit of plastic was gumming it up, causing some checks to fail. That stopped it running.

They’re surprisingly simple machines.

For Samsung I always buy the extended warranty. For our washer and dryer, Asurion must have spent a fortune keeping them running, a lot more than I ever paid to buy them. They're only 8 years old too. It's sad, but that's Samsung: they work nicely but fail frequently.

For your next one, buy Bosch. They're all good; get a base model and it'll clean well and reliably.

[–] [email protected] 2 points 1 year ago (1 children)

Given this is !privacy, and their front page advertises both "works with all your messaging apps" and "end-to-end encryption", it seems important to flag that those currently aren't mutually compatible.

It's not their fault the apps don't have e2e APIs, and it's a tough problem, but the secrecy and privacy guarantee is just "trust us to stick to our policy". And they're a start-up: tooling isn't perfect (or doesn't even exist), mistakes happen, etc.

Their self-hosting looks interesting, but then it says to use their clients too, which took the fun out of that.

[–] [email protected] 16 points 1 year ago* (last edited 1 year ago) (4 children)

“For example, if you send a message from Beeper to a friend on WhatsApp, the message is encrypted on your Beeper client, sent to the Beeper web service, which decrypts and re-encrypts the message with WhatsApp's proprietary encryption protocol.”

So, not really end to end for most common use-cases.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

curl ifconfig.io works too

2
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I've written up a post on how I added Voyager (formerly wefwef) to my lemmy docker setup.

https://lemmy.myspamtrap.com/post/11986

 

I've added a local voyager (previously wefwef) instance to my set of Lemmy docker containers.

# NGINX

Added this to the default config from Lemmy:

    # Define where we send voyager traffic
    upstream voyager {
        server "voyager:5314";
    }


    server {
        # Redirect requests on 80 to https on 443
        listen 80;
        server_name voyager.mydomain.com;
        root /nowhere;
        rewrite ^ https://$server_name$request_uri permanent;
    }

    server {
        # Redirect requests on 80 to https on 443
        listen 80;
        server_name lemmy.mydomain.com;
        root /nowhere;
        rewrite ^ https://$server_name$request_uri permanent;
    }

    # Listen on 443 for voyager and send it to our upstream
    server {
        listen 443 ssl;
        server_name voyager.mydomain.com;
     
        ssl_certificate      /certs/voyager/fullchain.pem;
        ssl_certificate_key  /certs/voyager/key.pem;
        include              /certs/options-ssl-nginx.conf;
     
        location / {
            proxy_pass http://voyager;
        }
    }

I also added an http (80) --> https (443) redirect. This accounts for browsers like Safari that don't automatically try HTTPS.

Here we're listening on port 80 for each hostname (in our case lemmy and voyager), then sending a redirect to a URL made up of the same server name and path, with https:// on the front.
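As an aside, the same redirect can be written with return, which the nginx docs recommend over rewrite for simple whole-host redirects. This is just an equivalent variant, not what's running above:

    server {
        listen 80;
        server_name voyager.mydomain.com;
        # permanent redirect to the https version of the same URL
        return 301 https://$server_name$request_uri;
    }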

For voyager on 443 we then send traffic to the defined upstream, after terminating TLS with our certs.
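One optional tweak I didn't include above: passing the usual forwarding headers through to the app, in case it wants the real client details. These are standard nginx directives, not something Voyager requires as far as I know:

        location / {
            proxy_pass http://voyager;
            # forward the original host and client address to the app
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }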

# Docker Compose

For our docker compose we add in our voyager section to spin up that container:

  voyager:
    image: ghcr.io/aeharding/voyager:latest
    hostname: voyager
    ports:
      - "5314:5314"
    restart: always
    logging: *default-logging
    environment:
      - CUSTOM_LEMMY_SERVERS=lemmy.mydomain.com
    depends_on:
      - lemmy
      - lemmy-ui
    dns:
      - 192.168.1.1

Here we're exposing port 5314, which maps to the 5314 in our nginx upstream, so nginx can proxy to it.

You can define which lemmy servers appear in the default sign-in dialog. Here we define our own, but you could make it a list of whichever other lemmy instances you want. It's comma-delimited (from memory), as in the sketch below.
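For illustration, a multi-server value would look something like this (the extra hostnames are just examples):

    environment:
      - CUSTOM_LEMMY_SERVERS=lemmy.mydomain.com,lemmy.world,lemmy.ml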

After that we can just:

docker-compose up -d

And it'll start up the new container, and nginx will proxy to it. That's it.
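To confirm it came up and is serving, you can tail the new container (service name as defined in the compose snippet above):

    docker-compose logs -f voyager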

## Notes

You'll need new certs for your voyager hostname in whichever directory you map your certs to in the proxy part of the docker-compose, in addition to specific lemmy certs.

See my previous post for more details there.

 

Just a basic guide on how I implemented Lemmy and the issues I ran into

 


1
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Intro

I might as well use this thing now I've stood it up, so here's a post for that.

Given that Lemmy is a federated platform, and my own control freak tendencies, it only seemed right to engage with Lemmy via my own federated instance. I can control it completely, and then use a single account on that instance to interact with all the other Lemmy instances out there.

I chose to run the Docker images rather than the supplied ansible, as I already have a pattern for ansible-izing things here and would rather just run images myself. If you're spinning up a fresh VM, I'd try the supplied ansible first.

So, how did I do that?

Prerequisites

A place to run docker instances

I already manage services at home through Ansible, and try to use docker + docker-compose to keep things portable and re-creatable. Further, I always edit the Ansible config locally on my MBP, then ansible-playbook it out to make changes. This keeps me safe and sane.

Public DNS

I already use Cloudflare, and manage it from OPNsense's Dynamic DNS service. You could also use ddclient locally. Either way, you need your hostname to resolve so other servers can reach you when you cross-search using federation.
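If you go the ddclient route, its Cloudflare support looks roughly like this. Treat it as a sketch only: the exact field names vary between ddclient versions, so check the docs for yours.

# /etc/ddclient.conf (illustrative)
use=web                              # discover the public IP via a web service
protocol=cloudflare
zone=myspamtrap.com
login=token                          # newer ddclient versions take an API token here
password=YOUR_CLOUDFLARE_API_TOKEN
lemmy.myspamtrap.com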

A hostname

I already own one (myspamtrap.com) that I use for anonymous emails. Hosted on Fastmail, it means I can generate arbitrary accounts for every service online, and since I own the domain I can't be taken offline. Or, it's harder, anyway. Naturally, then, I have: lemmy.myspamtrap.com

Read the docs

The lemmy docs are a good place to start, so give them a read. Then hopefully this doc will fill in any gaps based on the problems I ran into.

Certs

I use the ACME service in OPNsense to generate my certs, then copy them to the server. Validation is done via Cloudflare. I then have a cron that copies certs to each service that needs them - in this case into our $HOME/certs folder, which is then mapped into the docker container.

Cron / Scheduling

Cron runs nightly to update certs and rebuild containers with the latest versions. Using docker-compose, I just need a nightly job that runs chronic /usr/bin/docker-compose pull; chronic /usr/bin/docker-compose up -d, which is easy and keeps everything up to date. Yes, this can cause issues, but it also keeps things patched. chronic keeps things quiet unless something breaks.
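Concretely, the crontab entry is something like this (the path and schedule are mine to illustrate; chronic comes from moreutils):

# Pull fresh images and recreate containers nightly at 04:00
0 4 * * * cd /opt/lemmy && chronic /usr/bin/docker-compose pull && chronic /usr/bin/docker-compose up -d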

Dockering it up

I used the docker compose file from the docs with some tweaks:

For the proxy I'm using my own certs, which are mapped into the docker container from a certs folder:

services:
  proxy:
    image: nginx:1-alpine
    ports:
      # actual and only port facing any connection from outside
      # Note, change the left number if port 1236 is already in use on your system
      # You could use port 80 if you won't use a reverse proxy
      - "{{ lemmy_port }}:8536"
    volumes:
      - ./nginx_internal.conf:/etc/nginx/nginx.conf:ro,Z
      - ./certs:/certs:ro
    restart: always
    logging: *default-logging
    depends_on:
      - pictrs
      - lemmy-ui

You need a key.pem and fullchain.pem file in here, which Let's Encrypt will give you once you get it working. That's well outside the scope of this post, but there are plenty of docs online, and certbot works nicely.
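For what it's worth, if you'd rather drive it with certbot directly, the Cloudflare DNS plugin does the validation the same way my OPNsense setup does (the credentials path here is just an example):

# Issue a cert via DNS-01 validation against Cloudflare
sudo certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d lemmy.myspamtrap.com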

NOTE: it took me a while to realize that 1236 was going to be the main port exposed for access. For me this is actually 443, because I want HTTPS, and I'm passing that port through the firewall and forwarding it here.

For the lemmy container (which is the main backend), I have a couple of tweaks too:

  lemmy:
    image: {{ lemmy_docker_image }}
    hostname: lemmy-server
    restart: always
    logging: *default-logging
    environment:
      - RUST_LOG=info
      - LEMMY_CORS_ORIGIN=https://lemmy.myspamtrap.com:443
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs
    dns:
      - 1.1.1.1

Firstly, I'm calling it lemmy-server for clarity; it kept me saner in the nginx config.

RUST_LOG is set to info rather than WARN so I get a little more logging. DEBUG was helpful too during setup.

LEMMY_CORS_ORIGIN - this took the longest to debug, and turned out to be a combination of hostname and port changes. At this point I'm not actually sure it's necessary, but if you have different hostnames or ports between the UI and the server, you'll need to set it to the front-end hostname. I've since disabled it and I'm fine, but I'm including it here since it might be necessary.

DNS - federation was broken until I declared a DNS server. Docker needs this to resolve names in certain situations, and federation makes DNS look-ups for both incoming and outgoing requests. This bit fixed federation for me.
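A quick sanity check that resolution works from inside the container (this assumes your compose service is called lemmy and the image ships getent; adjust if not):

# Should print an address once the dns: entry is in place
docker-compose exec lemmy getent hosts lemmy.ml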

Then for our lemmy-ui we tell it to talk to lemmy-server instead:

  lemmy-ui:
    image: {{ lemmy_docker_ui_image }}
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy-server:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST={{ lemmy_domain }}
      - LEMMY_UI_HTTPS=True
    volumes:
      - ./volumes/lemmy-ui/extra_themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always
    logging: *default-logging

nginx

To use HTTPS and the certs we're putting into /certs inside the proxy container, you need to tweak the default nginx config a bit. Obviously use whatever path you're mapping into your container; I used /certs, so that's what's used here too.

...
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy-server:8536";
    }
...
    server {
        # this is the port inside docker, not the public one yet
        listen 443 ssl;
        listen 8536 ssl;
        
        ssl_certificate      /certs/fullchain.pem;
        ssl_certificate_key  /certs/key.pem;
        include              /certs/options-ssl-nginx.conf;

        # change if needed, this is facing the public web
        server_name lemmy.myspamtrap.com;
        server_tokens off;
...

First, we point to lemmy-server rather than lemmy in our upstream, since we renamed it before in our docker-compose file.

Then we're telling it to listen for SSL-only connections on our ports, and where the certs are. Finally, the server name is the external DNS name. This gets HTTPS working. The include file comes from Let's Encrypt and contains their defaults; I'm using a slightly different setup than their automation, so I include it in my ansible role.

Ansible

I have an ansible role that:

  • Creates a lemmy group
  • Creates a lemmy user
  • Creates the lemmy home dir
  • Creates the lemmy certs dir
  • Creates a backup directory
  • Installs docker and docker-compose
  • Copies and renders the docker-compose jinja template
  • Copies the nginx config file
  • Copies the LetsEncrypt nginx include
  • Copies and renders the lemmy jinja template
  • Tears down the docker services
  • Creates fresh services
  • Starts the services
  • Creates crons for nightly container updates and cert updates
  • Opens firewall ports

This isn't all necessary, but I like to segregate services into their own accounts and home directories for cleanliness. I think it makes it easier to move them between servers too, if I need to relocate where something runs. Everything the docker container needs is located in /opt/service. Security? Meh, given Docker's root requirements I'm not going to say it's that much better than just using a shared account, but I try to minimize root access/usage.

I have an ansible playbook that can just run a lemmy tag to do all the above on the chosen server:

ansible-playbook playbooks/main.yml -l myserver -t "lemmy"

Then it does everything and you get a working Lemmy. I tear down and recreate the docker services to enforce some sort of idempotency. Everything is new each time.

Set it up

From here you can browse to your Lemmy on whatever domain you created, and it'll prompt you to set up an admin user.

For external access you'll need to punch holes in your firewall to get in from the outside world. In my case that's port forwarding on the OPNsense box.

If you got everything working above, federation works out of the box too.

So, create a new basic account, and enjoy yourself.

Federation Testing

You can test federation both ways:

Inbound

Create a local community on your new instance, then try to find it remotely: go to another lemmy instance and search for !your_new_community_name@your_new_lemmy_hostname

I find tailing the logs helps to see what's going on: docker logs -f lemmy_lemmy_1 or docker logs -f lemmy_proxy_1

Outbound

From your own instance, try to search for a community on another lemmy instance. For example, to find this community, put [[email protected]](/c/[email protected]) in the search box.

It should populate. You can then see if you're connected by checking the "instances" link at the bottom of the page (or any lemmy page). Again, tailing the logs helps to see what's going on.

It's pretty exciting to see other lemmy instances appear in your instances page and see how the Fediverse connects.

Other Considerations

User Registration

I keep my Lemmy instance closed to new registrations - it's for my own use, to federate out. I'll share content like this, and allow open federation, but no sign-ups. I don't want to cause spam anywhere.

Security

Obviously you're opening ports and running something exposed to the internet. Be safe and think through what you're doing. Don't do something you're not sure of. Be careful in how you configure your firewall, and research the changes you make.

I explicitly have nightly updates turned on so I stay patched, and I take that risk. It could introduce more bugs, and it could definitely break the service, but I expect it'll cause more fixes than breaks.

Challenges

  • CORS: This took a while to work out, and I was trying various hostname and port settings. Having forwarded ports through my firewall, changed them, then changed them again through docker, I think I was screwing myself. Then I re-read the config and realized that 1236 was being used. Focusing on the cert/HTTPS setup helped me work through the issues here, but I faced a lot of "origin not allowed" type issues, and spent a while in the Firefox dev console trying to work out what was being passed through (see the curl sketch after this list).
  • Ports: for federation with other servers, including ports in the hostname seemed to cause issues. This could be me hallucinating, but it's partly why I moved to running everything off 443 by default.
  • CERTS: I couldn't find any Lemmy docs here, so I just implemented a basic nginx setup.
  • Federation / DNS: It looks like federation calls both in and out use DNS look-ups, so things were broken both ways till I enabled DNS in the lemmy container.
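On the CORS point above, a generic way to see what the server actually returns for a cross-origin request (nothing Lemmy-specific; swap in your own origin and URL):

# Dump the response headers and pull out the CORS ones
curl -s -D - -o /dev/null -H "Origin: https://lemmy.myspamtrap.com" \
  https://lemmy.myspamtrap.com/api/v3/site | grep -i '^access-control'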