SpiderUnderUrBed

joined 1 month ago
[–] [email protected] 2 points 1 day ago

I solved the issue: the Jellyfin pod was, for some reason, connecting to the wrong endpoint for the internal kube-dns service. I fixed that, and also made it use the internal pod's FQDN, and now it works.

 
kubectl run -it --rm network-tools \
  --image=nicolaka/netshoot \
  --restart=Never \
  -- /bin/bash
If you don't see a command prompt, try pressing enter.
network-tools:~# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
network-tools:~# 
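
For reference, the in-cluster FQDN form I switched Jellyfin to looks like this (a sketch: the service name and default namespace are assumptions, and 8096 is just Jellyfin's usual port):

# service-name.namespace.svc.cluster.local instead of a bare name
curl -I http://jellyfin.default.svc.cluster.local:8096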

DNS does not work in my k8s cluster, and I don't know how to debug it. This is all that shows up in my CoreDNS/kube-dns logs:

[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server

This probably isn't enough, but what more can I do to debug this? I don't think it's anything to do with my CNI (I am using Calico). Using 1.1.1.1, or any other nameserver, works, but internal-to-external DNS mapping does not: DNS cannot resolve anything outside the cluster. Maybe not inside either, according to this:

spiderunderurbed@raspberrypi:~/k8s $ kubectl run -it --rm network-tools-2   --image=nicolaka/netshoot   --restart=Never   -- /bin/bash
If you don't see a command prompt, try pressing enter.
network-tools-2:~# ping traefik.com
ping: traefik.com: Try again
network-tools-2:~# 

The kube-dns/CoreDNS services do not work, but the logs, as I sent above, don't show me much.
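
For reference, these are the kinds of checks I can run from the netshoot pod if anyone wants more output (10.43.0.10 is the kube-dns ClusterIP from resolv.conf above; the k8s-app=kube-dns label is the k3s default):

# Ask CoreDNS directly for an internal and an external name
nslookup kubernetes.default.svc.cluster.local 10.43.0.10
nslookup traefik.com 10.43.0.10
# Confirm the kube-dns Service actually has CoreDNS endpoints behind it
kubectl -n kube-system get endpoints kube-dns
# Tail CoreDNS while the queries above run
kubectl -n kube-system logs -l k8s-app=kube-dns -f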

[–] [email protected] 1 points 2 days ago (2 children)

It does not work; as long as it goes to a Cloudflare domain, there is an i/o timeout because of some DNS issue. Any other suggestions?

 

I find virt-manager hard to use and not easily configurable. XML is the easiest, but I don't always want to configure my VMs on the command line or in XML directly. Is there any graphical alternative to virt-manager that uses all or part of the same stack?

2
submitted 2 days ago* (last edited 2 days ago) by [email protected] to c/nix
 

My QEMU hook script is here: https://pastebin.com/30Bh23EV and this is the relevant Nix configuration:

  DVfio.configuration = {
    # Ensure the marker file /tmp/enable-vfio-switch exists (0644, owned by my user)
    systemd.tmpfiles.rules = [
      "f /tmp/enable-vfio-switch 0644 spiderunderurbed users -"
    ];
    # Force any other definition of KWIN_DRM_DEVICES to empty so the
    # dynamic export below is the one that wins
    environment.variables = {
      KWIN_DRM_DEVICES = lib.mkForce "";
    };
    # At session init, set KWIN_DRM_DEVICES to whatever the vfio script prints
    environment.extraInit = ''
      export KWIN_DRM_DEVICES=$(${vfio}/bin/vfio)
    '';
  };

So there is an issue with my configuration. You don't really need to understand Nix; just look at my QEMU hook script (it's plain sh), and the snippet above should be self-explanatory. The issue is that my NVIDIA drivers are still being used, despite setting KWIN_DRM_DEVICES to card0, so the libvirtd logs look something like this: https://pastebin.com/TaKrsY9S If setting KWIN_DRM_DEVICES to my GPU card does not work, I don't know what does, and I could use help.
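
In case someone wants to double-check my claim that the NVIDIA driver is still in use, these are the host-side commands I check with (card1 is an example node; the right one may differ):

# Which kernel driver currently owns the NVIDIA GPU (vendor id 10de)?
lspci -nnk -d 10de:
# Map DRM nodes (card0, card1, ...) back to their PCI devices
ls -l /dev/dri/by-path/
# Is anything (e.g. KWin) still holding the node?
sudo fuser -v /dev/dri/card1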

 
[spiderunderurbed@daspidercave:~]$ distrobox enter debian
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "tmpfs" to rootfs at "/sys/fs/selinux": create mountpoint for /sys/fs/selinux mount: mkdirat /var/lib/docker/overlay2/21421daf7f99a368b01031a78a899d0a459f341e9e942698981d2499a9aa042c/merged/sys/fs/selinux: operation not permitted: unknown
Error: failed to start containers: debian
[ble: exit 1]

How do I fix this? The container was created normally; I don't know what more information to add.
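
Happy to run whatever would help; these are the only extra data points I can think to gather (container name matches the error above):

# Does distrobox still track the container?
distrobox list
# Any per-container security options (e.g. SELinux labels)?
docker inspect debian --format '{{.HostConfig.SecurityOpt}}'
# Which security modules does the daemon itself report?
docker info | grep -iA2 'Security Options'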

[–] [email protected] 1 points 3 days ago (2 children)

That's weird, because I clearly have free space:

spiderunderurbed@raspberrypi:~ $ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/mmcblk0p2  235G  184G   39G  83% /
spiderunderurbed@raspberrypi:~ $ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/mmcblk0p2  235G  184G   39G  83% /home
spiderunderurbed@raspberrypi:~ $ 

Any ideas for things I can try to fix/debug this?
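
One thing I can check next, in case it is inodes rather than bytes that ran out (df -h won't show that), plus what the kubelet itself thinks is in use (node name taken from my prompt):

# Same filesystem, counting inodes instead of bytes
df -i /
# Kubelet's view of what is allocated on this node
kubectl describe node raspberrypi | grep -A6 'Allocated resources'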

 

My cluster has been showing my Raspberry Pi node as "Ready", but according to the node's description, the last event was "NodeNotReady". All the debugging guides say to look for pressure (disk, PID, and so on), but there is no pressure and no loss of network. Here are the logs and status of my Pi: https://pastebin.com/UULz6Hcy My pods are stuck in Unknown (except Jellyfin, which is waiting for another node to come up): https://pastebin.com/vw2masAC And a description of one of my pods, if that helps: https://pastebin.com/s5W03s0E

Also, I already tried re-installing k3s.
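
For completeness, this is where I have been pulling logs from (k3s runs the kubelet inside the k3s unit; on a pure agent node the unit is k3s-agent instead):

# Heartbeat/lease chatter from the embedded kubelet over the last hour
sudo journalctl -u k3s --since "-1h" | grep -iE 'notready|lease|heartbeat'
# Raw node conditions with their last-heartbeat timestamps
kubectl get node raspberrypi -o jsonpath='{.status.conditions}'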

 

So I need help with a split-DNS approach, or a direct fix. Normally, running my tunnel with the simplest configuration, I get this error:


Couldn't resolve SRV record &{region1.v2.argotunnel.com. 7844 1 1}: lookup region1.v2.argotunnel.com. on 10.43.0.10:53: read udp 172.16.91.156:54443->10.43.0.10:53: i/o timeout

When I tried changing the nameserver to Cloudflare to make the argotunnel domain reachable, I got this error:

2025-04-07T10:06:38Z ERR  error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: dial tcp: lookup traefik on 1.1.1.1:53: no such host" connIndex=3 event=1 ingressRule=3 originService=http://traefik/
2025-04-07T10:06:38Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: dial tcp: lookup traefik on 1.1.1.1:53: no such host" connIndex=3 dest=https://nextcloud.spidershomelab.xyz/index.php/204 event=0 ip=198.41.200.233 type=http

Here is my deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tunnel
  labels:
    app: tunnel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tunnel
  template:
    metadata:
      labels:
        app: tunnel
    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
          - 1.1.1.1
          - 10.43.0.10
#        searches:
#          - default.svc.cluster.local
      hostNetwork: true
      containers:
        - name: tunnel
          image: cloudflare/cloudflared:latest
          args:
            - tunnel
            - --no-autoupdate
            - run
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: env
                  key: CLOUDFLARE_TUNNEL_TOKEN
      restartPolicy: Always

Anyone know why cloudflared is asking the wrong DNS server? I know I specified 1.1.1.1, but it should also have asked kube-dns, since I specified its IP as well. I do have to specify the 1.1.1.1 nameserver, or else it does not work: it cannot connect to the argotunnel domain without going through 1.1.1.1.


kube-dns   ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   12d

Also, it is the correct IP. If you can't give direct advice, I would like you to try this deployment and add a custom DNS server configured so that the queries that need it (the argotunnel domain) go to 1.1.1.1 and the rest go to kube-dns. I tried CoreDNS and other DNS servers and couldn't get anything to work. I am trying the 1.1.1.1 nameserver because otherwise I get the error mentioned above. And no, I am not running a firewall or anything else that should block this outside of k8s; it runs perfectly fine on the host.
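
From what I've read, resolv.conf nameservers are not consulted in parallel: the resolver sticks with the first entry (1.1.1.1 here) and only moves on after a timeout, not after a "no such host" answer, which would explain why kube-dns never gets asked about traefik. The split-DNS direction I'm considering instead is a sketch that assumes k3s's stock coredns-custom ConfigMap mechanism (the import-glob warning in my CoreDNS logs suggests it is present): keep the pod on cluster DNS and forward only the tunnel domain upstream.

kubectl apply -f - <<'EOF'
# Keys ending in .server land in /etc/coredns/custom/*.server, the glob
# CoreDNS warned about in my logs. This forwards only Cloudflare's
# tunnel domain to 1.1.1.1; everything else stays on cluster DNS.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  argotunnel.server: |
    argotunnel.com:53 {
        forward . 1.1.1.1
    }
EOF

With that in place, the dnsPolicy: None / dual-nameserver block above could go, though with hostNetwork: true the pod would need dnsPolicy: ClusterFirstWithHostNet to keep resolving through cluster DNS.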

 
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-first-prefix
  namespace: default
spec:
#  replacePathRegex:
#    regex: "^/[^/]+(.*)"
#    replacement: "$1"
  stripPrefix:
    prefixes:
      #- "/dashboard"
      #- "/api"
      - "/gitea"
      - "/wordpress"
      - "/vaultwarden"
      - "/pdns"
      - "/glance"
      - "/immich"

So I have an issue. Whenever I accessed my services via 192.168.1.22/wordpress, for example, the /wordpress was forwarded to the actual WordPress backend, leading to a page not found. When I strip the initial prefix, I can access the base page, but when, say, WordPress wants CSS or other assets, it looks at 192.168.1.22/assets, which won't work. So basically, I need a way to sort of emulate the URL paths, so real requests aren't sent to places that don't exist and resources aren't fetched the wrong way. I know siteurl exists for WP, but I want a catch-all solution that helps my other services too.
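
For context, this is roughly how the middleware is attached (a sketch for one service; the route, service name, and port are illustrative). As I understand it, Traefik's stripPrefix also sets the X-Forwarded-Prefix header to whatever it removed, so apps that honour that header can rebuild correct asset URLs on their own; the problem is the apps that don't.

kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: wordpress
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    # Everything under /wordpress goes through the prefix-stripping middleware
    - match: PathPrefix(`/wordpress`)
      kind: Rule
      middlewares:
        - name: strip-first-prefix
      services:
        - name: wordpress
          port: 80
EOF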