moonpiedumplings

joined 2 years ago
[–] moonpiedumplings 1 points 4 days ago (1 children)

I don't think so, no. You'll have to do those yourself.

[–] moonpiedumplings 1 points 4 days ago* (last edited 4 days ago) (3 children)

Which means my distro-morphing idea should work in theory with OpenStack

I also don't recommend doing a manual install, though, as it's extremely complex compared to automated deployment solutions like kolla-ansible (OpenStack in Docker containers), openstack-ansible (host OS/LXC containers), or openstack-helm/genestack/atmosphere (OpenStack on Kubernetes). They make the install much simpler and less time consuming, while still being intensely configurable.

[–] moonpiedumplings 2 points 4 days ago (5 children)

Personally, I think Proxmox is somewhat unsecure too.

Proxmox is unique among these projects in that it's much more hacky, and much of the stack is custom rather than standard. For example, for networking they maintain ifupdown2, a rewrite of Debian's older ifupdown network configuration tooling, whereas similar projects like OpenStack or Incus use either standard Linux kernel networking or a project called Open vSwitch.

I think Proxmox is definitely secure enough, but I don't know if I would really trust it for higher-value usecases, due to some of their stack being custom rather than standard and maintained by the wider community.

If I end up wanting to run Proxmox, I’ll install Debian, distro-morph it to Kicksecure

If you're interested in deploying a hypervisor on top of an existing operating system, I recommend looking into Incus or OpenStack. They have packages/deployments that can be done on Debian or Red Hat distros, and I would argue that they are designed in a more secure manner (since they include multi-tenancy) than Proxmox. In addition, they use standard tooling for networking; for example, both can use Linux bridges (in-kernel networking) for networking operations.

I would trust Openstack the most when it comes to security, because it is designed to be used as a public cloud, like having your own AWS, and it is deployed with components publicly accessible in the real world.

[–] moonpiedumplings 3 points 4 days ago* (last edited 4 days ago)

Again, this distracts from the original argument by making some kind of tertiary argument unrelated to it: is ssh secure to expose to the internet?

You said no. That is the argument being contested.

[–] moonpiedumplings 3 points 4 days ago (2 children)

This is moving the goalposts. You went from "ssh is not fine to expose" to "VPNs add security". While the second is true, it's not what was being argued.

Never expose your SSH port on the public web,

Linux was designed as a multi user system. My college, Cal State Northridge, has an ssh server you can connect to, and put your site up. Many colleges continue to have a similar setup, and by putting stuff in your homedir you can have a website at no cost.

There are plenty of usecases which involve exposing ssh to the public internet.

And when it comes to raw vulnerabilities, ssh has had vastly fewer than something like Apache httpd, which powers WordPress sites everywhere but has had many path traversal and RCE vulns over the years.
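As an aside on that vuln class: path traversal comes down to a server failing to check that a resolved path still lives under the document root. A minimal illustrative sketch (the `is_safe` helper is hypothetical, not anything from httpd):

```python
import os

def is_safe(docroot, requested):
    # Resolve the requested path relative to the document root.
    full = os.path.normpath(os.path.join(docroot, requested))
    # Safe only if the resolved path still lives under the docroot;
    # "../" sequences escape it and are rejected here.
    return full == docroot or full.startswith(docroot + os.sep)

# is_safe("/var/www", "index.html")     -> True
# is_safe("/var/www", "../etc/passwd")  -> False
```

Real servers also have to worry about symlinks, encodings, and case folding, which is why this bug class keeps coming back.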

[–] moonpiedumplings 4 points 4 days ago* (last edited 4 days ago) (7 children)

Firstly, Xen is considered secure by Qubes, but that's mainly the security of the hypervisor and virtualization system itself. They make a very compelling argument that escaping a Xen-based virtual machine is going to be more difficult than escaping a KVM virtual machine.

But threat model matters a lot. Qubes aims to be the most secure OS ever, for use cases like high profile journalists or other people who absolutely need security, because they will literally get killed without it.

Amazon moved to KVM because, despite the security trade-offs, it's "good enough" for their usecase, and KVM is easier to manage because it's in the Linux kernel itself, meaning you get it whenever you install Linux on a machine.

In addition to that, security is about more than just the hypervisor. You noted that Proxmox is Debian-based, and XCP-NG is CentOS or a RHEL rebuild similar to Rocky/Alma, I think. I'll get to this later.

Xen (and by extension XCP-NG) was better known for security whilst KVM (and thus Proxmox)

I did some research on this and was planning to make a blogpost, but never got around to it. I still have the draft saved, though.

| Name | Summary | Full Article | Notes |
| --- | --- | --- | --- |
| Performance Evaluation and Comparison of Hypervisors in a Multi-Cloud Environment | Compares WSL (kind of Hyper-V), VirtualBox, and VMWare Workstation | springer.com, html | Not an honest comparison, since WSL is likely using inferior drivers for filesystem access, to promote integration with the host |
| Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks | Compares Xen, KVM, and an unnamed commercial hypervisor, simply referred to as CVM | pdf | |
| Hypervisors Comparison and Their Performance Testing (2018) | Compares Hyper-V, XenServer, and vSphere | springer.com, html | |
| Performance comparison between hypervisor- and container-based virtualizations for cloud users (2017) | Compares Xen, native, and Docker. Docker and native have negligible performance differences | ieee, html | |
| Hypervisors vs. Lightweight Virtualization: A Performance Comparison (2015) | Docker vs LXC vs native vs KVM. Containers have near-identical performance; KVM is only slightly slower | ieee, html | |
| A component-based performance comparison of four hypervisors (2015) | Hyper-V vs KVM vs vSphere vs Xen | ieee, html | |
| Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal (2021) | VMWare Workstation vs KVM vs Xen | springer, html | Most rigorous and in-depth on the list. Workstation, not ESXi, is tested |

The short version is: it depends, and they can fluctuate slightly on certain tasks, but they are mostly the same in performance.

default PROXMOX and XCP-NG installations.

What do you mean by hardening? Are you talking about hardening the management operating system (Proxmox's Debian, or XCP-NG's RHEL-like base), or the hypervisor itself?

I agree with the other poster about CIS hardening and generally hardening the base operating system used. But I will note that XCP-NG is designed more as an "appliance" that you're not really supposed to touch. I wouldn't be surprised if it's immutable nowadays.

For the hypervisor itself, it depends on how secure you want things, but I've heard that at Microsoft Azure datacenters, they disable hyperthreading because it becomes a security risk. In fact, several speculative-execution vulnerabilities in the Spectre/Meltdown family (like L1TF and MDS) can be mitigated by disabling hyperthreading. Of course, there are other ways to mitigate those vulnerabilities, but by disabling hyperthreading you can eliminate an entire class of them, at the cost of performance.

[–] moonpiedumplings 2 points 5 days ago

Their license is not a free software/content license, as it has a non-commercial clause.

I'm frustrated with non-commercial as a clause because it feels difficult to define. Even though selling the content is pretty clear cut, there are so many ways to reuse content that indirectly make money, in a society where everything is business. If I use this content on my resume and then that gets me a job, was it a commercial usecase?

[–] moonpiedumplings 2 points 5 days ago

the licence is still in the spirit of open source

That's the problem. The license is only good in spirit, and simply doesn't work in practice.

For example, a corporation could run a subsidiary business which doesn't make enough money to violate the license, and which then rents use of the software to the big corporation. Google used to use a similar scheme to shift money around and essentially evade taxes.

Although, in a legal system where money is a win button, you're not really going to win even if they just decide to violate the license outright.

Anyway, if you don't want big corporations to use it, just use the AGPL.

Google basically bans use of the AGPL internally — you can't even install AGPL apps!

[–] moonpiedumplings 1 points 6 days ago

I just did a quick test with quarto, which uses pandoc markdown and pandoc for conversions, and it looks like pandoc doesn't recognize `#nospace` as a header (although this could be a quarto-specific thing).

A quick look at the Python library OP is using suggests that that library, rather than pandoc, is what they're using to convert to HTML.
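For reference, this matches the CommonMark rule that pandoc's markdown largely follows: an ATX heading needs one to six `#` characters followed by a space, tab, or end of line. A rough sketch of that rule (my own regex, not pandoc's actual parser):

```python
import re

# ATX heading: up to 3 leading spaces, 1-6 '#', then whitespace or
# end of line. "#nospace" fails this and stays plain text.
ATX_HEADING = re.compile(r"^ {0,3}#{1,6}(?:[ \t]+\S.*)?[ \t]*$")

def is_atx_heading(line):
    return ATX_HEADING.match(line) is not None

# is_atx_heading("# heading")  -> True
# is_atx_heading("#nospace")   -> False
```

Markdown dialects differ here: some older parsers do accept `#nospace` as a heading, which would explain the behavior OP is seeing.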

[–] moonpiedumplings 2 points 1 week ago* (last edited 1 week ago)

I don't know how to retire a car but my dad has guided me through replacing a few bits of the engine mount, so does that count?

[–] moonpiedumplings 3 points 1 week ago* (last edited 1 week ago)

Now, I don't write code. So I can't really tell you if this is the truth or not — but:

I've heard from software developers on the internet that OpenCL is much more difficult and less accessible to write than CUDA code. CUDA is easier to write, and thus gets picked up and used by more developers.

In addition to that, someone in this thread mentions CUDA "sometimes" having better performance, but I don't think it's only sometimes. I think that due to the existence of the tensor cores (which are really good at neural nets and matrix multiplication), CUDA has vastly better performance when taking advantage of those hardware features.

Tensor cores are not Nvidia-specific, but Nvidia is the furthest ahead: they have the most in their GPUs, and, probably most importantly, CUDA only supports Nvidia, and therefore, by extension, their tensor cores.

There are alternative projects, like how leela chess zero mentions tensorflow for google's Tensor Processing Units, but those aren't anywhere near as popular due to performance and software support.

[–] moonpiedumplings 4 points 1 week ago* (last edited 1 week ago) (1 children)

AFIK it’s only NVIDIA that allows containers shared access to a GPU on the host.

This cannot be right. I'm pretty sure it's possible to run OpenCL applications in containers that share a GPU.

I should test this if I have time. My plan would be to use a distrobox container, since that shares the GPU by default, and run something like lc0 to see if OpenCL acceleration works.

Now where is my remindme bot? (I won't have time).

 

See title

 

See title

 

I find this hilarious. Is this an easter egg? When shaking my mouse cursor, I can get it to take up the whole screen's height.

This is KDE Plasma 6.

 


27
Introducing Incus 6.7 (www.youtube.com)
submitted 4 months ago by moonpiedumplings to c/linux
 

Incus is a virtual machine platform similar to Proxmox, but with some big upsides, like being packaged in Debian and Ubuntu, and having more features.

https://github.com/lxc/incus

Incus was forked from LXD after Canonical implemented a Contributor License Agreement, allowing them to distribute LXD as proprietary software.

This youtuber, Zabbly, is the primary developer of Incus, and they livestream lots of their work on youtube.

11
Cuttle (en.m.wikipedia.org)
 

This card game looks really good. There also seems to be a big, open source server: https://github.com/cuttle-cards/cuttle

 

Source: https://0x2121.com/7/Lost_in_Translation/

Alt Text (for searchability): A three-part comic, drawn in a simple style. In the first, leftmost panel, one character yells at another: "@+_$^P&%!". In the second panel, they continue yelling, with their hands in an exasperated position: "$#*@F% $$#!". In the third panel, the character who was yelling has their hands on their head in frustration, and the previously silent character responds: "Sorry, I don't speak Perl".

Also relevant: 93% of paint splatters are valid perl programs

 

https://security-tracker.debian.org/tracker/CVE-2024-47176, archive

As of 10/1/24, 3:52 UTC, Trixie/Debian testing does not have a fix for the severe cupsd security vulnerability that was recently announced, despite Debian stable and unstable having one.

Debian Testing is intended for testing, and not really for production usage.

https://tracker.debian.org/pkg/cups-filters, archive

So the way Debian Unstable/Testing works is that packages go into unstable/ for a bit, and then are migrated into testing/trixie.

> Issues preventing migration:
>
> ∙ Too young, only 3 of 5 days old

Basically, security vulnerabilities are not really a priority in testing, and everything waits for a bit before it updates.
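The "too young" gate above can be sketched roughly like this (simplified; the real britney2 migration software also checks RC bugs, dependency installability, and more, but these per-urgency day counts are the standard defaults):

```python
# Minimum days a package must age in unstable before it can
# migrate to testing, keyed by upload urgency (Debian defaults).
AGE_REQUIREMENTS = {"low": 10, "medium": 5, "high": 2}

def can_migrate(age_in_days, urgency="medium"):
    # A security fix uploaded at default urgency still waits out
    # its full aging period before reaching testing.
    return age_in_days >= AGE_REQUIREMENTS[urgency]

# can_migrate(3)  -> False  ("Too young, only 3 of 5 days old")
# can_migrate(5)  -> True
```

This is why stable and unstable can both have the fix while testing sits in the gap: security uploads land in unstable immediately, and stable gets them through the separate security archive.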

I recently saw some people recommending Trixie for a "debian but not as unstable as sid and newer packages than stable", which is a pretty bad idea. Trixie/testing is not really intended for production use.

If you want newer, but still stable packages from the same repositories, then I recommend (not an exhaustive list, of course):

  • Opensuse Leap (Tumbleweed works too but secure boot was borked when I used it)
  • Fedora

If you are willing to mix and match sources for packages:

  • Flatpaks
  • distrobox — run other distros in docker/podman containers and use apps through those
  • Nix

These can safely get you newer packages on a more stable distro.

 

cross-posted from: https://programming.dev/post/18069168

> I couldn't get any of the OS images to load on any of the browsers I tested, but they loaded for other people I tested it with. I think I'm just unlucky.
>
> Linux emulation isn't too polished.

 

