I wish I understood what virtually any of that means. Sounds fascinating
Bug Bounties
Does your OSS project have an issue that needs fixing? Post a bounty here!
Rules:
- Title must state bounty amount in USD, EUR, or BTC. Crypto bounties are allowed, just list rough USD/EUR/BTC equivalent amount as well. Crypto bounties must be paid out in a Top 20 market cap coin.
- OSS projects only
- Limit one post per bounty per month
- Your bounty must state who it is open to. If open to all, that can be in the body; if restricted by country, it must be in the title.
- Nothing illegal or morally questionable
- No links to bountysource due to their ongoing payment issues.
We do not vouch for any projects posting bounties here or their ability to pay, you are responsible for evaluating risks yourself.
Related sites:
boss.dev - Post and find bounties, only some countries and currencies eligible
algora.io - Post and find bounties, supports more countries than boss.dev, roughly 14% fee.
LVM is the Logical Volume Manager. In short it's kind of like a partitioner inside the OS (but with lots of cool features, like encryption, snapshots and restores, caching, and RAID).
So you add all your drives, potentially in different groups like NVMe and SSD. Then in those groups you create volumes (think partitions).
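In LVM command terms that flow looks roughly like this (a minimal sketch; the device paths and the group/volume names are made up for illustration):

pvcreate /dev/nvme0n1 /dev/sda /dev/sdb    # mark the drives as LVM physical volumes
vgcreate fast /dev/nvme0n1                 # one volume group for the NVMe
vgcreate slow /dev/sda /dev/sdb            # another group for the slower drives
lvcreate -L 100G -n root fast              # carve a 100G "partition" (logical volume) out of a group
lvcreate -l 100%FREE -n data slow          # or use all remaining space in a group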
Examples:
- For example, my laptop has one drive and one volume group, but I have a separate volume for home so I can take snapshots (which are small if things haven't changed much!) and keep my home directory when installing a new distro. I also make a separate volume for a VM to keep my machine clean.
- My server, however, has 2 NVMe drives and 12 spinning-rust drives in 3 USB enclosures. Each USB drive is set up as its own VG. USB is slow though; LVM to the rescue.
I set the rest of the space on the 1st NVMe and all the space on the 2nd NVMe to work as a cache for each of the external enclosures.
Now writes happen at NVMe speed and get written back to the spinning rust at USB speed. Reads from the USB enclosures are at NVMe speed if the data is in the read cache; otherwise they come off the drive. At this point my read-cache hit rate since creation is 82% and continuing to climb, so less than 1/5th of the reads actually went over USB. At the rate it's climbing, the current hit rate must be in the mid to high 90s.
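For reference, an lvmcache setup along those lines looks roughly like this (a sketch, not the exact commands from this server; the VG name usb1, the LV name data, and the NVMe partition path are assumptions):

vgextend usb1 /dev/nvme1n1p1                                        # the fast device must be a PV in the same VG as the slow LV
lvcreate --type cache-pool -L 200G -n usb1_cache usb1 /dev/nvme1n1p1
lvconvert --type cache --cachepool usb1/usb1_cache --cachemode writeback usb1/data   # writeback: writes land on the NVMe first
lvs -o+cache_read_hits,cache_read_misses usb1                       # check how the hit rate is doing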
A pre-seed file is basically the answers to all the questions the installer would normally ask: how to partition the drives, what mirror to use, what software to install, default user accounts, other settings, etc. Now you can run the installer on a machine and walk away until it's done.
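A heavily trimmed example of what a Debian/Ubuntu preseed file looks like (the values here are generic placeholders, not the ones from this thread):

d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i partman-auto/method string lvm
d-i passwd/username string deploy
d-i passwd/user-password password changeme
d-i passwd/user-password-again password changeme
d-i pkgsel/include string openssh-server
d-i grub-installer/bootdev string default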
I never thought such a thoughtful and detailed reply would leave me even more confused than I was to begin with. I guess I learned that possibility existed so TIL
Hard drives are divided into partitions. Once they're made they're (mostly) static; it's just a division, no other features.
LVM (Logical Volume Manager) makes its own "partitions", with hookers and blackjack. Since it's done in the OS and not on the drive, it's a LOT more flexible.
It takes disk(s) and/or partitions and combines them into a volume group (VG), then lets you carve out its own divisions, called [logical] volumes (LVs), to split up the storage. Think of each one as a "virtual hard drive" that has a TON of features.
VGs can include multiple drives and are easy to grow or shrink; you can add, remove, or replace physical drives, cache another volume, encrypt, and make snapshots and roll back (e.g. snapshot before an update, restore if the update borks something). There's just so much it can do.
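The snapshot-before-update trick, for example, looks roughly like this (a sketch; the VG/LV names vg0/root and the size are assumptions):

lvcreate -s -L 10G -n pre_update vg0/root   # copy-on-write snapshot; only changed blocks consume space
# ...run the update...
lvconvert --merge vg0/pre_update            # roll back (the merge completes when root is next activated)
# or, if everything went fine:
lvremove vg0/pre_update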
You can even set the RAID level for each volume! RAID controls how many copies are kept on different drives. RAID1 (or raid10) has 2 drives hold the data, for important things, so even if one drive fails you still have a working copy. RAID0 keeps only a single copy (no redundancy). There's also RAID5 (striping plus parity across 3+ drives rather than full copies) but it's mostly obsolete at this point, as the rebuild process is painfully slow and adds additional wear on the other drives.
Let's say you have 4x 4TB drives, for 16TB of raw space (raid0). Making it all raid1 would give you 8TB of space (since two copies are stored on different drives). But if you only need 1TB as raid1 and the rest as raid0, you end up with 14TB of space left over! That's a lot more than 8TB!
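A sketch of that mixed layout (the VG/LV names tank, important, bulk and the device paths are made up; a plain non-mirrored LV stands in for the single-copy "raid0" case):

pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate tank /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate --type raid1 -m1 -L 1T -n important tank   # 1TB usable, 2TB of raw space (mirrored)
lvcreate -l 100%FREE -n bulk tank                   # single copy, ~14TB from the remaining raw space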
There's a bazillion different options and useful things it can do. Mostly I find it useful for working with RAIDs on servers. But I've started leaving a few hundred gigs free on my laptop to create volumes as needed, such as an encrypted volume that's not unlocked on login to store passwords, keys, and ~~porn~~ tokens.
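That encrypted-volume-on-demand idea goes roughly like this (a sketch; the names vg0, secrets, and the mount point are assumptions):

lvcreate -L 5G -n secrets vg0
cryptsetup luksFormat /dev/vg0/secrets      # set a passphrase
cryptsetup open /dev/vg0/secrets secrets    # only unlocked when you ask for it, not at login
mkfs.ext4 /dev/mapper/secrets
mkdir -p /mnt/secrets && mount /dev/mapper/secrets /mnt/secrets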
So it's like if I had a bag of candy and my wife wants me to share I could create a 2nd copy that she doesn't get to see. Share what she can see and keep the rest for myself?
What, exactly, is the current LVM setup?
Does sda1p2_crypt need to be mounted and/or preserved at that point in the script?
What is the full command being run that fails?
The easiest answer might just be to remove it from the group and then add it back:
pvremove /dev/sda1p2_crypt
your code
vgextend crypt /dev/sda1p2_crypt
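Fleshed out a bit, that sequence would look something like this (a sketch; the VG name crypt and the device path are taken from the lines above, and the exact path may differ on the real system, e.g. the mapping may live under /dev/mapper/). Note that pvremove will refuse while the PV is still part of the VG, so a vgreduce comes first:

vgreduce crypt /dev/sda1p2_crypt     # pull the PV out of the VG (it must have no extents in use)
pvremove /dev/sda1p2_crypt           # wipe the PV label

# ...your code...

pvcreate /dev/sda1p2_crypt           # re-initialize it as a PV
vgextend crypt /dev/sda1p2_crypt     # and add it back to the VG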
Check the linked post for the full info and background about the issue, including results of previous attempts at fixes.
If you can write a script to successfully do this with cubic and preseed, you are welcome to claim the bounty!
Is this an EFI or BIOS boot? You might need EFI.
1 1 1 free \
$bios_boot{ } \
method{ biosgrub } \
. \
256 40 256 fat32 \
$primary{ } \
$lvmignore{ } \
method{ efi } \
format{ } \
. \
Adding something like that to the start of your expert recipe should set up a GRUB (BIOS boot) partition AND an EFI partition.
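For context, here's roughly how such a fragment sits inside the preseed itself (a sketch; the recipe name boot-efi and the final root stanza are placeholders, your existing recipe's stanzas would go there instead):

d-i partman-auto/method string lvm
d-i partman-auto/expert_recipe string \
  boot-efi :: \
    1 1 1 free \
      $bios_boot{ } \
      method{ biosgrub } \
    . \
    256 40 256 fat32 \
      $primary{ } $lvmignore{ } \
      method{ efi } format{ } \
    . \
    1000 10000 -1 ext4 \
      $lvmok{ } \
      method{ format } format{ } \
      use_filesystem{ } filesystem{ ext4 } \
      mountpoint{ / } \
    .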
so did it work?