this post was submitted on 13 Jun 2023
30 points (100.0% liked)

Free and Open Source Software


I am using duplicati and thinking of switching to Borg. What do you use and why?

[–] [email protected] 11 points 1 year ago (2 children)

There is no such thing as the objectively best solution. Each tool has advantages and disadvantages. And every user has different preferences and requirements.

Personally, I have been using Borg for years. And I have had to restore data several times, which has worked every time.

In addition to Borg, you can also look at Borgmatic. This wrapper extends the functionality and makes some things easier.

And if you want to use a graphical user interface, you can have a look at Vorta or Pika.
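
If it helps to see it, a bare-bones manual borg cycle looks something like this (the repo path and archive name are just placeholders, not anything from this thread):

    # create an encrypted repository once
    borg init --encryption=repokey /mnt/backup/repo

    # take a backup; borg expands {now} to a timestamp
    borg create --stats --compression zstd /mnt/backup/repo::home-{now} ~/

    # thin out old archives per a retention policy
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/repo

Borgmatic essentially wraps this create/prune/check cycle in one YAML config and a single borgmatic call.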

[–] [email protected] 2 points 1 year ago

Agreed. It should say 'best for you'. Cool, thanks. I know of Vorta, which I intended to use. Gonna read up on the other ones.

[–] [email protected] 7 points 1 year ago (1 children)

Using borg backup, just because there are some nice frontends for the GNOME ecosystem (when I am using GNOME, I love to use GNOME apps), and it has a nice CLI for scripting when using something else (I use it on servers).

[–] [email protected] 2 points 1 year ago (1 children)

And there is a nice graphical frontend for it too: Vorta

[–] [email protected] 7 points 1 year ago (1 children)

I use restic. For local backups, Timeshift.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

I don't have backups. :/

And I will regret it some day.

I use GitHub for code, so that's backed up though.

[–] [email protected] 7 points 1 year ago

There are two kinds of people.
Those who make backups and those who will.

[–] [email protected] 6 points 1 year ago (7 children)

Kopia has served me great. I back up to my local Ceph S3 storage and then keep a second clone of that on a RAID array.

Kopia has good performance, and multiple hosts can back up to it concurrently while preserving deduplication -- unlike borgbackup.
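
For anyone who wants to try it, a minimal kopia session might look roughly like this (shown with a local filesystem repo rather than Ceph S3; all paths are placeholders):

    # create and connect to a repository
    kopia repository create filesystem --path /mnt/backup/kopia

    # snapshot a directory; repeat runs are deduplicated
    kopia snapshot create ~/Documents

    # list snapshots, then restore one by its ID
    kopia snapshot list
    kopia snapshot restore <snapshot-id> /tmp/restore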

[–] [email protected] 5 points 1 year ago (4 children)

I use btrfs snapshots and btrbk

btrfs is a great filesystem and btrbk complements it easily. Switching between snapshots is also really easy if something goes wrong and you need to restore.

Archwiki docs for btrfs: https://wiki.archlinux.org/title/Btrfs#Incremental_backup_to_external_drive
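
For reference, the snapshot-and-send flow that btrbk automates looks roughly like this by hand (subvolume and mount paths are illustrative):

    # read-only snapshot of the home subvolume
    btrfs subvolume snapshot -r /home /home/.snapshots/home-2023-06-13

    # full send to an external btrfs-formatted drive
    btrfs send /home/.snapshots/home-2023-06-13 | btrfs receive /mnt/external/

    # later: incremental send against a parent snapshot both sides already have
    btrfs send -p /home/.snapshots/home-2023-06-13 /home/.snapshots/home-2023-06-20 | btrfs receive /mnt/external/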

[–] [email protected] 4 points 1 year ago

This is the way!

[–] [email protected] 4 points 1 year ago

Oh interesting! I might take a look at btrbk

[–] [email protected] 3 points 1 year ago

Thanks. Heard a lot about it. Will check it out.

[–] [email protected] 2 points 1 year ago

This is what I do. Btrfs snapshots and use send/receive with my NAS.

[–] [email protected] 4 points 1 year ago

I started using Timeshift when it was included with a distro I was using and haven't had reason to shift away from it. Have already used it once to do a full restore.

[–] [email protected] 4 points 1 year ago (1 children)
  • Btrfs for local system backups based on snapshots
  • Photoprism for photos
  • Syncthing for other media
[–] [email protected] 2 points 1 year ago (1 children)

You will reconsider calling that strategy a backup should the filesystem get corrupted for whatever reason.

I've tested my full system backup restore once with btrfs. Worked out fine.

[–] [email protected] 4 points 1 year ago

I've been using restic. It has built-in dedup & encryption and supports both local and remote storage. I'm using it to back up to a local restic-server (pointing to a USB drive) and Backblaze B2.

Restores for single files or small sets of files are easy:

    restic -r $REPO mount /mnt

Then browse through the filesystem view of your snapshots and copy files just like on any other filesystem.
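
To round that out, a basic restic cycle might look like this (the repo path is a placeholder; a B2 repo just uses -r b2:bucketname:path instead):

    # one-time repository setup
    restic -r /mnt/usb/restic-repo init

    # back up, then see what exists
    restic -r /mnt/usb/restic-repo backup ~/Documents
    restic -r /mnt/usb/restic-repo snapshots

    # apply a retention policy and drop unreferenced data
    restic -r /mnt/usb/restic-repo forget --keep-daily 7 --keep-weekly 5 --prune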

[–] [email protected] 3 points 1 year ago

Rsync is great, but if you want snapshots and file history, rsnapshot works pretty well. It's based on rsync, but for every sync it creates hard links to unchanged files and only copies changed and new files. It saves space and remains transparent to the user. FreeFileSync is also amazing.
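
The trick rsnapshot relies on can be sketched with plain rsync's --link-dest: unchanged files in the new snapshot become hard links to the previous one instead of fresh copies (dates and paths here are made up):

    # files identical to the previous snapshot are hard-linked, not copied
    rsync -a --delete --link-dest=/backup/2023-06-12 /home/ /backup/2023-06-13/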

[–] [email protected] 3 points 1 year ago

I just use rsync to back up my home folder to my NAS.
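
Assuming the NAS is reachable over SSH, that can be a one-liner (hostname and paths are placeholders):

    rsync -a ~/ nas:/backups/home/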

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

I've used borg for a while and like it a lot. I would say your best option for pure linux is borg+borgmatic/vorta just because borg is battle-tested.

If you run any other OSs and don't mind a relative newcomer, I've found kopia to be easy to recommend to my windows friends. At this point kopia has been around long enough (~4 years of actual beta) that I think it's safe to trust its integrity with personal data. It has all the important features from borg in a cross-platform solution, so it's also a viable alternative for borg on linux if you don't like borg's frontends for whatever reason.

[–] [email protected] 3 points 1 year ago (2 children)

I use NixOS, so all my system configuration is already saved in my NixOS configs, which I keep on GitHub. For dotfiles that aren't managed by NixOS, I use Syncthing to sync them between my devices, but no real backup, because I can just remake them if I need to. Things like my Neovim and VSCode configs are managed by my NixOS configs, so they're backed up as well.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

What problem are you trying to solve? Please think about that, and about your backup strategy, before you decide on any specific tools.

For example, here are several scenarios that I guard against in my backup strategy:

  • Accidentally delete a file, I want to recover it quickly (snapshots)
  • Entire drive goes kablooie, I want my system to continue running without downtime (RAID)
  • User data drive goes kablooie, I want to recover (many many options)
  • Root drive goes kablooie, I want to recover (baremetal recovery tools)
  • House burns down or computer is damaged/stolen (offsite backups)
[–] [email protected] 3 points 1 year ago

Multiple. Locally I have Timeshift doing btrfs snapshots every so often. This is mostly to roll back to a snapshot if something breaks. I've never had to use it (and probably should test it).

I use Pika backup every once in a while for a local backup to an external drive. Mostly because it's easy to restore quickly.

I have duplicacy doing backups to a cloud provider. I used to use duplicati for this, and it was fine, although I didn't like that it seems to be forever in beta. I like that duplicacy can do deduplication between backups of different machines, which most other solutions I've seen cannot. I also like its selection of cloud providers compared to Borg/Vorta and some others.

[–] [email protected] 3 points 1 year ago

Just a reminder: consider and test your restore process as well. Backups without restore testing are questionable at best. Also think about how the restore will go. Do you want to do a bare-metal restore, or will you just reinstall and then restore certain things? A lot of these backup methods will not produce a true bare-metal restore set, nor can filesystem backups be "perfect" if they are done on a running system. Databases and things like cryptfs mounts, for example, can be problematic. Nor do all tools necessarily back up the full structure of the filesystem.

Not saying these are always issues, just be aware of them.

[–] [email protected] 2 points 1 year ago

I'm currently using Timeshift to back up my desktop onto an external hard drive (the why is simply how easy it is to use), and I'll be making a copy of anything I upload to my Jellyfin server onto the external hard drive as well. I hope to eventually have a dedicated backup server, and to keep a duplicate of it at a friend's house for offsite backup too.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I just use a script on a systemd timer. Well, two scripts on two timers really: one running daily, one weekly for different data. It's just a bunch of rsync commands copying folders to an HDD in my system, and I redirect the output into a simple log file, mainly to verify that it ran at all. I am a bit paranoid about that. I can also run it manually whenever I want. Oh, and some of the data I also rsync again to an SMB cloud drive from Hetzner. I do not keep multiple versions, and I delete remote files that have been deleted locally. It's just a 1:1 copy.
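
A sketch of that setup, with the unit names and script path invented for illustration:

    # /etc/systemd/system/backup-daily.service
    [Unit]
    Description=Daily rsync backup

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup-daily.sh

    # /etc/systemd/system/backup-daily.timer
    [Unit]
    Description=Run backup-daily.service once a day

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with systemctl enable --now backup-daily.timer; the script itself is just the rsync commands with output redirected to a log file.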

[–] [email protected] 2 points 1 year ago (1 children)

I'm currently working on a disaster recovery plan using fsarchiver. I have very limited experience with it so far, but it had the features and social proof I was looking for.

I have so far used it to create offline filesystem backups of two volumes, one was LUKS encrypted (has to be manually "opened" with cryptsetup).

It can backup live filesystems which was important to me.

It's early days for my experience with this, but I'm sure others have used it and might chime in.
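
For the curious, the basic commands involved might look like this (device names and archive paths are placeholders):

    # unlock the LUKS volume first, then archive its filesystem
    cryptsetup open /dev/sdb2 backup_src
    fsarchiver savefs /mnt/backup/data.fsa /dev/mapper/backup_src

    # -A permits archiving a filesystem that is mounted read-write (live)
    fsarchiver savefs -A /mnt/backup/live-root.fsa /dev/mapper/root

    # restore; id=0 selects the first filesystem stored in the archive
    fsarchiver restfs /mnt/backup/data.fsa id=0,dest=/dev/sdc1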

[–] [email protected] 3 points 1 year ago

Just one warning: if backing up live, think about state and test your restores. I mention this because things like databases and ecryptfs will not archive properly while live. There are various ways around it, but consider whether you need really complete backups taken at a single point in time on live systems.

[–] [email protected] 2 points 1 year ago

ZFS snapshots and Borg(matic).
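
In ZFS terms, the first layer is just (pool/dataset names are placeholders):

    # instant, atomic snapshot of a dataset
    zfs snapshot tank/home@2023-06-13

    # list snapshots, or roll the dataset back to one
    zfs list -t snapshot
    zfs rollback tank/home@2023-06-13

with borg/borgmatic then shipping the actual file data off the machine.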

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (2 children)

I use my own scripts with rsync etc. I don't back up my OS itself, since I have automated installing it with scripts as well. I just back up the specific things I need with my scripts.

[–] [email protected] 2 points 1 year ago (1 children)

I am old school. I just use GNU Tar with the pax format and multiple external detachable encrypted hard drives. The reason is that it is simple: a well-known, very common tool with a standard archive format.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (2 children)

I'm curious - how much data are you backing up with that method, and how frequently are you doing your backups? It doesn't sound like it would scale well, but I'm also wondering if maybe this is perfect and I've just been overthinking it.

[–] [email protected] 3 points 1 year ago (1 children)

There is not a size limit. A lot of these other methods actually use GNU Tar behind the scenes anyway. More than that, GNU Tar has been used for decades for this purpose. Pull out any Unix book from two decades ago and you will see "tar", "cpio", and "dump/restore" as the way. The newer tool out there is pax, and in fact GNU Tar supports the new "pax" format. Moreover, GNU Tar with the pax format can back up almost the full disk structure, including hard links, ACLs, and extended attributes, which a lot of tools do not do. It is still useful to archive some things at a lower level, like your partition table and boot blocks, of course. You also have to decide what run level (such as rescue) you want to archive in, and/or what services you should stop or dump separately from the filesystem, depending on your system. Databases and things like ecryptfs take some special thought (though they do for any tool). It is also good to do test restores to verify your disaster plan.

I use tar on many systems. My workstation is about 1TB of data. Backup is about 11 hours, though I think it could be faster if I disabled compression (I currently use the standard gzip compression, which is not optimal). I think the process is CPU-bound by the compression at the moment. Going to uncompressed or using parallel gzip at level 2 is probably the fastest you can do and should really speed things up by 4x or more. I have played with this some for my wife, and her raw backup is a lot faster now. My wife uses USB 3 external drives specifically plugged into USB 3 ports (the ones with the SS symbol and the blue interior), with a USB 3 rated cable. I use 6TB bare SATA drives I insert into a hot-swap enclosure and store in storage boxes. My backup system can theoretically do incrementals too, but it has some issues since I have moved to BTRFS, so I do not use that at the moment. I always did before. I have an idea how to fix it, but I need to debug and test incrementals first.
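
To make that concrete, an invocation along those lines might look like this (paths and the date are placeholders; pigz is the parallel gzip mentioned above):

    # pax format keeps hard links, ACLs and extended attributes;
    # pigz at level 2 trades a little compression ratio for a lot of speed
    tar --format=pax --acls --xattrs -cpf - /home | pigz -2 > /mnt/backup/home-2023-06-13.tar.gz

    # restore, preserving permissions and attributes
    pigz -dc /mnt/backup/home-2023-06-13.tar.gz | tar --acls --xattrs -xpf - -C /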

How often: I back up monthly. When my incrementals were working, I used to do it weekly or whenever I got nervous. Another option for the BTRFS filesystems would be to use their native backup tools. Not sure though; I like to use generic stuff. There is a lot to be said for generic.

The big downside of tar is the mind-numbing man page. Getting the options correct takes some real thought. You also have to be comfortable with the shell and Bash scripting. The big upside is that you can customize exactly what you want.

[–] [email protected] 2 points 1 year ago (1 children)

tar dates all the way back to the 70s.

[–] [email protected] 3 points 1 year ago (1 children)

Yes, I actually did not know it went back that far, thanks. Wikipedia seems to say 1979. I know my system admin book dated 1992 talks about it, and it was common then. I think my brother used to use it in the early 1980s for his job, and maybe I did too a few times. Wikipedia says GNU Tar is newer and traces back to 1987. The formats have changed some, and there are several. The pax format is much newer; I think it was standardized in 2001, but GNU Tar would have taken time to implement it. I do not know that date.

People seem to forget that tar worked well back then and still does.

[–] [email protected] 2 points 1 year ago (1 children)

I had the chance to play with late 70s Unix for a bit a few years ago. (Hardware on loan from a museum.) VERY minimal, but still recognizable. (Well, my Unix reflexes are old - I started in the mid 80s.)

[–] [email protected] 2 points 1 year ago (1 children)

Interesting. Around then I was using a VAX. Somehow I spent most of my time on other stuff until I switched to Linux around 2000.

[–] [email protected] 2 points 1 year ago (1 children)

My first Unix was 4.3BSD on a VAX-11/750. (There was another 11/750 running VMS, but I didn't like that nearly as much.)

[–] [email protected] 2 points 1 year ago (3 children)

Yes, VMS. That was what I was using. Unix I did use for something a few times. The university had one of those mini-supercomputers that were a thing for a while.

[–] [email protected] 2 points 1 year ago

I use FreeFileSync. It's the only GUI tool I found that lets me sync folders while omitting file deletions. It lets you create batch files from the GUI, which I then execute with crontab multiple times per day.
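
If I understand the setup right, the crontab entries would look something like this (the binary and batch-file paths are placeholders; FreeFileSync executes a .ffs_batch file passed as an argument):

    # run the saved batch job every six hours
    0 */6 * * * /usr/bin/FreeFileSync /home/user/sync-job.ffs_batch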

[–] [email protected] 2 points 1 year ago (1 children)

I like Pika Backup; it's based on borg.

[–] [email protected] 2 points 1 year ago

I've tried alternatives, but I've stuck with LuckyBackup even though there have not been any updates for a while:

  1. It's rsync-based, and rsync itself is still updated.
  2. It has masses of GUI options, including various include/exclude options, pre- and post-commands, etc.
  3. It's simple: I can browse inside the backed-up files and see what is going on, or just restore one or two files.
  4. It updates cron itself.