I use the motherboard ports first and don't install an HBA unless I need it, because it draws a lot of power and prevents the CPU from reaching its deeper idle states.
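If you want to check the idle claim yourself on Linux, here's a minimal sketch that samples the kernel's cpuidle counters for core 0 (this only shows core idle states; for package C-states you'd want something like turbostat). Run it with and without the HBA installed and compare:

```python
# Minimal sketch (Linux only): sample core 0's idle-state residency over
# a 10 s window using the kernel's cpuidle sysfs interface.
import glob
import time

def snapshot():
    # Map each idle state name to its cumulative residency in microseconds.
    stats = {}
    for state in glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*"):
        with open(f"{state}/name") as f:
            name = f.read().strip()
        with open(f"{state}/time") as f:
            stats[name] = int(f.read().strip())
    return stats

before = snapshot()
time.sleep(10)  # idle interval to sample
after = snapshot()

for name in before:
    delta_ms = (after[name] - before[name]) / 1000
    print(f"{name}: {delta_ms:.0f} ms of the last 10 s")
```

If the deeper states (e.g. C6 and beyond) get noticeably less residency with the card installed, that's the effect I'm talking about.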
Yep; unless you need SAS support I would recommend onboard SATA first.
OP, I have the same HBA card as you; it gets toasty even just idling, and even hotter once you throw a load onto it. I measured ~10 W of power use just idling (no drives attached to the HBA). I can almost guarantee that onboard SATA will be more power efficient.
Even better: physically remove the HBA card until you need it.
On my mobo and CPU, I use the HBA first because there's more bandwidth there. Mobo SATA comes last, because one of my two NVMe ports runs through the chipset, and the chipset-to-CPU link is limited to roughly 4 GB/s total, so the SATA ports and that NVMe drive compete for it.
If your CPU/mobo was made in the last year or two, that interconnect is faster and it's not that big of a worry.
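To put rough numbers on my setup (all figures approximate, assuming a DMI 3.0-class uplink, one Gen3 x4 NVMe drive, and six spinning disks):

```python
# Back-of-envelope for the chipset bottleneck: everything hanging off the
# chipset shares one uplink to the CPU, so peak demands add up.
CHIPSET_LINK_GBPS = 4.0   # DMI 3.0 ~ PCIe 3.0 x4 ~ 3.9 GB/s usable
nvme_gen3 = 3.5           # one Gen3 x4 SSD at full tilt, GB/s
sata_drives = 6
sata_each = 0.25          # ~250 MB/s per spinning disk, GB/s

demand = nvme_gen3 + sata_drives * sata_each
print(f"peak demand ~{demand:.1f} GB/s vs ~{CHIPSET_LINK_GBPS:.1f} GB/s uplink")
# -> ~5.0 GB/s of demand over a ~4 GB/s link: the NVMe and SATA traffic
#    throttle each other, which is why I keep the array on CPU lanes.
```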
I've heard a few horror stories of LSI HBAs causing serious data corruption. Most of those cases were due to insufficient airflow. When it comes to data integrity, I wonder whether LSI HBAs in IT mode are better or worse at detecting errors, and whether they increase or decrease the risk of data corruption?
I've heard that overheating is more of an issue with the SAS 12Gbit HBAs, not the older 6Gbit ones like yours.
Depends on your configuration.
If you don't need to pass the whole HBA through to a VM, for example, then just go with whatever is most convenient for you.
I trust MB SATA more in terms of reliability. HBAs tend to overheat too.
However, if your RAID topology allows it, I'd spread the drives so that either the MB controller or the HBA failing completely would not bring the array down (RAID10 with one HBA, or RAID5/6 with two HBAs); see the sketch below.
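Here's a toy Python sketch of that idea, with made-up drive and controller names, checking whether a RAID10 of mirror pairs split across the two controllers survives either one dying:

```python
# Toy model: each mirror pair has one drive on motherboard SATA ("mb*")
# and one on the HBA ("hba*"). Names are illustrative only.
mirrors = [("mb0", "hba0"), ("mb1", "hba1"), ("mb2", "hba2")]

def controller(drive):
    return "hba" if drive.startswith("hba") else "mb"

def survives(dead_controller):
    # A stripe of mirrors survives if every mirror keeps >= 1 live drive.
    return all(any(controller(d) != dead_controller for d in pair)
               for pair in mirrors)

for dead in ("mb", "hba"):
    print(f"{dead} controller dies -> array {'survives' if survives(dead) else 'LOST'}")
```

With the pairs split like that, losing either controller in its entirety still leaves one live drive per mirror, so the array stays up.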
I bought a cheap HBA because then I can pass the PCIe card through to a VM and use it for ZFS.
Granted, I could probably do the same with an onboard SATA controller, but I have more faith in a dedicated controller for my array.