this post was submitted on 05 Jul 2023
32 points (100.0% liked)
Stable Diffusion
4304 readers
Discuss matters related to our favourite AI Art generation technology
founded 1 year ago
The link in the post header includes the methodology used to gather the data, but I don't think it fully answers your first question. I imagine there are a few important variables and a few ways to get variation in results for otherwise identical hardware (e.g. running odd versions of things or having the wrong/no optimizations selected). I tried to mitigate that by using medians and only taking cards with 5+ samples, but it's certainly not perfect. At least things seem to trend as you'd expect.
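For what it's worth, the aggregation described there can be sketched in a few lines of pandas: group by GPU, drop cards with fewer than 5 samples, and take the median. The column names (`gpu`, `its` for iterations/second) are hypothetical placeholders, not the real schema of the benchmark dump.

```python
# Sketch of the per-GPU median aggregation with a minimum-sample filter.
# Column names ("gpu", "its") are assumptions, not the real benchmark schema.
import pandas as pd

def summarize(df: pd.DataFrame, min_samples: int = 5) -> pd.DataFrame:
    # Count samples per GPU and keep only cards with enough data points.
    counts = df.groupby("gpu")["its"].count()
    keep = counts[counts >= min_samples].index
    # Median is robust to the odd misconfigured run skewing the numbers.
    medians = (
        df[df["gpu"].isin(keep)]
        .groupby("gpu")["its"]
        .median()
        .sort_values(ascending=False)
    )
    return medians.reset_index(name="median_its")

# Toy data: the first card has 5 samples, the second only 2, so it's dropped.
df = pd.DataFrame({
    "gpu": ["RTX 4090"] * 5 + ["GTX 1060"] * 2,
    "its": [38.0, 40.0, 39.5, 41.0, 37.0, 4.1, 4.3],
})
print(summarize(df))
```

Using the median instead of the mean is what keeps a single run with missing optimizations from dragging a card's score down.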
I'm really glad vladmandic made the extension & data available. It was super tough to find a graph like this that was remotely up-to-date. Maybe I'll try filtering by some other things in the near future, like optimization method, benchmark age (e.g. eliminating stuff prior to 2023), or VRAM amount.
For your last question, I'm not sure the host bus configuration is recorded -- you can see the entirety of what's in a benchmark dataset by scrolling through the database, and I don't see it. I suspect PCIe config does matter for a card on a given system, but that its impact is likely smaller than the choice of GPU itself. I'd definitely be curious to see how it breaks down, though, as my MoBo doesn't support PCIe 4.0, for example.
I too would be interested to see how that breaks down. I'm sure that interface does have an impact on raw speed, but actually seeing what that looks like in terms of real life impact would be great to know. While I doubt there are many people who are specifically interested in running SD using an external GPU via Thunderbolt or whatever, for the sake of human knowledge it'd be great to know whether that is actually a viable approach.
I've always loved that external GPUs exist, even though I've never been in a situation where they were a realistic choice for me.
Hey, thanks again for sharing this. I spent all afternoon playing with the raw data. I just finished doing a spreadsheet of all of the ~700 entries for Linux hardware, excluding stuff like LSFW. I spent way too long trying to figure out what the hash and HIPS numbers correlate to and trying to decipher which GPUs are likely from laptops. Maybe I'll get that list looking a bit better and post it on Pastebin tomorrow. I didn't expect to see so many AMD cards on that list, or just how many high-end cards are used. The most reassuring aspect for me was seeing all of the generic kernels being used with Linux. There were several custom kernels, but most were the typical versions shipped with an LTS distro.
I'm a Windows caveman over here, but you should definitely post your Linux findings on here when you're ready! Also, if you have suggestions for how to slice this, I'd be happy to take a stab at it. This was a quick thing I did while ogling a GPU upgrade, so it's not my best work haha.
I made a post on c/Linux about this today (https://lemmy.world/post/1112621) and added a basic summary of the Linux-specific data. I'm still trying to figure everything out myself. I had to brute-force the GitHub data for a few hours in LibreOffice Calc (the FOSS Excel) to distill it down to something I could understand, but I still have a ton of questions.
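The spreadsheet filtering step is simple enough to script, too. Here's a minimal sketch of that kind of distillation: keep only records whose system string mentions Linux, then count GPUs. The field names (`system`, `device`) are assumptions for illustration, not the real schema of the benchmark data.

```python
# Sketch: filter raw benchmark records down to Linux entries and tally GPUs.
# The record fields ("system", "device") are hypothetical placeholders.
from collections import Counter

def linux_gpu_counts(records: list[dict]) -> Counter:
    # Case-insensitive match on the OS string, then count device names.
    linux = [r for r in records if "linux" in r.get("system", "").lower()]
    return Counter(r.get("device", "unknown") for r in linux)

records = [
    {"system": "Linux 6.2 generic", "device": "AMD Radeon RX 7900 XTX"},
    {"system": "Windows 11", "device": "RTX 4090"},
    {"system": "Linux 5.15 lts", "device": "AMD Radeon RX 7900 XTX"},
]
print(linux_gpu_counts(records))
```

The same pattern extends to tallying kernel versions or VRAM amounts instead of device names.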
Ultimately I've just been trying to figure out what the most powerful laptop graphics cards are and which ones have the most VRAM. So far, it looks like the RTX 3080 Ti came in some laptops and has 16 GB of VRAM. Distilling out the full spectrum of laptop options is hard. Anyone really into Linux likely also wants to avoid Nvidia as much as possible, so navigating exactly where AMD is at right now is a big deal. (The AMD stuff is open source with good support from AMD, while Nvidia is just a terrible company that does not give a rat's ass about the end user of their products.) I do not care if SD on AMD takes twice as long to do the same task as Nvidia, so long as it can still do the same task. Open source means full ownership to me, and that is more important in the long term.