Hello everyone!

My name's Benjamin, and I'm the developer of ENFUGUE, a self-hosted Stable Diffusion web UI built around an intuitive canvas interface, while still aiming to deliver the power and deep customization of the popular tab-and-slider web UIs.

I'm taking it out of alpha and into beta with the v0.2 release, which brings SDXL support while maintaining most of the 1.5 feature set by letting you configure multiple checkpoints for various diffusion plans. It also includes a ton of changes since 0.1 suggested by other users, like the ability to point ENFUGUE at the directories of other web UI installations to share models and other files.

This is not monetized software in any way; I simply built the tool I wanted to use, and wanted to share it. Thank you for taking a look!

[–] [email protected] 1 points 1 year ago (1 children)

Now that I knew where to look, I did some fixing myself! The main issue was that I had CUDA 10 and 12 installed, but no 11. Then, after going insane over that tiny difference... I landed on something I lack the knowledge to decipher: "PyInstallerImportError: Failed to load dynlib/dll 'C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-8.6.1.6\lib\nvinfer_plugin.dll'. Most likely this dynlib/dll was not found when the application was frozen."

All I can say is that the file is there.

[–] [email protected] 2 points 1 year ago (2 children)

Hey! I am able to reproduce that error by using the CUDA 12 version of TensorRT.

PyInstallerImportError: Failed to load dynlib/dll 'C:\\TensorRT-8.6.1.6\\lib\\nvinfer_plugin.dll'. Most likely this dynlib/dll was not found when the application was frozen.

Please make sure you downloaded the top file there (the CUDA 11.x build of TensorRT), not the bottom one (the CUDA 12 build).

I was able to fix it by modifying my PATH to point at the right TensorRT and then restarting the server; no machine restart was needed.
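
If it helps anyone else hitting this, here's a quick diagnostic sketch (plain Python, not part of ENFUGUE; the DLL name is just the one from the error above) that lists every copy of nvinfer_plugin.dll reachable via PATH, in the order Windows searches:

```python
# Diagnostic sketch: list every nvinfer_plugin.dll reachable via PATH,
# in Windows' search order. The first hit is the one that gets loaded,
# so if it lives in the CUDA 12 TensorRT directory, that's the mismatch.
import os

DLL_NAME = "nvinfer_plugin.dll"

for entry in os.environ.get("PATH", "").split(os.pathsep):
    candidate = os.path.join(entry, DLL_NAME)
    if os.path.isfile(candidate):
        print(candidate)
```

If the CUDA 12 TensorRT's lib directory prints first, move the CUDA 11.x one ahead of it in PATH and restart the server.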

Please let me know if that works for you :)

[–] [email protected] 1 points 1 year ago (1 children)

My request was dumb, and the UI is glitching a little, but hot damn, 12 iterations per second! Impressive.

[–] [email protected] 2 points 1 year ago (1 children)

YOU GOT IT WORKING?

You are the first person to stick it through to the end and get it running. Seriously, thank you so much for confirming that it works on a machine besides mine and monster servers in the cloud.

The configuration is obviously a pain point, but running TensorRT on Windows at all means riding the cutting edge. I'm hoping Nvidia makes it easier soon, or at least relaxes the license so I'm not running afoul of it by redistributing the required DLLs (for comparison, Nvidia publishes TensorRT binary libraries for Linux directly on pip, no license required).

It's also a pain that 11.7 is the best CUDA version for Stable Diffusion with TensorRT. I couldn't even get 11.8, 12.0, or 12.1 to work at all on Windows with TensorRT (they work fine on their own). On Linux they would work, but at best they gave me the same speed as regular GPU inference, and at worst they were slower, completely defeating the point.
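
For anyone trying to reproduce this, here's a minimal sanity check (a sketch; it assumes torch and tensorrt are importable in the same Python environment the server runs from) to confirm which versions you're actually on:

```python
# Minimal environment sanity check; assumes torch and tensorrt are
# installed in the environment the server runs from.
import torch
import tensorrt

print("Torch CUDA build:", torch.version.cuda)    # 11.7 is the known-good version here
print("TensorRT version:", tensorrt.__version__)  # e.g. 8.6.1
print("CUDA GPU visible:", torch.cuda.is_available())
```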

[–] [email protected] 1 points 1 year ago

Not going to lie, I almost gave up a few times, but I can also be stubborn... Anyway, since this is apparently the first confirmation that it works, it's probably helpful to mention that it's a 12 GB 3060. :)

[–] [email protected] 1 points 1 year ago

... I'll check later, but I do remember grabbing the "right one" for my setup since I had version 12, so this might very well be it.