So far, running LLMs has required a large amount of computing resources, mainly GPUs. When run locally on an average Mac, a single simple prompt to a typical LLM takes ...
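To get a rough feel for that latency on your own machine, here is a minimal sketch of timing one local generation with Hugging Face transformers; the model id, prompt, and token budget are placeholders, and the numbers you see will depend entirely on your hardware and the model you load.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder: swap in the model you want to test
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
# Use Apple's Metal (MPS) backend when available, otherwise fall back to CPU.
model.to("mps" if torch.backends.mps.is_available() else "cpu")

prompt = "Explain what a large language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.perf_counter()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
elapsed = time.perf_counter() - start

new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(tokenizer.decode(output[0], skip_special_tokens=True))
print(f"Generated {new_tokens} tokens in {elapsed:.1f}s")
```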
In this fork of the original ControlNet repo, we evaluated the possibility of using lighter backbones for the ControlNet model instead of the Stable Diffusion encoder. Our ...
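As one illustration of what a "lighter backbone" could look like (a hedged sketch, not the code from this fork): the trainable copy of the Stable Diffusion encoder could be replaced by a small image backbone such as MobileNetV3-Small, whose multi-scale features pass through zero-initialized convolutions, ControlNet-style, before being added to the UNet's skip features. The class name, stage split, channel mapping, and number of scales below are illustrative assumptions, and spatial alignment with the UNet's latent feature maps is left out for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small


class LightControlEncoder(nn.Module):
    """Illustrative lighter control branch: a MobileNetV3-Small feature pyramid
    plus zero-initialized projections, standing in for the copied SD encoder."""

    def __init__(self,
                 backbone_channels=(16, 24, 48, 576),    # MobileNetV3-Small stage outputs
                 unet_channels=(320, 640, 1280, 1280)):  # SD 1.5 UNet channels per resolution
        super().__init__()
        features = mobilenet_v3_small(weights=None).features
        # Split the backbone into four stages, one per spatial resolution.
        self.stages = nn.ModuleList([
            features[:2], features[2:4], features[4:9], features[9:]
        ])
        # Zero-initialized 1x1 convs (as in ControlNet) so the control branch
        # contributes nothing at the start of training.
        self.zero_convs = nn.ModuleList()
        for c_in, c_out in zip(backbone_channels, unet_channels):
            conv = nn.Conv2d(c_in, c_out, kernel_size=1)
            nn.init.zeros_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.zero_convs.append(conv)

    def forward(self, hint):
        """hint: conditioning image, (B, 3, H, W). Returns one residual per scale,
        to be resized to and added onto the UNet's skip features."""
        residuals, x = [], hint
        for stage, zero_conv in zip(self.stages, self.zero_convs):
            x = stage(x)
            residuals.append(zero_conv(x))
        return residuals


# Quick shape check with a 512x512 conditioning image.
encoder = LightControlEncoder()
outs = encoder(torch.randn(1, 3, 512, 512))
print([tuple(o.shape) for o in outs])
```

The zero-initialized projections keep the idea that the control branch starts as a no-op and only gradually learns to steer generation, while the MobileNet stages are far cheaper to run and train than a full copy of the Stable Diffusion encoder.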