Generate images in one second on your Mac using a latent consistency model

by bfirsh on 10/27/23, 4:37 PM with 71 comments
by herpdyderp on 10/27/23, 5:27 PM

My 32GB M1 Max is taking 25 seconds on the exact same prompt as in the example.

Edit: it seems the "one second per image" figure requires the `--continuous` flag to bypass the initial startup time. With that, I'm now seeing ~1 second per image (ignoring the initial startup time).

by m3kw9 on 10/27/23, 7:17 PM

Every time I execute: `python main.py "a beautiful apple floating in outer space, like a planet" --steps 4 --width 512 --height 512`

It redownloads 4 gigs worth of stuff on every execution. Can't the script save what it downloaded, check whether it's already there, and only download if it's missing? Or am I doing something wrong?
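
One thing that might help (a sketch, not the repo's actual code: it assumes main.py loads the model via Hugging Face diffusers' `DiffusionPipeline.from_pretrained`, and the model/pipeline names below are assumptions based on the post): pass an explicit `cache_dir` so the weights are reused between runs instead of being fetched again. By default the Hub caches under `~/.cache/huggingface`, so if that isn't persisting for you, pointing the cache at an explicit directory can work around it.

```python
import torch
from diffusers import DiffusionPipeline

# Sketch of the loading step with an explicit cache directory.
# Model name and custom_pipeline value are assumptions about what main.py uses.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    custom_pipeline="latent_consistency_txt2img",
    cache_dir="./model-cache",  # hypothetical local path, reused on later runs
)
pipe.to(torch_device="mps", torch_dtype=torch.float32)
```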

by simple10 on 10/27/23, 6:55 PM

This is awesome! It only takes a few minutes to get it installed and running. On my M2 Mac, it generates sequential images in about a second when using the `--continuous` flag. A single image takes about 20 seconds to generate due to the initial script loading time (loading the model into memory?).

I know what I'll be doing this weekend... generating artwork for my 9 yo kid's video game in Game Maker Studio!

Does anyone know any quick hacks to the Python code to sequentially prompt the user for input without purging the model from memory?
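
A minimal sketch of one way to do that, assuming the script uses a diffusers `DiffusionPipeline` (the model name and the pipeline call arguments below are assumptions, not the repo's exact code): load the pipeline once, then loop on `input()` so the weights stay in memory between prompts.

```python
import torch
from diffusers import DiffusionPipeline

# Load once -- this is the slow ~20 second part.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",               # assumed model name
    custom_pipeline="latent_consistency_txt2img",  # assumed custom pipeline
)
pipe.to(torch_device="mps", torch_dtype=torch.float32)

count = 0
while True:
    prompt = input("prompt> ").strip()
    if not prompt:
        break  # empty prompt exits the loop
    # Exact pipeline kwargs are assumptions based on the post's CLI flags.
    result = pipe(prompt=prompt, num_inference_steps=4, width=512, height=512)
    count += 1
    result.images[0].save(f"output-{count}.png")
```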

by naet on 10/27/23, 9:07 PM

Well, how do they look? I've seen some other image generation optimizations, but a lot of them make a significant tradeoff in reduced quality.

by oldstrangers on 10/27/23, 5:13 PM

Interesting timing because part of me thinks Apple's Spooky Fast event has to do with generative AI.

by hackthemack on 10/27/23, 7:27 PM

If you want to run this on a Linux machine using the machine's CPU:

Follow the instructions, but before actually running the command to generate an image, open up main.py and change line 17 to `model.to(torch_device="cpu", torch_dtype=torch.float32).to('cpu:0')`.

Basically, change the backend from mps to cpu.
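
For context, a hedged before/after of that edit (the original contents of line 17 are an assumption based on this thread; the post's script targets the mps backend, and `model` is the pipeline object built earlier in main.py):

```python
# Before (assumed): main.py sends the pipeline to Apple's Metal backend.
model.to(torch_device="mps", torch_dtype=torch.float32)

# After: run on the CPU instead. The trailing .to('cpu:0') from the comment
# above is redundant once torch_device is already "cpu", but it is harmless.
model.to(torch_device="cpu", torch_dtype=torch.float32)
```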

by zorgmonkey on 10/28/23, 12:26 AM

It is very easy to tweak this to generate images quickly on an NVIDIA GPU:

* after `pip install -r requirements.txt` do `pip3 install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121`

* on line 17 of main.py, change `torch.float32` to `torch.float16` and change `mps:0` to `cuda:0`

* add a new line after line 17: `model.enable_xformers_memory_efficient_attention()`

The xFormers stuff is optional, but it should make it a bit faster. For me this got it generating images in less than a second [00:00<00:00, 9.43it/s] and used 4.6GB of VRAM.
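
Put together, the modified loading section of main.py might look roughly like this (a sketch, assuming the repo's variable is named `model` and the model/pipeline names match the post; `enable_xformers_memory_efficient_attention()` is a real diffusers method that needs the xFormers install from the first bullet):

```python
import torch
from diffusers import DiffusionPipeline

# Assumed model and pipeline names, based on the post and other comments.
model = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    custom_pipeline="latent_consistency_txt2img",
)

# Line 17, changed per the bullets above: fp16 on the first CUDA device
# instead of fp32 on mps.
model.to(torch_device="cuda:0", torch_dtype=torch.float16)

# Optional xFormers memory-efficient attention for a small speedup.
model.enable_xformers_memory_efficient_attention()
```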

by agloe_dreams on 10/27/23, 5:26 PM

This... but as a menu item that does it for you.

by tobr on 10/27/23, 6:46 PM

What will be possible once these things run at interactive frame rates? It's a little mind-boggling to think about what types of experiences this will allow not so long from now.

by LauraMedia on 10/27/23, 8:25 PM

Thought it was too good to be true, so I tried it on an M2 Pro MacBook Pro.

Generation takes 20-40 seconds; when using `--continuous` it takes 20-40 seconds once and then keeps generating a new image every 3-5 seconds.

by simple10 on 10/27/23, 7:24 PM

Does anyone know of other image generation models that run well on an M1/M2 Mac laptop?

I'd like to do some comparison testing. The model in the post is fast, but the results are hit or miss on quality.

by m3kw9 on 10/27/23, 7:52 PM

It's fast, but only at 512x512 resolution: it will generate an image, from starting the script to finish, in 5 seconds. If you up it to 1024 it takes 10x as long.

This is on an M2 Max with 32GB.

by firechickenbird on 10/27/23, 6:59 PM

The quality of these LCMs is not the best, though.

by latchkey on 10/28/23, 2:19 AM

The speed is impressive, but the output honestly is not. It feels like DALL-E 3 is light years ahead of it.

by ForkMeOnTinder on 10/27/23, 6:31 PM

Why bother with the safety checker if the model is running locally? I wonder how much faster it would be if the safety checks were skipped.
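
If anyone wants to measure it, diffusers pipelines conventionally accept `safety_checker=None` at load time, which skips the extra classifier pass on each generated image. A sketch, assuming the custom LCM pipeline follows the standard Stable Diffusion convention here (the model and pipeline names are assumptions):

```python
from diffusers import DiffusionPipeline

# Assumes the LCM community pipeline exposes a safety_checker component like
# the standard Stable Diffusion pipelines do; passing None disables it.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    custom_pipeline="latent_consistency_txt2img",
    safety_checker=None,
)
```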

by grandpa_yeti on 10/27/23, 5:13 PM

Seeing this kind of image generation limited to M series Macs just goes to show how far ahead Apple is in the notebook GPU game.

by AIorNot on 10/27/23, 5:01 PM

Awesome