Dubai Diaries: Running LLMs & Stable Diffusion locally on a gaming laptop

I previously wrote about the second device I got after coming to Dubai, but not much about the first one, which was a gaming laptop. So here’s a bit about the laptop, which also doubles as a local AI driver thanks to its Nvidia GPU (an RTX 3060).

Soon after getting it back in 2022, I tried running Stable Diffusion models, and it was quite an upgrade over my original attempt on a plain GPU-less Windows machine. Generation times came down to around 10 seconds, and have gotten even faster as the models and tools have been optimised over the last couple of years. There are quite a few projects available on GitHub if you want to give it a try – AUTOMATIC1111 and easydiffusion are among the more popular options. Nvidia also has a TensorRT extension to further improve performance.
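If you’d rather script it than click around a web UI, a minimal sketch with Hugging Face’s diffusers library looks something like this – note the model ID, prompt and step count here are just placeholders, so swap in whichever checkpoint you’ve actually downloaded:

```python
# Minimal local Stable Diffusion sketch using the diffusers library --
# the same idea the web UIs wrap, not the AUTOMATIC1111 codebase itself.
# Assumes a CUDA-capable GPU (e.g. an RTX 3060) with torch + diffusers installed.
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline in half precision so it fits comfortably in 6 GB of VRAM.
# The model ID is an assumption -- substitute the checkpoint you use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one 512x512 image; on a laptop RTX 3060 this lands roughly
# in the ~10-second range mentioned above.
image = pipe(
    "a cozy coffee shop in Dubai at sunset",
    num_inference_steps=30,
).images[0]
image.save("output.png")
```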

With that out of the way, I also discovered LM Studio, which lets you run LLMs locally with a chat-like interface thrown in, and gives you access to a bunch of models like Meta’s Llama. The response times are of course not as fast as the freely available online options like ChatGPT, Claude, Gemini and the like, but you effectively get unlimited access to the model.
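A nice bonus is that LM Studio can also expose an OpenAI-compatible server on localhost, so you can talk to the local model from code rather than the chat window. Here’s a rough sketch – the port is LM Studio’s default, but the model name is a placeholder, so check what your own instance shows:

```python
# Sketch of querying LM Studio's local OpenAI-compatible server.
# Assumes the server is running on its default port (1234) with a model loaded.
from openai import OpenAI

# The api_key value is ignored by the local server but required by the client.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder -- LM Studio routes to the loaded model
    messages=[
        {"role": "user", "content": "Explain the coffee meme from Ace Attorney."}
    ],
)
print(response.choices[0].message.content)
```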

Here’s an example from a conversation I had with Llama about the coffee meme from the Ace Attorney game series:

