Running Local LLMs (“AI”) on Old AMD GPUs and Laptop iGPUs (Arch Linux Guide)

A straightforward guide to compiling llama.cpp with Vulkan support on Arch Linux (and Arch-based distros like CachyOS, EndeavourOS, etc.). This lets you run models on old AMD cards that ROCm no longer supports, as well as on Intel iGPUs.
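
For context, the whole process is short. Here is a minimal sketch of the Arch side, assuming stock repo package names (swap vulkan-radeon for vulkan-intel on an Intel iGPU):

```bash
# Build toolchain plus the Vulkan loader, headers, and glslc shader compiler
sudo pacman -S --needed base-devel cmake git vulkan-icd-loader vulkan-headers shaderc

# Vulkan driver (ICD): vulkan-radeon for AMD (RADV), vulkan-intel for Intel iGPUs
sudo pacman -S --needed vulkan-radeon

# Sanity check: your GPU(s) should show up here
sudo pacman -S --needed vulkan-tools
vulkaninfo --summary

# Clone and build llama.cpp with the Vulkan backend enabled
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j"$(nproc)"
```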

The same steps work on Debian/Ubuntu, but the package names are different.
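
As a rough sketch of the Debian/Ubuntu equivalents (package names are my best guess from current repos; verify with apt search if anything is missing):

```bash
# Rough Debian/Ubuntu equivalents of the Arch packages above
sudo apt install build-essential cmake git libvulkan-dev glslc mesa-vulkan-drivers vulkan-tools
```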

Here’s how I’m running models on 3 × AMD Radeon RX 580 8 GB (24 GB VRAM total) without ROCm in 2025.
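
To give a flavour of what that looks like, a hedged example of launching a model across the cards; the model path and layer count are placeholders, and the split flags are standard llama.cpp options:

```bash
# Offload all layers to the GPUs (-ngl 99 = "everything that fits");
# the Vulkan backend uses every device that vulkaninfo lists.
# models/your-model.gguf is a placeholder path.
./build/bin/llama-cli -m models/your-model.gguf -ngl 99 -p "Hello"

# Optionally control how layers/VRAM are balanced across the three cards
./build/bin/llama-cli -m models/your-model.gguf -ngl 99 \
  --split-mode layer --tensor-split 1,1,1
```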


My CV in Markdown (on GitHub) 📄✨

Hey there!

If you’re curious about who I am, what I do, and what I’ve been up to in life 💻🔧 — my CV is now publicly available on GitHub!

👉 Check it out here: https://github.com/albinhenriksson/cv

It’s written in pure Markdown – clean, readable, version-controlled, and 100% fluff-free. Perfect if you live in the terminal, use git, and like things tidy.


💬 Got feedback?
📬 Want to hire me?
🧩 Have a cool project I should be part of?
