Stable Diffusion on Debian with AMD and uv

I have been exploring, on my own and for a client, how to self-host AI systems, including Ollama and Open WebUI, text-to-speech, and image generation. So far I have Open WebUI working okay with Ollama, and with Lemonade for AMD systems.

Today's project has been getting Stable Diffusion WebUI working on my Debian 13 PC with an AMD RX 9060 XT 16 GB GPU.

Assumptions

  • Using Debian 13 Trixie (may work on other Linux distros)
  • Using ROCm drivers from AMD

Using uv instead of pip

The problem with using pip on Debian is that managing different Python versions is difficult: the webui wants Python 3.10, while Trixie ships something newer, and the system Python refuses pip installs anyway. With uv it is very simple.
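
For example, Debian 13 marks its system Python as externally managed, so pip refuses to install packages into it, while uv fetches and manages its own interpreters. A quick illustration (assuming the python3-pip package is installed; the error text is abbreviated):

pip install requests
# error: externally-managed-environment

uv python list   # show the Python versions uv can install or has installed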

1. First, install uv (the Python package manager)

curl -LsSf https://astral.sh/uv/install.sh | sh
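
The installer puts uv in ~/.local/bin and prints instructions for updating your PATH; if the command is not found in your current shell, source the env file it writes (or just open a new shell) and confirm it works:

source "$HOME/.local/bin/env"   # path as reported by the installer
uv --version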

2. Clone Stable Diffusion

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui && cd stable-diffusion-webui

3. Install Python 3.10 and packages with uv

uv python install 3.10
uv venv --python 3.10
source .venv/bin/activate
uv pip install -r requirements.txt
uv pip install "setuptools<70" # important: newer setuptools versions break the webui install
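
Before moving on, it is worth confirming the virtual environment is active and uses the interpreter uv just installed:

which python       # should point into .venv/bin
python --version   # should report Python 3.10.x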

4. Install PyTorch for AMD

This installs the ROCm build of PyTorch, so it will actually use your AMD GPU rather than looking for CUDA.

uv pip install --upgrade torch torchvision torchaudio \
  --index-url https://download.pytorch.org/whl/rocm7.1
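
To confirm the ROCm build landed and can see the GPU, you can query torch directly; the ROCm wheels reuse the torch.cuda API, so cuda.is_available() is the right check even on AMD:

python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
# expect a HIP version string and True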

5. GitHub repository fix

If the setup script prompts you for a GitHub username and password, it is because the repository URL baked into it is incorrect. Override it:

export STABLE_DIFFUSION_REPO="https://github.com/w-e-w/stablediffusion.git"
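
You can confirm the replacement URL is reachable without credentials before launching again:

git ls-remote "$STABLE_DIFFUSION_REPO" HEAD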

6. Set the correct ROCm GFX version

export HSA_OVERRIDE_GFX_VERSION=12.0.0
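
The 12.0.0 value should correspond to the gfx1200 target used by RDNA 4 cards like the RX 9060 XT. On other hardware, rocminfo (installed with the ROCm drivers) reports your GPU's actual gfx target, which maps onto this variable:

rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u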

7. Other environment variables that may help

These came out of some debugging with Gemini, and may or may not be necessary on your system.

# Clear any preloaded libraries (such as a custom allocator)
export LD_PRELOAD=""

# Crucial: disable SDMA, which causes hangs on kernel 6.12
export HSA_ENABLE_SDMA=0

8. Run!

This runs with --listen so the UI is reachable from other machines; remove that flag if you only need local access.

python3 launch.py --skip-torch-cuda-test --listen \
  --opt-sub-quad-attention \
  --no-half-vae \
  --disable-nan-check \
  --use-pytorch-cross-attention
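
To avoid retyping the exports every session, steps 6 through 8 can be bundled into a small wrapper script; here is a sketch (the run-sd.sh name and flag choices are just my setup, adjust to taste):

#!/usr/bin/env bash
# run-sd.sh - launch Stable Diffusion WebUI with the ROCm workarounds above
set -e
cd "$(dirname "$0")"   # assumes the script lives in stable-diffusion-webui/
source .venv/bin/activate

export STABLE_DIFFUSION_REPO="https://github.com/w-e-w/stablediffusion.git"
export HSA_OVERRIDE_GFX_VERSION=12.0.0
export LD_PRELOAD=""
export HSA_ENABLE_SDMA=0

python3 launch.py --skip-torch-cuda-test --listen \
  --opt-sub-quad-attention \
  --no-half-vae \
  --disable-nan-check \
  --use-pytorch-cross-attention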

Overall, this is a bit more involved than just cloning the repository and running ./webui.sh as the GitHub README suggests. It is an old project, but it seems I am still able to get it working on a modern Linux AMD system. Performance on my machine was quite poor, with the GPU running at roughly the same speed as the CPU, but it is another step on the journey of self-hosting LLM and AI systems.
