Stable Diffusion web UI
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX (a minimal usage sketch follows this list).
Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab.
Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
PALLAIDIUM — a generative AI movie studio, seamlessly integrated into the Blender Video Editor, enabling end-to-end production from script to screen and back.
Beautiful and Easy to use Stable Diffusion WebUI
Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.
A Colab-friendly toolkit for generating 3D mesh models, videos, NeRF instances, or multi-view images of colourful 3D objects from text and image prompts, based on DreamFields.
T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free!
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
Open reproduction of MUSE for fast text2image generation.
Web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD GPUs
[ICLR2023] Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation (CDCD).
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP
Stable Diffusion UI: Diffusers (CUDA/ONNX)
Yet Another Stable Diffusion Discord Bot
An Efficient Text-to-Image Generation Pretraining Pipeline
Local image generation using VQGAN-CLIP or CLIP guided diffusion
Official code repo for "Editing Implicit Assumptions in Text-to-Image Diffusion Models"
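For context on the 🤗 Diffusers entry above, here is a minimal sketch of text-to-image generation with that library. The checkpoint ID, prompt, and sampling settings are illustrative assumptions, not taken from any repository in this list.

```python
# Minimal text-to-image sketch using 🤗 Diffusers.
# The checkpoint ID below is an assumed example; any Stable Diffusion
# checkpoint on the Hugging Face Hub should work the same way.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed example checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move the pipeline to the GPU

result = pipe(
    "a watercolor painting of a lighthouse at dawn",  # example prompt
    num_inference_steps=30,   # fewer steps trade quality for speed
    guidance_scale=7.5,       # classifier-free guidance strength
)
result.images[0].save("lighthouse.png")
```

Most of the web UIs and bots listed above wrap a pipeline like this one behind their own interfaces, schedulers, and extensions.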