With an innovative next-frame prediction neural network architecture, FramePack continuously generates videos by compressing input frame context to a fixed length, making the generation workload independent of video length.
Generate 60-second, 30fps (1800 frames) videos with a 13B model using only 6GB VRAM. Laptop GPUs can handle it easily.
Because FramePack is a next-frame prediction model, you see frames as soon as they are generated, giving you visual feedback throughout the entire generation process.
Compresses input contexts to a constant length, making generation workload invariant to video length and supporting ultra-long video generation.
Ships as a feature-complete desktop application with a minimal, standalone, high-quality sampling system and built-in memory management.
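The fixed-length context works by compressing older frames progressively more than recent ones. The schedule below is a hedged illustration only (not FramePack's actual code or token counts): if each older frame is compressed twice as much as the one after it, the total context stays bounded no matter how long the video gets.

```python
def context_tokens(num_frames: int, tokens_per_frame: int = 1536) -> int:
    """Total context tokens when frame i (0 = newest) is compressed by 2**i.

    Hypothetical geometric schedule; FramePack's real schedule and token
    counts differ, but the bounding argument works the same way.
    """
    # tokens_per_frame >> i reaches 0 once a frame is fully compressed
    # away, so the sum is a geometric series bounded by 2 * tokens_per_frame.
    return sum(tokens_per_frame >> i for i in range(num_frames))

print(context_tokens(4))     # → 2880
print(context_tokens(1800))  # → 3070, same as for any clip past ~11 frames
```

Because the total never exceeds twice the per-frame token budget, the transformer's workload per generated frame is the same for a 5-second clip as for a 60-second one.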
Clone FramePack from GitHub and install all dependencies in your environment.
Upload an image or generate one from a text prompt to start your video sequence.
Describe the desired movement and action in natural language to guide the video generation.
FramePack generates your video frame by frame with impressive temporal consistency. Download and share your results.
No credit card required. Start creating amazing videos today.
### Manual Installation on Windows
1. Create a folder and open a Command Prompt inside it:

```shell
git clone https://github.com/lllyasviel/FramePack.git
cd FramePack
```

2. Create and activate a Python virtual environment (Python 3.10 recommended):

```shell
python -m venv venv
venv\Scripts\activate.bat
```

3. Upgrade pip and install dependencies:

```shell
python -m pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt
```

4. Install Triton and Sage Attention:

```shell
pip install triton-windows
pip install https://github.com/woct0rdho/SageAttention/releases/download/v2.1.1-windows/sageattention-2.1.1+cu126torch2.6.0-cp312-cp312-win_amd64.whl
```

Note: adjust the wheel URL to match your CUDA, PyTorch, and Python versions.
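The SageAttention wheel filename encodes three things: the CUDA build (`cu126`), the torch version (`torch2.6.0`), and the CPython ABI tag (`cp312` = Python 3.12). The small helper below (not part of FramePack, just a convenience sketch) prints the ABI tag your interpreter needs:

```python
import sys

# CPython ABI tag, e.g. "cp312" for Python 3.12 -- this must match the
# "cp..." part of the wheel filename you download.
cp_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(cp_tag)

# With torch installed, the CUDA and torch tags come from:
#   python -c "import torch; print(torch.__version__, torch.version.cuda)"
# e.g. "2.6.0" and "12.6" correspond to torch2.6.0 / cu126.
```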
5. Optional: install Flash Attention:

```shell
pip install packaging ninja
set MAX_JOBS=4
pip install flash-attn --no-build-isolation
```

6. Launch the Gradio UI:

```shell
python demo_gradio.py
```

7. Open http://localhost:7860 in your browser.
### Installation on Linux

We recommend an independent Python 3.10 environment.

```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt
```

Start the GUI:

```shell
python demo_gradio.py
```
FramePack is a revolutionary video generation technology that compresses input contexts to a constant length, making the generation workload invariant to video length. Learn about our methods, architecture, and experimental results in detail.