ComfyUI and AnimateDiff Evolved Installation Guide

ComfyUI tutorial and installation guide

Hello there, Prompt Muse here! In this comprehensive guide, I’ll walk you through the essentials of setting up ComfyUI and AnimateDiff Evolved.

## **1. Introduction**

• ComfyUI offers a node-based layout, allowing for a streamlined workspace tailored to your needs.
• Drag and drop features for images and workflows enhance ease of use.
• This tutorial aims to make you feel comfortable with ComfyUI, ensuring you make the most of its unique features.

## **2. System Requirements**

• An NVIDIA RTX graphics card is preferable (it also works with AMD cards and Macs; click here).
• Preferred: 12 GB of VRAM for processing videos.
• Allocate 50-80 GB of storage for outputs.
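
Not sure how much VRAM your card has? A quick way to check is from the Python environment bundled with the ComfyUI portable build (a minimal sketch, assuming PyTorch and an NVIDIA GPU; AMD/Mac users will need a different check):

```python
# Minimal VRAM check. Assumes PyTorch is installed (the ComfyUI portable
# build bundles it) and an NVIDIA GPU is visible to CUDA.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:  {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected.")
```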

Tip: If your PC doesn’t meet these requirements, consider using Shadow Tech. It’s a remote PC service I personally use, providing an NVIDIA 1080 GPU, ample storage, and 12 GB VRAM for about $40/month.

## **3. Software Dependencies**

• Git: Allows you to pull extensions from GitHub. Download here.
• FFmpeg: Essential for encoding videos. Download here.
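
Once both are installed, you can confirm they’re reachable on your PATH; here is a small sketch using only standard-library Python (any Python 3 will do):

```python
# Sanity check: are git and ffmpeg installed and on the PATH?
import shutil

for tool in ("git", "ffmpeg"):
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'NOT FOUND - install it or add it to your PATH'}")
```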

## **4. Installing ComfyUI & ComfyUI Manager**

1. Visit the ComfyUI GitHub page.
2. Under the “Installing” section, select the “Direct link to download” for the standalone portable version.
3. Once downloaded, extract the files to your chosen directory.
4. For ComfyUI models:
• Checkpoints: Download from civitai.com and place them in the ComfyUI/models/checkpoints folder.
• VAE: Download from Stability AI’s Hugging Face page and place it in the ComfyUI/models/vae folder.
5. Install ComfyUI Manager from its GitHub page for easy updates and add-on installations.
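
For reference, the downloaded files should roughly end up in these locations inside the extracted portable build (folder names may vary slightly between releases):

```
ComfyUI_windows_portable/
└── ComfyUI/
    ├── models/
    │   ├── checkpoints/      <- checkpoints from civitai.com
    │   └── vae/              <- VAE files from Stability AI's Hugging Face page
    └── custom_nodes/
        └── ComfyUI-Manager/  <- ComfyUI Manager, once installed
```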

## **5. Basic Workflow Overview**

• ComfyUI operates like a circuit board, with nodes representing each process.
• Start with the Load Checkpoint node, input your positive and negative prompts, and proceed to the KSampler.
• The Empty Latent Image node determines your image dimensions and batch size.
• The VAE Decode node converts the latent output into the final image.

Pro Tip: Images created in ComfyUI can be dragged and dropped back into the system to load their respective node layouts.
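
To make the circuit-board analogy concrete, here is a rough sketch of that same chain of nodes in ComfyUI’s API/JSON form (the node class names are ComfyUI built-ins; the model filename and prompts are placeholders):

```python
# Sketch of the basic text-to-image graph in ComfyUI's API format.
# Keys are node IDs; an input like ["4", 1] wires in output 1 of node "4".
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_model.safetensors"}},
    "6": {"class_type": "CLIPTextEncode",            # positive prompt
          "inputs": {"text": "a cosy cabin in the woods", "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",            # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["4", 1]}},
    "5": {"class_type": "EmptyLatentImage",          # dimensions and batch size
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",                 # latent -> final image
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "ComfyUI"}},
}
```

You never have to write this by hand; the graph you build in the UI is what gets saved, and because it is embedded in every generated PNG’s metadata, dragging an image back in can rebuild the exact layout.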


## **6. Installing Motion Models**

1. In ComfyUI Manager, go to **Install Models**.
2. Type “mm” in the search box, which stands for motion models.
3. Download and install the motion models **mm_sd_v14**, **mm_sd_v15**, and **mm_sd_v15_v2**. The stabilized **high** and **mid** variants are also recommended.
4. After downloading, you should see a confirmation message.

## **7. Installing Control Net**

1. Still in the ComfyUI Manager, search for “control net”.
2. Locate and install the **Stable Diffusion 1.5** ControlNet models. You might need to scroll down the list.
3. Install other control nets like **Line Art** and **Open Pose** as needed, ensuring you have enough storage.

**Note:** After installing, close all instances of the manager and ComfyUI. When reopening ComfyUI, it will automatically install the nodes and extensions you’ve selected.

## **8. Utilizing Pre-Made Templates**

1. Visit **Kosinkadink’s GitHub page**. He offers pre-made templates for AnimateDiff Evolved.
2. Download a workflow, then drag and drop it into ComfyUI to load the node layout.
3. If you encounter missing nodes (highlighted in red), simply go to the manager and click on **Install Missing Nodes**.

## **9. Running Workflows**

1. Use the **Load Checkpoint** node, followed by the **CLIP** settings and the **Positive Prompt** and **Negative Prompt** nodes.
2. The **KSampler** handles the sampling, while the **AnimateDiff Loader** adds motion.
3. Adjust settings such as image dimensions, batch size (the frame count), and motion model as needed.
4. For advanced users, experiment with nodes like **AnimateDiff Uniform Context Options** to enhance animations.
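
As a very rough starting point (illustrative values only; widget names can differ slightly between versions of the node packs), these are the kinds of settings the template exposes. The context options only come into play once your frame count exceeds the motion model’s 16-frame window:

```python
# Illustrative starting values for an AnimateDiff text-to-video run.
settings = {
    "empty_latent":    {"width": 512, "height": 512, "batch_size": 16},  # 16 frames
    "ksampler":        {"steps": 20, "cfg": 7.5, "sampler_name": "euler",
                        "scheduler": "normal", "denoise": 1.0},
    "animatediff":     {"model_name": "mm_sd_v15_v2.ckpt"},              # motion model
    "context_options": {"context_length": 16, "context_stride": 1,
                        "context_overlap": 4, "closed_loop": False},
}
```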

## **10. Adding a LoRA for Enhanced Details**

1. Double-click on an empty space in ComfyUI and search for **LoRA**.
2. Select the **LoRA Loader** and connect it as shown in the workflow.
3. Lower the **strength_model** value to reduce the risk of artifacts.
4. Run the prompt and observe the enhanced details in the generated animation.
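
Continuing the API-format sketch from section 5, the LoRA Loader simply sits between the checkpoint and everything that used its MODEL and CLIP outputs; a minimal sketch, with a placeholder LoRA filename:

```python
# LoraLoader takes the checkpoint's MODEL and CLIP, applies the LoRA, and
# passes the modified versions on. Downstream nodes are re-pointed at it.
workflow["10"] = {
    "class_type": "LoraLoader",
    "inputs": {
        "model": ["4", 0],
        "clip": ["4", 1],
        "lora_name": "your_detail_lora.safetensors",  # placeholder filename
        "strength_model": 0.7,   # lower this if you start seeing artifacts
        "strength_clip": 0.7,
    },
}
workflow["3"]["inputs"]["model"] = ["10", 0]  # KSampler now uses the LoRA'd model
workflow["6"]["inputs"]["clip"] = ["10", 1]   # positive prompt
workflow["7"]["inputs"]["clip"] = ["10", 1]   # negative prompt
```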

## **11. Video to Video Animations Workflow**

1. Use **Inner Reflections’** guide as a template for video-to-video animations.
2. For video inputs, use the **Load Video** node.
3. Adjust settings like the frame load cap, skip first frames, and select every Nth frame to customize the video output.
4. Use **Load Checkpoint** to select your desired model.
5. Implement control nets, like **Line Art**, to guide the animation style.
6. Experiment with different nodes and settings to achieve unique video outputs.
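
The frame settings from step 3 map onto the Load Video node’s widgets; a hedged sketch of typical values (widget names follow the Video Helper Suite pack and may differ between versions):

```python
# Illustrative Load Video settings for a vid2vid run.
load_video = {
    "video": "input_clip.mp4",   # placeholder filename
    "frame_load_cap": 48,        # 0 = load every frame; cap it to keep test runs short
    "skip_first_frames": 0,      # trim frames off the start of the clip
    "select_every_nth": 2,       # take every 2nd frame to halve the frame count
}
```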

## **12. Exploring Other Templates**

1. Another great template to explore is **Prompt Scheduling**, which allows dynamic prompt changes over time.
2. This feature can be used to create evolving animations, adding depth to your projects.
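
The exact syntax depends on which node pack the template uses (commonly a Batch Prompt Schedule node from FizzNodes), but the idea is keyframe-numbered prompts that the sampler blends between over time; an illustrative example of what you might type into the prompt box:

```
"0"  : "a forest in spring, lush green leaves",
"24" : "a forest in autumn, golden falling leaves",
"48" : "a forest in winter, heavy snow"
```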

## **Conclusion**

Thank you for following along! With ComfyUI, the possibilities are vast, allowing for creative freedom and intricate animations. A written version of this tutorial is available on the Prompt Muse website. Feel free to reach out on social media @promptMuse for further assistance.

Remember: Art is a journey, and every masterpiece begins with the decision to try. Dive in and explore the world of ComfyUI and AnimateDiff Evolved!

Join The Prompt Muse Gang!

Get the latest A.I. news, reviews, and tutorials, hand-picked by A.I. based on your preferences and sent directly to you.


*We promise not to spam, sell or do any other naughty things with your details – simply keep you updated when we launch new awesome tutorials and news.