This video is a step-by-step tutorial on how to build the Wan 2.2 Animate workflow from scratch. Learn how to build a complex workflow with image and 'controlnet' references, including masking and background isolation. The Wan Animate model, which combines character animation generation and character replacement, is the perfect workflow for practicing these techniques. The goal of this tutorial is not to explain the node mechanics of ComfyUI, nor to provide a 'new' workflow (a template is already available). This practice will teach you the skills to use ComfyUI better and create your own amazing workflows for AI image and video generation.
=================================================================
👉 The workflow made in this tutorial, FREE here: https://ko-fi.com/s/7c96ee77b7
🚨 Use this link to sign up for RunPod: https://get.runpod.io/e0pn9e5rpbr4
⭐ RunPod template for RTX 5090/P6000: https://tinyurl.com/KN-Blackwell-temp...
===========================================================
00:00 Intro - Contents
00:16 Opening the Wan Animate template and downloading the six models
00:47 Installing missing custom nodes and Video Helper Suite
01:42 Wan Animate: description of the process - basic explanation
02:22 Empty Wan Animate canvas: start with a KSampler
02:36 Decoding the latents and connecting to the Video Combine node
03:16 Adding the models group: Wan Animate diffusion model, LoRAs, SD3 shift
04:09 Adding the WanAnimateToVideo node
04:28 Adding the text encode and CLIP Vision model load nodes
05:00 Connecting and encoding the reference image (VAE and CLIP Vision)
05:45 Creating and connecting the face and poses (skeleton, DWPose 'controlnet' preprocessors) from the animation video reference
08:00 Completing the workflow settings and running the Animate workflow
10:06 Correcting the reference image in the video output: trim latent node
10:46 Using subgraphs to concatenate sets of sampled videos for video extension
12:40 Correcting the second animation subgraph to concatenate the videos (batch images)
13:13 Correcting repeated (continue motion) frames with Image From Batch and trim image
14:36 Wan Animate replacement: creating a mask over the character and isolating the background
17:27 Using the BF16 Wan Animate model in ComfyUI
17:55 Installing ComfyUI-GGUF to load GGUF (quantized) Wan Animate models

Useful links:
👉 Wan Animate: https://wan.video/blog/wan2.2-animate
👉 ComfyUI Runpod Direct: https://github.com/MadiatorLabs/Comfy...
👉 QuantStack Wan Animate GGUF: https://huggingface.co/QuantStack/Wan...
🎥 Check out my tutorial on how to run RunPod: • Runpod ComfyUI in 5 min (Install with Netw...
#wan #comfyui #wananimate
============================================================
💪 Support this channel with a Super Thanks or a Ko-fi! https://ko-fi.com/koalanation
☕ Amazing ComfyUI workflows: https://tinyurl.com/y9v2776r
🚨 Use RunPod and access powerful GPUs for the best ComfyUI experience at a fraction of the price. https://tinyurl.com/58x2bpp5
🤗 ☁️ Starting in ComfyUI? Run it on the cloud without installation, very easy! ☁️
👉 RunDiffusion: https://tinyurl.com/ypp84xjp
👉 15% off first month with code 'koala15'
👉 ThinkDiffusion: https://tinyurl.com/4nh2yyen
🤑🤑🤑 FREE! Check my runnable workflows on OpenArt.ai: https://tinyurl.com/3j4z6xwf
============================================================
CREDITS
===========================================================
✂️ Edited with Canva and ClipChamp. I record the footage in PowerPoint.
========================================================
© 2025 Koala Nation
#comfyui #wan #videoai