
Generative AI for VFX with ComfyUI & InvokeAI, taught by Eran Dinur


Author: Eran Dinur

Actual Duration: 4h 27m

Release date: December 2025

Publisher: FXPHD

Skill level: Intermediate

Language: English

Exercise files: Yes

Software: ComfyUI, InvokeAI, Stable Diffusion, Maya, Nuke, Photoshop

Course URL: https://www.fxphd.com/details/713/

Learn to build custom generative AI workflows for VFX and visualization using ComfyUI and InvokeAI.

This course dives deep into Stable Diffusion for VFX and visualization artists, focusing on the powerful node-based applications ComfyUI and InvokeAI. You’ll learn to construct custom workflows for generating everything from seamless textures and matte painting elements to fully-textured 3D assets. We’ll break down the core components of the generative process, explore various models, and get hands-on with tools like ControlNets, IP Adapters, and advanced inpainting and upscaling techniques. By the end, you’ll have the skills to design your own generative AI pipelines and integrate them into your VFX workflow.

🎯 What you’ll learn

  • Build custom generative AI workflows using ComfyUI and InvokeAI.
  • Generate seamless, tileable textures for 3D applications.
  • Create elements for matte painting and compositing.
  • Produce fully-textured 3D assets using generative AI.
  • Utilize ControlNets, IP Adapters, and Stable Diffusion models effectively.
  • Master inpainting and upscaling techniques for VFX.
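A texture is seamless when its opposite edges match, so the image can repeat without a visible seam. As a rough illustration of that property only (not the course's actual tooling), here is a minimal pure-Python check on a tiny, made-up grayscale grid; the `tol` threshold and pixel values are hypothetical:

```python
def is_tileable(pixels, tol=8):
    """Check that opposite edges of a pixel grid match within a tolerance,
    which is what lets a texture repeat without visible seams."""
    h, w = len(pixels), len(pixels[0])
    # Compare the left edge to the right edge, row by row.
    horizontal = all(abs(pixels[y][0] - pixels[y][w - 1]) <= tol for y in range(h))
    # Compare the top edge to the bottom edge, column by column.
    vertical = all(abs(pixels[0][x] - pixels[h - 1][x]) <= tol for x in range(w))
    return horizontal and vertical

# A tiny 3x3 grayscale grid whose edges wrap cleanly...
seamless = [[10, 50, 12],
            [11, 90, 13],
            [10, 45, 12]]
# ...and one with a hard seam between its left and right edges.
seamed = [[10, 50, 200],
          [11, 90, 210],
          [10, 45, 220]]

print(is_tileable(seamless))  # True
print(is_tileable(seamed))    # False
```

Real texture tools operate on RGB image buffers and blend edges rather than merely testing them, but the wrap-around comparison is the core idea behind "tileable."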

✅ Requirements

  • Skills: Basic knowledge of Nuke, Maya, and Photoshop recommended.
  • Tools: Computer capable of running ComfyUI and InvokeAI.
  • Hardware: A dedicated GPU with sufficient VRAM is highly recommended for optimal performance.

📝 Description

This course offers a deep dive into the practical application of generative AI for VFX and visualization artists. You’ll gain hands-on experience with two of the most powerful node-based Stable Diffusion interfaces: ComfyUI and InvokeAI. We’ll systematically build custom workflows from the ground up, covering everything from generating photorealistic, seamless textures to creating fully-textured 3D assets. The curriculum explores the fundamental building blocks of the Stable Diffusion process, including various models, ControlNets, IP Adapters, and advanced techniques like inpainting and upscaling. By focusing on practical workflow construction, this course empowers you to leverage generative AI as a powerful assistant in your creative pipeline.

πŸ§‘β€πŸŽ“ Who this course is for

  • VFX artists looking to integrate generative AI into their workflow.
  • Visualization artists seeking to create assets and textures more efficiently.
  • 3D artists interested in generating textured 3D models using AI.
  • Anyone curious about building custom Stable Diffusion pipelines with ComfyUI and InvokeAI.

πŸ§‘β€πŸ« About the Author

Eran Dinur is a seasoned VFX Supervisor with credits on major films like “Marty Supreme,” “Hereditary,” “The Wolf of Wall Street,” and “Uncut Gems.” He is also the author of “The Filmmaker’s Guide to Visual Effects” and “The Complete Guide to Photorealism,” establishing him as a leading voice in visual effects education and practice.

🏁 Final Result

By the end of this course, you will have a portfolio of custom-built generative AI workflows and the ability to create seamless textures, matte painting elements, and fully-textured 3D assets, ready to be integrated into professional VFX projects.

Curriculum

📋 Course content

  1. Class 1: Introduction and Setup
    • Advantages and challenges of ComfyUI and InvokeAI
    • Setting up a comprehensive ComfyUI environment
  2. Class 2: Essential SD Building Blocks
    • Building a first generative workflow in ComfyUI
    • Exploring fundamental Stable Diffusion components (checkpoint models, CLIP encoders, samplers, latent noise, VAE decoder)
    • Understanding CFG, seed, and steps
  3. Class 3: Seamless Textures Tool: Part 1
    • Creating a dedicated workflow for tileable textures
    • Using InvokeAI nodes and split prompts
  4. Class 4: Seamless Textures Tool: Part 2
    • Adding ControlNet nodes and preprocessors for image guidance
    • Exploring SDXL checkpoint model features
  5. Class 5: Using CG Elements as Guidance: Part 1
    • Creating a villa environment using Maya, Nuke, and ComfyUI
    • Guiding AI generation with beauty, Z-depth, and normal passes
  6. Class 6: Using CG Elements as Guidance: Part 2
    • Using Cryptomattes for regional prompting
    • Applying IP Adapters for lighting, style, and mood
  7. Class 7: Generating 3D Assets: Part 1
    • Building a workflow for generating textured 3D models with the HunYuan model
    • Image-to-mesh and de-lighting sections
  8. Class 8: Generating 3D Assets: Part 2
    • AI UV mapping, multi-camera setup, and texture projections
    • Exporting the textured mesh using the multi-view renderer and CV2 Inpaint
  9. Class 9: Inpainting with InvokeAI
    • Exploring InvokeAI’s layer-based inpainting interface
    • Techniques for matte painting and compositing
  10. Class 10: Segmentation and Upscaling Techniques
    • Image-to-text analysis with the Florence model
    • Examining different upscaling techniques and tools
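The CFG, seed, and steps concepts from Class 2 are independent of any particular UI. Classifier-free guidance blends the model's unconditional and prompt-conditioned noise predictions, while the seed fixes the initial latent noise so a generation is reproducible. A toy pure-Python sketch of both ideas (scalar stand-ins for the latent tensors; all numbers are made up, and this is not the course's code):

```python
import random

def cfg_blend(uncond, cond, guidance_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional output, toward the prompt-conditioned one."""
    return uncond + guidance_scale * (cond - uncond)

# guidance_scale = 1.0 simply returns the conditioned prediction;
# higher values exaggerate the prompt's influence on each sampling step.
print(cfg_blend(0.2, 0.8, 1.0))  # ~0.8
print(cfg_blend(0.2, 0.8, 7.5))  # ~4.7, much stronger prompt adherence

# The seed makes sampling reproducible: the same seed yields the
# same initial noise, and hence (with fixed settings) the same image.
random.seed(42)
noise_a = [random.gauss(0, 1) for _ in range(4)]
random.seed(42)
noise_b = [random.gauss(0, 1) for _ in range(4)]
print(noise_a == noise_b)  # True
```

In a real sampler this blend is applied to full latent tensors once per step, which is why the step count trades generation time against refinement.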