Stefano Flore

AI Gallery



ai_wiz_art

My AI works on Instagram

#ai #comfyui #aiart #aiartwork #artificialintelligence #portrait
Built entirely on a Windows PC: RTX 4060 Ti (16 GB VRAM), 68 GB RAM.

Character: Flux Krea Nunchaku
Audio: Chatterbox voice cloning
Video: WAN 2.1 Infinite Talk
Upscale: Flash SVR
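
Each stage runs as its own ComfyUI workflow. As a rough sketch of chaining them outside the UI: ComfyUI accepts API-format workflow JSON via a local HTTP endpoint (POST /prompt, default port 8188). The stage file names below are hypothetical placeholders for my own exports, not files shipped with ComfyUI.

```python
# Minimal sketch: queue each pipeline stage through ComfyUI's local HTTP API.
# The endpoint (POST /prompt on port 8188) is real; the workflow JSON files
# are hypothetical placeholders for API-format exports of each stage.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"

def queue_workflow(path: str) -> str:
    """Send an API-format workflow to ComfyUI and return its prompt id."""
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]

# One exported workflow per stage of the pipeline above (names are mine).
for stage in ("character_flux_krea.json",   # still image
              "audio_chatterbox.json",      # voice cloning
              "video_infinite_talk.json",   # talking video
              "upscale_flash_svr.json"):    # final upscale
    print(stage, "->", queue_workflow(stage))
```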

#ai #comfyui #aiart #flux #portrait #artificialintelligence #wan #video #aivideo
WAN 2.1 ATI – Motion Control

I'd like to point out a particularly useful tutorial on YouTube that clearly illustrates a workflow for using WAN 2.1 ATI, integrating an advanced motion control technique.
The video shows how, using a dedicated node, you can insert anchor points and paths into the image to precisely define which areas should remain static and which should follow a specific movement.
This feature allows for finer and more predictable control of the animation, significantly improving frame-to-frame consistency compared to traditional automatic approaches.
Attached is a quick test I performed locally using an RTX 4060 Ti GPU with 16 GB of VRAM.

https://www.youtube.com/watch?v=AI9-1G7niXY
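
To make the anchor-and-path idea concrete: a motion track amounts to one (x, y) coordinate per frame, and a static anchor simply repeats the same point. A minimal sketch of my own follows; the JSON layout is an assumption for illustration, not necessarily the format the tutorial's node expects.

```python
# Build ATI-style motion tracks: one (x, y) coordinate per frame.
# The serialized layout below is an assumption for illustration only.
import json

def linear_track(start, end, num_frames):
    """Interpolate between two anchor points, one coordinate per frame."""
    (x0, y0), (x1, y1) = start, end
    return [{"x": round(x0 + (x1 - x0) * t / (num_frames - 1)),
             "y": round(y0 + (y1 - y0) * t / (num_frames - 1))}
            for t in range(num_frames)]

FRAMES = 81  # a typical WAN clip length

tracks = [
    linear_track((120, 400), (520, 180), FRAMES),  # area that should move
    [{"x": 640, "y": 360}] * FRAMES,               # static anchor: stays put
]
print(json.dumps(tracks)[:100], "...")
```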

#comfyui #qwen #wan #ati #ai #video #aivideo #videoai #motion #animation
🎬 New experiments with WAN 2.2 Animate: from 3D model to final animation

In this new test with WAN 2.2 Animate, I integrated a 3D model in .fbx format (downloaded from characters3d.com) to generate a video with the animated skeleton. This was then used as a reference to create the final animation, combining it with the chosen character.

✅ Workflow

➡️ Generating the pose video from the 3D model.
➡️ Inputting the video + character image into the WAN 2.2 Animate model.
➡️ Interpolation with RIFE to improve fluidity and speed control (see the sketch below).

The result? A more consistent, fluid, and controllable animation, which opens up new possibilities for those working with AI and motion design.
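
For the interpolation step I use a RIFE node inside ComfyUI. As a rough standalone stand-in, here is a minimal sketch using ffmpeg's motion-compensated minterpolate filter (a different interpolator, but the same idea of synthesizing intermediate frames); file names and fps values are my assumptions.

```python
# Stand-in for the RIFE step: synthesize intermediate frames with ffmpeg's
# motion-compensated interpolation filter. Inside ComfyUI this is done with
# a RIFE node instead; file names and fps values here are assumptions.
import subprocess

def interpolate(src: str, dst: str, target_fps: int = 48) -> None:
    """Raise the frame rate of a clip by motion-compensated interpolation."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"minterpolate=fps={target_fps}:mi_mode=mci",
         dst],
        check=True)

# e.g. triple a 16 fps WAN output to 48 fps for smoother, editable motion
interpolate("wan22_animate_raw.mp4", "wan22_animate_48fps.mp4")
```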

💡 If you're exploring the use of AI for video animation, this approach might offer some interesting insights.

#AIart #AnimationWorkflow #WAN2_2 #ComfyUI #3Dmodeling #GenerativeAI #MotionDesign #StefanoFlore
#Qwen #WAN 2.2 Animate: Fluid Controlled #Animation

Over the past few weeks, I've been exploring different approaches to generating animated videos with AI models, starting with basic experiments on #ComfyUI with WAN2.2. In an initial workflow, I attempted to create a long video by splitting the project into three distinct sections, each with a start and end frame. Despite using various control techniques, the results still showed obvious limitations: imperfect character consistency and suboptimal video fluidity.

Here's the previous post: https://lnkd.in/dkNAj_Uf

For this new experiment, I took a more structured and technical approach. I used four pose maps to generate three separate video clips with WAN2.2, leveraging the first/last frame feature. I then merged the three clips into a single base video. At this point, I fed both the model image (previously generated with #Hidream) and the pose video to the WAN2.2 Animate model, resulting in the final animation. Finally, I applied a RIFE interpolation pass to triple the frames, improving fluidity and allowing for greater speed control during editing.
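
Merging the three clips is plain concatenation; a minimal sketch with ffmpeg's concat demuxer follows (file names are hypothetical, and stream copy works because all three clips come from the same WAN 2.2 workflow and share codec, resolution, and frame rate).

```python
# Merge the three first/last-frame clips into one base pose video using
# ffmpeg's concat demuxer. File names are hypothetical; stream copy works
# because the clips share codec, resolution, and frame rate.
import os
import subprocess
import tempfile

clips = ["pose_clip_1.mp4", "pose_clip_2.mp4", "pose_clip_3.mp4"]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.writelines(f"file '{os.path.abspath(c)}'\n" for c in clips)
    list_path = f.name

subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", list_path, "-c", "copy", "base_pose_video.mp4"],
               check=True)
os.remove(list_path)
```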

The orchestrated use of individual workflows (pose animation, character animation, and frame rate increase) combines static generation models (Hidream), video animation models (WAN2.2), and interpolation techniques (#RIFE), allowing for improved visual consistency and motion quality. This is another step towards total control over the animations.

The great thing? I used the WAN2.2 workflows provided "as standard" in the ComfyUI installation. Yes, I could create a single mega-workflow, but I much prefer working on the production phases separately and in a more controlled manner.
I tested the #OVI model on #ComfyUI, achieving astonishingly high-quality results, with audio-video generations that rival high-end closed-source models. #OVI stands out for its precise lip sync without explicit bounding boxes, its ability to handle multi-turn and multiple speakers, and its consistency between #music, #sound #effects, and #visual #actions. The open-source availability of weights and inference code, with support for even low-VRAM configurations, opens up new possibilities for advanced audiovisual production in the OSS community.

The creators of OVI are continuing development with the following goals:
- Fine-tuning the model with high-resolution data and using reinforcement learning (RL) to improve performance.
- Introducing new features, such as longer video generation and reference-voice conditioning.
- Building a distilled model to accelerate inference.
- Releasing training scripts to facilitate training.

These advances confirm the team's commitment to making OVI an increasingly powerful and versatile tool for open-source video and audio generation.

For the sample clips I personally created for testing, I used still images generated with various AI models: #Flux, #Hidream, #WAN, #Qwen, and #Seedance.

Resources:

https://aaxwaz.github.io/Ovi/
https://huggingface.co/chetwinlow1/Ovi
https://github.com/snicolast/ComfyUI-Ovi
#bytedance #seedance #seedance4 #ai #aicommunity #aitools
#Flux #ACE + #EdgeTTS + #WAN #Infinity #Talk
-
We're reaching pretty good levels with 100% on-premises tools in ComfyUI.

SaaS tools (Veo, Kling, Midjourney, etc.) are still a bit ahead, but it's nice to know we don't have to rely on them.
#ai #video #aivideo #wan #wanvideo #comfyui #portrait #artificialintelligence #aiartcommunity
#ai #comfyui #aiart #aiartwork #portrait #video #aivideo #wanvideo #wan
#ai #aiart #digitalart #aiartcommunity #generativeart #aiartist #aiartwork #artificialintelligence #stablediffusion #aigeneratedart #conceptart #newmediaart #glitchart #surrealart #aiartdaily #comfyui #portrait #hairstyle #hairstyleideas