Note: The project page for this work contains 33 autoplaying high-res videos totaling half a gigabyte, which destabilized my system on load. For this reason, I won't link to it directly. Readers can find the URL in the paper's abstract or PDF if they choose.
One of the main goals in current video synthesis research is generating a complete AI-driven video performance from a single image. This week a new paper from Bytedance Intelligent Creation outlined what may be the most comprehensive system of this kind to date, capable of producing full- and semi-body animations that combine expressive facial detail with accurate large-scale motion, while also achieving improved identity consistency – an area where even leading commercial systems often fall short.
In the example below, we see a performance driven by an actor (top left) and derived from a single image (top right), which provides a remarkably versatile and dexterous rendering, with none of the usual issues around creating large movements or 'guessing' about occluded areas (i.e., parts of clothing and facial angles that must be inferred or invented because they are not visible in the sole source image):
AUDIO CONTENT. Click to play. A performance born from two sources, including lip-sync, which is usually the preserve of dedicated ancillary systems. This is a reduced version from the source site (see note at the beginning of the article – applies to all other embedded videos here).
Though we can see some residual challenges regarding persistence of identity as each clip proceeds, this is the first system I have seen that excels in generally (though not always) maintaining ID over a sustained period without the use of LoRAs:
AUDIO CONTENT. Click to play. Further examples from the DreamActor project.
The new system, titled DreamActor, uses a three-part hybrid control system that gives dedicated attention to facial expression, head rotation and core skeleton design, thus accommodating AI-driven performances where neither the facial nor the body aspect suffers at the expense of the other – a rare, arguably unprecedented capability among comparable systems.
Below we see one of these facets, head rotation, in action. The colored ball in the corner of each thumbnail towards the right indicates a kind of virtual gimbal that defines head orientation independently of facial movement and expression, which is here driven by an actor (lower left).
Click to play. The multicolored ball visualized here represents the axis of rotation of the avatar's head, while the expression is powered by a separate module and informed by an actor's performance (seen here lower left).
One of the project's most interesting functionalities, which is not even covered properly in the paper's tests, is its capacity to derive lip-sync movement directly from audio – a capability which works unusually well even without a driving actor-video.
The researchers have taken on the best incumbents in this pursuit, including the much-lauded Runway Act-One and LivePortrait, and report that DreamActor was able to achieve better quantitative results.
Since researchers can set their own criteria, quantitative results are not necessarily an empirical standard; but the accompanying qualitative tests seem to support the authors' conclusions.
Unfortunately this system is not intended for public release, and the only value the community can derive from the work lies in potentially reproducing the methodologies outlined in the paper (as was done to notable effect for the similarly closed-source Google Dreambooth in 2022).
The paper states*:
'Human image animation has possible social risks, like being misused to make fake videos. The proposed technology could be used to create fake videos of people, but existing detection tools [Demamba, Dormant] can spot these fakes.
'To reduce these risks, clear ethical rules and responsible usage guidelines are necessary. We will strictly restrict access to our core models and codes to prevent misuse.'
Naturally, ethical considerations of this kind are convenient from a commercial standpoint, since they provide a rationale for API-only access to the model, which can then be monetized. ByteDance has already done this once in 2025, by making the much-lauded OmniHuman available for paid credits at the Dreamina website. Therefore, since DreamActor is presumably an even stronger product, this seems the likely outcome. What remains to be seen is the extent to which its principles, insofar as they are explained in the paper, can aid the open source community.
The new paper is titled DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance, and comes from six Bytedance researchers.
Method
The DreamActor system proposed in the paper aims to generate human animation from a reference image and a driving video, using a Diffusion Transformer (DiT) framework adapted for latent space (apparently some flavor of Stable Diffusion, though the paper cites only the 2022 landmark release publication).
Rather than relying on external modules to handle reference conditioning, the authors merge appearance and motion features directly inside the DiT backbone, allowing interaction across space and time through attention:

Schema for the new system: DreamActor encodes pose, facial motion, and appearance into separate latents, combining them with noised video latents produced by a 3D VAE. These signals are fused within a Diffusion Transformer using self- and cross-attention, with shared weights across branches. The model is supervised by comparing denoised outputs to clean video latents. Source: https://arxiv.org/pdf/2504.01724
To do this, the model uses a pretrained 3D variational autoencoder to encode both the input video and the reference image. These latents are patchified, concatenated, and fed into the DiT, which processes them jointly.
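Since no code has been released, the general idea can only be sketched. Below is a minimal illustration of in-backbone fusion, assuming (my assumption, not the paper's specification) that the reference latent is simply concatenated with the video latents along the temporal axis before patchification, so that appearance and motion tokens attend to each other in shared transformer layers:

```python
import torch
import torch.nn as nn

# Minimal sketch (not the authors' code): fusing reference-image and video
# latents inside a single transformer, rather than via a separate ReferenceNet.
# Channel counts, patch size and depth here are illustrative assumptions.

class JointLatentFusion(nn.Module):
    def __init__(self, latent_ch=16, patch=2, dim=512, heads=8, depth=2):
        super().__init__()
        self.patchify = nn.Conv3d(latent_ch, dim,
                                  kernel_size=(1, patch, patch),
                                  stride=(1, patch, patch))       # patchify VAE latents
        block = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, depth)          # shared self-attention

    def forward(self, video_latent, ref_latent):
        # video_latent: (B, C, T, H, W) from the 3D VAE; ref_latent: (B, C, 1, H, W)
        tokens = torch.cat([ref_latent, video_latent], dim=2)      # concat along time
        tokens = self.patchify(tokens).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.blocks(tokens)   # appearance and motion tokens attend jointly

fused = JointLatentFusion()(torch.randn(1, 16, 8, 32, 32),
                            torch.randn(1, 16, 1, 32, 32))
print(fused.shape)  # torch.Size([1, 2304, 512])
```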
This architecture departs from the common practice of attaching a secondary network for reference injection, which was the approach taken by the influential Animate Anyone and Animate Anyone 2 projects.
Instead, DreamActor builds the fusion into the main model itself, simplifying the design while improving the flow of information between appearance and motion cues. The model is then trained using flow matching rather than the standard diffusion objective (flow matching trains diffusion models by directly predicting velocity fields between data and noise, skipping score estimation).
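For readers unfamiliar with the objective, here is a minimal sketch of a generic rectified-flow / flow-matching training step; this is the standard recipe rather than DreamActor's exact formulation:

```python
import torch

# Minimal sketch of a flow-matching training step (the general recipe, not
# DreamActor's exact loss): the network predicts the constant velocity
# (x1 - x0) along a straight path between noise x0 and clean latents x1.

def flow_matching_loss(model, clean_latents, cond):
    x1 = clean_latents                       # clean video latents from the 3D VAE
    x0 = torch.randn_like(x1)                # Gaussian noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1               # linear interpolation at time t
    v_target = x1 - x0                       # ground-truth velocity field
    v_pred = model(xt, t, cond)              # model conditioned on pose/face/appearance
    return torch.mean((v_pred - v_target) ** 2)

# Toy usage with a stand-in model that ignores its conditioning:
toy_model = lambda xt, t, cond: torch.zeros_like(xt)
loss = flow_matching_loss(toy_model, torch.randn(2, 16, 4, 8, 8), cond=None)
print(loss.item())
```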
Hybrid Motion Guidance
The Hybrid Motion Guidance method that informs the neural renderings combines pose tokens derived from 3D body skeletons and head spheres; implicit facial representations extracted by a pretrained face encoder; and reference appearance tokens sampled from the source image.
These components are integrated within the Diffusion Transformer using distinct attention mechanisms, allowing the system to coordinate global motion, facial expression, and visual identity throughout the generation process.
For the first of these, rather than relying on facial landmarks, DreamActor uses implicit facial representations to guide expression generation, apparently enabling finer control over facial dynamics while disentangling identity and head pose from expression.
To create these representations, the pipeline first detects and crops the face region in each frame of the driving video, resizing it to 224×224. The cropped faces are processed by a face motion encoder pretrained on the PD-FGC dataset, the output of which is then passed through an MLP layer.

PD-FGC, employed in DreamActor, generates a talking head from a reference image with disentangled control of lip sync (from audio), head pose, eye movement, and expression (from separate videos), allowing precise, independent manipulation of each. Source: https://arxiv.org/pdf/2211.14506
The result is a sequence of face motion tokens, which are injected into the Diffusion Transformer through a cross-attention layer.
The same framework also supports an audio-driven variant, wherein a separate encoder is trained that maps speech input directly to face motion tokens. This makes it possible to generate synchronized facial animation – including lip movements – without a driving video.
AUDIO CONTENT. Click to play. Lip-sync derived purely from audio, without a driving actor reference. The sole character input is the static image seen upper-right.
Secondly, to control head pose independently of facial expression, the system introduces a 3D head sphere representation (see video embedded earlier in this article), which decouples facial dynamics from global head movement, improving precision and flexibility during animation.
Head spheres are generated by extracting 3D facial parameters – such as rotation and camera pose – from the driving video using the FaceVerse tracking method.

Schema for the FaceVerse project. Source: https://www.liuyebin.com/faceverse/faceverse.html
These parameters are used to render a color sphere projected onto the 2D image plane, spatially aligned with the driving head. The sphere's size matches the reference head, and its color reflects the head's orientation. This abstraction reduces the complexity of learning 3D head motion, helping to preserve stylized or exaggerated head shapes in characters drawn from animation.

Visualization of the control sphere influencing head orientation.
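To make the head-sphere idea concrete, here is a small sketch of one possible rendering of such a control signal. The color mapping and disc rendering are my own illustration of the concept, not the paper's exact scheme:

```python
import numpy as np

# Illustrative sketch: draw a 2D disc at the tracked head position, sized to
# the reference head, with its color derived from yaw/pitch/roll so that
# orientation is carried by color alone.

def render_head_sphere(h, w, center, radius, yaw, pitch, roll):
    """Return an (h, w, 3) float image containing a single colored disc."""
    img = np.zeros((h, w, 3), dtype=np.float32)
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    # Map each Euler angle from [-pi, pi] into [0, 1] and use it as a channel.
    color = (np.array([yaw, pitch, roll]) / np.pi + 1.0) / 2.0
    img[mask] = color
    return img

sphere_map = render_head_sphere(64, 64, center=(32, 24), radius=10,
                                yaw=0.3, pitch=-0.1, roll=0.0)
print(sphere_map.shape, sphere_map[24, 32])  # disc color encodes orientation
```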
Lastly, to guide full-body motion, the system uses 3D body skeletons with adaptive bone length normalization. Body and hand parameters are estimated using 4DHumans and the hand-focused HaMeR, both of which operate on the SMPL-X body model.

SMPL-X applies a parametric mesh over the entire human body in an image, aligning with estimated pose and expression to enable pose-aware manipulation using the mesh as a volumetric guide. Source: https://arxiv.org/pdf/1904.05866
From these outputs, key joints are selected, projected into 2D, and connected into line-based skeleton maps. Unlike methods such as Champ, which render full-body meshes, this approach avoids imposing predefined shape priors; by relying solely on skeletal structure, the model is encouraged to infer body shape and appearance directly from the reference images, reducing bias towards fixed body types and improving generalization across a range of poses and builds.
During training, the 3D body skeletons are concatenated with head spheres and passed through a pose encoder, which outputs features that are then combined with noised video latents to produce the noise tokens used by the Diffusion Transformer.
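The shape of that conditioning path might look something like the sketch below; the convolutional pose encoder and the additive combination with the noised latents are assumptions about the structure, not the authors' code:

```python
import torch
import torch.nn as nn

# Illustrative sketch: rendered skeleton maps and head-sphere maps are stacked
# channel-wise, run through a small convolutional pose encoder, and added to
# the noised video latents before they enter the Diffusion Transformer.

class PoseEncoder(nn.Module):
    def __init__(self, in_ch=6, latent_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.SiLU(),
            nn.Conv3d(64, latent_ch, kernel_size=3, stride=(1, 2, 2), padding=1),
        )

    def forward(self, skeleton_maps, head_sphere_maps, noised_latents):
        # Both control maps: (B, 3, T, H, W); noised_latents: (B, C, T, H/4, W/4)
        control = torch.cat([skeleton_maps, head_sphere_maps], dim=1)
        pose_feat = self.net(control)           # downsample to the latent grid
        return noised_latents + pose_feat       # noise tokens carrying pose cues

tokens = PoseEncoder()(torch.randn(1, 3, 8, 64, 64),
                       torch.randn(1, 3, 8, 64, 64),
                       torch.randn(1, 16, 8, 16, 16))
print(tokens.shape)  # torch.Size([1, 16, 8, 16, 16])
```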
At inference time, the system accounts for skeletal differences between subjects by normalizing bone lengths. The SeedEdit pretrained image editing model transforms both reference and driving images into a standard canonical configuration. RTMPose is then used to extract skeletal proportions, which are used to adjust the driving skeleton to match the anatomy of the reference subject.

Overview of the inference pipeline. Pseudo-references may be generated to enrich appearance cues, while hybrid control signals – implicit facial motion and explicit pose from head spheres and body skeletons – are extracted from the driving video. These are then fed into a DiT model to produce animated output, with facial motion decoupled from body pose, allowing for the use of audio as a driver.
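The bone-length normalization step mentioned above can be illustrated with a small sketch. The toy skeleton topology and the parent-to-child rescaling walk here are my own illustration of the general idea:

```python
import numpy as np

# Illustrative sketch of bone-length retargeting: each driving bone is rescaled
# so its length matches the corresponding bone measured on the reference
# subject, walking a simple joint chain from the root outwards.

BONES = [(0, 1), (1, 2), (2, 3)]   # toy parent->child chain, e.g. hip->knee->ankle

def bone_lengths(joints):
    return {b: np.linalg.norm(joints[b[1]] - joints[b[0]]) for b in BONES}

def retarget(driving_joints, reference_joints):
    """Rescale the driving skeleton so bone lengths match the reference anatomy."""
    ref_len = bone_lengths(reference_joints)
    out = driving_joints.copy()
    for parent, child in BONES:                  # assumes parents precede children
        direction = driving_joints[child] - driving_joints[parent]
        direction /= (np.linalg.norm(direction) + 1e-8)
        offset = out[parent] + direction * ref_len[(parent, child)] - out[child]
        out[child:] += offset                    # shift the child and its descendants
    return out

driving = np.array([[0, 0], [0, 1.0], [0, 2.0], [0, 3.0]])
reference = np.array([[0, 0], [0, 0.8], [0, 1.6], [0, 2.4]])
print(retarget(driving, reference))   # driving pose with reference bone lengths
```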
Appearance Guidance
To enhance appearance fidelity, particularly in occluded or rarely seen regions, the system supplements the primary reference image with pseudo-references sampled from the input video.
Click to play. The system anticipates the need to accurately and consistently render occluded regions. This is about as close as I have seen, in a project of this kind, to a CGI-style bitmap-texture approach.
These additional frames are selected for pose diversity using RTMPose, and filtered using CLIP-based similarity to ensure they remain consistent with the subject's identity.
All reference frames (primary and pseudo) are encoded by the same visual encoder and fused through a self-attention mechanism, allowing the model to access complementary appearance cues. This setup improves coverage of details such as profile views or limb textures. Pseudo-references are always used during training and optionally during inference.
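The selection logic can be sketched as follows. The `pose_embed` and `clip_embed` helpers, and the thresholds, are hypothetical placeholders rather than real APIs or values from the paper:

```python
import numpy as np

# Illustrative sketch of pseudo-reference selection: keep frames whose pose
# differs most from what is already covered, but only if their CLIP-style
# embedding stays close to the primary reference, so identity is preserved.

def select_pseudo_references(frames, primary, pose_embed, clip_embed,
                             k=3, min_identity_sim=0.85):
    ref_id = clip_embed(primary)
    selected, covered_poses = [], [pose_embed(primary)]
    for frame in frames:
        p = pose_embed(frame)
        # Pose diversity: distance to the nearest already-covered pose.
        diversity = min(np.linalg.norm(p - q) for q in covered_poses)
        # Identity consistency: cosine similarity to the primary reference.
        e = clip_embed(frame)
        identity = float(e @ ref_id / (np.linalg.norm(e) * np.linalg.norm(ref_id)))
        if diversity > 0.5 and identity > min_identity_sim:
            selected.append(frame)
            covered_poses.append(p)
        if len(selected) == k:
            break
    return selected

# Toy usage with random stand-ins for frames and embedding functions:
rng = np.random.default_rng(0)
frames = [rng.normal(size=8) for _ in range(20)]
pseudo = select_pseudo_references(frames, rng.normal(size=8),
                                  pose_embed=lambda f: f[:4],
                                  clip_embed=lambda f: np.ones(4) + 0.01 * f[:4])
print(len(pseudo))
```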
Training
DreamActor was trained in three stages to progressively introduce complexity and improve stability.
In the first stage, only 3D body skeletons and 3D head spheres were used as control signals, excluding facial representations. This allowed the base video generation model, initialized from MMDiT, to adapt to human animation without being overwhelmed by fine-grained controls.
In the second stage, implicit facial representations were added, but all other parameters were frozen. Only the face motion encoder and face attention layers were trained at this point, enabling the model to learn expressive detail in isolation.
In the final stage, all parameters were unfrozen for joint optimization across appearance, pose, and facial dynamics.
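The freezing schedule amounts to toggling which parameter groups receive gradients, as in the sketch below; the submodule names `face_encoder` and `face_attn` are assumed placeholders for the corresponding DreamActor components:

```python
import torch.nn as nn

# Illustrative sketch of the three-stage schedule: only the freezing logic is
# shown, with a stand-in model exposing similarly named submodules.

def configure_stage(model: nn.Module, stage: int):
    if stage == 1:
        # Stage 1: train everything except the face branch (coarse control only).
        for name, p in model.named_parameters():
            p.requires_grad = not name.startswith(("face_encoder", "face_attn"))
    elif stage == 2:
        # Stage 2: freeze the backbone; train only face encoder + face attention.
        for name, p in model.named_parameters():
            p.requires_grad = name.startswith(("face_encoder", "face_attn"))
    else:
        # Stage 3: unfreeze all parameters for joint optimization.
        for p in model.parameters():
            p.requires_grad = True

model = nn.ModuleDict({"backbone": nn.Linear(4, 4),
                       "face_encoder": nn.Linear(4, 4),
                       "face_attn": nn.Linear(4, 4)})
configure_stage(model, stage=2)
print([n for n, p in model.named_parameters() if p.requires_grad])
```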
Data and Tests
For the testing phase, the model was initialized from a pretrained image-to-video DiT checkpoint† and trained in three stages: 20,000 steps for each of the first two stages and 30,000 steps for the third.
To improve generalization across different durations and resolutions, video clips were randomly sampled with lengths between 25 and 121 frames. These were then resized to 960x640px, while preserving aspect ratio.
Training was carried out on eight (China-focused) NVIDIA H20 GPUs, each with 96GB of VRAM, using the AdamW optimizer with a (tolerably high) learning rate of 5e−6.
At inference, each video segment contained 73 frames. To maintain consistency across segments, the final latent from one segment was reused as the initial latent for the next, which contextualizes the task as sequential image-to-video generation.
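That chaining strategy can be sketched as follows; `generate_segment` is a hypothetical stand-in for one denoising run of the model over a 73-frame window:

```python
import torch

# Illustrative sketch of segment-chained generation: the final latent of each
# segment seeds the next, so long clips are produced as a sequence of
# image-to-video generations.

def generate_long_video(generate_segment, num_segments, frames_per_segment=73):
    segments, init_latent = [], None
    for _ in range(num_segments):
        latents = generate_segment(init_latent, frames_per_segment)
        segments.append(latents)
        init_latent = latents[:, -1:]         # final latent seeds the next segment
    return torch.cat(segments, dim=1)         # concatenate along the time axis

# Toy usage: a stand-in generator that just perturbs the carried-over latent.
def fake_segment(init, n):
    base = torch.zeros(16, n, 80, 120) if init is None else init.repeat(1, n, 1, 1)
    return base + 0.01 * torch.randn_like(base)

video_latents = generate_long_video(fake_segment, num_segments=3)
print(video_latents.shape)  # torch.Size([16, 219, 80, 120])
```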
Classifier-free guidance was applied with a weight of 2.5 for both reference images and motion control signals.
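In practice this is the standard classifier-free guidance recipe applied at sampling time; the sketch below uses the stated weight of 2.5, though the single shared weight over both reference and motion conditions is my simplification:

```python
import torch

# Standard classifier-free guidance at sampling time: mix the unconditional
# and conditional predictions, pushing the output towards the conditions.

def guided_velocity(model, xt, t, cond, guidance_weight=2.5):
    v_uncond = model(xt, t, None)             # conditions dropped
    v_cond = model(xt, t, cond)               # reference image + motion signals
    return v_uncond + guidance_weight * (v_cond - v_uncond)

# Toy usage with a stand-in model:
toy = lambda xt, t, cond: xt * (0.5 if cond is None else 1.0)
v = guided_velocity(toy, torch.ones(1, 16, 4, 8, 8), t=0.5, cond="cond")
print(v.mean().item())  # 0.5 + 2.5 * (1.0 - 0.5) = 1.75
```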
The authors curated a training dataset (no sources are stated in the paper) comprising 500 hours of video drawn from diverse domains, featuring instances of (among others) dance, sports, film, and public speaking. The dataset was designed to capture a broad spectrum of human motion and expression, with an even distribution between full-body and half-body shots.
To enhance facial synthesis quality, Nersemble was incorporated into the data preparation process.

Examples from the Nersemble dataset, used to augment the data for DreamActor. Source: https://www.youtube.com/watch?v=a-OAWqBzldU
For evaluation, the researchers also used their dataset as a benchmark to assess generalization across diverse scenarios.
The model's performance was measured using standard metrics from prior work: Fréchet Inception Distance (FID); Structural Similarity Index (SSIM); Learned Perceptual Image Patch Similarity (LPIPS); and Peak Signal-to-Noise Ratio (PSNR) for frame-level quality. Fréchet Video Distance (FVD) was used for assessing temporal coherence and overall video fidelity.
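Of these, PSNR and SSIM can be computed directly from the frames, as in the sketch below; FID, LPIPS and FVD additionally require pretrained feature extractors (Inception, a perceptual network, and a video backbone) and are omitted here:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Minimal sketch of the frame-level metrics; frames are assumed to be uint8
# RGB arrays of the same size.

def frame_metrics(generated_frames, ground_truth_frames):
    psnr_vals, ssim_vals = [], []
    for gen, gt in zip(generated_frames, ground_truth_frames):
        psnr_vals.append(peak_signal_noise_ratio(gt, gen, data_range=255))
        ssim_vals.append(structural_similarity(gt, gen, channel_axis=-1,
                                               data_range=255))
    return float(np.mean(psnr_vals)), float(np.mean(ssim_vals))

# Toy usage on random frames (real evaluation would use decoded video frames):
rng = np.random.default_rng(0)
gt = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(4)]
gen = [np.clip(f + rng.integers(-5, 6, f.shape), 0, 255).astype(np.uint8) for f in gt]
print(frame_metrics(gen, gt))
```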
The authors conducted experiments on both body animation and portrait animation tasks, all using a single (target) reference image.
For body animation, DreamActor-M1 was compared against Animate Anyone; Champ; MimicMotion; and DisPose.

Quantitative comparisons against rival frameworks.
Though the PDF provides a static image as a visual comparison, one of the videos from the project site may highlight the differences more clearly:
AUDIO CONTENT. Click to play. A visual comparison across the challenger frameworks. The driving video is seen top-left, and the authors' conclusion that DreamActor produces the best results seems reasonable.
For the portrait animation tests, the model was evaluated against LivePortrait; X-Portrait; SkyReels-A1; and Act-One.

Quantitative comparisons for portrait animation.
The authors note that their method wins out in the quantitative tests, and contend that it is also superior qualitatively.
AUDIO CONTENT. Click to play. Examples of portrait animation comparisons.
Arguably the third and final of the clips shown in the video above exhibits a less convincing lip-sync compared to a couple of the rival frameworks, though the general quality is remarkably high.
Conclusion
In anticipating the need for textures that are implied but not actually present in the sole target image fueling these recreations, ByteDance has addressed one of the biggest challenges facing diffusion-based video generation – consistent, persistent textures. The next logical step after perfecting such an approach would be to somehow create a reference atlas from the initial generated clip that could be applied to subsequent, different generations, to maintain appearance without LoRAs.
Though such an approach would effectively still be an external reference, this is no different from texture-mapping in traditional CGI methods, and the quality of realism and plausibility is far higher than those older methods can obtain.
That said, the most impressive aspect of DreamActor is the combined three-part guidance system, which bridges the traditional divide between face-focused and body-focused human synthesis in an ingenious way.
It only remains to be seen whether some of these core principles can be leveraged in more accessible offerings; as it stands, DreamActor seems destined to become yet another synthesis-as-a-service offering, severely bound by restrictions on usage, and by the impracticality of experimenting extensively with a commercial architecture.
* My substitution of hyperlinks for the authors' inline citations
† As mentioned earlier, it is not clear which flavor of Stable Diffusion was used in this project.
First published Friday, April 4, 2025