Editing portrait videos is a challenging task that requires flexible yet precise control over a wide range of modifications, such as appearance changes, expression edits, or the addition of objects. The key difficulty lies in preserving the subject's original temporal behavior, demanding that every edited frame remains precisely synchronized with the corresponding source frame. We present Sync-LoRA, a method for editing portrait videos that achieves high-quality visual modifications while maintaining frame-accurate synchronization and identity consistency. Our approach builds on an image-to-video diffusion model: the edit is specified by modifying the first frame and is then propagated to the entire sequence. To enable accurate synchronization, we train an in-context LoRA using paired videos that depict identical motion trajectories but differ in appearance. These pairs are automatically generated and curated through a synchronization-based filtering process that selects only the most temporally aligned examples for training. This training setup teaches the model to combine motion cues from the source video with the visual changes introduced in the edited first frame. Trained on a compact, highly curated set of synchronized human portraits, Sync-LoRA generalizes to unseen identities and diverse edits (e.g., modifying appearance, adding objects, or changing backgrounds), and handles variations in pose and expression robustly. Our results demonstrate high visual fidelity and strong temporal coherence, striking an effective balance between edit quality and precise motion preservation.
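To make the synchronization-based filtering step concrete, below is a minimal sketch of one plausible way to score and filter candidate video pairs. This is an illustration under our own assumptions, not the paper's implementation: it presumes per-frame facial landmarks have already been extracted for each video as (T, K, 2) arrays, and the names `sync_score`, `filter_pairs`, and the `threshold` value are hypothetical.

```python
import numpy as np

def sync_score(landmarks_a: np.ndarray, landmarks_b: np.ndarray) -> float:
    """Mean per-frame landmark distance between two equal-length videos.

    Both inputs are (T, K, 2) arrays of per-frame facial landmarks
    (T frames, K landmark points). A lower score means the two motion
    trajectories are more tightly synchronized. This metric is an
    assumption, not necessarily the one used in the paper.
    """
    assert landmarks_a.shape == landmarks_b.shape

    def normalize(lm: np.ndarray) -> np.ndarray:
        # Remove per-frame translation and scale so that only the
        # motion pattern (pose/expression trajectory) is compared.
        centered = lm - lm.mean(axis=1, keepdims=True)
        scale = np.linalg.norm(centered, axis=(1, 2), keepdims=True)
        return centered / (scale + 1e-8)

    a, b = normalize(landmarks_a), normalize(landmarks_b)
    # Average distance over all frames and landmarks.
    return float(np.linalg.norm(a - b, axis=2).mean())

def filter_pairs(pairs, threshold=0.05):
    """Keep only the most temporally aligned pairs for training.

    `pairs` is an iterable of (landmarks_a, landmarks_b) tuples;
    `threshold` is an assumed cutoff that would be tuned in practice.
    """
    return [(a, b) for a, b in pairs if sync_score(a, b) <= threshold]
```

Under this sketch, pairs that survive the filter would serve as the in-context training examples for the LoRA.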
[Figure: Top row shows source frames; bottom row shows Sync-LoRA outputs for edits such as "+ wildfire", "+ yellow shirt", "+ stylish hair and mustache", "+ blonde hair", "+ joker makeup", "+ leopard coat", "+ lipstick", "+ red hat", "- hat", and "+ spotlight".]
[Figure: Sync-LoRA enables synchronized facial expression modifications while preserving identity. Columns: Source, Happy, Angry, Sad; edits are driven by prompts such as "+ happy" and "+ angry".]
"+ beard"
"- headset"
"+ scarf"
"Batman to Joker"