LoRA+ID: Enhancing Identity Preservation in Generative Models Through Face-Conditioned LoRA Training
Abstract
Low-Rank Adaptation (LoRA) has become the standard approach for fine-tuning large-scale generative models such as Stable Diffusion XL (SDXL), offering efficiency in compute and memory. However, traditional LoRA methods rely solely on text prompts, limiting their ability to preserve detailed identity features. In this work, we propose a novel training framework, LoRA+ID, that integrates face embeddings, derived from face recognition networks, into the LoRA training loop. Unlike methods such as FaceID or InstantID, which introduce image conditioning only at inference time, our approach conditions LoRA directly on facial features during training. We evaluate our method across four setups involving 8 identities and 18 generations per identity. Experimental results show that LoRA+ID, especially when combined with FaceID at inference, significantly improves identity preservation compared to both traditional LoRA and zero-shot FaceID generation. © 2025 IEEE.
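The core idea above, injecting a face embedding into the LoRA path during training rather than only at inference, can be illustrated with a minimal NumPy sketch. The shapes, the projection matrix `P`, and the way the identity embedding shifts the low-rank activation are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch of a face-conditioned LoRA forward pass.
# All dimensions and the conditioning scheme are illustrative
# assumptions, not the LoRA+ID implementation.
rng = np.random.default_rng(0)

d_in, d_out, rank, d_face = 16, 16, 4, 8

W = rng.normal(size=(d_out, d_in))          # frozen base weight (pretrained model)
A = rng.normal(size=(rank, d_in)) * 0.01    # LoRA down-projection (trainable)
B = np.zeros((d_out, rank))                 # LoRA up-projection (zero-initialized)
P = rng.normal(size=(rank, d_face)) * 0.01  # projects the face embedding into LoRA space

def lora_id_forward(x, face_emb, scale=1.0):
    """Base path plus a LoRA path whose low-rank activation is
    shifted by a projected face-recognition embedding."""
    h = A @ x + P @ face_emb        # low-rank activation, conditioned on identity
    return W @ x + scale * (B @ h)  # frozen path + identity-aware LoRA update

x = rng.normal(size=d_in)       # stand-in for a hidden activation
face = rng.normal(size=d_face)  # stand-in for a face-recognition embedding
y = lora_id_forward(x, face)
print(y.shape)  # (16,)
```

Because `B` starts at zero, the conditioned branch is initially inert and the frozen model's behavior is preserved at the start of training, mirroring standard LoRA initialization; gradients through `B`, `A`, and `P` would then learn the identity-dependent update.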
Keywords
Face Embeddings, Generative Models, Identity Preservation, LoRA, Stable Diffusion XL