Blind Face Restoration aims to recover high-fidelity, detail-rich facial images from inputs with unknown degradations, which poses significant challenges in preserving both identity and detail. Pre-trained diffusion models have increasingly been used as image priors to generate fine details. However, existing methods typically adopt fixed diffusion sampling timesteps and a global guidance scale, implicitly assuming uniform degradation. This limitation, combined with potentially imperfect degradation-kernel estimation, frequently leads to under- or over-diffusion, resulting in an imbalance between fidelity and quality. We
propose DynFaceRestore, a novel blind face restoration approach that learns to map any blindly degraded input to a set of Gaussian-blurred images. Leveraging these blurred images and their corresponding Gaussian kernels, we dynamically select the starting timestep for each blurred image and apply closed-form guidance during the diffusion sampling process to maintain fidelity. Additionally, we introduce a dynamic guidance scaling adjuster that modulates guidance strength across local regions, enhancing detail generation in complex areas while preserving structural fidelity along contours. This strategy effectively balances the trade-off between fidelity and quality. DynFaceRestore achieves state-of-the-art performance in both quantitative and qualitative evaluations, demonstrating robustness and effectiveness in blind face restoration.
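The two dynamic components above can be sketched in a few lines; note that in the actual method both are learned, so the blur-to-timestep mapping in `select_start_timestep` and the gradient-based scale map in `guidance_scale_map` below are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def select_start_timestep(kernel_sigma, alphas_cumprod, c=0.5):
    """Pick a diffusion starting timestep from an estimated blur level.

    Assumed heuristic: heavier Gaussian blur implies more lost detail,
    so sampling should start from a noisier (later) timestep. We map
    the blur sigma to a target cumulative signal level and choose the
    timestep whose alpha-bar is closest to it.
    """
    target = 1.0 / (1.0 + c * kernel_sigma ** 2)  # assumed mapping
    return int(np.argmin(np.abs(alphas_cumprod - target)))

def guidance_scale_map(image, base_scale=1.0, boost=2.0, eps=1e-6):
    """Hand-crafted proxy for the learned guidance scaling adjuster.

    Regions with strong gradients (textured/complex areas) receive
    weaker guidance so the diffusion prior is free to hallucinate
    detail; flatter structural regions keep stronger guidance to
    stay faithful to the degraded observation.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    grad = np.sqrt(gx ** 2 + gy ** 2)
    detail = grad / (grad.max() + eps)   # 0 = flat, 1 = most detailed
    return base_scale + boost * (1.0 - detail)

# Example: a heavily blurred input starts from a noisier timestep.
alphas_cumprod = np.linspace(0.999, 0.001, 1000)  # toy noise schedule
t_mild = select_start_timestep(1.0, alphas_cumprod)
t_heavy = select_start_timestep(4.0, alphas_cumprod)
```

During sampling, the per-pixel map would multiply the closed-form guidance term at each step, so guidance strength varies across the face rather than being a single global scalar.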