DynFaceRestore: Balancing Fidelity and Quality in Diffusion-Guided Blind Face Restoration with Dynamic Blur-Level Mapping and Guidance

(ICCV 2025 – Highlight)

Huu-Phu Do1, Yu-Wei Chen1, Yi-Cheng Liao1, Chi-Wei Hsiao2, Han-Yang Wang2, Wei-Chen Chiu1, Ching-Chun Huang1
1National Yang Ming Chiao Tung University, Taiwan, 2MediaTek Inc., Taiwan

Abstract

Blind Face Restoration aims to recover high-fidelity, detail-rich facial images from unknown degraded inputs, presenting significant challenges in preserving both identity and detail. Pre-trained diffusion models have been increasingly used as image priors to generate fine details. However, existing methods often use fixed diffusion sampling timesteps and a global guidance scale, assuming uniform degradation. This limitation, together with potentially imperfect degradation-kernel estimation, frequently leads to under- or over-diffusion, resulting in an imbalance between fidelity and quality. We propose DynFaceRestore, a novel blind face restoration approach that learns to map any blindly degraded input to Gaussian-blurred images. By leveraging these blurry images and their respective Gaussian kernels, we dynamically select the starting timestep for each blurry image and apply closed-form guidance during the diffusion sampling process to maintain fidelity. Additionally, we introduce a dynamic guidance scaling adjuster that modulates the guidance strength across local regions, enhancing detail generation in complex areas while preserving structural fidelity along contours. This strategy effectively balances the trade-off between fidelity and quality. DynFaceRestore achieves state-of-the-art performance in both quantitative and qualitative evaluations, demonstrating robustness and effectiveness in blind face restoration.
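
To make the "closed-form guidance" idea concrete: because the degraded input is mapped to a Gaussian-blurred image with a known kernel, the data-consistency step admits a closed form in the Fourier domain. Below is a minimal, DDNM-style sketch under our own simplifying assumptions (circular convolution, a regularized per-frequency pseudo-inverse); the function and argument names are ours, not the paper's.

import numpy as np

def closed_form_consistency(x0_hat, y_blur, kernel_fft, eps=1e-3):
    """Project the diffusion model's clean estimate x0_hat onto images
    consistent with the Gaussian-blurred observation y_blur.

    Assumes circular convolution, so the blur operator is diagonal in the
    Fourier domain and its pseudo-inverse is a per-frequency division.
    kernel_fft: FFT of the Gaussian kernel, zero-padded to the image size.
    """
    X = np.fft.fft2(x0_hat)
    Y = np.fft.fft2(y_blur)
    K_pinv = np.conj(kernel_fft) / (np.abs(kernel_fft) ** 2 + eps)
    # Take the observed (mostly low-frequency) content from y_blur and keep
    # the model-generated null-space detail from x0_hat.
    X_corrected = K_pinv * Y + (1.0 - K_pinv * kernel_fft) * X
    return np.real(np.fft.ifft2(X_corrected))

On this reading, the dynamic guidance scaling adjuster amounts to blending such a projection with the raw estimate per pixel, rather than applying it uniformly across the image.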

Performance

[Figure: fidelity (PSNR) vs. quality (FID) comparison on CelebA-Test]

Blind face restoration demands both high fidelity and rich detail. Compared to other SOTA methods that leverage GAN priors, codebook priors, or diffusion priors, our method (denoted by an asterisk) demonstrates superior image fidelity (PSNR↑) and quality (FID↓) on CelebA-Test.

Observation

[Figure: t-SNE of diffusion features under a fixed starting step t = 400]

Using DiffFace [41] as an example, let RM represent DiffFace’s restoration model and p_θ the diffusion model. In the t-SNE plot, green and red points denote the features of x_{t=400} sampled from p_θ(x_{t=400} | ŷ) and p_θ(x_{t=400} | x_0), respectively. Here, ŷ, the LQ image restored by RM, is used for diffusion guidance. DiffFace initiates the diffusion process at a fixed t = 400, resulting in under-diffusion (right) for severely degraded LQ images and just enough diffusion for mildly degraded LQ images (left). This underscores the importance of selecting an appropriate starting step for each input.
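
The fix this observation suggests can be sketched in a few lines. With a DDPM schedule, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε, so the starting step should grow with the input's degradation level. The matching rule below (the schedule's signal-relative noise level must cover an estimated residual error σ_res of the restored image) is our illustrative assumption, not the paper's exact lookup criterion.

import numpy as np

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative alpha products for a standard linear DDPM beta schedule."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def select_start_step(sigma_res, alpha_bar):
    """Smallest t whose signal-relative noise level sqrt(1 - a_bar)/sqrt(a_bar)
    covers sigma_res, the estimated residual error of the restored image.
    Mild degradation -> small t; severe degradation -> large t, instead of
    a fixed t = 400 for every input."""
    noise_level = np.sqrt(1.0 - alpha_bar) / np.sqrt(alpha_bar)
    return int(np.searchsorted(noise_level, sigma_res))

alpha_bar = make_alpha_bar()
print(select_start_step(0.05, alpha_bar))  # mildly degraded input -> early start
print(select_start_step(0.50, alpha_bar))  # severely degraded input -> later start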

Method

[Figure: DynFaceRestore framework overview]

Overview of our proposed DynFaceRestore framework, which consists of three key components: DBLM, DSST, and DGSA (defined in Sec. 4). The upper and lower sections illustrate two independent restoration scenarios with inputs degraded to varying levels. DBLM generates multiple Gaussian-blurred images based on the degradation level of the unknown degraded input. Then, given these blur levels, DSST identifies the optimal starting step for each Gaussian-blurred image via a predefined lookup table, providing sampling guidance to avoid under- or over-diffusion. Lastly, the trained network, DGSA, locally adjusts the guidance scale used in the pre-trained diffusion process, enabling DynFaceRestore to achieve an optimal balance between fidelity and quality.
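
Read as pseudocode, one restoration pass combines the three components roughly as follows. This is an illustrative sketch, not the official implementation: dblm, lookup_table, dgsa, diffusion, forward_diffuse, and sigma_to_fft are stand-ins for the paper's trained modules and pre-trained sampler, and the consistency step reuses the Fourier-domain projection sketched earlier.

def restore(lq_image, dblm, lookup_table, dgsa,
            diffusion, forward_diffuse, sigma_to_fft):
    """One DynFaceRestore-style pass (illustrative pseudocode).

    dblm:         maps the blind LQ input to Gaussian-blurred images and
                  their estimated blur sigmas (DBLM)
    lookup_table: blur sigma -> diffusion starting step (DSST)
    dgsa:         predicts a per-pixel guidance-scale map (DGSA)
    """
    blurry_images, sigmas = dblm(lq_image)
    x = None
    for y_blur, sigma in zip(blurry_images, sigmas):
        t_start = lookup_table[sigma]            # DSST: per-level starting step
        x = forward_diffuse(y_blur, t_start)     # noise y_blur up to x_{t_start}
        for t in reversed(range(t_start)):
            x0_hat = diffusion.predict_x0(x, t)  # pre-trained diffusion prior
            x0_cons = closed_form_consistency(x0_hat, y_blur, sigma_to_fft(sigma))
            s = dgsa(x0_hat, y_blur)             # spatial scale map in [0, 1]
            # Strong guidance along contours preserves structure (fidelity);
            # weak guidance in textured regions lets the prior add detail (quality).
            x0_guided = s * x0_cons + (1.0 - s) * x0_hat
            x = diffusion.step(x, x0_guided, t)  # DDPM/DDIM update toward x0_guided
    return x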


Visual Comparison

[Figure: qualitative comparison on CelebA-Test]

Qualitative results on CelebA-Test. Our method achieves high-fidelity reconstruction with visually accurate details, particularly in the mouth, hair, and skin texture. Please zoom in for the best view.

[Figure: qualitative comparison on real-world datasets]

Qualitative results from three real-world datasets demonstrate that our restoration method produces more natural features (e.g., eyes) and realistic details (e.g., hair) compared to other approaches, with improved fidelity. Please zoom in for the best view.

BibTeX

@misc{do2025dynfacerestorebalancingfidelityquality,
  title={DynFaceRestore: Balancing Fidelity and Quality in Diffusion-Guided Blind Face Restoration with Dynamic Blur-Level Mapping and Guidance},
  author={Huu-Phu Do and Yu-Wei Chen and Yi-Cheng Liao and Chi-Wei Hsiao and Han-Yang Wang and Wei-Chen Chiu and Ching-Chun Huang},
  year={2025},
  eprint={2507.13797},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.13797},
}