Face Deblurring using Dual Camera Fusion on Mobile Phones

Wei-Sheng Lai, YiChang Shih, Lun-Cheng Chu, Xiaotong Wu, Sung-Fang Tsai, Michael Krainin, Deqing Sun, Chia-Kai Liang

Motion blur of fast-moving subjects is a longstanding problem in photography and is very common on mobile phones due to their limited light collection efficiency, particularly in low-light conditions. While we have witnessed great progress in image deblurring in recent years, most methods require significant computational power and have limitations in processing high-resolution photos with severe local motion. To this end, we develop a novel face deblurring system based on dual camera fusion for mobile phones. The system detects subject motion to dynamically enable a reference camera, e.g., the ultrawide-angle camera commonly available on recent premium phones, and captures an auxiliary photo with a faster shutter setting. While the main shot is low-noise but blurry, the reference shot is sharp but noisy. We learn ML models to align and fuse these two shots and output a clear photo without motion blur. Our algorithm runs efficiently on Google Pixel 6, adding 463 ms of overhead per shot. Our experiments demonstrate the advantage and robustness of our system against alternative single-image, multi-frame, face-specific, and video deblurring algorithms as well as commercial products. To the best of our knowledge, our work is the first mobile solution for face motion deblurring that works reliably and robustly over thousands of images in diverse motion and lighting conditions.
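The capture-and-fuse control flow described above can be sketched as follows. This is a minimal illustration only: the threshold, function names, and the simple blend are hypothetical placeholders, and the identity alignment and weighted average stand in for the learned alignment and fusion models in the paper.

```python
import numpy as np

# Hypothetical threshold (in pixels of detected subject motion) at which the
# reference camera is dynamically enabled; the paper does not specify one.
MOTION_THRESHOLD_PX = 2.0

def should_enable_reference_camera(subject_motion_px: float) -> bool:
    """Enable the auxiliary (e.g., ultrawide) camera only when the subject
    moves fast enough that the main shot is likely to blur."""
    return subject_motion_px > MOTION_THRESHOLD_PX

def align(reference: np.ndarray, main: np.ndarray) -> np.ndarray:
    """Placeholder for the learned alignment model (here: identity)."""
    return reference

def fuse(main: np.ndarray, aligned_ref: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Placeholder for the learned fusion model: a simple blend combining
    the low-noise main shot with the sharp but noisy reference shot."""
    return w * main + (1.0 - w) * aligned_ref

def deblur_pipeline(main: np.ndarray, reference: np.ndarray,
                    subject_motion_px: float) -> np.ndarray:
    """End-to-end sketch: skip fusion for static subjects, otherwise
    align the reference shot to the main shot and fuse the pair."""
    if not should_enable_reference_camera(subject_motion_px):
        return main  # static subject: the main shot is already sharp
    return fuse(main, align(reference, main))
```

In the real system, both the alignment and fusion steps are learned models and the pipeline runs on-device within the reported 463 ms overhead; the sketch only conveys the decision structure.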

To appear in SIGGRAPH 2022
@article{lai2022face,
    author    = {Lai, Wei-Sheng and Shih, YiChang and Chu, Lun-Cheng and Wu, Xiaotong and Tsai, Sung-Fang and Krainin, Michael and Sun, Deqing and Liang, Chia-Kai},
    title     = {Face Deblurring using Dual Camera Fusion on Mobile Phones},
    journal   = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH)},
    year      = {2022}
}