EL2NM: Extremely Low-light
Noise Modeling Through Diffusion Iteration

CVPR Workshops 2024


Jiahao Qin1, Pinle Qin*, 1, Rui Chai1, Jia Qin1, Zanxia Jin1

1North University of China   

Abstract


Low-light Original Denoising (LOD) is a challenging task in Computational Photography (CP): the small number of photons available in low-light environments makes imaging very difficult. The hardest step in LOD is establishing a noise model for low light. Numerous approaches to noise modeling currently exist, but the noise they produce differs significantly from real noise because the real noise distribution is highly intricate. Toward this goal, this paper proposes an Extremely Low-light Noise Modeling (EL2NM) approach, which designs an original-image condition-constraint module and a multi-noise fusion module to generate complex noise consistent with real scenes. To capture the complex noise distribution of low-light environments, rather than just Gaussian noise, we integrate various noise types into cold diffusion to build a realistic noise-generation model for extremely low-light environments. At the same time, to avoid semantic misinterpretation of the image during the reverse diffusion process, we propose using a conditional image to guide the noise generation of the diffusion model. Extensive experiments demonstrate that our proposed method, EL2NM, performs excellently in extremely low-light environments and achieves state-of-the-art results on the Starlight Dataset.


Introduction


Inspired by recent advancements in conditional diffusion models and cold diffusion models, we introduce a new noise modeling approach, EL2NM, building upon the work of Kristina Monakhova. EL2NM employs a series of refinement steps to convert complex noise distributions into empirical data distributions, akin to Langevin dynamics. At its core lies a U-Net architecture, used to train the noise model and iteratively produce noise outputs. The U-Net, adapted from SR3, is modified in this work to accommodate conditional image generation. The sampling technique of the cold diffusion model is applied iteratively to generate the final noise image.
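The iterative sampling described above can be sketched as follows. This is a minimal NumPy illustration of cold-diffusion sampling (the "improved" update rule from the cold diffusion literature) with an SR3-style conditional restorer; the `degrade` operator, the `restorer(x, cond, t)` interface, and the linear noise blend are all assumptions for illustration, not the authors' exact network or degradation schedule.

```python
import numpy as np

def degrade(x0, t, T, noise):
    """Hypothetical degradation operator: blend the clean image toward a
    fixed multi-source noise sample (e.g. read/shot/banding mix) with
    severity t/T. At t=0 it returns x0 unchanged; at t=T, pure noise."""
    alpha = t / T
    return (1.0 - alpha) * x0 + alpha * noise

def cold_diffusion_sample(restorer, x_T, cond, T, noise):
    """Cold-diffusion sampling guided by a conditional image.

    `restorer(x_t, cond, t)` predicts the clean image from the current
    degraded state and the conditional guide image. The update
        x_{t-1} = x_t - D(x0_hat, t) + D(x0_hat, t-1)
    removes the current degradation estimate and re-applies a slightly
    weaker one, stepping toward the clean manifold."""
    x = x_T
    for t in range(T, 0, -1):
        x0_hat = restorer(x, cond, t)  # estimate the clean image
        x = x - degrade(x0_hat, t, T, noise) + degrade(x0_hat, t - 1, T, noise)
    return x

# Toy run: an oracle "restorer" that simply returns the conditional image.
rng = np.random.default_rng(0)
cond = rng.random((8, 8))
noise = rng.normal(0.0, 0.1, (8, 8))
x_T = degrade(cond, 10, 10, noise)  # fully degraded starting point
out = cold_diffusion_sample(lambda x, c, t: c, x_T, cond, T=10, noise=noise)
```

With a perfect restorer the update telescopes, so `out` recovers the conditional image exactly; with a learned restorer, each step only needs to be locally accurate, which is what makes the scheme tolerant of non-Gaussian degradations.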

Our main contributions are summarized as follows:

  1. We propose a novel approach to noise modeling in low-light environments. By combining the conditional diffusion model with the cold diffusion model, the method employs the conditional diffusion model to control the generation of conditional images and the cold diffusion model to introduce a wider range of noise distributions. Through iterative refinement, the approach produces high-quality noise images. Importantly, this study represents the first application of diffusion models to noise modeling for low-light images.

  2. We extend the conditional diffusion model to noise-image generation. The EL2NM method uses an iterative refinement technique to generate noise images under low-light conditions. It starts from an understanding of the physical processes involved, sidestepping the need for adversarial training, yet yields high-quality noise images for dim-light scenarios.

  3. Experimental results confirm that the proposed approach is highly competitive in both quantitative and qualitative evaluations: it generates noise images of superior quality and demonstrates remarkable performance on the Starlight Dataset.



Performance


Comparison

Quantitative Results

This study compared the proposed noise model with previous works, with each row of the table representing a different noise modeling method. The experiments demonstrate that the proposed method generates noise images similar to those of the Starlight method while yielding higher-quality noise images.


Public Datasets (Starlight)


This study compared commonly used noise modeling methods and shows the noise images generated by the proposed method and the baselines. We found that our method, EL2NM, is more stable than Starlight and can generate realistic noisy images in low-light environments. We compared against the GAN-based baseline Starlight model, the non-deep low-light noise model ELD, and two deep-learning-based noise models, CA-GAN and Noise Flow. Both Noise Flow and CA-GAN miss the significant banding noise present in real noisy clips, and ELD misses the quantization noise. The EL2NM method performs well on this dataset. Quantitative performance indicators are presented in the table, showing that the KL divergence computed for EL2NM is comparable to the baseline.
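A common way to score noise-model fidelity with KL divergence is to compare the histogram of generated noise values against the histogram of real sensor noise. The sketch below shows one such computation; the binning scheme and the synthetic distributions are assumptions for illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def kl_divergence(real_noise, gen_noise, bins=256, eps=1e-10):
    """KL(real || generated) between value histograms of two noise samples.
    Lower is better: the generated noise distribution matches the real one
    more closely. A shared range keeps the bins aligned."""
    lo = min(real_noise.min(), gen_noise.min())
    hi = max(real_noise.max(), gen_noise.max())
    p, _ = np.histogram(real_noise, bins=bins, range=(lo, hi))
    q, _ = np.histogram(gen_noise, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps  # normalize; eps avoids log(0) in empty bins
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

# Synthetic check: a well-matched noise model scores lower than a poor one.
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, 100_000)    # stand-in for real sensor noise
close = rng.normal(0.0, 1.0, 100_000)   # good model: same distribution
far = rng.normal(0.5, 2.0, 100_000)     # poor model: shifted and wider
kl_close = kl_divergence(real, close)
kl_far = kl_divergence(real, far)
```

Note that a simple value histogram cannot detect spatially structured artifacts such as banding noise, which is why visual comparison remains part of the evaluation.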



Ablation Study

Quantitative Results

We compared our full method against a variant using only the conditional diffusion model and a variant using only the cold diffusion model; the results show that the combined method establishes the noise model best.


Visual Results

We can see that when only the conditional diffusion model is used, the image fails to recover noise information and some image content is lost. When only the cold diffusion model is used, the recovered noise is incomplete and the visual quality is low. When the two are combined, the method generates noise images in weak-light environments more completely while keeping the generation stable.


Citation


@InProceedings{Qin_2024_CVPR,
  author    = {Qin, Jiahao and Qin, Pinle and Chai, Rui and Qin, Jia and Jin, Zanxia},
  title     = {EL2NM: Extremely Low-light Noise Modeling Through Diffusion Iteration},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {1085-1094}
}