Videos captured under low-light conditions are often noisy and have poor visibility. Low-light video enhancement aims to improve viewers' experience by increasing brightness, suppressing noise, and amplifying detailed textures. The performance of computer vision tasks such as object tracking and face recognition can be severely degraded in low-light, noisy environments, so low-light video enhancement is needed to ensure the robustness of computer vision systems. The technology is also in high demand in consumer electronics, such as video capture on smartphones.
A self-supervised adaptive low-light video enhancement (SALVE) method is proposed in this work. SALVE first conducts an effective Retinex-based low-light image enhancement on a few key frames of an input low-light video. Next, it learns mappings from the low-light frames to their enhanced counterparts via ridge regression. Finally, it uses these mappings to enhance the remaining frames in the input video. SALVE is a hybrid method that combines components from a traditional Retinex-based image enhancement method and a learning-based method. The former component leads to a robust solution that adapts easily to new real-world environments. The latter component offers a fast, computationally inexpensive, and temporally consistent solution. We conduct extensive experiments to show the superior performance of SALVE. Our user study shows that 87% of participants prefer SALVE over prior work.
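The mapping step can be pictured with a minimal sketch. Assuming, purely for illustration, that each pixel of the enhanced key frame is regressed from a small patch of the low-light key frame (the patch size and regularization strength below are arbitrary choices, not the paper's settings), a ridge regression can be fit on one key-frame pair and reused on subsequent frames:

```python
# Minimal sketch of the self-supervised mapping step: fit a ridge regression
# on a (low-light, enhanced) key-frame pair, then reuse it on later frames.
# Frames are assumed to be single-channel float arrays for simplicity.
import numpy as np
from sklearn.linear_model import Ridge

def extract_patches(frame, patch=3):
    """Collect a flattened patch x patch neighborhood around every pixel."""
    pad = patch // 2
    padded = np.pad(frame, pad, mode="reflect")
    h, w = frame.shape
    feats = np.empty((h * w, patch * patch))
    idx = 0
    for dy in range(patch):
        for dx in range(patch):
            feats[:, idx] = padded[dy:dy + h, dx:dx + w].ravel()
            idx += 1
    return feats

def fit_mapping(low_key_frame, enhanced_key_frame, alpha=1.0):
    """Learn a ridge regression from low-light patches to enhanced pixels."""
    X = extract_patches(low_key_frame)
    y = enhanced_key_frame.ravel()
    model = Ridge(alpha=alpha)  # alpha is an illustrative choice
    model.fit(X, y)
    return model

def apply_mapping(model, low_frame):
    """Enhance a non-key frame with the mapping learned on the key frame."""
    X = extract_patches(low_frame)
    return model.predict(X).reshape(low_frame.shape)
```

Because fitting reduces to a regularized least-squares solve and inference to one inner product per pixel, reusing the learned mapping on non-key frames is cheap, which is consistent with the speed and temporal-consistency claims above.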
The first figure shows an overview of the proposed SALVE method. For intra-coded frames (I frames), SALVE estimates an illumination component and a reflectance component using the NATLE method. For inter-coded frames (P/B frames), it predicts these components using a ridge regression learned from the most recent pair of raw and enhanced I frames.
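The per-frame control flow described above can be sketched as follows. Here `natle_enhance` is a hypothetical wrapper around the NATLE decomposition, frame-type labels are assumed to come from the video decoder, and for brevity the sketch maps frames directly rather than regressing the illumination and reflectance components separately as SALVE does:

```python
# Illustrative SALVE control flow: full Retinex-based enhancement on I frames,
# cheap ridge-regression prediction on P/B frames. Reuses fit_mapping and
# apply_mapping from the earlier sketch; natle_enhance is a hypothetical
# stand-in for the NATLE enhancement step.
def enhance_video(frames, frame_types):
    """Enhance a low-light video, re-fitting the mapping at each I frame."""
    enhanced, model = [], None
    for frame, ftype in zip(frames, frame_types):
        if ftype == "I":
            # Full Retinex-based enhancement on intra-coded key frames.
            out = natle_enhance(frame)
            # Learn the low -> enhanced mapping from this key-frame pair.
            model = fit_mapping(frame, out)
        else:
            # Inexpensive prediction for inter-coded (P/B) frames, assuming
            # the coded video always starts with an I frame so model is set.
            out = apply_mapping(model, frame)
        enhanced.append(out)
    return enhanced
```

Re-fitting at every I frame is what makes the method self-supervised and adaptive: no external training data is needed, and the mapping tracks changes in scene content from one key frame to the next.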
The second figure shows a quantitative comparison between our low-light video enhancement method and prior work. To further demonstrate the effectiveness of our method, we conduct a user study with 31 participants. The study consists of 10 blind A/B tests between our method and prior works; in each test, only two videos are shown to the participant. The results of this study are shown on the right side of the second figure. As seen, depending on the comparison baseline, between 87% and 100% of users prefer our enhanced videos over prior work.
— Zohreh Azizi