Demosaicing is a crucial step in converting the raw data captured by sensors with Bayer color filter arrays into a full-color image. Bayer arrays, named after Bryce Bayer, who patented the design, are the color filter arrays most commonly used in digital image sensors. They consist of a grid of photosites, each covered by a single red, green, or blue color filter, so every photosite records only one of the three color values and the missing two must be interpolated during demosaicing (see Figure 1).
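
For concreteness, the following minimal sketch simulates how a Bayer array records only one color sample per photosite by mosaicing a full RGB image. It assumes an RGGB layout and even image dimensions; the layout and the function name are illustrative conventions, not a specification of our sensor.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer CFA: keep one color sample per photosite.

    rgb: float array of shape (H, W, 3); H and W are assumed even.
    Returns a single-channel mosaic of shape (H, W).
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```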

Demosaicing in real-world scenarios poses a significant challenge due to artifacts arising from factors such as sensor noise and motion blur, as well as artifacts that appear along edges. Sensor noise, introduced during image acquisition, can manifest as color noise, luminance noise, and texture loss in the demosaiced image.
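
For intuition about how acquisition noise enters the raw data, the sketch below applies a commonly used signal-dependent Poisson (shot) plus Gaussian (read) noise model to a Bayer mosaic. The parameter values are illustrative and not calibrated to any particular sensor.

```python
import numpy as np

def add_sensor_noise(mosaic, photons_per_unit=1000.0, read_sigma=0.002, rng=None):
    """Apply an illustrative Poisson (shot) + Gaussian (read) noise model.

    mosaic: raw Bayer mosaic with values in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    shot = rng.poisson(mosaic * photons_per_unit) / photons_per_unit
    noisy = shot + rng.normal(0.0, read_sigma, size=mosaic.shape)
    return np.clip(noisy, 0.0, 1.0)
```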

Another hurdle in demosaicing is the need for ground-truth data. Obtaining precise RGB images for training machine-learning-based demosaicing methods is time-consuming, requiring meticulous alignment and synchronization across the color channels. To overcome this challenge, we leverage synthetic data generation, data augmentation, and domain adaptation to enlarge the training dataset and mitigate the constraints of limited training data. We also consider semi-supervised learning as a viable approach for demosaicing.
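
One way to sidestep the need for captured ground truth, sketched below under illustrative names, is to mosaic clean RGB images with the bayer_mosaic() routine from the earlier sketch so that the original image serves as the target, with random flips and 90-degree rotations as simple augmentation.

```python
import numpy as np

def make_training_pair(rgb, rng=None):
    """Create a (mosaic, ground-truth RGB) pair with random flip/rotation augmentation.

    Relies on bayer_mosaic() from the earlier sketch; rgb values assumed in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:                                  # random horizontal flip
        rgb = rgb[:, ::-1, :]
    rgb = np.rot90(rgb, k=int(rng.integers(4)), axes=(0, 1))  # random 90-degree rotation
    rgb = np.ascontiguousarray(rgb)
    return bayer_mosaic(rgb), rgb
```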

In our investigation of the semi-supervised representation-learning module, we introduce a novel form of representation termed “oriented line segments” (OLS), illustrated in Figure 2. The OLS representation has two key hyperparameters: the length of a line segment, denoted (2l+1), and the angle between two consecutive line segments.
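
Figure 2 defines the representation precisely; the sketch below reflects only our reading of it under stated assumptions. For a pixel (y, x) it samples the image along segments of length (2l+1) centered at that pixel, with orientations spaced angle_step_deg apart and nearest-neighbor rounding; the function name and defaults are hypothetical.

```python
import numpy as np

def oriented_line_segments(img, y, x, l=3, angle_step_deg=45.0):
    """Sample oriented line segments of length (2l+1) centered at pixel (y, x).

    One segment per orientation in [0, 180) degrees, spaced angle_step_deg apart;
    coordinates are rounded to the nearest pixel and clipped at the image border.
    """
    h, w = img.shape[:2]
    offsets = np.arange(-l, l + 1)
    angles = np.deg2rad(np.arange(0.0, 180.0, angle_step_deg))
    segments = []
    for theta in angles:
        ys = np.clip(np.rint(y + offsets * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip(np.rint(x + offsets * np.cos(theta)).astype(int), 0, w - 1)
        segments.append(img[ys, xs])
    return np.stack(segments)  # shape: (num_angles, 2l + 1)
```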

The computational complexity of advanced demosaicing algorithms presents another challenge, particularly for real-time applications on resource-constrained devices such as mobile phones or embedded systems. To address this, we propose regression-based methods: small image patches are extracted from the mosaic, and a regressor is learned that maps each patch to its ground-truth representation. This approach is more computationally efficient than alternative methods.
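
As a minimal illustration of the patch-regression idea (a sketch under simplifying assumptions, not our full model), the code below fits a single linear map from flattened mosaic patches to the RGB value at the patch center via a least-squares solve; in practice one would fit, at minimum, a separate regressor per Bayer phase.

```python
import numpy as np

def fit_patch_regressor(mosaics, targets, k=2):
    """Fit a linear map from (2k+1)x(2k+1) mosaic patches to center-pixel RGB.

    mosaics: list of (H, W) Bayer mosaics; targets: matching (H, W, 3) RGB images.
    Returns a weight matrix of shape ((2k+1)**2 + 1, 3) including a bias term.
    """
    X, Y = [], []
    for m, t in zip(mosaics, targets):
        h, w = m.shape
        for y in range(k, h - k):
            for x in range(k, w - k):
                patch = m[y - k:y + k + 1, x - k:x + k + 1].ravel()
                X.append(np.append(patch, 1.0))  # append constant bias feature
                Y.append(t[y, x])
    X, Y = np.asarray(X), np.asarray(Y)
    weights, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return weights

def predict_center_rgb(patch, weights):
    """Predict the RGB value at the center of a single mosaic patch."""
    return np.append(patch.ravel(), 1.0) @ weights
```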

Our model is trained and evaluated on the McMaster and Kodak PhotoCD datasets. We have carefully balanced the trade-offs between resolution, color accuracy, and computational complexity, tailoring our approach to the specific demands of the application.
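
For evaluation on these datasets, a color-PSNR measure over all three channels can be computed as in the sketch below; the border margin and the assumed [0, 1] value range are illustrative choices, not a description of a standardized protocol.

```python
import numpy as np

def cpsnr(reference, estimate, border=10):
    """Color PSNR in dB over all three channels, ignoring a border of `border` pixels.

    Both images are assumed to be float arrays in [0, 1] with shape (H, W, 3).
    """
    if border:
        reference = reference[border:-border, border:-border]
        estimate = estimate[border:-border, border:-border]
    mse = np.mean((reference - estimate) ** 2)
    return float(10.0 * np.log10(1.0 / mse))
```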