News

MCL Research on SSL-based Image Anomaly Localization

Image anomaly localization is an important problem in image processing and computer vision, with numerous applications in many areas, such as industrial manufacturing inspection, medical image diagnosis and even video surveillance analysis. The goal of image anomaly localization is to locate the anomaly or anomalous region at the pixel level. Like most other anomaly detection problems, we formulate image anomaly localization as an unsupervised task. More specifically, the training set contains only normal images; no anomalous images or corresponding labeled masks are available during model training. This is because anomalous examples are either too expensive to collect or too few to be modeled, which makes it an extremely challenging yet attractive problem.

To tackle this problem, we propose a new image anomaly localization method, called AnomalyHop [1], based on the successive subspace learning (SSL) framework. This is also the first work that applies SSL to the anomaly localization problem. AnomalyHop consists of three modules: 1) feature extraction via successive subspace learning (SSL), 2) normality feature distribution modeling via various Gaussian models, and 3) anomaly map generation and fusion. Compared with previous deep-learning-based image anomaly localization methods, AnomalyHop is mathematically transparent, easy to train, and fast in inference. Moreover, it achieves an area under the ROC curve (ROC-AUC) of 95.9% on the MVTec AD dataset, which is state-of-the-art performance.
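As a rough illustration of modules 2) and 3), the sketch below fits one Gaussian per spatial location to the features of normal training images and scores test features by their Mahalanobis distance. This is our own simplified sketch, not the authors' released code; the feature arrays and their shapes are assumed to come from the SSL feature extractor described above.

    import numpy as np

    # Assumed shapes: train_feats (N, H, W, C) from N normal images on an
    # H x W grid with C channels; test_feats (H, W, C) for one test image.

    def fit_location_gaussians(train_feats, eps=1e-2):
        """Fit one Gaussian (mean, inverse covariance) per spatial location."""
        N, H, W, C = train_feats.shape
        mean = train_feats.mean(axis=0)                    # (H, W, C)
        inv_cov = np.empty((H, W, C, C))
        for i in range(H):
            for j in range(W):
                x = train_feats[:, i, j, :] - mean[i, j]   # (N, C)
                cov = x.T @ x / (N - 1) + eps * np.eye(C)  # regularized covariance
                inv_cov[i, j] = np.linalg.inv(cov)
        return mean, inv_cov

    def anomaly_map(test_feats, mean, inv_cov):
        """Mahalanobis distance per location; larger means more anomalous."""
        diff = test_feats - mean
        scores = np.einsum('ijc,ijcd,ijd->ij', diff, inv_cov, diff)
        return np.sqrt(scores)                             # (H, W) anomaly map

In the full method, per module 3), maps computed from features of several hops are fused and upsampled to produce the final pixel-level localization result.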

— By Kaitai Zhang and Bin Wang

 

[1] Zhang, K., Wang, B., Wang, W., Sohrab, F., Gabbouj, M., & Kuo, C. C. J. (2021). AnomalyHop: An SSL-based Image Anomaly Localization Method. arXiv preprint arXiv:2105.03797.

By | May 31st, 2021 | News

Congratulations to Kaitai Zhang for Passing His Defense

Congratulations to Kaitai Zhang for passing his defense on May 19, 2021. His Ph.D. thesis is entitled “Data-Driven Image Analysis, Modeling, Synthesis and Anomaly Localization Techniques”. Here we invite Kaitai to share a brief introduction to his thesis and some words at the end of his Ph.D. journey.

1) Abstract of Thesis

Emerging deep learning and machine learning techniques have brought impressive improvements to numerous topics in the image processing and computer vision fields. In this thesis, we introduce our research on data-driven image analysis, modeling, synthesis and anomaly localization techniques: 1) image anomaly detection and localization; 2) texture analysis, modeling and synthesis.

For the first part, we focus on image anomaly detection and localization tasks. Image anomaly detection is a binary classification problem that determines whether an input contains an anomaly, while image anomaly localization produces a pixel-precise segmentation of regions that appear anomalous. Detecting and localizing anomalies is a critical and long-standing problem in image processing and computer vision, with applications in many real-world scenarios such as medical image diagnosis and automated manufacturing inspection. In this talk, I will introduce two of our recent works, PEDENet and AnomalyHop. PEDENet is a neural-network-based framework that jointly learns local image features and a density estimation model. AnomalyHop employs the successive subspace learning (SSL) framework and utilizes various Gaussian descriptors to learn normality feature distributions. Both achieve state-of-the-art performance on the MVTec AD dataset while providing either a smaller model size or faster inference speed.

In the second part, our previous works on texture analysis, modeling and synthesis are reviewed. For dynamic texture synthesis, two effective techniques are presented. The enhanced model can encode the coherence of local features as well as the [...]

By | May 24th, 2021 | News

MCL Research on Image Super-resolution

Image super-resolution (SR) is a classic image reconstruction problem in computer vision (CV), which aims at recovering a high-resolution image from a low-resolution one. As a supervised generative problem, image SR attracts wide attention due to its strong connection with other CV topics, such as object recognition, object alignment and texture synthesis. Besides, it has extensive real-world applications, for example, medical diagnosis, remote sensing and biometric information identification.

State-of-the-art SR approaches typically fall into two mainstreams: 1) example-based learning methods and 2) deep learning (CNN-based) methods. Example-based methods either exploit external low-high resolution exemplar pairs [1] or learn the internal similarity of the same image across different resolution scales [2]. However, the features used in example-based methods are usually traditional gradient-related or simply handcrafted ones, which may limit model performance. CNN-based SR methods (e.g., SRCNN [3]), in contrast, do not really distinguish between feature extraction and decision making. Many basic CNN models/blocks, e.g., GANs, residual learning and attention networks, have been applied to the SR problem and provide superior SR results. Nevertheless, the non-explainable process and exhaustive training cost are serious drawbacks of CNN-based methods.
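To make the example-based paradigm concrete, here is a minimal sketch of our own (an illustration of the general idea, not the A+ [1] algorithm itself): a ridge regression is learned from low-resolution patches to the residuals of their high-resolution counterparts on exemplar pairs. The helper names and patch size are assumed for illustration.

    import numpy as np

    def extract_patches(img, size=5):
        """Collect flattened size x size patches from a grayscale image."""
        H, W = img.shape
        return np.array([img[i:i+size, j:j+size].ravel()
                         for i in range(H - size + 1)
                         for j in range(W - size + 1)])

    # lr_up: the low-resolution image bicubically upscaled to HR size (assumed
    # given); hr: the ground-truth high-resolution exemplar of the same size.
    def fit_mapping(lr_up, hr, lam=0.1):
        X = extract_patches(lr_up)            # LR features (here: raw patches)
        Y = extract_patches(hr) - X           # HR residuals to be predicted
        # Ridge regression: W = (X^T X + lam I)^-1 X^T Y
        A = X.T @ X + lam * np.eye(X.shape[1])
        return np.linalg.solve(A, X.T @ Y)

    def apply_mapping(lr_up, W):
        X = extract_patches(lr_up)
        return X + X @ W                      # predicted HR patches
        # (overlapping patches are averaged back into an image in practice)

Roughly speaking, A+ [1] goes further by anchoring many such regressors to atoms of a learned dictionary, so each patch is restored by the regressor of its nearest anchor.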

We propose a successive-subspace-learning-based (SSL-based) method that gradually partitions data into subspaces according to feature statistics. In addition, we utilize spatial-spectral compatible cw-Saab features to express exemplar pairs, taking advantage of effective feature extraction [4]. In the future, we aim to provide such an SSL-based explainable method with high efficiency for the SR problem.

—  By Wei Wang

 

References:

[1] Timofte, Radu, Vincent De Smet, and Luc Van Gool. “A+: Adjusted anchored neighborhood regression for fast super-resolution.” Asian conference on computer vision. Springer, Cham, 2014.

[2] Huang, Jia-Bin, Abhishek Singh, and Narendra Ahuja. “Single image super-resolution from transformed self-exemplars.” Proceedings of the IEEE conference on [...]

By | May 16th, 2021 | News

MCL Research on Speed-up of Multi-Class XGBoost Classifier

Machine learning has witnessed a rapid increase in the amount of training/testing data, feature dimensions and class numbers due to the arrival of the big data era. In many applications, systems are expected to deal with a very large number of classes and a huge amount of training/test data. These impose major challenges in: 1) classification accuracy, 2) model complexity in terms of the number of model parameters, and 3) computational complexity in terms of training and testing costs. Although deep-learning-based (DL-based) systems can provide good performance in many application contexts, their model sizes are large and training complexities are high.

A popular machine learning tool is XGBoost [1], which achieves excellent performance on many tasks. XGBoost is a boosting algorithm that combines multiple weak classifiers to form a more powerful one. It adds a tree at each iteration to model the residual from the previous iteration, and the final prediction is simply the sum of the outputs of all trees. For classification, XGBoost supports both binary and multiclass prediction. However, multiclass classification with XGBoost can take a very long time when the class number is huge. Hence, in our research, we aim to speed up multiclass XGBoost.
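The toy example below (using the standard xgboost Python package on random data) shows where the multiclass cost comes from: with the multi:softprob objective, every boosting round grows one tree per class, so training cost scales roughly linearly with the class number K.

    import numpy as np
    import xgboost as xgb

    K = 100                                    # a large class number
    X = np.random.rand(10000, 64)              # toy features
    y = np.random.randint(0, K, size=10000)    # toy labels

    dtrain = xgb.DMatrix(X, label=y)
    params = {
        'objective': 'multi:softprob',  # outputs a K-dim probability vector
        'num_class': K,
        'max_depth': 6,
        'eta': 0.3,
    }
    # Each of the 50 rounds fits K trees (one per class on the softmax
    # gradients), so this model contains 50 * K = 5000 trees in total.
    booster = xgb.train(params, dtrain, num_boost_round=50)
    probs = booster.predict(dtrain)            # shape: (10000, K)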

 

Reference:

[1] Chen, T., Guestrin, C., 2016. XGBoost: A scalable tree boosting system, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794.


By | May 9th, 2021 | News
Professor Kuo Received 2021 IEEE CASS Charles A. Desoer Technical Achievement Award

Congratulations to MCL Director, Professor C.-C. Jay Kuo, for being selected as the recipient of the 2021 IEEE CASS Charles A. Desoer Technical Achievement Award. The award is named after Charles A. Desoer, Professor of Electrical Engineering and Computer Science, Emeritus, at UC Berkeley. It honors individuals whose exceptional technical contributions to a field within the scope of the Circuits and Systems Society have been consistently evident over a period of years. Contributions are documented by publications and based on originality and continuity of effort. Professor Kuo received this award for his contributions to visual communications and multimedia systems.

The 2021 CAS Society Awards Ceremony will be held virtually in parallel with the ISCAS 2021 event on Monday, 24 May. The ceremony will be pre-recorded and presented both on the online platform and at the live banquet in Daegu. Here is Professor Kuo’s acceptance speech.

“It is my great honor to receive the 2021 IEEE CASS Charles A. Desoer Technical Achievement Award. I know that this is a highly competitive award, and there are many well qualified nominees each year. I would like to give my deepest appreciation to the Technical Achievement Award Sub-Committee and the CASS Board for their recognition.

I am also grateful for the excellent research environment provided by the University of Southern California. It has been a privilege to supervise 160 hardworking PhD students at USC over the last 30 years. We brainstorm research ideas, share research frustrations, and enjoy research breakthroughs together. I can say that there is nothing more rewarding than working with a large number of talented young people.

I have been heavily involved in two CASS technical committees in my career. They are the visual signal processing and communication technical [...]

By | May 2nd, 2021 | News

MCL Research on Next Generation Video Coding

With the development of camera and sensor technologies, high-resolution images and videos have become ubiquitous in daily life. Demands for fast transmission and efficient storage of high-quality images and videos have increased dramatically, and the problem of how to transmit and store media data efficiently has been widely discussed. Online high-resolution video meetings and live broadcasting also raise the pressure on fast encoding and decoding under current bandwidth limitations.

Numerous codecs have been developed during the past 20 years, including the well-known H.264, MPEG-4 and the latest H.265/HEVC, which are widely used in our daily life. The H.26x and MPEG-x standards are well supported in both software and hardware. Many encoder and decoder chipsets are available commercially (for example, chips from System on Chip Technologies Inc.), which can speed up the process and be configured based on user specifications. Royalty-free codecs such as AV1, in contrast, have higher complexity and are not widely supported by hardware chips, which hinders their wide adoption.

Previous frameworks used a one-layer transformation to perform the energy compaction task. We propose to use a multi-hop transform for this task, which is expected to yield better energy compaction. Presently, we focus on image compression (intra coding in video compression) to increase performance while lowering complexity. In this project, we are trying to develop low-complexity compression tools that achieve comparable performance against the current standard.
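A toy sketch of the multi-hop idea follows, under our own assumptions (PCA-based, Saab-like transforms and hypothetical helper names; this is not the project's actual codec): a first transform is applied to small blocks, and a second hop is applied to the lowest-frequency channel of neighboring blocks, so hop-2 coefficients compact energy over a larger receptive field than a single-layer transform.

    import numpy as np

    def pca_transform(blocks):
        """blocks: (n, d). Returns the PCA kernel (d, d) and coefficients."""
        mean = blocks.mean(axis=0)
        cov = np.cov(blocks - mean, rowvar=False)
        _, vecs = np.linalg.eigh(cov)
        kernel = vecs[:, ::-1]                  # descending eigenvalue order
        return kernel, (blocks - mean) @ kernel

    def blocks_from(img, s):
        """Non-overlapping s x s blocks of img, flattened to rows."""
        H, W = img.shape
        return np.array([img[i:i+s, j:j+s].ravel()
                         for i in range(0, H - s + 1, s)
                         for j in range(0, W - s + 1, s)])

    img = np.random.rand(64, 64)                # stand-in for a training image
    # Hop 1: 2x2 blocks -> 4 spectral channels per block.
    k1, coef1 = pca_transform(blocks_from(img, 2))
    # Hop 2: regroup the hop-1 lowest-frequency channel into a 32x32 map and
    # transform again; each hop-2 coefficient now covers a 4x4 pixel region.
    dc_map = coef1[:, 0].reshape(32, 32)
    k2, coef2 = pca_transform(blocks_from(dc_map, 2))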

By | April 25th, 2021 | News

MCL Research on Data-Driven Image Compression

Block-based image coding is adopted by the JPEG standard, which has been widely used over the last three decades. The block-based discrete cosine transform (DCT) and the quantization matrices for the three YCbCr color channels play key roles in JPEG. In this work, we propose a new image coding method, called DCST. It adopts data-driven color and spatial transforms based on the statistical properties of image pixels and machine learning. To match the data-driven forward transform, we propose a quantization table based on the human visual system (HVS). Furthermore, to efficiently compensate for the quantization error, a machine-learning-based inverse transform is used. The performance of our new design is verified on the Kodak image dataset. The optimal inverse transformation achieves a gain of 0.11-0.30 dB over standard JPEG over a wide range of quality factors. The whole pipeline outperforms JPEG with a gain of 0.5738 in BD-PSNR (or a decrease of 9.5713 in BD-rate) over the range of 0.2 to 3 bpp.

The design consists of three components:

1) Color transform: previous standards use YCbCr as input, which is ideal in an average statistical sense but not optimal for every single image. In our case, we train a PCA for each image to perform the color transformation, which gives better decorrelation performance.

2) (2D)^2 PCA [1] spatial transformation, with the corresponding quantization matrix designed based on the PCA kernel and the HVS [2].

3) Machine-learning-based inverse transform, which uses linear regression to compute the optimal inverse transform kernels for both the color conversion and the spatial-to-spectral transform. This helps to estimate the quantization error and yields a better inverse result, compared with the previous method that compensates for this error with a probability model during the post-processing stage. The ML inverse transformation takes less time during the decoding [...]
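A minimal sketch of the third component under stated assumptions (the block/kernel shapes and function names are ours, for illustration only): forward-transform coefficients of training blocks are quantized, and a least-squares inverse is fitted to map the quantized coefficients back to pixels, absorbing part of the quantization error.

    import numpy as np

    # blocks: (n, d) training pixel blocks; kernel: (d, d) forward transform
    # (e.g., a learned PCA kernel); q: (d,) quantization steps -- all assumed.
    def learn_inverse(blocks, kernel, q):
        coefs = blocks @ kernel
        quantized = np.round(coefs / q) * q     # dequantized coefficients
        # Least-squares inverse: M minimizing ||quantized @ M - blocks||^2,
        # instead of the analytic inverse kernel.T.
        M, *_ = np.linalg.lstsq(quantized, blocks, rcond=None)
        return M

    def decode(quantized_coefs, M):
        return quantized_coefs @ M              # data-driven reconstruction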

By | April 18th, 2021 | News

MCL Research on Image Artifact Detection and Localization

Image anomaly detection and localization is a fundamental problem in pattern recognition and computer vision, with numerous applications in many areas, such as industrial manufacturing inspection, medical image diagnosis and even video surveillance analysis. The goal of image anomaly detection is to determine whether an input image contains an anomaly, and image anomaly localization is to locate the anomaly at the pixel level. Like most other anomaly detection problems, we formulate image anomaly detection as an unsupervised task, which means only normal images are available during model training. This is because anomalous examples are either too expensive to collect or too few to model their distributions during training, which also makes it an extremely challenging yet attractive problem.

To tackle this problem, we propose two methods, based on deep learning and successive subspace learning techniques, respectively.

We propose a new deep learning framework for unsupervised image anomaly detection and localization (a sketch of this idea is given after this overview). Our model first utilizes an encoder to generate low-dimensional embeddings for local image patches, which are then fed into a density estimation network inspired by the Gaussian Mixture Model (GMM). Given a low-dimensional patch embedding as input, the density estimation network models the distribution of embeddings in a GMM-like manner and predicts the cluster membership as output. Then, the total probability of the given local patch can be computed and used as a loss term to guide the learning process. Extensive experimental results show that the proposed method achieves very competitive performance compared with state-of-the-art methods.
We are also exploring successive subspace learning (SSL) to achieve a more efficient and interpretable method for image anomaly detection and localization. It first employs PixelHop++ [1] as the feature extractor, in which each hop encodes features with a different receptive field. Then, we [...]
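As promised above, here is a hedged PyTorch sketch of the density estimation idea in the first method. It is our simplification with diagonal covariances, not the exact network described above: the estimation network predicts soft cluster memberships for each patch embedding, GMM parameters are computed from those memberships, and the negative log-likelihood serves as the loss.

    import torch
    import torch.nn as nn

    class DensityEstimator(nn.Module):
        def __init__(self, embed_dim=32, n_clusters=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_clusters))

        def forward(self, z):                   # z: (B, D) patch embeddings
            gamma = torch.softmax(self.net(z), dim=1)          # memberships (B, K)
            phi = gamma.mean(dim=0)                            # mixture weights (K,)
            mu = (gamma.t() @ z) / gamma.sum(dim=0)[:, None]   # means (K, D)
            diff2 = (z[:, None, :] - mu[None]) ** 2            # (B, K, D)
            var = (gamma[..., None] * diff2).sum(0) / gamma.sum(0)[:, None] + 1e-6
            # Per-sample log-likelihood under the diagonal-covariance GMM.
            log_norm = -0.5 * (diff2 / var[None]).sum(-1) \
                       - 0.5 * torch.log(2 * torch.pi * var).sum(-1)[None]
            loglik = torch.logsumexp(torch.log(phi)[None] + log_norm, dim=1)
            return -loglik.mean()               # loss; low likelihood = anomalous

At test time, the per-patch likelihood itself can be thresholded: patches with low probability under the learned mixture are flagged as anomalous.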

By | April 11th, 2021 | News
MCL Research on Semantic Scene Segmentation Based on Multiple Sensor Inputs

Semantic segmentation helps people identify and locate objects, which provides important road information for upper-level navigational tasks. Due to the rapid development of deep Convolutional Neural Networks (CNNs) [1], the performance of image segmentation models has been greatly improved, and CNNs are widely used for this task. However, maintaining this performance under different conditions is non-trivial. In dark, rainy, or foggy environments, the quality of RGB images is greatly reduced, while other sensors may still get fair results. Thus, our model combines information from the RGB image and the depth map. When driving, we often encounter obstacles such as trash cans, barriers, rubble, stones, and cargo, and recognizing and avoiding them is crucial for safety. To address this problem, we apply multi-dataset learning. In this way, our model can learn more classes, including obstacles, from other datasets.

In our experiment, we fully evaluate RFNet [2] with different datasets and different methods to combine them. Regarding the framework of our model, the inputs from different datasets first pass through the resizing module. Then, the depth map and RGB image are sent to the network, while the ground truth goes to the relabeling module. The multi-dataset learning strategy is applied to the resizing, relabeling and modified softmax layers. Finally, by comparing the relabeled ground truth and the prediction, we obtain the intersection over union (IoU) and the cross-entropy loss.
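A small sketch of what such a relabeling module could look like, under our own assumptions (the class tables below are hypothetical, not the ones used in the experiment): each dataset's label IDs are remapped into one unified label space before the loss is computed, so classes from different datasets, including obstacles, can be trained jointly.

    import numpy as np

    # Hypothetical unified label space shared by all datasets.
    UNIFIED = {'road': 0, 'sidewalk': 1, 'car': 2, 'person': 3, 'obstacle': 4}

    # Per-dataset tables: original ID -> unified ID (255 = ignored in the loss).
    DATASET_TABLES = {
        'cityscapes': {0: 0, 1: 1, 13: 2, 11: 3},
        'obstacle_ds': {0: 0, 1: 4},     # a dataset that labels obstacles
    }

    def relabel(mask, dataset, ignore=255):
        """Map a ground-truth mask (H, W) of dataset-specific IDs to unified IDs."""
        table = DATASET_TABLES[dataset]
        out = np.full_like(mask, ignore)
        for src, dst in table.items():
            out[mask == src] = dst
        return out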

Our results show that our models perform excellently in the urban and blended environments. However, in the field environment, the depth map helps the model only slightly. We also propose a new thrifty relabeling strategy, which improves the performance of the model without increasing the complexity of the network. Moreover, more datasets can help the model [...]

By | April 4th, 2021 | News

MCL Research on Image Calibration from Multiple Sensors

In order to get an accurate perception of the surrounding environment in different tasks, including autonomous driving, robot navigation, and sensor-driven situational awareness, abundant environment information is necessary. This information can be obtained from different types of multimodal sensors, such as LiDAR sensors, electro-optical/infrared (EO/IR) cameras, and GPS/IMU. Before the collected data can be used, information fusion among these sensors is a critical topic. Specifically, people want to combine color and shape information from the camera with distance information from the LiDAR sensor, and finding corresponding points between the two sensors is essential for this task. This procedure is called multimodal sensor calibration, in which we need to find the 6DoF extrinsic parameters between the two sensors.

In this work, we develop a new deep-learning-driven technique for accurate calibration of a LiDAR-camera pair. It is completely data-driven, does not require any specific calibration targets or hardware assistance, and the entire process is end-to-end and fully automatic. We utilize an advanced deep neural network to accurately align the LiDAR point cloud with the image and regress the 6DoF extrinsic calibration parameters. Geometric supervision and transformation supervision are employed to guide the learning process to maximize the consistency of the input images and point clouds. Given input LiDAR-camera pairs as the training dataset, the system automatically learns meaningful features, infers cross-modal correlations, and estimates the accurate 6DoF rigid body transformation between the 3D LiDAR and the 2D image in real time.
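For concreteness, the sketch below (an illustration of standard pinhole geometry, not the network itself) shows how a 6DoF extrinsic estimate is used: LiDAR points are rigidly transformed into the camera frame and projected through the camera intrinsics, which is also how depth-map overlays like those in the slides are produced.

    import numpy as np

    def project_lidar(points, R, t, K):
        """points: (N, 3) LiDAR points; R: (3, 3) rotation and t: (3,)
        translation (the 6DoF extrinsics); K: (3, 3) camera intrinsics."""
        cam = points @ R.T + t                 # rigid transform to camera frame
        cam = cam[cam[:, 2] > 0]               # keep points in front of camera
        uv = cam @ K.T
        uv = uv[:, :2] / uv[:, 2:3]            # perspective division -> pixels
        depth = cam[:, 2]
        return uv, depth                       # pixel coords and per-point depth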

The images in the slides show the system overview and experimental results. In the experimental results, the background is the corresponding RGB image, and the transparent colormap is the depth map, with blue to red corresponding to small to large distances. The first row shows the input RGB images. The second row shows the input depth maps, and the third row [...]

By | March 28th, 2021 | News