MCL Research on Advanced Deepfake Video Detection

A robust fake satellite image detection method, called Geo-DefakeHop, is proposed in this work. Geo-DefakeHop is built on the parallel subspace learning (PSL) methodology. PSL maps the input image space into several feature subspaces using multiple filter banks. By examining how the channel responses of each filter bank differ between real and fake images, Geo-DefakeHop learns the most discriminant channels and uses their soft decision scores as features. It then selects a few discriminant features from each filter bank with the help of a validation dataset and ensembles them to make a final binary decision. Geo-DefakeHop offers a lightweight, high-performance solution to fake satellite image detection. Its model size is analyzed and ranges from 0.8K to 62K parameters. Furthermore, experimental results show that it achieves an F1-score higher than 95% under various common image manipulations such as resizing, compression, and noise corruption.
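
To make the channel-selection idea concrete, here is a minimal sketch in the spirit of Geo-DefakeHop: each channel of a filter bank gets its own weak classifier, the most discriminant channels are kept according to a validation set, and their soft decision scores become the features for a final ensemble. The filter banks, the logistic-regression scorers, and all names below are illustrative assumptions rather than the exact pipeline of the paper.

```python
# Illustrative sketch only: per-channel scoring and selection in the spirit of
# Geo-DefakeHop. Classifier choice and names are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def train_channel_scorers(X_train, y_train, X_val, y_val, top_k=4):
    """X_*: (n_samples, n_channels) channel responses from one filter bank."""
    scorers, val_scores = [], []
    for c in range(X_train.shape[1]):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train[:, [c]], y_train)
        scorers.append(clf)
        val_scores.append(f1_score(y_val, clf.predict(X_val[:, [c]])))
    # Keep the most discriminant channels according to the validation set.
    best = np.argsort(val_scores)[::-1][:top_k]
    return [(c, scorers[c]) for c in best]

def soft_decisions(selected, X):
    """Collect soft decision scores of the selected channels as features."""
    return np.column_stack([clf.predict_proba(X[:, [c]])[:, 1]
                            for c, clf in selected])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
    sel = train_channel_scorers(X[:100], y[:100], X[100:150], y[100:150])
    print(soft_decisions(sel, X[150:]).shape)   # features for a final ensemble classifier
```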

— By Max Chen

March 27th, 2022 | News

MCL Research on Graph Learning

Graph-based semi-supervised learning has shown prominent performance in the node classification task by exploiting the underlying manifold structure of data. Recently, an enhancement of classical label propagation (LP) named GraphHop was proposed, which has outperformed existing graph convolutional networks (GCNs) on various networks. Although the superior performance of the GraphHop model has been explained as smoothening both node attribute and label signals, its mechanism is still not fundamentally clear.

In this work, we develop deeper insights into the GraphHop model from the perspective of a regularization framework. We show that the GraphHop model can be cast as an iterative, approximate optimization of a particular regularization function on graphs. Based on this variational interpretation, we then propose two approaches to address the limitations of GraphHop caused by the approximate optimization process: 1) additional aggregation steps in optimizing the label embeddings, and 2) adaptive selection of reliable unlabeled samples for classifier training. Experiments show that, equipped with these two improvements, our model, called GraphHop++, achieves significantly better performance than the original GraphHop model as well as state-of-the-art methods on various benchmark networks with limited label rates.
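
For readers unfamiliar with label propagation, the sketch below implements the classical LP iteration that GraphHop and GraphHop++ build upon. It is not GraphHop++ itself; the function name and parameters are illustrative.

```python
# Classical label propagation (Zhou et al. style) as a point of reference; this
# is the baseline that GraphHop/GraphHop++ enhance, not the GraphHop++ algorithm.
import numpy as np

def label_propagation(A, Y, alpha=0.9, n_iter=50):
    """A: (n, n) adjacency matrix; Y: (n, c) one-hot labels (zero rows for
    unlabeled nodes). Iterates F <- alpha * S @ F + (1 - alpha) * Y with the
    symmetrically normalized adjacency S = D^-1/2 A D^-1/2."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    S = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)   # predicted class per node
```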

— By Tian Xie

March 20th, 2022 | News

MCL Research on Green Progressive Learning

Image classification has been studied for many years as a fundamental problem in computer vision. With the development of convolutional neural networks (CNNs) and the availability of large-scale datasets, we have seen rapid success in classifying both low- and high-resolution images with deep learning. Although effective, one major challenge of deep learning is that its underlying mechanism is not transparent. Inspired by deep learning, the successive subspace learning (SSL) methodology was proposed by Kuo et al. in a sequence of papers. Different from deep learning, SSL-based methods learn feature representations in an unsupervised feedforward manner using multi-stage principal component analysis (PCA). Joint spatial-spectral representations are obtained at different scales through multi-stage transforms.
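
As a rough illustration of such multi-stage transforms, the toy sketch below performs two stages of patch-wise PCA on non-overlapping patches. The actual SSL transforms (e.g., Saab/PixelHop) differ in important details, so this is only an assumption-laden simplification.

```python
# Toy two-stage patch-PCA feature extractor illustrating unsupervised,
# feedforward, multi-stage transforms. Not the actual Saab/PixelHop pipeline.
import numpy as np
from sklearn.decomposition import PCA

def to_patches(x, patch):
    """Split (N, H, W, C) images into non-overlapping patches, returning
    (N, H//patch, W//patch, patch*patch*C)."""
    N, H, W, C = x.shape
    x = x.reshape(N, H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(N, H // patch, W // patch, patch * patch * C)

def pca_stage(patches, n_components):
    """Apply PCA to every patch vector; keep the leading spectral channels."""
    N, h, w, d = patches.shape
    flat = patches.reshape(-1, d)
    pca = PCA(n_components=n_components).fit(flat)
    return pca.transform(flat).reshape(N, h, w, n_components), pca

if __name__ == "__main__":
    imgs = np.random.rand(10, 32, 32, 1)
    f1, _ = pca_stage(to_patches(imgs, 4), n_components=8)    # stage 1: 10 x 8 x 8 x 8
    f2, _ = pca_stage(to_patches(f1, 2), n_components=16)     # stage 2: 10 x 4 x 4 x 16
    print(f2.shape)
```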

Existing SSL-based classification models use all the training data at once, i.e., a single-round approach. Among the samples, there are easy samples, which usually account for a high ratio of the dataset, and a portion of hard samples. Easy samples can be classified with quite high conditional accuracy, while hard samples need further attention because their distribution is masked by the easy samples. This motivates the design of Green Progressive Learning, which adds more rounds of training to progressively zoom in on smaller and smaller subspaces of hard samples. The selection of training samples in each round is critical to the performance gain. In each learning round, the hard training samples are re-selected to represent the subspace. Experiments on MNIST and Fashion-MNIST show the potential of progressive learning, which can help boost the performance on difficult cases.
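
The sketch below illustrates the multi-round idea in its simplest form: a base classifier is trained on all samples, low-confidence samples are treated as hard, and an extra classifier trained only on them handles low-confidence cases at inference. The classifier choice, the confidence threshold, and all names are illustrative assumptions, not the Green Progressive Learning design.

```python
# Toy two-round "hard sample" routing sketch; illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_progressive(X, y, threshold=0.8):
    base = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    conf = base.predict_proba(X).max(axis=1)
    hard = conf < threshold                       # round-2 training subset
    refine = None
    if hard.sum() > 1 and len(np.unique(y[hard])) > 1:
        refine = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[hard], y[hard])
    return base, refine, threshold

def predict_progressive(model, X):
    base, refine, threshold = model
    pred = base.predict(X)
    if refine is not None:
        conf = base.predict_proba(X).max(axis=1)
        hard = conf < threshold
        if hard.any():
            pred[hard] = refine.predict(X[hard])  # second round handles hard cases
    return pred
```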

— By Yijing Yang


Chen and C.-C. J. Kuo, “Pixelhop: A successive subspace learning (ssl) method for object recognition,” Journal [...]

March 13th, 2022 | News

MCL Research on Subspace Learning Machine

Classification-oriented machine learning models have been well studied in the past decades. The focus has shifted to deep learning (DL) in recent years. Feature learning and classification are handled jointly in DL models. Although the best performance on classification tasks is often achieved by DL through backpropagation (BP), DL models suffer from a lack of interpretability, high computational cost, and high model complexity. In classical machine learning, feature extraction and classification are treated as separate modules. We focus on the classical learning paradigm and propose a new high-performance classifier that takes features as input. Examples of classical classifiers include the support vector machine (SVM), decision tree (DT), multilayer perceptron (MLP), feedforward multilayer perceptron (FF-MLP), and extreme learning machine (ELM). SVM, DT, and FF-MLP share one common idea, i.e., feature space partitioning. Inspired by the MLP, the DT, and the ELM, a new classification model, called the subspace learning machine (SLM), is proposed for general classification tasks.

The SLM attempts to efficiently partition the input feature space into multiple discriminant subspaces in a hierarchical manner. It works as follows. First, SLM identifies a discriminant subspace by examining the discriminant power of the input features. Then, it applies random projections to the features of this discriminant subspace to yield p one-dimensional (1D) subspaces and finds an optimal partition in each of them. This is equivalent to partitioning the input space with p hyper-planes whose orientations and biases are determined by the random projections and partitions, respectively. Among the p projections, we develop a criterion to choose the best q partitions, which yield 2q partitioned subspaces. The subspace partitioning process is repeated at each child node. When the samples at a child node are sufficiently pure, the partitioning process stops and SLM makes final predictions. SLM offers [...]
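
The following sketch illustrates one node split of this kind: p random 1D projections are tried, a threshold is searched on each, and the split with the lowest weighted Gini impurity is kept. The actual SLM projection scheme and selection criterion differ; this is only an illustration.

```python
# Sketch of one SLM-style node split via random 1D projections; the real SLM
# uses its own projection and selection criteria. Names are illustrative.
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_random_split(X, y, p=20, rng=None):
    """Return (impurity, direction, threshold) of the best of p random splits."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    best = (np.inf, None, None)
    for _ in range(p):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)                  # random projection direction
        proj = X @ w
        for t in np.quantile(proj, np.linspace(0.1, 0.9, 9)):
            left, right = y[proj <= t], y[proj > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[0]:
                best = (score, w, t)
    return best
```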

March 6th, 2022 | News

MCL Research on Unsupervised Nuclei Segmentation

Nuclei segmentation is a consequential task in biological image analysis, aiding the reading of histology images. Different attributes, such as shape, population, cluster formation, and density, play a significant role in clinical practice for cancer diagnosis and the assessment of its aggressiveness. Given that the annotation of these data is carried out by expert pathologists who reportedly [2] need to spend on average 120-150 hours to annotate 50 image patches (about 12M pixels), one can see that annotated data are scarce. That is a big impediment for supervised methods, particularly for DL-based solutions that need massive annotated data to learn generalizable representations. Moreover, the annotations have high inter-observer variation, which is subject to the experience of the annotator [1]. On top of that, nuclei color and texture variations across images from different laboratories and multiple organs further widen the gap between train and test domains.

Given the aforementioned limitations, a natural way to attack the problem is to pursue an unsupervised line of research. Also, given the limited amount of annotated data, our proposed method departs from the DL paradigm and utilizes conceptually simpler techniques that make the pipeline more transparent in terms of segmentation decision making. It is mainly based on prior knowledge about the nuclei segmentation problem. The CBM [3] pipeline starts with a data-driven Color (C) transform that highlights the nuclei regions over the background, followed by an adaptive Binarization (B) process built on a bi-modal assumption in each local region. That process is run in a patch-wise manner to leverage the local distribution assumptions between background and foreground. The final part of the pipeline uses Morphological (M) transformations that refine the segmented output based on certain [...]
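
A rough sketch of such a C/B/M-style pipeline with scikit-image is given below: a data-driven color projection, patch-wise Otsu binarization, and morphological clean-up. The exact CBM transforms and parameters differ, and all names here are illustrative.

```python
# Rough, illustrative C/B/M-style pipeline; not the exact CBM implementation.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, disk

def color_transform(rgb):
    """Project RGB pixels onto their first principal component (data-driven)."""
    flat = rgb.reshape(-1, 3).astype(float)
    flat -= flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ vt[0]).reshape(rgb.shape[:2])

def patchwise_binarize(gray, patch=128):
    """Otsu threshold per patch (bi-modal assumption in each local region).
    Whether nuclei fall above or below the threshold depends on the projection sign."""
    mask = np.zeros(gray.shape, dtype=bool)
    for i in range(0, gray.shape[0], patch):
        for j in range(0, gray.shape[1], patch):
            block = gray[i:i + patch, j:j + patch]
            if block.max() > block.min():
                mask[i:i + patch, j:j + patch] = block > threshold_otsu(block)
    return mask

def refine(mask, radius=2):
    """Morphological opening then closing to remove specks and fill small holes."""
    return binary_closing(binary_opening(mask, disk(radius)), disk(radius))
```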

February 27th, 2022 | News

MCL Research on Learning-based Image Coding

Traditional image coding has achieved great success over the past four decades. Image coding standards such as JPEG and JPEG-2000 have been developed and are widely used today. Furthermore, the intra coding schemes of modern video coding standards also provide very effective image coding solutions. Several powerful tools have been used to de-correlate the pixel values:
1. Block transform coding, which is used in the majority of codecs: images are partitioned into blocks of different sizes, and pixel values in each block are transformed from the spatial domain to the spectral domain for energy compaction before quantization and entropy coding (a toy sketch is given after this list).
2. Intra prediction, another powerful tool that reduces pixel correlation by predicting from pixel values of neighboring blocks at a low cost. Residuals after intra prediction are still coded by block transform coding.
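
The toy sketch below illustrates item 1: an 8x8 block DCT followed by uniform quantization, the core of JPEG-style transform coding. The block size and quantization step are arbitrary illustrative choices.

```python
# Toy block transform coding: 8x8 2D DCT + uniform quantization. Illustrative
# of the general tool, not a specific standard codec.
import numpy as np
from scipy.fft import dctn, idctn

def encode_blocks(img, block=8, q_step=16.0):
    H, W = img.shape
    coeffs = np.zeros_like(img, dtype=float)
    for i in range(0, H, block):
        for j in range(0, W, block):
            b = img[i:i + block, j:j + block].astype(float)
            c = dctn(b, type=2, norm="ortho")                     # energy compaction
            coeffs[i:i + block, j:j + block] = np.round(c / q_step)  # uniform quantization
    return coeffs

def decode_blocks(coeffs, block=8, q_step=16.0):
    H, W = coeffs.shape
    rec = np.zeros_like(coeffs, dtype=float)
    for i in range(0, H, block):
        for j in range(0, W, block):
            c = coeffs[i:i + block, j:j + block] * q_step
            rec[i:i + block, j:j + block] = idctn(c, type=2, norm="ortho")
    return rec
```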

Recently, deep-learning-based compression methods have attracted a lot of attention due to their superior rate-distortion performance. Compared with traditional codecs, learning-based codecs have the following characteristics:
1. Inter correlations: Traditional image codecs only explore correlation in the same image while learning-based image codecs can exploit correlation from other images (i.e., inter-image correlation).
2. Multi-scale representation: Traditional image codecs only capture the representation with variable block size while learning-based image codecs can exploit the multi-scale representation based on pooling. In other words, traditional image codecs primarily explore correlation at the block level while learning-based image codecs can exploit short, middle, and long-range correlations using the multi-scale representation.
3. Advanced loss functions: Different loss functions can be easily designed in learning-based schemes to fit the human visual system (HVS), and attention mechanisms can be introduced into learning-based schemes conveniently.

To achieve low-complexity learning-based image coding, we propose a multi-grid multi-block-size vector quantization (MGBVQ) method based on [...]

February 20th, 2022 | News

MCL Research on Point Cloud Object Retrieval and Pose Estimation

Object pose estimation is an important problem in 3D scene understanding. Given a 3D point cloud object, it tries to estimate the 6-DOF pose comprising rotation and translation with respect to a chosen coordinate system. The pose information can then be used for downstream tasks such as object grasping, obstacle avoidance, and path planning, which are commonly encountered in robotics. In a complete scene understanding system, pose estimation usually comes after a 3D detection algorithm has localized and classified the object.

The pose estimation problem is similar to point cloud object registration, which has been studied previously at MCL. In particular, the R-PointHop [1] method successfully registers a source point cloud with a template. In our most recent work, we present a method termed PCRP that modifies R-PointHop for object pose estimation when a similar template object is unavailable. PCRP assumes a gallery set of pre-aligned point cloud objects and reuses the R-PointHop features to retrieve a similar object from the gallery. To do so, the pointwise features obtained using R-PointHop are aggregated into a global feature vector for nearest-neighbor retrieval using the Vector of Locally Aggregated Descriptors (VLAD) [2]. Then, the input object's pose is estimated by registering it with the retrieved object.
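
The sketch below shows how VLAD aggregation might look: a k-means codebook is fit on pooled pointwise features, residuals to the assigned codeword are accumulated, and the result is normalized into one global descriptor. Codebook size, normalization choices, and names are assumptions; the pointwise features themselves would come from R-PointHop.

```python
# Illustrative VLAD aggregation of pointwise features into a global descriptor
# for nearest-neighbor retrieval; parameters and names are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(all_features, k=16):
    """all_features: (N, d) pointwise features pooled over the gallery set."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_features)

def vlad(features, codebook):
    """features: (n_points, d) for one object -> (k*d,) global descriptor."""
    centers = codebook.cluster_centers_
    assign = codebook.predict(features)
    v = np.zeros_like(centers)
    for c in range(centers.shape[0]):
        pts = features[assign == c]
        if len(pts):
            v[c] = (pts - centers[c]).sum(axis=0)     # accumulate residuals
    v = np.sign(v) * np.sqrt(np.abs(v))               # power normalization
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12)            # L2 normalization
```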

Though point cloud retrieval is extensively studied in contexts like shape retrieval and place recognition, retrieval in the presence of different object poses is less explored. In this work, we show how a similar object can be retrieved even when object poses differ, which is possible thanks to the rotation-invariant features learned by R-PointHop. Another improvement over R-PointHop is the replacement of the conventional eight-octant partitioning based point attributes with more [...]

February 13th, 2022 | News

MCL Research on Point Cloud Compression

Point cloud compression (PCC) has received a lot of attention in recent years due to its wide applications in virtual reality (VR), augmented reality (AR), and mixed reality (MR). Video-based PCC (V-PCC) and geometry-based PCC (G-PCC) are two distinct technologies developed by MPEG 3DG [1][2]. Deep-learning-based (DL-based) PCC is a strong competitor to them. Most DL methods generalize the DL-based image coding pipeline to point cloud data [3][4]. They outperform G-PCC of the current MPEG 3DG standard in dense point cloud compression. Yet, their performance is still inferior to that of V-PCC in the coding of dynamic point clouds.

We propose to design a learning-based PCC solution that can outperform those DL-based methods with lower complexity and less memory consumption. Our method uses geometry projection to generate 2D images and applies a vector-quantization-based 2D image codec to compress the projected maps. For a point cloud sequence, the projection is done in three steps. First, split the sequence into blocks via octree partitioning. Second, project each 3D block onto a plane and pack all the planes into a map. Third, encode/decode the 2D map and reconstruct the 3D point cloud sequence. These steps are illustrated in Fig. 1. We apply non-uniform sampling to the projected planes and pack all the planes to generate one depth map and one texture map used in the reconstruction process. The two maps are shown in Fig. 2.
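
As a rough illustration of the projection step, the sketch below projects the points of one block onto a 2D depth map by dropping the axis along which the block is thinnest. The actual projection, packing, and non-uniform sampling in our codec are more elaborate; everything here is illustrative.

```python
# Illustrative projection of one 3D block onto a 2D depth map; not the exact
# projection/packing used in the proposed codec.
import numpy as np

def project_block(points, resolution=64):
    """points: (n, 3) coordinates inside one block -> (depth_map, depth_axis)."""
    mins = points.min(axis=0)
    extents = points.max(axis=0) - mins
    depth_axis = int(np.argmin(extents))              # project along the thinnest axis
    plane_axes = [a for a in range(3) if a != depth_axis]
    uv = ((points[:, plane_axes] - mins[plane_axes]) /
          (extents[plane_axes] + 1e-9) * (resolution - 1)).astype(int)
    depth_map = np.full((resolution, resolution), -1.0)   # -1 marks empty pixels
    depth = points[:, depth_axis] - mins[depth_axis]
    for (u, v), d in zip(uv, depth):
        if depth_map[u, v] < 0 or d < depth_map[u, v]:    # keep the nearest point
            depth_map[u, v] = d
    return depth_map, depth_axis
```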

Presently, we utilize the x264/x265 codec to code the maps. In the future, we will adopt a vector quantization-based image codec to compress the two maps.

— By Qingyang Zhou


[1] S. Schwarz, M. Preda, V. Baroncini, M. Budagavi, P. Cesar, P. A. Chou, R. A. Cohen, M. Krivokuća, S. Lasserre, Z. Li et [...]

February 6th, 2022 | News

MCL Research on Unsupervised Object Tracking

Video object tracking is one of the fundamental computer vision problems. It finds rich applications in video surveillance, autonomous navigation, robotic vision, etc. Given a bounding box on the target object in the first frame, a tracker has to predict object box locations and sizes for all remaining frames in online single object tracking (SOT). The performance of a tracker is measured by accuracy (a higher success rate), robustness (automatic recovery from tracking loss), computational complexity, and speed (a higher number of frames per second, FPS).

Online trackers can be categorized into supervised and unsupervised ones. Supervised trackers based on deep learning (DL) have dominated the SOT field in recent years. DL trackers offer state-of-the-art tracking accuracy, but they do have some limitations. First, a large number of annotated tracking video clips are needed for training, which is a laborious and costly task. Second, they demand large memory space to store the parameters of deep networks due to large model sizes. Third, the high computational power requirement hinders their application in resource-limited devices such as drones or mobile phones. Advanced unsupervised SOT methods often use discriminative correlation filters (DCFs), which can run fast on a CPU thanks to the Fast Fourier Transform and have very small model sizes. There is a significant performance gap between unsupervised DCF trackers and supervised DL trackers. It is attributed to the limitations of DCF trackers, such as failure to recover from tracking loss and inflexibility in object box adaptation.
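
To see why DCF trackers are so cheap, the sketch below builds a single-image, MOSSE-style correlation filter in the Fourier domain. Real DCF trackers such as STRCF add spatial-temporal regularization, multi-channel features, and online updates; this minimal version is only for illustration.

```python
# Minimal single-image MOSSE-style correlation filter; illustrates the FFT-based
# closed-form solution that makes DCF trackers fast on a CPU. Not STRCF/UHP-SOT.
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired response: a Gaussian peak centered on the target patch."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label, lam=1e-2):
    """Closed-form filter H* = (G . conj(F)) / (F . conj(F) + lambda)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlation response on a new patch; the peak indicates the target shift."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
```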

To address the above issues with a green solution, we previously proposed UHP-SOT (Unsupervised High-Performance Single Object Tracker), which used STRCF as the baseline and incorporated two new modules: background motion modeling and trajectory-based object box prediction. Our new work, UHP-SOT++, is an [...]

December 12th, 2021 | News

MCL Research on Point Cloud Odometry

Odometry is the process of using motion sensors to estimate the change in position of an object over time. It has been widely studied in the context of mobile robots, autonomous vehicles, drones, and other moving agents. Traditional odometry based on motion sensors such as inertial measurement units (IMUs) and magnetometers is prone to error accumulation over time, known as odometry drift. Visual odometry makes use of camera images and/or point cloud scans collected over time to determine the position and orientation of the moving object. Several visual odometry systems that integrate monocular vision, stereo vision, point clouds, and IMUs have been developed for object localization.

We propose an unsupervised learning method for visual odometry from LiDAR point clouds called Green Point Cloud Odometry (GPCO). GPCO follows the traditional scan-matching approach to the odometry problem by incrementally estimating the motion between two consecutive point cloud scans. The GPCO method can be divided into four steps. First, a geometry-aware sampling method selects a small subset of points from the input point clouds. To do so, the eigen features of points in a local neighborhood are considered, followed by random point sampling. Next, the 3D view surrounding the moving object is partitioned into four parts representing the front, rear, left-side, and right-side views. This view-partitioning step divides the sampled points into four disjoint sets. The features of the sampled points are derived using the PointHop++ [1] method. Matching points between two consecutive point clouds are found in each view using the nearest-neighbor rule in the feature space. Finally, the motion between the two scans is estimated using singular value decomposition (SVD). The motion is accumulated with the estimates from previous time steps, and the process [...]
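
The last step is a standard closed-form computation. The sketch below gives a generic SVD-based (Kabsch-style) estimate of the rigid motion between two sets of matched points; it is a textbook implementation, not MCL's exact code.

```python
# Generic SVD-based (Kabsch-style) rigid motion estimation between matched points.
import numpy as np

def estimate_motion(src, dst):
    """src, dst: (n, 3) matched points; returns R (3x3), t (3,) with dst ~ R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```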

December 5th, 2021 | News