MCL Research on Point Cloud Segmentation

Processing and analyzing 3D point clouds is challenging: the spatial coordinates of points are irregular, so the points cannot be consistently ordered for input to deep neural networks (DNNs). To deal with the ordering problem, some transformation is needed in the deep learning pipeline, yet transforming a point cloud into another form often incurs information loss. Several DNNs designed for point cloud classification and segmentation in recent years address the point-order problem and achieve impressive performance in tasks such as classification, segmentation, registration, and object detection. However, DNNs rely on expensive labeled data. Furthermore, due to end-to-end optimization, deep features are learned iteratively via backpropagation. To save both labeling and computational costs, it is desirable to obtain features in an unsupervised, feedforward, one-pass manner.

Unsupervised and self-supervised feature learning for 3D point clouds has been investigated before. Although no labels are needed, the learned features are less powerful than their supervised counterparts, leading to degraded performance. Recently, two lightweight point cloud classification methods, PointHop [1] and PointHop++ [2], were proposed. Both have an unsupervised feature learning module, and their performance is comparable with that of state-of-the-art deep learning methods.

By generalizing PointHop, we propose a new solution for joint point cloud classification and part segmentation here. Our main contribution is the development of an unsupervised feedforward feature (UFF) learning system [3] with an encoder-decoder architecture. UFF exploits the statistical correlation between points in a point cloud set to learn shape and point features in a one-pass, feedforward manner. It obtains global shape features with the encoder and local point features with the encoder-decoder cascade. The shape/point features are then fed into classifiers [...]
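The flavor of unsupervised, one-pass feature learning can be illustrated with a toy sketch: gather each point's k-nearest-neighbor offsets and reduce them with a PCA learned from the data itself, standing in for the Saab transform used in PointHop. The function name and parameters below are illustrative, not the published UFF implementation:

```python
import numpy as np

def one_pass_features(points, k=8, out_dim=8):
    """Unsupervised, feedforward point features: gather k-NN neighborhoods,
    then reduce each flattened neighborhood with PCA fit on the data itself."""
    n = len(points)
    # pairwise squared distances (brute force; fine for small clouds)
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]            # k nearest neighbors (incl. self)
    # local descriptor: neighbor offsets relative to the center point
    neigh = points[idx] - points[:, None, :]      # shape (n, k, 3)
    X = neigh.reshape(n, -1)                      # shape (n, 3k)
    # PCA learned in one pass, no labels, no backpropagation
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:out_dim].T                    # shape (n, out_dim)
```

Note that nothing here is optimized iteratively: the "training" is a single SVD over the local descriptors, which is what makes the one-pass approach cheap.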

By |October 18th, 2020|News|Comments Off on MCL Research on Point Cloud Segmentation|

MCL Research on Point Cloud Registration

Point cloud registration refers to the process of aligning two point clouds. The two point clouds to be aligned are commonly called source and target. The goal is to find a spatial transformation (3D rotation and translation) that needs to be applied to the source to optimally align it with the target.  Registration has become popular with the proliferation of 3D scanning devices like LiDAR and their applications in autonomous driving, robotics, graphics, mapping, etc.

Point clouds need to be registered in order to merge data from different sensors into a globally consistent view, to map a new observation onto known data, and so on. Registration is challenging for several reasons: the source and target point clouds may have different sampling densities and different numbers of points; point clouds may contain outliers and/or be corrupted by noise; and sometimes only partial views are available.

The problem of registration (or alignment) has been studied for a long time. Before point cloud processing, the focus was on aligning lines, parametric curves, and surfaces. The classical Iterative Closest Point (ICP) algorithm alternates between finding corresponding points and estimating the optimal rotation and translation; it uses only the spatial coordinates of points to establish point correspondences. More recently, there has been a trend toward deep-learning, feature-based methods for registration. Two popular ones are PointNetLK and Deep Closest Point (DCP). Both treat registration as a supervised learning problem and train end-to-end networks; the supervision takes the form of class labels and the ground-truth rotation matrix and translation vector. We propose a method called ‘Salient Points Analysis (SPA)’ [1] for registration. In contrast with these recent deep learning methods, our SPA method [...]
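The ICP alternation described above can be sketched in a few lines of numpy. This is a brute-force nearest-neighbor version for clarity; practical implementations use k-d trees, outlier rejection, and convergence tests:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD: find R, t minimizing ||R @ src_i + t - dst_i||^2."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(source, target, iters=20):
    """Alternate nearest-neighbor correspondence and rigid-transform estimation."""
    src = source.copy()
    for _ in range(iters):
        # correspondence step: closest target point for each source point
        d = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d.argmin(axis=1)]
        # estimation step: optimal rigid motion for these correspondences
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    # recover the accumulated transform mapping the original source to target
    return best_rigid_transform(source, src)
```

With exact correspondences the Kabsch step is optimal in the least-squares sense; the final call recovers the accumulated rigid motion in one shot because a composition of rigid motions is itself rigid.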

By |October 11th, 2020|News|Comments Off on MCL Research on Point Cloud Registration|

MCL Research on Texture Synthesis

Automatic synthesis of visually pleasant texture that resembles an exemplary texture finds applications in computer graphics. We have witnessed amazing quality improvements in synthesized texture over the last 5-6 years due to the resurgence of neural networks. Texture synthesis based on deep learning (DL), such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), yields visually pleasant results. DL-based methods learn transform kernels from numerous training data through end-to-end optimization. However, these methods have two main shortcomings: 1) a lack of mathematical transparency and 2) high training and inference complexity.

To address these shortcomings, we investigate a non-parametric and interpretable texture synthesis method, called NITES, in this work. NITES is mathematically transparent and efficient in both training and inference. It consists of three steps. First, it analyzes texture patches (the training samples) cropped from the input exemplary texture image to obtain their joint spatial-spectral representations. Second, the probabilistic distributions of the training samples in the joint spatial-spectral spaces are characterized. The sample distribution in the core subspace is carefully studied, which allows us to build a core-subspace generation model; a successive-subspace generation model is then developed to build a higher-dimensional subspace from a lower-dimensional one. Finally, new texture images are generated by mimicking the probabilities and/or conditional probabilities of the source texture patches. In particular, we adopt a data-driven transform, known as the channel-wise (c/w) Saab transform, which provides a powerful representation in the joint spatial-spectral space. The c/w Saab transform is derived from the successive subspace learning (SSL) theory.
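The three steps can be illustrated with a toy sketch in which plain PCA stands in for the c/w Saab transform and an independent per-dimension Gaussian fit stands in for the full distribution model. The function name, patch size, and dimensions below are illustrative, not the NITES implementation:

```python
import numpy as np

def generate_patches(exemplar, patch=8, n_new=4, dim=16, seed=0):
    """Toy NITES-style flow: crop patches, learn a spectral representation
    and a coefficient distribution, then sample and invert."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape
    # step 1: densely crop overlapping training patches from the exemplar
    X = np.array([exemplar[i:i + patch, j:j + patch].ravel()
                  for i in range(h - patch + 1)
                  for j in range(w - patch + 1)])
    mean = X.mean(axis=0)
    # step 2: PCA basis, then fit each coefficient with a Gaussian
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coef = (X - mean) @ Vt[:dim].T
    mu, sigma = coef.mean(axis=0), coef.std(axis=0)
    # step 3: sample new coefficients from the fitted distribution and invert
    new_coef = rng.normal(mu, sigma, size=(n_new, dim))
    return (new_coef @ Vt[:dim] + mean).reshape(n_new, patch, patch)
```

The real method models conditional probabilities across successive subspaces rather than an independent Gaussian per dimension, but the analyze/characterize/sample structure is the same.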

Experimental results show the superior quality of the generated texture images and the efficiency of the proposed NITES method in terms of both training and inference time. It can effectively generate visually pleasant texture images, including [...]

By |October 4th, 2020|News, Research|Comments Off on MCL Research on Texture Synthesis|

Congratulations to Jiali Duan for Passing Qualifying Exam

The title of his Ph.D. thesis proposal is “Theory and Applications of Adversarial and Structured Knowledge Learning”. His qualifying exam committee consisted of C.-C. Jay Kuo (Chair), Keith Michael Chugg, Keith Jenkins, Rahul Jain and Stefanos Nikolaidis.


Abstract of thesis proposal:

Deep learning has brought impressive improvements to many tasks, thanks to end-to-end data-driven optimization. However, people have little control over the system during training and limited understanding of the structure of the knowledge being learned. In this thesis proposal, we study the theory and applications of adversarial and structured knowledge learning: 1) learning adversarial knowledge through human interaction or by incorporating a human in the loop; 2) learning structured knowledge by modelling contexts and users’ preferences.

In the first category, our research topics include human-robot adversarial learning, human-guided curriculum reinforcement learning, and PortraitGAN for simultaneous emotion and modality manipulation. In the second category, a real-world compatible recommendation problem is tackled with structural graph representation and deep metric learning. The two categories are related in the sense that structured knowledge often lays a solid foundation on which adversarial knowledge can be learned more successfully. Additionally, we contribute technically by open-sourcing relevant platforms.

By |September 27th, 2020|News|Comments Off on Congratulations to Jiali Duan for Passing Qualifying Exam|

Congratulations to Mozhdeh Rouhsedaghat for Her Summer Internship at PayPal

Mozhdeh Rouhsedaghat received her bachelor’s degree from the EE department of Sharif University of Technology. She is currently a Ph.D. student in the Media Communications Lab at the University of Southern California, under the supervision of Prof. C.-C. Jay Kuo. Her research interests include computer vision and deep learning. She was a research intern at PayPal during the summer. Here is a short interview with Mozhdeh.

1. How did your studies at USC and MCL help you?

During my Ph.D. studies at USC and MCL, I gained a solid understanding of deep learning and machine learning and strengthened my research skills, so I was able to explore a research area during my internship and achieve great results. At MCL, we write weekly reports and hold seminars, which helped me improve my writing and presentation skills as well.

2. What was it like working at PayPal?

This year, because of the global pandemic, all the interns worked remotely. PayPal provided the required equipment for all the interns, and the University Program Team at PayPal tried to make the whole experience more interesting and exciting. I had daily meetings with my mentor and weekly meetings with my manager. Overall, I was very satisfied with the whole experience.

3. Do you have any suggestions for current graduate students?

When you want to apply for a position, make sure that the listed responsibilities match your goals. For example, Ph.D. students usually prefer a research position. My second piece of advice is to apply early for internship positions, as most positions are offered 5-7 months prior to their start date.

By |September 20th, 2020|News|Comments Off on Congratulations to Mozhdeh Rouhsedaghat for Her Summer Internship at PayPal|

Congratulations to Yeji Shen for His Summer Internship at Facebook

Yeji Shen is a Ph.D. candidate in the Multimedia Communication Lab (MCL) at USC, supervised by Prof. C.-C. Jay Kuo. He received his Bachelor’s degree in Computer Science from Peking University, Beijing, China in June 2016, and has been pursuing his Ph.D. in MCL since August 2016. His research interests include machine learning, computer vision, and artificial intelligence. This summer, he did an internship at Facebook. Here is a short interview with Yeji.

1. How did your studies at USC and MCL help you?

First of all, at MCL I developed a solid understanding of the research topics I have been focusing on, such as active learning, 3D vision, and semi-supervised learning. Such understanding is very helpful and valuable for both job interviews and the actual work. Second, I reached a reasonable level of presentation skill, which I believe is very important for a future career. Third, a tough mind. Life is challenging; only those with a tough mind can get through the difficulties and find happiness.

2. What was it like working at Facebook?

The internship this year was a remote one. Unlike the normal working style, interns worked from home with equipment sent by the company (which, of course, I needed to mail back). Compared to a normal internship, the main pros were: 1) no need to physically move to the Bay Area, which saved on rent; 2) no commuting time. Some clear cons were: 1) communication was harder; 2) there was less interaction with team members; 3) the remote working style just didn’t feel good when it lasted too long. Still, the overall experience was not bad.

3. [...]

By |September 13th, 2020|News|Comments Off on Congratulations to Yeji Shen for His Summer Internship at Facebook|

Congratulations to Kaitai Zhang for His Summer Internship at Facebook

Kaitai Zhang is currently a fourth-year Ph.D. candidate at the Multimedia Communication Lab. His research mainly focuses on computer vision, machine learning, and deep learning. Kaitai received an internship offer and spent the past summer at Facebook. Here is a short interview with Kaitai.


1. How did your studies at USC and MCL help you? (technically and psychologically)

I believe my research experience at MCL and my educational background from USC are the foundation on which I got the internship opportunity. On the technical side, my machine learning-related projects at MCL helped a lot in getting the hiring manager’s attention during the interview process (this is especially important if you want to get into a very popular team). On the psychological side, I found the industrial project I worked on even more beneficial than I expected. It was an opportunity to get exposed to real-world problems from industry and learn how things work in companies, which makes our students better prepared for internships.

Besides the above two aspects, I also want to mention another advantage for students from MCL: the extraordinary reputation and wide alumni network of our lab. More than one engineer talked to me about our alumni and their awesome work at Facebook.


2. What was it like working at Facebook?

The internship at Facebook was like an amazing journey. Here I will focus on the one thing that impressed me most: the move-fast working style at Facebook. People there move fast on all fronts; they are very energetic and sharp. There is a daily sync meeting as well as a few ad-hoc meetings to discuss things efficiently. People like asking others for help and also like helping others, and this is how they unblock themselves when they meet [...]

By |September 6th, 2020|News|Comments Off on Congratulations to Kaitai Zhang for His Summer Internship at Facebook|

Congratulations to Jiali Duan for His Summer Internship at Amazon

Jiali Duan is a Ph.D. candidate supervised by Prof. C.-C. Jay Kuo at MCL. He interned at Amazon A9 during the summer. Here is a short interview with Jiali.


1. How did your studies at USC and MCL help you? (technically and psychologically)

First, MCL helped me lay a solid foundation for research and communication. Academic and technical exchanges of ideas happen almost daily for an applied scientist intern at Amazon, making it a necessity to communicate concisely and logically. Thankfully, I was trained in this respect at MCL through weekly reports and regular personal meetings. Second, the ability to prototype new ideas is intensely tested during an internship. Original research here at MCL prepares us well for this.


2. What was it like working at Amazon A9?

I have had two internship experiences at A9: an in-person one last year and a remote one this year. Generally, the in-person one was much better. Last year, I was working at A9 in Palo Alto. The company is located on University St, within walking distance of Stanford. Thanks to its location, there are many nice restaurants nearby and the neighborhood is very safe. The internship program was also kind enough to provide free baseball/football match tickets, a Santa Cruz tour, and a tour of the Seattle headquarters.


3. Do you have any suggestions for current graduate students?

I know that some companies offer more software development positions than research positions, and some allow only a limited number of interview attempts from the same person. So be ready when you try. In terms of suggestions, be prepared for questions that reach beyond your scope of knowledge, which may require a certain amount of improvisation.

By |August 31st, 2020|News|Comments Off on Congratulations to Jiali Duan for His Summer Internship at Amazon|

Congratulations to Bin Wang for His Summer Internship at JD

Bin Wang received his B.Eng. from the University of Electronic Science and Technology of China in June 2017. In July 2017, he joined the Media Communication Lab (MCL) at the University of Southern California (USC) as a Ph.D. student, supervised by Prof. C.-C. Jay Kuo. His research interests include natural language processing and machine learning.

1. How did your studies at USC and MCL help you? (technically and psychologically)

The research topics I have been working on for the last two years helped me build a solid understanding of the natural language processing and machine learning fields. In particular, the experience with representation learning and graph learning projects was really helpful, allowing me to do well in interviews and get started quickly on intern projects. Additionally, our weekly reports and presentation training really sharpened my writing and oral presentation skills, which are at least as important as coding/implementation ability in the long run.

2. What was it like working at JD AI Research?

Because of the global pandemic, the year 2020 is quite different for everyone. Instead of heading to Mountain View, all interns worked remotely, and a more flexible working style was allowed. We had daily meetings to sync with our supervisor and mentor, and each week we submitted a report for summarizing and planning. At the AI Research group, the working style is very close to that of a university lab, and the goal is likewise to publish at top-tier conferences in the AI field.

3. Do you have any suggestions for current graduate students? (e.g. interview strategy and preparation, etc.)       

The evaluation protocol usually varies across companies and groups. Gathering information about the positions you are interested in is extremely important, and MCL alumni can be [...]

By |August 23rd, 2020|News|Comments Off on Congratulations to Bin Wang for His Summer Internship at JD|

MCL Research on Face Gender Classification

Face attribute classification is an important topic in biometrics. Ancillary information about faces, such as gender, age, and ethnicity, is referred to as soft biometrics in forensics. The face gender classification problem has been studied extensively for more than two decades. Before the resurgence of deep neural networks (DNNs) around 7-8 years ago, the problem was treated using the standard pattern recognition paradigm, which consists of two cascaded modules: 1) unsupervised feature extraction and 2) supervised classification via common machine learning tools such as support vector machine (SVM) and random forest (RF) classifiers.
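As a concrete illustration of that classical two-module paradigm, here is a toy pipeline with PCA "eigenface"-style features as the unsupervised extractor and a least-squares linear classifier standing in for the SVM/RF stage. All names and dimensions below are illustrative:

```python
import numpy as np

def pca_features(images, dim=10):
    """Module 1 (unsupervised): PCA features from flattened face images."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:dim].T, mean, Vt[:dim]

def train_linear(F, y):
    """Module 2 (supervised): least-squares linear classifier on the features,
    a simple stand-in for the SVM/RF classifiers mentioned above."""
    A = np.hstack([F, np.ones((len(F), 1))])                 # append bias column
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)    # labels mapped to +/-1
    return w

def predict(F, w):
    A = np.hstack([F, np.ones((len(F), 1))])
    return (A @ w > 0).astype(int)
```

The appeal of this cascade in a resource-constrained setting is exactly what the post describes: the feature extractor needs no labels, and both modules are cheap to train and run compared with a CNN.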

We have seen fast progress on this topic in recent years due to the application of deep learning (DL) technology. Cloud-based face verification, recognition, and attribute classification technologies have matured and are used in many real-world biometric systems. Convolutional neural networks (CNNs) offer high accuracy, yet they rely on large learning models with hundreds of thousands or even millions of parameters. Their superior performance is attributable to factors such as higher input image resolution, ever larger training sets, and abundant computational/memory resources.

Edge/mobile computing in a resource-constrained environment cannot meet the above conditions. The technology of interest to us finds applications in rescue missions and/or field operations in remote locations, where the accompanying face inference tasks must execute on poor computing and communication infrastructure. It is essential to have a small learning model, low training and inference complexity, and low input image resolution. The last requirement arises from the need to image individuals at greater standoff distances, which results in faces with fewer pixels.

In this research, MCL worked closely with ARL researchers in developing a new interpretable non-parametric machine [...]

By |August 17th, 2020|News|Comments Off on MCL Research on Face Gender Classification|