News


Professor Kuo Talked about Deep Learning in MHI Emerging Trends Series

Professor Kuo Gave a Talk on Deep Learning at Ming Hsieh Institute

The Ming Hsieh Institute has launched an MHI Emerging Trends Series. MCL Director, Professor C.-C. Jay Kuo, was the first speaker in this series. Professor Kuo gave his talk on deep learning on April 10 (Monday), 2017.

Interest in neural-network-based solutions to supervised machine learning has resurged over the last five years. However, little theoretical work has been reported in this area. In his talk, Professor Kuo attempted to provide a theoretical foundation for the working principle of the convolutional neural network (CNN) from a signal processing viewpoint. First, he introduced the RECOS transform as a basic building block for CNNs. The term “RECOS” is an acronym for “REctified-COrrelations on a Sphere”. It consists of two main concepts: data clustering on a sphere and rectification. Then, a CNN is interpreted as a network that implements the guided multi-layer RECOS transform. Along this line, he also compared traditional single-layer and modern multi-layer signal analysis approaches. Furthermore, he discussed how guidance can be provided by data labels through backpropagation during training, in an attempt to offer a smooth transition from weakly to heavily supervised learning. Finally, he pointed out several future research directions.
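The two RECOS ingredients named above (correlating a unit-normalized input with anchor vectors on the sphere, then rectifying away negative responses) can be sketched in a few lines; the anchor vectors below are invented for illustration, not taken from the talk:

```python
import numpy as np

def recos_transform(x, anchors):
    """Rectified correlations on a sphere (RECOS), sketched.

    x:       input vector, projected onto the unit sphere.
    anchors: rows are anchor (filter) vectors, also unit-normalized.
    Returns the rectified correlation of x with each anchor.
    """
    x = x / np.linalg.norm(x)                      # project input onto the sphere
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    corr = a @ x                                   # cosine similarities in [-1, 1]
    return np.maximum(corr, 0.0)                   # rectification (ReLU) removes negatives

anchors = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(recos_transform(np.array([3.0, 4.0]), anchors))  # correlations 0.6, 0.8, and 0.0
```

Stacking such transforms, with each layer's rectified outputs feeding the next layer's correlations, gives the multi-layer interpretation described in the talk.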

There were about 80 people attending Professor Kuo’s seminar, and many questions were asked after his talk. Professor Kuo said that he enjoyed the interaction with the audience very much, and that it demonstrated the audience’s strong interest in this topic.

By |April 17th, 2017|News|Comments Off on Professor Kuo Talked about Deep Learning in MHI Emerging Trends Series|

MCL works on Interactive Advisement for Smart TV

When watching images or videos on TV, we often have questions about what we see. What is the name of that beautiful place? Who is that actor? Which store sells the actor’s car at a big discount? Imagine a smart TV that can interactively answer your questions and recommend relevant shopping or travel advertisements. Watching TV would become more convenient and more fun.

MCL members Bing Li, Zhehang Ding and Yuhang Su are collaborating with Samsung on Interactive Advisement for Smart TV. In the first year, we focus on automatic image/video captioning. Image/video captioning describes an image or video with a sentence, rather than merely detecting the objects in it.

Currently, we have designed three pipelines for this project. The first is general image captioning. The second and third are place-aware and face-aware captioning, respectively, so that our system can perform better in vertical industries such as travel, entertainment, and sports. For general image captioning, we developed a detection method that achieves 84% mAP. For place-aware annotation, since no existing image dataset covers world-famous places, we collected images of 118 famous places in 21 countries to construct a landmark dataset. For face-aware annotation, we constructed a celebrity dataset and developed CNN-based face detection and face recognition methods.
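Both the place-aware and face-aware pipelines need to match a query’s CNN feature against a gallery of known landmarks or celebrities; a minimal cosine-similarity matcher (the names and 3-D features below are made up stand-ins for real CNN embeddings) might look like:

```python
import numpy as np

def match_gallery(query_feat, gallery):
    """Return the gallery entry whose feature is most similar to the query.

    gallery: dict mapping a name to its CNN feature vector.
    Uses cosine similarity, a common choice for matching CNN embeddings.
    """
    q = query_feat / np.linalg.norm(query_feat)
    best_name, best_sim = None, -1.0
    for name, feat in gallery.items():
        sim = float(np.dot(q, feat / np.linalg.norm(feat)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name, best_sim

# Hypothetical 3-D embeddings standing in for real landmark features.
gallery = {"Eiffel Tower": np.array([1.0, 0.1, 0.0]),
           "Tower Bridge": np.array([0.0, 1.0, 0.2])}
name, sim = match_gallery(np.array([0.9, 0.2, 0.0]), gallery)
print(name)  # -> Eiffel Tower
```

The matched name can then be injected into the generated sentence in place of a generic word like “a building”.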

In our future work, we will put more effort into video captioning.

By |April 11th, 2017|News|Comments Off on MCL works on Interactive Advisement for Smart TV|

MCL Members Chi-Hao Wu and Siyang Li Presented Their Research Work at WACV 2017

MCL members Chi-Hao (Eddy) Wu and Siyang Li presented their papers at the Winter Conference on Applications of Computer Vision (WACV) 2017 in Santa Rosa, CA, USA.

The title of Eddy’s paper is “Boosted Convolutional Neural Networks (BCNN) for Pedestrian Detection”, with Weihao Gan, De Lan and C.-C. Jay Kuo as the co-authors. Here is a brief summary:

“In this work, a boosted convolutional neural network (BCNN) system is proposed to enhance the pedestrian detection performance. Being inspired by the classic boosting idea, we develop a weighted loss function that emphasizes challenging samples in training a convolutional neural network (CNN). Two types of samples are considered challenging: 1) samples with detection scores falling in the decision boundary, and 2) temporally associated samples with inconsistent scores. A weighting scheme is designed for each of them. Finally, we train a boosted fusion layer to benefit from the integration of these two weighting schemes. We use the Fast-RCNN as the baseline, and test the corresponding BCNN on the Caltech pedestrian dataset in the experiment, and show a significant performance gain of the BCNN over its baseline.” 
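The boosting-style weighting described in the abstract can be illustrated with a toy scheme that up-weights samples whose scores fall near the decision boundary; the margin and weight values below are invented for illustration, not taken from the paper:

```python
import numpy as np

def boosted_weights(scores, margin=0.2):
    """Up-weight samples whose detection scores fall near the decision boundary.

    scores: predicted probabilities in [0, 1]; the boundary is at 0.5.
    Samples within `margin` of the boundary get weight 2.0, others 1.0.
    """
    near_boundary = np.abs(scores - 0.5) < margin
    return np.where(near_boundary, 2.0, 1.0)

def weighted_cross_entropy(scores, labels, weights, eps=1e-12):
    """Mean per-sample cross-entropy scaled by the boosting weights."""
    ce = -(labels * np.log(scores + eps) + (1 - labels) * np.log(1 - scores + eps))
    return float(np.mean(weights * ce))

scores = np.array([0.95, 0.55, 0.45, 0.05])
labels = np.array([1.0, 1.0, 0.0, 0.0])
w = boosted_weights(scores)
print(w)  # the two near-boundary samples (0.55 and 0.45) get doubled weight
```

The temporal-consistency weighting in the paper works analogously, emphasizing detections whose scores disagree with temporally associated ones.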

Siyang’s paper is entitled “Box Refinement: Object Proposal Enhancement and Pruning”, co-authored with Heming Zhang, Junting Zhang, Yuzhuo Ren and C.-C. Jay Kuo. The summary goes as follows:

“Object proposal generation has been an important preprocessing step for object detectors in general and the convolutional neural network (CNN) detectors in particular. Recently, people start to use the CNN to generate object proposals but most of these methods suffer from the localization bias problem, like other objectness-based methods. Since contours offer a powerful cue for accurate localization, we propose a box refinement method by searching for the optimal contour for each initial bounding box that minimizes the contour cost. [...]
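The contour-based refinement idea can be illustrated with a toy search: given an edge map, shift a box over a small neighborhood and keep the offset whose border best aligns with strong edges. The cost used here (negative edge strength along the box border) is a simplification of the paper’s contour cost, purely for illustration:

```python
import numpy as np

def border_edge_strength(edges, box):
    """Sum of edge responses along the border of box = (r0, c0, r1, c1), exclusive ends."""
    r0, c0, r1, c1 = box
    top = edges[r0, c0:c1].sum()
    bottom = edges[r1 - 1, c0:c1].sum()
    left = edges[r0 + 1:r1 - 1, c0].sum()
    right = edges[r0 + 1:r1 - 1, c1 - 1].sum()
    return top + bottom + left + right

def refine_box(edges, box, radius=1):
    """Shift the box by up to `radius` pixels to maximize border edge strength."""
    best, best_score = box, border_edge_strength(edges, box)
    r0, c0, r1, c1 = box
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            cand = (r0 + dr, c0 + dc, r1 + dr, c1 + dc)
            if cand[0] < 0 or cand[1] < 0 or cand[2] > edges.shape[0] or cand[3] > edges.shape[1]:
                continue  # candidate falls outside the image
            score = border_edge_strength(edges, cand)
            if score > best_score:
                best, best_score = cand, score
    return best

# Synthetic edge map: a bright square outline occupying rows/cols 2..5.
edges = np.zeros((8, 8))
edges[2, 2:6] = edges[5, 2:6] = 1.0
edges[2:6, 2] = edges[2:6, 5] = 1.0
print(refine_box(edges, (1, 1, 5, 5)))  # -> (2, 2, 6, 6), snapped onto the outline
```

The actual method searches over contours rather than rigid shifts, but the principle of minimizing a boundary-alignment cost is the same.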

By |April 5th, 2017|News|Comments Off on MCL Members Chi-Hao Wu and Siyang Li Presented Their Research Work at WACV 2017|

MCL Works on Text Localization

Spotting text in a natural scene image is a challenging task. It involves localizing text in the image and then recognizing the text in the localized image patches. To tackle this problem, traditional optical character recognition (OCR) techniques, which are designed specifically for clean black-and-white text, give way to more sophisticated methods such as neural networks.

Yuanhang Su, an MCL member, is collaborating with Inha University, Korean Airline, and the Pratt & Whitney Institute for Collaborative Engineering (PWICE) to build a text spotting system. Our lab has developed a comprehensive text spotting system that localizes and recognizes text in natural scene images using a combined convolutional neural network (CNN) and recurrent neural network (RNN) architecture. The system handles both English and Korean text.
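In combined CNN+RNN text recognizers, the RNN typically emits a per-timestep distribution over characters plus a blank symbol, which is then collapsed by CTC-style greedy decoding (merge repeats, drop blanks). A minimal decoder, with a toy one-hot probability path for illustration:

```python
import numpy as np

def ctc_greedy_decode(probs, alphabet, blank=0):
    """Collapse per-timestep character probabilities into a string.

    probs:    (T, C) array; column `blank` is the CTC blank symbol.
    alphabet: maps each non-blank class index to a character.
    """
    best = probs.argmax(axis=1)            # most likely class per timestep
    out, prev = [], blank
    for k in best:
        if k != blank and k != prev:       # merge repeated classes, skip blanks
            out.append(alphabet[k])
        prev = k
    return "".join(out)

# Toy example: classes are [blank, 'c', 'a', 't'].
alphabet = {1: "c", 2: "a", 3: "t"}
probs = np.eye(4)[[1, 1, 0, 2, 3, 3, 0]]   # one-hot path: c c _ a t t _
print(ctc_greedy_decode(probs, alphabet))  # -> cat
```

The same decoding works regardless of the script, which is part of what makes the CNN+RNN design convenient for covering both English and Korean.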

By |April 2nd, 2017|News|Comments Off on MCL Works on Text Localization|

MCL Works on Deep Learning based Fashion Fingerprinting

A fashion fingerprint is a compact feature vector for fashion items that can be used for tasks such as recognition, clustering, and retrieval of similar items. It is equally useful for online fashion retailers and for physical apparel stores (with or without online extensions). A related problem is understanding the apparel preferences of an individual from the clothes they wear while visiting a physical store. One challenge that distinguishes fashion research from other domains is the lack of sufficient accurate annotation: available datasets have either a limited number of images or very noisy annotations.

Currently, we have trained a fashion item localization model based on SSD [1]. The model localizes upper clothes, bottom clothes, and one-pieces, and has been tested on the Clothing Parsing dataset [2]. It achieves an F-score of 0.887 on upper-clothes localization. For other clothing items, errors occur because the model may focus on overly local regions and thus confuses skirts with dresses. We will incorporate a prior on human body location into the model to address this problem.
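The reported F-score is the harmonic mean of precision and recall over the matched localizations; for reference, a minimal implementation (the counts in the example are invented, not the dataset’s actual numbers):

```python
def f_score(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 80 correct localizations, 10 false alarms, 10 misses
print(round(f_score(80, 10, 10), 3))  # -> 0.889
```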

In the future, we will further refine our localization model and work in two directions. One is to recognize garments based on our localization results; the other is to automatically label more images to enlarge the datasets.

[1] Liu, Wei, et al. “SSD: Single shot multibox detector.” European Conference on Computer Vision. Springer International Publishing, 2016.
[2] Liang, Xiaodan, et al. “Deep human parsing with active template regression.” IEEE transactions on pattern analysis and machine intelligence 37.12 (2015): 2402-2414.

By |March 26th, 2017|News|Comments Off on MCL Works on Deep Learning based Fashion Fingerprinting|

MCL Works on Splicing Image Detection

With the advent of Web 2.0 and the ubiquitous adoption of low-cost, high-resolution digital cameras, users upload and share images on a daily basis. This trend of public image distribution, together with access to user-friendly editing software such as Photoshop and GIMP, has made image forgery a serious issue. Splicing is one of the most common types of image forgery. It manipulates images by copying a region from one image (the donor image) and pasting it onto another (the host or spliced image). Forgers often use splicing to give the false impression that an additional object is present in the image, or to remove an object from it. A spliced image from the Columbia Uncompressed dataset [1] is shown above. Image splicing can potentially be used to generate false propaganda for political purposes. For example, during the 2004 US presidential election campaign, an image that showed John Kerry and Jane Fonda speaking together at an anti-Vietnam War protest was released and circulated. It was later discovered that this was a spliced image created for political purposes. The spliced image and the two corresponding authentic images can be seen above [2].

Early work on image splicing detection only deduced whether a given image had been spliced, with no attempt to localize the spliced area. The problem of joint splicing detection and localization has only been studied in recent years. In image splicing localization, one must determine which pixels in an image have been manipulated by a splicing operation.
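Since localization is framed as per-pixel binary classification, a predicted manipulation mask is scored pixel-wise against the ground-truth mask; the Matthews correlation coefficient is one commonly used per-pixel metric (the tiny masks below are illustrative):

```python
import numpy as np

def mcc(pred_mask, gt_mask):
    """Matthews correlation coefficient between a predicted and a
    ground-truth per-pixel manipulation mask (0/1 arrays)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = float(np.logical_and(pred, gt).sum())
    tn = float(np.logical_and(~pred, ~gt).sum())
    fp = float(np.logical_and(pred, ~gt).sum())
    fn = float(np.logical_and(~pred, gt).sum())
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

gt = np.zeros((4, 4), dtype=int); gt[1:3, 1:3] = 1       # true spliced 2x2 region
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1   # over-segmented by one column
print(round(mcc(pred, gt), 2))  # -> 0.75
```

Unlike raw accuracy, MCC stays meaningful when the spliced region covers only a small fraction of the image, which is the typical case.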

One of the MCL members, Ronald Salloum, is currently working on an image splicing localization research project funded by the Defense Advanced [...]

By |March 22nd, 2017|News|Comments Off on MCL Works on Splicing Image Detection|

Congratulations to Professor Kuo for Receiving the 2017 IEEE Leon K. Kirchmayer Graduate Teaching Award

MCL Director, Professor C.-C. Jay Kuo, received the 2017 IEEE Leon K. Kirchmayer Graduate Teaching Award on March 6 at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) held in New Orleans, Louisiana, USA.

The IEEE Leon K. Kirchmayer Graduate Teaching Award is sponsored by the Leon K. Kirchmayer Memorial Fund and recognizes inspirational teaching of graduate students in the IEEE fields of interest.  Professor Kuo received this award for excellence in inspirational guidance of graduate students and curriculum development in the area of multimedia signal processing.

Professor Kuo gave the following short speech at the award ceremony: “It is a great honor to be recognized with the prestigious Leon K. Kirchmayer Graduate Teaching Award. I would like to use this opportunity to thank a few people who had a great impact on my teaching career. When I was a PhD student at MIT, I was fortunate to work with a few young faculty members: my PhD and MS thesis advisor, Bernard Levy; my PhD thesis co-advisor, John Tsitsiklis; my MS thesis co-advisor, Bruce Musicus; my PhD thesis committee member, Nick Trefethen; and my postdoc mentor at UCLA, Tony Chan. They spent an enormous amount of time nurturing and advising me, and I am deeply obliged to them. Furthermore, I would like to thank my graduate students. They are not only my students but also my teachers; I learned many new topics together with them. Finally, I would like to thank my wife and daughter. Their unconditional love and patience allow me to pursue whatever I want. I owe them tremendously, and would like to share my joy and honor with them.”

Congratulations to Professor [...]

By |March 12th, 2017|News|Comments Off on Congratulations to Professor Kuo for Receiving the 2017 IEEE Leon K. Kirchmayer Graduate Teaching Award|

MCL Works on Automatic Medical Image Segmentation with Convolutional Neural Networks

Automatic image segmentation has always been an important topic in medical imaging. Many medical applications, such as delineating heart structures, rely heavily on accurate segmentation results. Nowadays, manual segmentation is still required in many applications; it is not only time-consuming and tedious but also prone to human error. One of the MCL members, Ruiyuan Lin, is working on this research topic.

Many methods have been proposed to automate the segmentation process, ranging from region growing and active contour models to multi-atlas segmentation. In our research, we focus on convolutional neural network (CNN) based segmentation methods. We experimented with several segmentation networks, such as fully convolutional networks (FCN) and residual networks, compared their performance with other methods, and analyzed their strengths and weaknesses. We plan to further explore the use of CNNs on more complicated medical images, such as cross-domain images.
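Segmentation quality in this setting is commonly reported with the Dice coefficient between the predicted and manual delineations; a minimal reference implementation (the tiny masks are illustrative):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary segmentation masks:
    2 * |pred AND gt| / (|pred| + |gt|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

gt = np.zeros((4, 4), dtype=int); gt[0:2, 0:2] = 1     # manual delineation
pred = np.zeros((4, 4), dtype=int); pred[0:2, 0:3] = 1  # prediction, one column too wide
print(round(dice(pred, gt), 1))  # -> 0.8
```

A Dice score of 1.0 means perfect overlap with the manual segmentation; scores degrade quickly for structures with thin or small regions, which is part of why heart-structure delineation is demanding.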

Image credit: both images are modified from the MRI images in the Left Atrium Segmentation Challenge dataset:
Tobon-Gomez C, Geers AJ, Peters J, Weese J, Pinto K, Karim R, Ammar M, Daoudi A, Margeta J, Sandoval Z, Stender B, Zheng Y, Zuluaga MA, Betancur J, Ayache N, Chikh MA, Dillenseger J-L, Kelm BM, Mahmoudi S, Ourselin S, Schlaefer A, Schaeffter T, Razavi R, Rhode KS. Benchmark for Algorithms Segmenting the Left Atrium From 3D CT and MRI Datasets. IEEE Transactions on Medical Imaging, 34(7):1460–1473, 2015.

By |March 5th, 2017|News|Comments Off on MCL Works on Automatic Medical Image Segmentation with Convolutional Neural Networks|

MCL Works on User’s Experience on Head-Mounted VR Devices

Virtual Reality, or more precisely the head-mounted display (HMD), has become increasingly popular in recent years. With the release of consumer-level products such as the Oculus Rift and HTC Vive, it is no longer difficult for users to visit the virtual world. The immersive experience often amazes first-time users. However, adverse effects such as motion sickness are sometimes reported during play, so it is important to understand these side effects better.

Our research focuses on qualitative, and ultimately quantitative, measurement of motion sickness in Virtual Reality. With a reliable measurement of motion sickness, we can not only control and even avoid this effect, but also develop a research paradigm for measuring similar subjective experiences.

Currently, we have proposed a physically sound and practically feasible model to explain and quantify motion sickness in Virtual Reality. Our initial small-scale experiments have shown supportive evidence for the model. Will the model hold up in further experiments? Who knows; perhaps only nature can tell. But aren’t those endeavors to learn more about complex nature exactly what research is about?

By |February 27th, 2017|News|Comments Off on MCL Works on User’s Experience on Head-Mounted VR Devices|

MCL Works on Road Detection for Autonomous Driving

Advanced driver assistance systems (ADAS) have attracted increasing attention in recent years, as various IT technologies are introduced into vehicles to enhance driving safety and automation.

MCL members Junting Zhang and Yuhang Song have been collaborating with MediaTek Inc. on ADAS-oriented deep learning technologies since January 2016. Single-image-based traffic scene segmentation and road detection were studied extensively throughout 2016. We adapted state-of-the-art general-purpose CNN architectures to the urban scene semantic segmentation task, overcoming the cross-domain issue. Because computational and memory efficiency have always been major concerns, we also worked to simplify the network structure and reduce redundant computation.

In 2017, we will explore deep learning technologies for video processing. Although there are many interesting results in semantic urban scene understanding based on CNN technology, semantic video understanding is still a challenging problem. We will try to find a semantic video understanding method that outperforms single-image-based algorithms; to address this type of problem, we will exploit temporal information.
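One simple way to exploit temporal information (a generic baseline for illustration, not the project’s actual method) is to smooth per-frame, per-pixel label maps with a sliding majority vote over neighboring frames, which suppresses single-frame flicker:

```python
import numpy as np

def temporal_majority(label_maps, window=3):
    """Smooth per-frame label maps of shape (T, H, W) by a per-pixel
    majority vote over a sliding window of `window` frames (odd)."""
    T = len(label_maps)
    half = window // 2
    labels = np.unique(label_maps)
    smoothed = np.empty_like(label_maps)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        chunk = label_maps[lo:hi]                      # this frame plus neighbors
        counts = [(chunk == k).sum(axis=0) for k in labels]  # votes per label
        smoothed[t] = labels[np.argmax(counts, axis=0)]
    return smoothed

# A flickering pixel: frame 1 briefly disagrees with its neighbors.
maps = np.array([[[0]], [[1]], [[0]], [[0]]])
print(temporal_majority(maps)[:, 0, 0])  # the lone frame-1 flicker is voted away
```

More sophisticated approaches propagate features or predictions along optical flow instead of voting on hard labels, but the voting baseline is a useful sanity check that temporal context helps at all.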

By |February 19th, 2017|News|Comments Off on MCL Works on Road Detection for Autonomous Driving|