Abstract: The goal of transfer learning is to improve the performance of the target learning task by leveraging information (or transferring knowledge) from other related tasks. We develop a general and flexible heterogeneous transfer distance metric learning (HTDML) framework. In particular, any (linear or nonlinear) distance metric learning (DML) algorithm can be employed to learn the source metric beforehand. The pre-learned source metric is then represented as a set of knowledge fragments to aid target metric learning. We show how the generalization error in the target domain can be reduced using the proposed transfer strategy, and develop novel algorithms to learn either a linear or a nonlinear target metric.
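To make the transfer strategy concrete, the following is a minimal, self-contained sketch of the pipeline the abstract describes: a source metric is pre-learned, decomposed into linear knowledge fragments, and a target metric is then fitted so that the target embedding mimics the fragment outputs on co-occurring pairs. All data, dimensions, and the use of PCA whitening as the source DML algorithm and ridge regression as the target learner are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup: paired source/target views of the same objects ---
n, d_s, d_t, r = 200, 10, 6, 4        # samples, source dim, target dim, #fragments
Z = rng.normal(size=(n, r))           # shared latent factors (synthetic)
X_s = Z @ rng.normal(size=(r, d_s)) + 0.1 * rng.normal(size=(n, d_s))
X_t = Z @ rng.normal(size=(r, d_t)) + 0.1 * rng.normal(size=(n, d_t))

# 1) Pre-learn a source metric M_s = U_s U_s^T (here PCA whitening stands in
#    for any linear/nonlinear DML algorithm, as the framework allows).
C = np.cov(X_s, rowvar=False)
w, V = np.linalg.eigh(C)              # ascending eigenvalues
U_s = V[:, -r:] * (1.0 / np.sqrt(w[-r:]))        # d_s x r transformation

# 2) Represent the source metric as knowledge fragments f_k(x) = U_s[:, k]^T x.
F = X_s @ U_s                                     # fragment outputs, n x r

# 3) Learn a target transformation U_t whose embedding mimics the fragment
#    outputs on the co-occurring pairs (a ridge-regression sketch).
lam = 1e-2
U_t = np.linalg.solve(X_t.T @ X_t + lam * np.eye(d_t), X_t.T @ F)  # d_t x r
M_t = U_t @ U_t.T                                 # transferred target metric

def d_target(a, b):
    """Squared Mahalanobis distance under the learned target metric."""
    diff = a - b
    return float(diff @ M_t @ diff)
```

The resulting `M_t` is symmetric positive semidefinite by construction (`U_t U_t^T`), so `d_target` is a valid squared Mahalanobis distance even though the target features live in a different space than the source.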
Yong Luo received the B.E. degree in Computer Science from Northwestern Polytechnical University, Xi’an, China, in 2009, and the D.Sc. degree from the School of Electronics Engineering and Computer Science, Peking University, Beijing, China, in 2014. He is currently a Research Fellow with the School of Computer Science and Engineering, Nanyang Technological University. He was a visiting student in the School of Computer Engineering, Nanyang Technological University, and the Faculty of Engineering and Information Technology, University of Technology Sydney. His research interests are primarily in machine learning and data mining, with applications to visual information understanding and analysis. He has authored several scientific articles in top venues, including IEEE T-PAMI, IEEE T-NNLS, IEEE T-IP, IEEE T-KDE, IEEE T-MM, IJCAI, and AAAI. He received the IEEE Globecom 2016 Best Paper Award, and was nominated for the IJCAI 2017 Distinguished Best Paper Award.