We welcome you to this special issue of the Machine Learning Journal (MLJ), comprising papers accepted to the journal track of the 11th Asian Conference on Machine Learning (ACML 2019), held in Nagoya, Japan, from 17 to 19 November 2019. ACML runs a dedicated journal track alongside the usual conference proceedings track. We are delighted to share the contributions with you.

This year’s ACML journal track received 52 submissions, of which 9 papers were accepted for this special issue. The program committee members of ACML bid on the papers for review assignment, while ensuring that there were no conflicts of interest. The senior program committee members of ACML followed the same process, acting as meta-reviewers for the papers. Promising papers that did not quite meet the expected standard were allowed to be resubmitted after improvement, following the reviewing policy of this journal.

The paper “Joint Consensus and Diversity for Multi-view Semi-supervised Classification”, by Wenzhang Zhuge, Chenping Hou, Shaoliang Peng, and Dongyun Yi, presents a method for multi-view semi-supervised classification that simultaneously learns a common label matrix for all training samples and view-specific classifiers. Although training is formulated as a non-convex optimization problem, the proposed method comes with a theoretical guarantee that it monotonically improves the objective. Experiments on benchmark datasets show that the algorithm performs very well.

The paper “Gradient Descent Optimizes Over-parameterized Deep ReLU Networks”, by Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu, presents a theoretical analysis of the convergence rate of gradient descent for a special class of over-parameterized deep neural networks. The results advance the state of the art in the study of global convergence for training deep neural networks.

The paper “Skill-based Curiosity for Intrinsically Motivated Reinforcement Learning”, by Nicolas Bougie and Ryutaro Ichise, presents a deep reinforcement learning method for acquiring new skills based on an intrinsic reward function that represents the agent’s curiosity. On standard reinforcement learning benchmark tasks, the resulting agent is shown to outperform traditional agents that use only the rewards provided by the environment.

The paper “Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric”, by Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, and Myunghee Cho Paik, concerns positive-unlabeled (PU) learning, where the labels in the training data are either positive or unobserved. A kernel-based method is presented that achieves the computational efficiency needed to scale to large datasets, accompanied by a formal error bound analysis.

The paper “Handling Concept Drift via Model Reuse”, by Peng Zhao, Le-Wen Cai, and Zhi-Hua Zhou, presents an algorithm for handling concept drift in supervised learning, where the data distribution changes over time. The key idea is to reuse models trained in the past and to combine their predictions with a theoretically motivated weight-update rule. The algorithm is validated on a set of synthetic and real-world datasets.

The paper “Communication-Efficient Distributed Multi-Task Learning with Matrix Sparsity Regularization”, by Qiang Zhou, Yu Chen, and Sinno Jialin Pan, presents a method for achieving communication efficiency in distributed optimization for multi-task learning, where models are trained without centralizing the data. The proposed algorithm is also accompanied by a formal analysis of its convergence rate.

The paper “Rank minimization on tensor ring: An efficient approach for tensor decomposition and completion”, by Longhao Yuan, Chao Li, Jianting Cao, and Qibin Zhao, addresses the rank selection problem in tensor ring decomposition by formulating it as an optimization problem with nuclear norm regularization. The two proposed optimization algorithms are shown to be highly effective on synthetic and image datasets.

The paper “Multi-Label Optimal Margin Distribution Machine”, by Zhi-Hao Tan, Peng Tan, and Zhi-Hua Zhou, presents an optimal margin distribution machine for multi-label classification, which extends the classic Rank-SVM approach by adding a margin variance term to the training objective. Experiments show significant improvements over the Rank-SVM baseline.

The paper “Few-Shot Learning with Adaptively Initialized Task Optimizer”, by Han-Jia Ye, Xiang-Rong Sheng, and De-Chuan Zhan, presents AVIATOR, a few-shot learning algorithm that extends MAML with task-dependent initialization of model parameters. This approach is demonstrated to be highly effective on a synthetic benchmark dataset for evaluating meta-learning algorithms.

We would like to acknowledge the contributions of the many people who made this special issue possible. We thank the senior program committee and the program committee of ACML for their time and effort in reviewing the papers, and the authors for contributing their articles. We also thank Peter Flach, editor-in-chief of MLJ, Dragos Margineantu, special issues editor of MLJ, as well as the ACML Steering Committee, for their guidance and support. Our gratitude also goes to Melissa Fearon and Karthika Deepak of the Springer editorial office for their help in ensuring that the process ran smoothly.

The senior program committee members who contributed to the reviewing process are Alice Oh (KAIST, Korea), Bernhard Pfahringer (University of Waikato, New Zealand), Chenping Hou (National University of Defense Technology, China), Dinh Phung (Monash University, Australia), Grigorios Tsoumakas (Aristotle University of Thessaloniki, Greece), Hang Su (Tsinghua University, China), Hsuan-Tien Lin (National Taiwan University, Taiwan), Ivor Tsang (University of Technology Sydney, Australia), James Tin-Yau Kwok (The Hong Kong University of Science and Technology, Hong Kong), Joseph Salmon (Université de Montpellier, France), Junmo Kim (KAIST, Korea), Junping Zhang (Fudan University, China), Ke Tang (Southern University of Science and Technology, China), Khan Mohammad Emtiyaz (RIKEN, Japan), Kohei Hatano (Kyushu University, Japan), Kun Zhang (Carnegie Mellon University, USA), Makoto Yamada (RIKEN, Japan), Marco Cuturi (ENSAE/CREST, France), Minlie Huang (Tsinghua University, China), Min-Ling Zhang (Southeast University, China), Qinghua Hu (Tianjin University, China), Quanquan Gu (UCLA, USA), Seungjin Choi (BARO, Korea), Sheng-Jun Huang (Nanjing University of Aeronautics and Astronautics, China), Shinichi Nakajima (Technische Universität Berlin, Germany), Shou-De Lin (NTU, Taiwan), Shunji Umetani (Osaka University/RIKEN, Japan), Sinno Pan (NTU, Singapore), Stephen Gould (Australian National University, Australia), Steven Hoi (Singapore Management University, Singapore), Takafumi Kanamori (Tokyo Institute of Technology/RIKEN AIP, Japan), Takayuki Okatani (Tohoku University/RIKEN AIP, Japan), Takayuki Osogami (IBM Research - Tokyo, Japan), Tao Qin (Microsoft Research Asia, China), Tomas Pfister (Google, Finland), Toru Tamaki (Hiroshima University, Japan), Truyen Tran (Deakin University, Australia), Wray Buntine (Monash University, Australia), Xiaolin Hu (Tsinghua University, China), Yang Yu (Nanjing University, China), Yanyan Lan (Institute of Computing Technology, China), 
Yasuo Tabei (RIKEN AIP, Japan), Yufeng Li (Nanjing University, China), Yuhong Guo (Carleton University, Canada), Yung-Kyun Noh (Hanyang University, Korea), Zhouchen Lin (Peking University, China).