Title: Robust Learning and Prediction in Deep Learning
Abstract: Robustness is the ability to withstand adverse conditions. In the context of deep learning, it refers to the ability to tolerate perturbations that might affect the functionality of a deep model. Both learning a deep model and deploying it in practice require robustness. In this talk, we explore two types of robustness in deep learning: training robustness and adversarial robustness. Training robustness refers to successfully learning a deep neural network under slight perturbations of the training configuration. Adversarial robustness refers to maintaining faithful predictions of the deep neural network even when the input data are perturbed by adversarially crafted noise. We propose geometry-aware instance-reweighted adversarial training (GAIRAT), where each training instance is weighted according to how difficult it is to attack that natural data point. Empirically, we show that GAIRAT boosts the robustness of standard adversarial training; by combining two directions, i.e., friendly adversarial training (FAT) and GAIRAT, we improve both the robustness and the accuracy of standard adversarial training.
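To make the instance-reweighting idea concrete, the sketch below assigns each training example a weight that decreases with its attack difficulty, measured by the least number of PGD steps needed to flip the model's prediction (few steps = close to the decision boundary = easy to attack = large weight). The tanh form and the bias hyperparameter `lam` here are illustrative assumptions, not necessarily the exact formula used in the talk:

```python
import math

def gairat_weight(kappa, K, lam=-1.0):
    """Geometry-aware weight for one training instance (illustrative sketch).

    kappa: least number of PGD steps needed to flip the prediction;
           small kappa means the point is close to the decision boundary.
    K:     total number of PGD steps used in adversarial training.
    lam:   bias hyperparameter controlling how sharply weights decay.

    Returns a weight in [0, 1] that is large for easy-to-attack points
    and small for hard-to-attack points.
    """
    return (1.0 + math.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / K))) / 2.0

# Easy-to-attack points (small kappa) receive larger weights:
K = 10
weights = [gairat_weight(k, K) for k in range(K + 1)]
```

In a training loop, these per-instance weights would rescale each example's adversarial loss before averaging, so the model focuses on points near the decision boundary.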
Speaker Bio: Dr. Jingfeng Zhang (张景锋) received his bachelor's degree from Taishan College, Shandong University, and his Ph.D. from the National University of Singapore in 2020 under the supervision of Prof. Mohan Kankanhalli. He will soon join RIKEN in Japan to work with Prof. Masashi Sugiyama (杉山 将). His research interest is the robustness of artificial intelligence, with a focus on adversarial machine learning. He has published papers at top machine-learning conferences including IJCAI, ICML, and ICLR, and serves as a reviewer for conferences and journals such as ICML, ICLR, and IJCAI.