Talk Title: Understanding Adversarial Training via Model Calibration
Speaker: Hong Liu, JSPS International Fellow, National Institute of Informatics, Japan
Time: Tuesday, April 4, 2023, 15:00
Venue: Room B205, Research Building II, Haiyun Campus, Xiamen University
Abstract:
Deep models have achieved remarkable success in computer vision tasks, yet they remain vulnerable to small, imperceptible perturbations of test instances. In this talk, I will give a brief overview of our recent work on understanding adversarial training. First, I will evaluate the defense performance of several model calibration methods on various robust models. Second, I will discuss some intriguing findings about adversarial training that reveal its connection to robust overfitting. Next, I will present our work on designing a simple yet effective regularization technique. Finally, I will conclude the talk by sharing some insights into adversarial training.
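As background for the vulnerability mentioned above (this is generic context, not the speaker's method), the following is a minimal PyTorch sketch of how such an imperceptible perturbation can be crafted with the standard Fast Gradient Sign Method (FGSM); the model, inputs, and epsilon budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft an adversarial example with FGSM: take one step in the
    direction of the sign of the loss gradient, within an L-infinity
    budget `eps`, then clamp back to the valid pixel range [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Signed-gradient step maximizes the loss while keeping the
    # perturbation imperceptibly small for typical eps values.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training, the subject of the talk, augments standard training by minimizing the loss on perturbed examples of this kind rather than on clean inputs alone.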
Speaker Bio:
Hong Liu is currently a JSPS fellowship researcher at the National Institute of Informatics, Japan. Before that, he received his Ph.D. in computer science from Xiamen University. His research interests include AI ethics, ML safety/reliability, and deep learning theory. He was awarded the Japan Society for the Promotion of Science (JSPS) International Fellowship and the outstanding doctoral dissertation awards of both the China Society of Image and Graphics (CSIG) and Fujian Province, and was named among the Top 100 Chinese New Stars in Artificial Intelligence by Baidu Scholar.
Host: Prof. Rongrong Ji, Department of Artificial Intelligence