Hengyue Liang (梁恒岳)
Fifth-year Ph.D. student
Department of Electrical and Computer Engineering
University of Minnesota, Twin Cities
Email:
liang656 (at) umn (dot) edu — (school)
lianghengyue1993 (at) outlook (dot) com — (personal)
Other links:
| Google Scholar | CV | LinkedIn | Github |
I expect to graduate in Fall 2024. Please do not hesitate to contact me if you have a suitable full-time position!
About Me
I am a fifth-year Ph.D. student at the University of Minnesota (UMN), Twin Cities, where I am fortunate to be advised by Prof. Ju Sun.
Before coming to UMN, I received my master's degree from the Department of Electrical Engineering at Chalmers University of Technology in Sweden; I completed my undergraduate studies in Automation at the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University.
Our lab at UMN, GLOVEX, is oriented toward building practical, reliable, and deployable AI for healthcare. Within this agenda, my research focuses on making deep learning models more robust (adversarial and natural robustness) and on making their predictions more reliable (selective classification), with a particular emphasis on computer vision problems.
In a pioneering AI-for-medicine study with the UMN Medical School, "computer-vision-based automated quantification of Tourette syndrome," I am the main contributor of the computer vision and video analysis algorithms. The project has been awarded a five-year research grant (~3.5M US dollars in total) from the National Institutes of Health (NIH).
In addition, I have experience in 3D reconstruction and 3D asset generation (Amazon internship projects), as well as in robotics, control systems, and reinforcement learning (from my undergraduate and master's studies, and from my academic background before joining Prof. Ju Sun's lab).
News
[ June 2023 ] I returned to Amazon as an Applied Scientist Intern, working on reconstructing 3D objects and generating 3D assets for realistic talking avatars.
[ May 2023 ] I submitted a paper “Toward Effective Post-Training Selective Classification for High-Stakes Applications” to NeurIPS 2023. Stay tuned for the preprint release :)
[ May 2023 ] I presented a poster on our group's efforts toward trustworthy AI at the Midwest Machine Learning Symposium 2023.
[ May 2023 ] The preliminary study “Automated Quantification of Eye Tics using Computer Vision and Deep Learning Techniques” is now under review at “Movement Disorders”, the official journal of the Movement Disorder Society (MDS).
[ May 2023 ] Prof. Ju Sun, together with Prof. Christine Conelea and Prof. Kelvin Lim of the UMN Medical School, received an NIH NINDS research grant (~ 3.5M US dollars in total over the next 5 years) to develop novel video analysis tools that help doctors expedite the diagnosis and treatment of Tourette syndrome. I am the main contributor of the video analysis and tic motion detection algorithms in the preliminary study of the grant proposal.
[ Apr 2023 ] I served as an assistant session chair at the SDM23 conference.
[ Mar 2023 ] Our paper “Optimization and optimizers for adversarial robustness” is released on arXiv, where we extensively discuss our (pessimistic) view of current adversarial robustness studies. | Paper |
[ Feb 2023 ] Our paper “Implications of Solution Patterns on Adversarial Robustness” is accepted to the CVPR 2023 workshop on adversarial machine learning. | Paper |
[ Oct 2022 ] Our paper “Optimization for Robustness Evaluation beyond ℓp Metrics” is accepted to ICASSP 2023. | Paper |
[ Dec 2021 ] Our paper “Early Stopping for Deep Image Prior” is submitted to the Conference on Computer Vision and Pattern Recognition (CVPR) 2022. | Preprint |
[ Oct 2021 ] Our paper “Self-Validation: Early Stopping for Single-Instance Deep Generative Priors” is accepted to the British Machine Vision Conference (BMVC) 2021! | Paper |
[ June - Sep 2021 ] I worked at Amazon as an Applied Scientist Intern, conducting a research project on generating realistic head motions for virtual animated avatars based on speech audio alone.
[ June 2021 ] A study on transfer learning for medical image classification is available. Our study shows that transfer learning should probably be performed on truncated deep models, rather than the full deep models that are conventionally used. | Project Blog | Paper |
[ June 2021 ] The preprint of our group's deployed AI-powered diagnosis assistant for COVID-19 is released on medRxiv. | Paper |
[ January 2021 ] Our paper “Learning Visual Affordances with Target-Orientated Deep Q-Network to Grasp Objects by Harnessing Environmental Fixtures” was accepted and published at the IEEE International Conference on Robotics and Automation (ICRA) 2021, Xi'an. | Project Page | Paper |
[ January 2021 ] Our paper “Attribute-Based Robotic Grasping with One-Grasp Adaptation” was accepted and published at the IEEE International Conference on Robotics and Automation (ICRA) 2021, Xi'an. | Project Page | Paper |
[ January 2020 ] Our paper “A Deep Learning Approach to Grasping the Invisible” was accepted and published at the IEEE International Conference on Robotics and Automation (ICRA) 2020, Paris. | Project Page | Paper |