Curriculum Vitae
Shihui Guo graduated from Yuanpei College, Peking University, in 2010, and then pursued his PhD at the National Centre for Computer Animation in the UK under a Chinese government-sponsored program. He subsequently worked as a postdoctoral researcher in the group of Prof. Nadia Thalmann (Member of the Swiss Academy of Sciences) at Nanyang Technological University, Singapore, and is now an Associate Professor at the School of Informatics, Xiamen University. His research focuses on motion-sensing interaction in virtual reality. His work as first or corresponding author has been published in top international journals and conferences including ACM CHI, IEEE TIP, and IEEE TVCG, and has received a Best Paper Award nomination at CVPR 2020 and the Best Poster Award at ChinaVR 2021. He is a Senior Member of the China Computer Federation (CCF) and of the IEEE. In 2019 he was selected for the "Joint Training Program for Innovative Leading Talents" of the Chinese Academy of Engineering and the UK Royal Academy of Engineering. He serves on the Youth Editorial Board of Visual Informatics and the Editorial Board of Computer Animation & Virtual Worlds. He is editor-in-chief of the textbook Augmented Reality: Technology and Applications, selected into the first batch of recommended textbooks by the Ministry of Education's Software Engineering Teaching Steering Committee.
Education
2010.10—2015.9 National Centre for Computer Animation, UK — PhD
2014.3—2015.2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences — Visiting PhD student
2006.9—2010.7 Yuanpei College, Peking University — Bachelor's degree
Work Experience
2019.7—present School of Informatics, Xiamen University — Associate Professor
2018.7—2019.6 School of Software, Xiamen University — Associate Professor
2016.5—2018.5 School of Software, Xiamen University — Research Assistant Professor
Honors and Awards
[1] Selected for the 2019 "Joint Training Program for Innovative Leading Talents" of the Chinese Academy of Engineering and the UK Royal Academy of Engineering
[2] Conference paper "Visual Processing Methods for Multi-Source Motion Capture Data" won the Best Poster Award at the domestic conference ChinaVR 2021
[3] Conference paper "Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image" received a Best Paper Award nomination at the international conference Computer Vision and Pattern Recognition 2020 (CVPR, CCF-A conference)
[4] Conference paper "Semantic Modeling of Indoor Scenes with Support Inference from a Single Photograph" won the Best Paper Award at the international conference Computer Animation and Social Agents 2018 (CASA, CCF-C conference)
Research Areas
My research focuses on motion-sensing interaction in virtual reality: using computing devices and algorithms to help ordinary people accomplish the seemingly impossible. Specifically, we explore novel wearable devices, represented by flexible sensors, and apply deep learning and other AI techniques to achieve robust signal processing, intelligent system decision-making, and natural human-machine integration.
PhD and Master's students are welcome to join my team, as are undergraduates seeking research experience; we have a strong track record of mentoring undergraduates to publish high-quality papers and of recommending them to well-known universities in China and abroad. Students from other disciplines are welcome to reach out for collaboration. We also work closely with companies such as Alibaba, Tencent, and Huawei, and have repeatedly recommended students for internships there. For details on joining the team, see: https://www.humanplus.xyz/blog/join-us.
Research Projects
[5] Robust Human Motion Tracking Based on Fabric Sensor Layout Optimization, NSFC General Program, 62072383, RMB 560K, 2021/1–2024/12, Principal Investigator.
[4] Garment Texture Replacement Based on Generative Adversarial Networks, Alibaba DAMO Academy Innovative Research Program, 12018759, RMB 500K, 2020/11–2021/10, Principal Investigator.
[3] AR Character Animation Synthesis and Control with 3D Scene Awareness, CCF–Tencent Rhino-Bird Fund, RMB 150K, 2020/8–2021/7, Principal Investigator.
[2] Motion Simulation of Animated Characters in Granular Media, NSFC Young Scientists Fund, 61702433, RMB 250K, 2018/1/1–2020/12/31, Principal Investigator.
[1] Realistic Co-Presence with Virtual Humans, NSFC–National Research Foundation of Singapore "Data Science" Joint Research Project, 61661146002, RMB 1.99M, 2017/1/1–2019/12/31, Participant.
Recent Publications
[18] Shihui Guo, Yubin Shi, Pintong Xiao, Yinan Fu, Juncong Lin, Wei Zeng, Tong-Yee Lee. Creative and Progressive Interior Color Design with Eye-Tracked User Preference. ACM Transactions on Computer-Human Interaction (TOCHI), 2022. (First author, CCF-A journal, CAS Q1)
[17] Juncong Lin, Pintong Xiao, Yinan Fu, Yubin Shi, Hongran Wang, Shihui Guo*, Ying He, Tong-Yee Lee. C3 Assignment: Camera Cubemap Color Assignment for Creative Interior Design. IEEE Transactions on Visualization and Computer Graphics, 2021. (Corresponding author, CCF-A journal, CAS Q1)
[16] Hechuan Zhang, Zhiyong Chen, Shihui Guo*, Juncong Lin, Yating Shi, Xiangyang Liu, Yong Ma. SenSocks: 3D Foot Reconstruction with Soft Stretchable Sensors. ACM CHI '20: Proceedings of the CHI Conference on Human Factors in Computing Systems. (Corresponding author, CCF-A conference)
[15] Zhiyong Chen, Ronghui Wu, Shihui Guo*, Xiangyang Liu, Hongbo Fu, Xiaogang Jin, Minghong Liao. 3D Upper Body Reconstruction with Sparse Soft Sensors. Soft Robotics, 2020. (Corresponding author, CAS Q1, Top journal)
[14] Xing Gao, Xu Wu, Panpan Xu, Shihui Guo*, Minghong Liao, Wencheng Wang. Semi-Supervised Texture Filtering with Shallow to Deep Understanding. IEEE Transactions on Image Processing, 2020. (Corresponding author, CCF-A journal, CAS Q1)
[13] Minying Zhang, Kai Liu, Yidong Li, Shihui Guo, Hongtao Duan, Yimin Long, Yi Jin. Unsupervised Domain Adaptation for Person Re-Identification via Heterogeneous Graph Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3360-3368. (CCF-A conference)
[12] Zhicheng An, Xiaoyan Cao, Yao Yao, Wanpeng Zhang, Lanqing Li, Yue Wang, Shihui Guo, Dijun Luo. A Simulator-Based Planning Framework for Optimizing Autonomous Greenhouse Control Strategy. Proceedings of the International Conference on Automated Planning and Scheduling, 2021, 31: 436-444. (CCF-B conference)
[11] Yinyu Nie, Yiqun Lin, Xiaoguang Han, Shihui Guo, Jian Chang, Shuguang Cui, Jian Zhang. Skeleton-Bridged Point Completion: From Global Inference to Local Adjustment. Advances in Neural Information Processing Systems, 2020, 33: 16119-16130. (CCF-A conference)
[10] Yinyu Nie, Xiaoguang Han, Shihui Guo, Yujian Zheng, Jian Chang, Jian Jun Zhang. Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 55-64. (CCF-A conference)
[9] Yinyu Nie, Shihui Guo*, Jian Chang, Xiaoguang Han, Jiahui Huang, Shi-Min Hu, Jian Jun Zhang. Shallow2Deep: Indoor Scene Modeling by Single Image Understanding. Pattern Recognition, 2020, 103: 107271. (Corresponding author, CCF-B journal, CAS Q1)
[8] Xiaoyan Cao#, Shihui Guo#, Juncong Lin, Wenshu Zhang, Minghong Liao. Online Tracking of Ants Based on Deep Association Metrics: Method, Dataset and Evaluation. Pattern Recognition, 2020, 103: 107233. (Corresponding author, CCF-B journal, CAS Q1)
[7] Min Jiang, Zhenzhong Wang, Shihui Guo, Xing Gao, Kay Chen Tan. Individual-Based Transfer Learning for Dynamic Multiobjective Optimization. IEEE Transactions on Cybernetics, 2020, 51(10): 4968-4981. (CCF-B journal, CAS Q1)
[6] Xinyu Shi, Junjun Pan, Zeyong Hu, Juncong Lin, Shihui Guo*, Minghong Liao, Ye Pan, Ligang Liu. Accurate and Fast Classification of Foot Gestures for Virtual Locomotion. International Symposium on Mixed and Augmented Reality, 2019. (Corresponding author, CCF-B conference)
[5] Xuehan Tan, Panpan Xu, Shihui Guo, Wencheng Wang. Image Composition of Partially Occluded Objects. Computer Graphics Forum, 2019, 38(7): 641-650. (CCF-B journal)
[4] Yinyu Nie, Jian Chang, Ehtzaz Chaudhry, Shihui Guo, Andi Smart, Jian Jun Zhang. Semantic Modeling of Indoor Scenes with Support Inference from a Single Photograph. Computer Animation and Virtual Worlds, 2018, 29(3-4): e1825. (CASA 2018 Best Paper Award, CCF-C)
[3] Shihui Guo, Meili Wang, Gabriel Notman, Jian Chang, Jianjun Zhang, Minghong Liao. Simulating Collective Transport of Virtual Ants. Computer Animation and Virtual Worlds, 2017, 28(3-4): e1779. (CASA 2017 Best Paper Award nomination, CCF-C)
[2] Shujie Deng, Nan Jiang, Jian Chang, Shihui Guo*, Jian J. Zhang. Understanding the Impact of Multimodal Interaction Using Gaze Informed Mid-Air Gesture Control in 3D Virtual Objects Manipulation. International Journal of Human-Computer Studies, 2017, 105: 68-80. (Corresponding author, CCF-A journal)
[1] Shihui Guo, Hanxiang Xu, Nadia Magnenat Thalmann, Junfeng Yao. Customization and Fabrication of the Appearance for Humanoid Robot. The Visual Computer, 2017, 33(1): 63-74. (CCF-C journal, cover article)