Hi all, welcome to my personal website. I am a 5th-year PhD student in the CS department at the Georgia Institute of Technology, passionate about efficient/automated ML and algorithm-hardware co-design!
My CV is available here in case you are interested!
Research Experience
- 12/2022 – Present: PhD Student at Georgia Tech
- Transferred to GT along with my advisor Prof. Yingyan Lin
- 08/2019 – 12/2022: Master's Degree at Rice University
- Under the supervision of Prof. Yingyan Lin
- 09/2015 – 06/2019: Bachelor's Degree at Huazhong University of Science and Technology
- Under the supervision of Prof. Pan Zhou and Prof. Wenyu Liu
For more information, please refer to our lab's homepage: [Website] [LinkedIn] [Twitter] [GitHub] [YouTube]
Internship Experience
- Upcoming
- Summer 2023: Joining the Startup Launch Program organized by CREATE-X and Venture Lab!
- Past
- 08/2022 – 12/2022: Part-time Research Intern @ Meta Reality Labs
- 05/2022 – 08/2022: Full-time Research Intern @ Meta Reality Labs
- 08/2021 – 04/2022: Part-time Research Intern @ Baidu Research (USA)
- 05/2021 – 08/2021: Full-time Research Intern @ Baidu Research (USA)
Publication List
> Conference:
H. You*, H. Shi*, Y. Guo*, Y. Lin.
ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer. In NeurIPS 2023 (Acceptance rate: 26%).
[Paper] [Code]
H. You*, Y. Xiong*, X. Dai, B. Wu, P. Zhang, H. Fan, P. Vajda, Y. Lin.
Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference. In CVPR 2023 (Acceptance rate: 25%).
[Paper] [Code] [Project] [Slide] [Poster] [Talk@CVPR]
H. You, Z. Sun, H. Shi, Z. Yu, Y. Zhao, Y. Zhang, C. Li, B. Li, Y. Lin.
ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design. In HPCA 2023 (Acceptance rate: 25%).
Selected for the Meta Faculty Research Award of 2022!
[Paper] [Code] [Project] [Slide] [Poster] [Talk@GT] [Talk@HPCA]
H. You, B. Li, Z. Sun, X. Ouyang, Y. Lin.
SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning. In ECCV 2022 (Acceptance rate: 20%).
[Paper] [Code] [Slide] [Poster] [Talk@ECCV]
H. You, B. Li, H. Shi, Y. Fu, Y. Lin.
ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks. In ICML 2022 (Acceptance rate: 20%).
[Paper] [Code] [Project] [Slide] [Talk@ICML]
H. You*, C. Wan*, Y. Zhao*, Z. Yu*, Y. Fu, J. Yuan, S. Wu, S. Zhang, Y. Zhang, C. Li, V. Boominathan, A. Veeraraghavan, Z. Li, Y. Lin.
EyeCoD: Eye Tracking System Acceleration via FlatCam-Based Algorithm and Accelerator Co-Design. In ISCA 2022 (Acceptance rate: 17%).
Selected as an IEEE Micro Top Pick of 2023!
[Paper] [Code] [Project] [Slide] [Poster@ISCA] [Poster@CoCoSys] [Talk@ISCA] [IEEE Micro’s TopPick’23]
H. You, T. Geng, Y. Zhang, A. Li, Y. Lin.
GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design. In HPCA 2022 (Acceptance rate: 30%).
[Paper] [Code] [Slide] [Talk@HPCA]
H. You, Z. Lu, Z. Zhou, Y. Fu, Y. Lin.
Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient GCN Training and Inference. In AAAI 2022 (Acceptance rate: 15%).
[Paper] [Code] [Slide] [Poster] [Talk@AAAI]
H. You, X. Cheng, Y. Zhang, S. Liu, Z. Liu, Z. Wang, Y. Lin.
ShiftAddNet: A Hardware-Inspired Deep Network. In NeurIPS 2020 (Acceptance rate: 20%).
[Paper] [Code] [Slide] [Poster] [Talk@NeurIPS] [Talk@RICE]
H. You, C. Li, P. Xu, Y. Fu, Y. Wang, X. Chen, R.G. Baraniuk, Z. Wang, Y. Lin.
Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks. In ICLR 2020 Spotlight (Acceptance rate: 4%).
Selected as an ICLR Spotlight!
[Paper] [Code] [Slide] [OpenReview] [Talk@ICLR]
Y. Fu, H. You, Y. Zhao, Y. Wang, C. Li, K. Gopalakrishnan, Z. Wang, Y. Lin.
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training. In NeurIPS 2020 (Acceptance rate: 20%).
[Paper]
Y. Zhang, H. You, Y. Fu, T. Geng, A. Li, Y. Lin.
G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency. In ICCAD 2021 (Acceptance rate: 23%).
[Paper]
H. Shi, H. You, Y. Zhao, Z. Wang, Y. Lin.
NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks. In ICCAD 2022 (Acceptance rate: 24%).
[Paper]
Y. Zhao, Z. Li, H. You, Y. Fu, Y. Zhang, C. Li, C. Wan, S. Wu, X. Ouyang, V. Boominathan, A. Veeraraghavan, Y. Lin.
i-FlatCam: A 253 FPS, 91.49 µJ/Frame Ultra-Compact Intelligent Lensless Camera System for Real-Time and Efficient Eye Tracking in VR/AR. In VLSI 2022.
Won first place in the University Best Demonstration at DAC 2022!
[Paper] [Demo]
C. Li, T. Chen, H. You, Z. Wang, Y. Lin.
HALO: Hardware-Aware Learning to Optimize. In ECCV 2020 (Acceptance rate: 27%).
[Paper]
Y. Zhao, X. Chen, Y. Wang, C. Li, H. You, Y. Fu, Y. Xie, Z. Wang, Y. Lin.
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation. In ISCA 2020 (Acceptance rate: 18%).
[Paper]
T. Geng, C. Wu, Y. Zhang, C. Tan, C. Xie, H. You, M. Herbordt, Y. Lin, A. Li.
I-GCN: A GCN Accelerator with Runtime Locality Enhancement through Islandization. In MICRO 2021 (Acceptance rate: 22%).
[Paper]
C. Li, Z. Yu, Y. Fu, Y. Zhang, Y. Zhao, H. You, Q. Yu, Y. Wang, C. Hao, Y. Lin.
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark. In ICLR 2021 Spotlight (Acceptance rate: 4%).
[Paper]
Y. Fu, Z. Ye, J. Yuan, S. Zhang, S. Li, H. You, Y. Lin.
Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design. In ISCA 2023 (Acceptance rate: 21%).
[Paper]
S. Li, C. Li, W. Zhu, B. Yu, Y. Zhao, C. Wan, H. You, H. Shi, Y. Lin.
Instant-3D: Instant Neural Radiance Fields Training Towards Real-Time AR/VR 3D Reconstruction. In ISCA 2023 (Acceptance rate: 21%).
[Paper]
Z. Yu, Y. Fu, S. Wu, M. Li, H. You, Y. Lin.
LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference. In TinyML 2022.
[Paper]
> Journal:
H. You, R. Balestriero, Z. Lu, Y. Kou, H. Shi, S. Zhang, S. Wu, Y. Lin, R. Baraniuk.
Max-Affine Spline Insights Into Deep Network Pruning. In TMLR.
[Paper] [Code]
H. You, Y. Cheng, T. Cheng, C. Li, P. Zhou.
Bayesian Cycle-Consistent Generative Adversarial Networks via Marginalizing Latent Sampling. In IEEE TNNLS.
[Paper@IEEE] [Paper@arXiv] [Code]
H. Shi, H. You, Z. Wang, Y. Lin.
NASA+: Neural Architecture Search and Acceleration for Multiplication-Reduced Hybrid Networks. In IEEE Transactions on Circuits and Systems I.
[Paper@IEEE] [Code]
X. Chen, Y. Zhao, Y. Wang, P. Xu, H. You, C. Li, Y. Fu, Y. Lin, Z. Wang.
SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training. In IEEE TNNLS.
[Paper] [Code]
Y. Zhang, Y. Fu, W. Jiang, C. Li, H. You, M. Li, V. Chandra, Y. Lin.
DIAN: Differentiable Accelerator-Network Co-Search Towards Maximal DNN Efficiency. In ISLPED 2021.
[Paper]
Research Interests
- Efficient DNN Training
- AutoML
- Computer Architecture
- Algorithm and Hardware Co-Design
Review Services
- 2024
- AAAI; ICLR
- 2023
- ICLR; CVPR; ICML; ICCV; NeurIPS; TPAMI
- 2022
- ICML; NeurIPS; ECCV; CVPR; AAAI; ICLR
- 2021
- ICML; NeurIPS; ICLR; MLSys; IEEE TNNLS
- 2020
- ICML; ICLR; CVPR