Accelerated Intelligent Systems Lab (AISys) is affiliated with ECE, Seoul National University. We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms and graph processing.
AISys Lab is currently looking for talented students (graduate students, undergraduate interns).
Please contact leejinho at snu dot ac dot kr if you are interested.
We are recruiting incoming MS/PhD students and undergraduate interns on a rolling basis. Interested students, please contact leejinho at snu dot ac dot kr.
Mar. 2024: Our paper titled PID-Comm: A Fast and Flexible Collective Communication Framework for Commodity Processing-in-DIMMs has been accepted to ISCA 2024. Congratulations to the authors, and see you in Buenos Aires!
Mar. 2024: Received the best paper award honorable mention from HPCA 2024. Congratulations to the authors of "Smart-Infinity"!
Feb. 2024: Our paper A Case for In-Memory Random Scatter-Gather for Fast Graph Processing has been accepted to IEEE CAL. Congratulations!
Feb. 2024: Our paper titled PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor has been accepted to CVPR 2024. Congratulations to the authors!
Nov. 2023: We got a paper accepted to PPoPP 2024: AGAThA: Fast and Efficient GPU Acceleration of Guided Sequence Alignment for Long Read Mapping. Congratulations to the authors, and see you in Edinburgh :)
Nov. 2023: Our paper Pipette: Automatic Fine-grained Large Language Model Training Configurator for Real-World Clusters has been accepted at DATE 2024. Congratulations!
Oct. 2023: We got a paper accepted in HPCA 2024: Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System. Congratulations to authors!
Jul. 2023: Our paper titled Enabling Fine-Grained Spatial Multitasking on Systolic-Array NPUs Using Dataflow Mirroring has been accepted to IEEE TC. Congratulations to the authors!
Feb. 2023: We got a paper accepted in DAC 2023: Fast Adversarial Training with Dynamic Batch-level Attack Control. Congratulations to the authors!
Jan. 2023: We welcome Junguk Hong as our new member :)
Jan. 2023: We got the new year's first paper accepted to SIGMOD 2023: Design and Analysis of a Processing-in-DIMM Join Algorithm: A Case Study with UPMEM DIMMs. Congratulations to the authors!
Nov. 2022: We got a paper accepted in DATE 2023: Pipe-BD: Pipelined Parallel Blockwise Distillation. Congratulations to the authors, and see you in Antwerp!
Oct. 2022: Our paper titled SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators has been accepted to HPCA 2023. Nice job!
Oct. 2022: We received the Best Paper Award at PACT 2022! Congratulations and thanks to the authors of "Slice-and-Forge"!
Sep. 2022: Our paper Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression has been accepted to ASPLOS 2023. Congratulations to the authors!
Aug. 2022: Hongsun Jang joins the Lab. Welcome :)
Aug. 2022: Two papers have been accepted to PACT 2022. Nice work!
Mar. 2022: Our CVPR 2022 paper 'AIT' has been selected for an oral presentation (342/8161 = 4.2%). Double congratulations!
Mar. 2022: Our paper It's All In the Teacher: Zero-shot Quantization Brought Closer to the Teacher has been accepted at CVPR 2022. Congratulations!
Feb. 2022: We have two newly accepted papers on Valentine's Day. Congratulations to the authors :)
Feb. 2022: Kanghyun Choi, Deokki Hong, and Hye Yoon Lee won a silver prize in the 28th Samsung Humantech Paper Awards.
Jan. 2022: Our paper SALoBa: Maximizing Data Locality and Workload Balance for Fast Sequence Alignment on GPUs has been accepted at IPDPS 2022. Hope we get to travel to France :) (Update: it's going virtual.)
Jan. 2022: SeongYeon Park joins the Lab. Welcome!
Oct. 2021: Jaewon Jung joins the Lab. Welcome!
Sep. 2021: SeongYeon Park won first place in the ACM Student Research Competition (SRC) at PACT 2021. Nice work!
Sep. 2021: Our paper Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples has been accepted at NeurIPS 2021. Congratulations!
Jul. 2021: Jaeyong Song and Hyeyoon Lee join the Lab. Welcome on board!
May. 2021: Our paper Making a Better Use of Caches for GCN Accelerators with Feature Slicing and Automatic Tile Morphing has been accepted at IEEE CAL. Congratulations!
May. 2021: Our paper AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression has been accepted at IJCAI 2021.
Mar. 2021: Jinho Lee received Yonsei Best Teaching Award for 2020.
Feb. 2021: We have two papers accepted to DAC 2021. Congratulations authors!
Feb. 2021: Mingi Yoo joins the Lab. Welcome!
Oct. 2020: Our paper GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent has been accepted at HPCA 2021.
Jul. 2020: Deokki Hong and Kanghyun Choi join the Lab. Welcome!
Jul. 2020: Our paper FlexReduce: Flexible All-reduce for Distributed Deep Learning on Asymmetric Network Topology is published at DAC 2020
Oct. 2019: Our paper In-memory database acceleration on FPGAs: a survey is published in the VLDB Journal.
Sep. 2019: Jinho Lee joined CS, Yonsei University as an assistant professor.
Aug. 2019: Our paper Accelerating conversational agents built with off-the-shelf modularized services is published in IEEE Pervasive Computing.
June 2019: Our demo won the Best Demo Award at ACM MobiSys 2019.
Feb. 2019: Hohyun Kim joins the Lab. Welcome!
We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms, and graph processing, especially on FPGAs and GPUs. Some of our ongoing research topics are listed below; however, you're free to bring your own exciting topic.