<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Profile Page</title>
<link rel="stylesheet" href="styles.css">
<script src="https://kit.fontawesome.com/a076d05399.js"></script>
</head>
<body>
<header>
<nav>
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="research.html">Research</a></li>
<li><a href="publications.html">Publications</a></li>
<li><a href="blogs.html">Blogs</a></li>
<li><a href="gallery.html">Gallery</a></li>
</ul>
</nav>
</header>
<div class="container">
<aside>
<div class="profile-image">
<img src="img/about-qinqi.jpg" alt="Profile Image">
</div>
<h2>Qi Qin</h2>
<p>Ph.D. student in Electronic Science and Technology, Tsinghua University</p>
<ul class="contact-info">
<li><i class="fas fa-map-marker-alt"></i> Beijing</li>
<li><i class="fas fa-university"></i> Tsinghua University</li>
<li><i class="fas fa-envelope"></i> <a href="mailto:qinqi1235@163.com">Email</a></li>
<li><i class="fas fa-graduation-cap"></i> <a href="https://scholar.google.com/citations?user=krv0LagAAAAJ&hl=zh-CN">Google Scholar</a></li>
<li><i class="fab fa-github"></i> <a href="https://github.com/QI-N-QIGT">Github</a></li>
<li><i class="fab fa-linkedin"></i> <a href="https://www.linkedin.com/in/%E7%90%A6-%E7%A7%A6-24aa80203/">LinkedIn</a></li>
<li><i class="fab fa-weixin"></i> <a href="https://www.zhihu.com/people/qnq--92">Zhihu</a></li>
</ul>
</aside>
<main>
<div class="content">
<h1>Qin Qi (秦琦)</h1>
<h2>Ph.D. Student</h2>
<p><em>Tsinghua University<br>School of Integrated Circuits</em></p>
<p>Qin Qi is a <a href="#">doctoral student</a> at Tsinghua University. His doctoral research focuses on the design of in-memory computing chips for the training and inference of complex neural networks. Neural network algorithms, represented by CNNs and LLMs, are now widely deployed in intelligent applications such as image and video processing, next-generation social entertainment, and embodied intelligence. However, existing chip architectures face the memory-wall problem, which leads to high power consumption and low efficiency for edge-side inference, and they lack on-device learning capabilities. Qin Qi's work aims to provide end-to-end solutions for high-precision, high-efficiency inference and training of complex neural networks on in-memory computing chips built on emerging memory devices.</p>
<p>Qin Qi has a strong research record: he has published 11 academic papers, four as first author or co-first author. He presented an oral paper at <a href="#">IEDM</a>, a premier conference in the field of semiconductor devices, and has co-authored papers in top journals including <a href="#">Science</a> and <a href="#">Nature Communications</a>. As project team leader within his research group, he and his team achieved device-circuit-algorithm-system co-optimization on the world's first general-purpose RRAM in-memory computing SoC chip supporting both training and inference, and demonstrated for the first time high-precision, high-efficiency full-chip inference of deep CNNs on the ImageNet task.</p>
<p>His research is also marked by innovation. Drawing on the learning process of the human brain and the operating principles of RRAM in-memory computing chips, he proposed an efficient bio-inspired on-chip learning method that addresses the accumulation of analog computation errors during training on in-memory computing chips. He also led his team in building a multi-chip system for multi-modal tasks spanning voice, LiDAR, and image data, and ran real-time demonstrations to validate the edge-side capabilities of in-memory computing chips. He has filed six patents, including two U.S. patents, two of which have been granted.</p>
<p>Looking ahead, Qin Qi aims to deliver high-efficiency chip and system solutions for edge-side LLM inference and learning through the heterogeneous integration of in-memory computing with other computing architectures, together with the co-design of algorithms, chips, and systems.</p>
<h2>Research Interests</h2>
<p>AI Chip, Computing-in-Memory, Computer Architecture, AI Systems</p>
</div>
</main>
</div>
</body>
</html>