


Zongwu Wang
2025
[j8]Haomin Li, Fangxin Liu, Zongwu Wang, Ning Yang, Shiyuan Huang, Xiaoyao Liang, Haibing Guan, Li Jiang:
Attack and Defense: Enhancing Robustness of Binary Hyper-Dimensional Computing. ACM Trans. Archit. Code Optim. 22(3): 85:1-85:25 (2025)
[j7]Shiyuan Huang, Fangxin Liu, Tao Yang, Zongwu Wang, Ning Yang, Li Jiang:
SpMMPlu-Pro: An Enhanced Compiler Plug-In for Efficient SpMM and Sparsity Propagation Algorithm. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 44(2): 669-683 (2025)
[j6]Shiyuan Huang, Fangxin Liu, Tian Li, Zongwu Wang, Ning Yang, Haomin Li, Li Jiang:
STCO: Enhancing Training Efficiency via Structured Sparse Tensor Compilation Optimization. ACM Trans. Design Autom. Electr. Syst. 30(1): 1-22 (2025)
[c46]Fangxin Liu, Zongwu Wang, Ning Yang, Haomin Li, Tao Yang, Haibing Guan, Li Jiang:
Irregular Sparsity-Enabled Search-in-Memory Engine for Accelerating Spiking Neural Networks. APPT 2025: 99-109
[c45]Fangxin Liu, Zongwu Wang, Peng Xu, Shiyuan Huang, Li Jiang:
Exploiting Differential-Based Data Encoding for Enhanced Query Efficiency. ASP-DAC 2025: 594-600
[c44]Haomin Li, Fangxin Liu, Zewen Sun, Zongwu Wang, Shiyuan Huang, Ning Yang, Li Jiang:
NeuronQuant: Accurate and Efficient Post-Training Quantization for Spiking Neural Networks. ASP-DAC 2025: 734-740
[c43]Fangxin Liu, Haomin Li, Bowen Zhu, Zongwu Wang, Zhuoran Song, Haibing Guan, Li Jiang:
ASDR: Exploiting Adaptive Sampling and Data Reuse for CIM-based Instant Neural Rendering. ASPLOS (3) 2025: 18-33
[c42]Fangxin Liu, Haomin Li, Zongwu Wang, Bo Zhang, Mingzhe Zhang, Shoumeng Yan, Li Jiang, Haibing Guan:
ALLMod: Exploring Area-Efficiency of LUT-based Large Number Modular Reduction via Hybrid Workloads. DAC 2025: 1-7
[c41]Fangxin Liu, Ning Yang, Zongwu Wang, Xuanpeng Zhu, Haidong Yao, Xiankui Xiong, Li Jiang, Haibing Guan:
BLOOM: Bit-Slice Framework for DNN Acceleration with Mixed-Precision. DAC 2025: 1-7
[c40]Zongwu Wang, Peng Xu, Fangxin Liu, Yiwei Hu, Qingxiao Sun, Gezi Li, Cheng Li, Xuan Wang, Li Jiang, Haibing Guan:
MILLION: MasterIng Long-Context LLM Inference Via Outlier-Immunized KV Product QuaNtization. DAC 2025: 1-7
[c39]Ning Yang, Zongwu Wang, Qingxiao Sun, Liqiang Lu, Fangxin Liu:
PISA: Efficient Precision-Slice Framework for LLMs with Adaptive Numerical Type. DAC 2025: 1-7
[c38]Haomin Li, Fangxin Liu, Zongwu Wang, Dongxu Lyu, Shiyuan Huang, Ning Yang, Qi Sun, Zhuoran Song, Li Jiang:
TAIL: Exploiting Temporal Asynchronous Execution for Efficient Spiking Neural Networks with Inter-Layer Parallelism. DATE 2025: 1-7
[c37]Fangxin Liu, Haomin Li, Zongwu Wang, Dongxu Lyu, Li Jiang:
HyperDyn: Dynamic Dimensional Masking for Efficient Hyper-Dimensional Computing. DATE 2025: 1-7
[c36]Fangxin Liu, Ning Yang, Zongwu Wang, Xuanpeng Zhu, Haidong Yao, Xiankui Xiong, Qi Sun, Li Jiang:
OPS: Outlier-Aware Precision-Slice Framework for LLM Acceleration. DATE 2025: 1-2
[c35]Zongwu Wang, Fangxin Liu, Peng Xu, Qingxiao Sun, Junping Zhao, Li Jiang:
EVASION: Efficient KV CAche CompreSsion vIa PrOduct QuaNtization. DATE 2025: 1-2
[c34]Fangxin Liu, Zongwu Wang, JinHong Xia, Junping Zhao, Shouren Zhao, Jinjin Li, Jian Liu, Li Jiang, Haibing Guan:
FlexQuant: A Flexible and Efficient Dynamic Precision Switching Framework for LLM Quantization. EMNLP (Findings) 2025: 4152-4161
[c33]Fangxin Liu, Shiyuan Huang, Ning Yang, Zongwu Wang, Haomin Li, Li Jiang:
CROSS: Compiler-Driven Optimization of Sparse DNNs Using Sparse/Dense Computation Kernels. HPCA 2025: 963-976
[c32]Yiwei Hu, Fangxin Liu, Zongwu Wang, Yilong Zhao, Tao Yang, Li Jiang, Haibing Guan:
PLAIN: Leveraging High Internal Bandwidth in PIM for Accelerating Large Language Model Inference via Mixed-Precision Quantization. ICCAD 2025: 1-9
[c31]Zhixiong Zhao, Haomin Li, Fangxin Liu, Yuncheng Lu, Zongwu Wang, Tao Yang, Li Jiang, Haibing Guan:
QUARK: Quantization-Enabled Circuit Sharing for Transformer Acceleration by Exploiting Common Patterns in Nonlinear Operations. ICCAD 2025: 1-9
[c30]Haomin Li, Fangxin Liu, Yichi Chen, Zongwu Wang, Shiyuan Huang, Ning Yang, Dongxu Lyu, Li Jiang:
FATE: Boosting the Performance of Hyper-Dimensional Computing Intelligence with Flexible Numerical DAta TypE. ISCA 2025: 1269-1282
[c29]Fangxin Liu, Junjie Wang, Ning Yang, Zongwu Wang, Junping Zhao, Li Jiang, Haibing Guan:
ASTER: Adaptive Dynamic Layer-Skipping for Efficient Transformer Inference via Markov Decision Process. ACM Multimedia 2025: 11853-11861
[i9]Fangxin Liu, Haomin Li, Zongwu Wang, Bo Zhang, Mingzhe Zhang, Shoumeng Yan, Li Jiang, Haibing Guan:
ALLMod: Exploring Area-Efficiency of LUT-based Large Number Modular Reduction via Hybrid Workloads. CoRR abs/2503.15916 (2025)
[i8]Zongwu Wang, Peng Xu, Fangxin Liu, Yiwei Hu, Qingxiao Sun, Gezi Li, Cheng Li, Xuan Wang, Li Jiang, Haibing Guan:
MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Quantization. CoRR abs/2504.03661 (2025)
[i7]Fangxin Liu, Zongwu Wang, JinHong Xia, Junping Zhao, Jian Liu, Haibing Guan, Li Jiang:
FlexQuant: A Flexible and Efficient Dynamic Precision Switching Framework for LLM Quantization. CoRR abs/2506.12024 (2025)
[i6]Fangxin Liu, Haomin Li, Bowen Zhu, Zongwu Wang, Zhuoran Song, Haibing Guan, Li Jiang:
ASDR: Exploiting Adaptive Sampling and Data Reuse for CIM-based Instant Neural Rendering. CoRR abs/2508.02304 (2025)
[i5]Haomin Li, Fangxin Liu, Chenyang Guan, Zongwu Wang, Li Jiang, Haibing Guan:
LaMoS: Enabling Efficient Large Number Modular Multiplication through SRAM-based CiM Acceleration. CoRR abs/2511.03341 (2025)
[i4]Zhixiong Zhao, Haomin Li, Fangxin Liu, Yuncheng Lu, Zongwu Wang, Tao Yang, Li Jiang, Haibing Guan:
QUARK: Quantization-Enabled Circuit Sharing for Transformer Acceleration by Exploiting Common Patterns in Nonlinear Operations. CoRR abs/2511.06767 (2025)
[i3]Zhixiong Zhao, Fangxin Liu, Junjie Wang, Chenyang Guan, Zongwu Wang, Li Jiang, Haibing Guan:
SpecQuant: Spectral Decomposition and Adaptive Truncation for Ultra-Low-Bit LLMs Quantization. CoRR abs/2511.11663 (2025)

2024
[j5]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Xiaoyao Liang, Li Jiang:
ERA-BS: Boosting the Efficiency of ReRAM-Based PIM Accelerator With Fine-Grained Bit-Level Sparsity. IEEE Trans. Computers 73(9): 2320-2334 (2024)
[j4]Ning Yang, Fangxin Liu, Zongwu Wang, Junping Zhao, Li Jiang:
SearchQ: Search-Based Fine-Grained Quantization for Data-Free Model Compression. IEEE Trans. Circuits Syst. Artif. Intell. 1(2): 220-228 (2024)
[j3]Fangxin Liu, Zongwu Wang, Wenbo Zhao, Ning Yang, Yongbiao Chen, Shiyuan Huang, Haomin Li, Tao Yang, Songwen Pei, Xiaoyao Liang, Li Jiang:
Exploiting Temporal-Unrolled Parallelism for Energy-Efficient SNN Acceleration. IEEE Trans. Parallel Distributed Syst. 35(10): 1749-1764 (2024)
[c28]Fangxin Liu, Haomin Li, Ning Yang, Yichi Chen, Zongwu Wang, Tao Yang, Li Jiang:
PAAP-HD: PIM-Assisted Approximation for Efficient Hyper-Dimensional Computing. ASPDAC 2024: 46-51
[c27]Fangxin Liu, Haomin Li, Ning Yang, Zongwu Wang, Tao Yang, Li Jiang:
TEAS: Exploiting Spiking Activity for Temporal-wise Adaptive Spiking Neural Networks. ASPDAC 2024: 842-847
[c26]Shiyuan Huang, Fangxin Liu, Tian Li, Zongwu Wang, Haomin Li, Li Jiang:
TSTC: Enabling Efficient Training via Structured Sparse Tensor Compilation. ASPDAC 2024: 884-889
[c25]Fangxin Liu, Ning Yang, Zhiyan Song, Zongwu Wang, Haomin Li, Shiyuan Huang, Zhuoran Song, Songwen Pei, Li Jiang:
INSPIRE: Accelerating Deep Neural Networks via Hardware-friendly Index-Pair Encoding. DAC 2024: 10:1-10:6
[c24]Ning Yang, Fangxin Liu, Zongwu Wang, Haomin Li, Zhuoran Song, Songwen Pei, Li Jiang:
EOS: An Energy-Oriented Attack Framework for Spiking Neural Networks. DAC 2024: 58:1-58:6
[c23]Fangxin Liu, Ning Yang, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang:
SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding. HPCA 2024: 1029-1042
[c22]Ning Yang, Fangxin Liu, Zongwu Wang, Zhiyan Song, Tao Yang, Li Jiang:
T-BUS: Taming Bipartite Unstructured Sparsity for Energy-Efficient DNN Acceleration. ICCD 2024: 68-75
[c21]Zongwu Wang, Fangxin Liu, Xin Tang, Li Jiang:
PS4: A Low Power SNN Accelerator with Spike Speculative Scheme. ICCD 2024: 76-83
[c20]Fangxin Liu, Ning Yang, Zhiyan Song, Zongwu Wang, Li Jiang:
HOLES: Boosting Large Language Models Efficiency with Hardware-Friendly Lossless Encoding. ICCD 2024: 207-214
[c19]Longyu Zhao, Zongwu Wang, Fangxin Liu, Li Jiang:
Ninja: A Hardware Assisted System for Accelerating Nested Address Translation. ICCD 2024: 426-433
[c18]Yilong Zhao, Mingyu Gao, Fangxin Liu, Yiwei Hu, Zongwu Wang, Han Lin, Jin Li, He Xian, Hanlin Dong, Tao Yang, Naifeng Jing, Xiaoyao Liang, Li Jiang:
UM-PIM: DRAM-based PIM with Uniform & Shared Memory Space. ISCA 2024: 644-659
[c17]Fangxin Liu, Shiyuan Huang, Longyu Zhao, Li Jiang, Zongwu Wang:
LowPASS: A Low power PIM-based accelerator with Speculative Scheme for SNNs. ISLPED 2024: 1-6
[c16]Zongwu Wang, Fangxin Liu, Ning Yang, Shiyuan Huang, Haomin Li, Li Jiang:
COMPASS: SRAM-Based Computing-in-Memory SNN Accelerator with Adaptive Spike Speculation. MICRO 2024: 1090-1106
[i2]Zongwu Wang, Fangxin Liu, Mingshuai Li, Li Jiang:
TokenRing: An Efficient Parallelism Framework for Infinite-Context LLMs via Bidirectional Communication. CoRR abs/2412.20501 (2024)

2023
[j2]Fangxin Liu, Zongwu Wang, Yongbiao Chen, Zhezhi He, Tao Yang, Xiaoyao Liang, Li Jiang:
SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(1): 204-217 (2023)
[c15]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Xiaokang Yang, Li Jiang:
SIMSnn: A Weight-Agnostic ReRAM-based Search-In-Memory Engine for SNN Acceleration. DATE 2023: 1-2

2022
[j1]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yilong Zhao, Tao Yang, Yiran Chen, Li Jiang:
IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(12): 5313-5326 (2022)
[c14]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Li Jiang:
SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks. AAAI 2022: 1692-1701
[c13]Qidong Tang, Zhezhi He, Fangxin Liu, Zongwu Wang, Yiyuan Zhou, Yinghuan Zhang, Li Jiang:
HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine. ASP-DAC 2022: 226-231
[c12]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks. DAC 2022: 259-264
[c11]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Zhezhi He, Rui Yang, Qidong Tang, Tao Yang, Cheng Zhuo, Li Jiang:
PIM-DH: ReRAM-based processing-in-memory architecture for deep hashing acceleration. DAC 2022: 1087-1092
[c10]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Tao Yang, Zhezhi He, Xiaokang Yang, Li Jiang:
SATO: spiking neural network acceleration via temporal-oriented dataflow and architecture. DAC 2022: 1105-1110
[c9]Tao Yang, Dongyue Li, Zhuoran Song, Yilong Zhao, Fangxin Liu, Zongwu Wang, Zhezhi He, Li Jiang:
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture. DATE 2022: 700-705
[c8]Zongwu Wang, Zhezhi He, Rui Yang, Shiquan Fan, Jie Lin, Fangxin Liu, Yueyang Jia, Chenxi Yuan, Qidong Tang, Li Jiang:
Self-Terminating Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing. DATE 2022: 1251-1256
[c7]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Fei Dai:
DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks. ICASSP 2022: 2130-2134
[c6]Fangxin Liu, Zongwu Wang, Wenbo Zhao, Yongbiao Chen, Tao Yang, Xiaokang Yang, Li Jiang:
Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs. ICCD 2022: 451-454
[c5]Chen Nie, Zongwu Wang, Qidong Tang, Chenyang Lv, Li Jiang, Zhezhi He:
Cross-layer Designs against Non-ideal Effects in ReRAM-based Processing-in-Memory System. ISQED 2022: 1-6

2021
[c4]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Tao Yang, Li Jiang:
IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration. ACM Great Lakes Symposium on VLSI 2021: 253-258
[c3]Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang:
Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator. ICCAD 2021: 1-9
[c2]Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Tao Yang, Jingnai Feng, Xiaoyao Liang, Li Jiang:
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network. ICCD 2021: 417-424
[c1]Fangxin Liu, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang:
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point. ICCV 2021: 5261-5270
[i1]Fangxin Liu, Wenbo Zhao, Yilong Zhao, Zongwu Wang, Tao Yang, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network. CoRR abs/2103.01705 (2021)
last updated on 2026-02-20 22:39 CET by the dblp team
all metadata released as open data under CC0 1.0 license









