Matthew Mattina
2020 – today
- 2023
  - [i34] Dhireesha Kudithipudi, Anurag Reddy Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin R. Epstein: Design Principles for Lifelong Learning AI Accelerators. CoRR abs/2310.04467 (2023)
- 2022
  - [c28] Zhi Gang Liu, Paul N. Whatmough, Yuhao Zhu, Matthew Mattina: S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration. HPCA 2022: 573-586
  - [c27] Igor Fedorov, Ramon Matas Navarro, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul N. Whatmough: UDC: Unified DNAS for Compressible TinyML Models for Neural Processing Units. NeurIPS 2022
  - [i33] Igor Fedorov, Ramon Matas Navarro, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul N. Whatmough: UDC: Unified DNAS for Compressible TinyML Models. CoRR abs/2201.05842 (2022)
- 2021
  - [j3] Urmish Thakker, Igor Fedorov, Chu Zhou, Dibakar Gope, Matthew Mattina, Ganesh Dasika, Jesse G. Beu: Compressing RNNs to Kilobyte Budget for IoT Devices Using Kronecker Products. ACM J. Emerg. Technol. Comput. Syst. 17(4): 46:1-46:18 (2021)
  - [c26] Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji: Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification. ICCVW 2021: 2095-2104
  - [c25] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama: Federated Learning Based on Dynamic Regularization. ICLR 2021
  - [c24] Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama: Debiasing Model Updates for Improving Personalized Federated Training. ICML 2021: 21-31
  - [c23] Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough: Strong data processing inequality in neural networks with noisy neurons and its implications. ISIT 2021: 1170-1175
  - [c22] Matthew Mattina: Co-Designing Hardware and Models for Efficient On-Device ML Inference. ISLPED 2021: 1
  - [c21] Colby R. Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough: MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers. MLSys 2021
  - [c20] Urmish Thakker, Paul N. Whatmough, Zhi Gang Liu, Matthew Mattina, Jesse G. Beu: Doping: A technique for Extreme Compression of LSTM Models using Sparse Structured Additive Matrices. MLSys 2021
  - [c19] Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues: On the effects of quantisation on model uncertainty in Bayesian neural networks. UAI 2021: 929-938
  - [i32] Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough: Information contraction in noisy binary neural networks and its implications. CoRR abs/2101.11750 (2021)
  - [i31] Urmish Thakker, Paul N. Whatmough, Zhi Gang Liu, Matthew Mattina, Jesse G. Beu: Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices. CoRR abs/2102.07071 (2021)
  - [i30] Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues: On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks. CoRR abs/2102.11062 (2021)
  - [i29] Zhi Gang Liu, Paul N. Whatmough, Yuhao Zhu, Matthew Mattina: S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration. CoRR abs/2107.07983 (2021)
  - [i28] Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji: Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification. CoRR abs/2108.06317 (2021)
  - [i27] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama: Federated Learning Based on Dynamic Regularization. CoRR abs/2111.04263 (2021)
- 2020
  - [j2] Zhi Gang Liu, Paul N. Whatmough, Matthew Mattina: Systolic Tensor Array: An Efficient Structured-Sparse GEMM Accelerator for Mobile CNN Inference. IEEE Comput. Archit. Lett. 19(1): 34-37 (2020)
  - [c18] Dibakar Gope, Jesse G. Beu, Urmish Thakker, Matthew Mattina: Ternary MobileNets via Per-Layer Hybrid Filter Banks. CVPR Workshops 2020: 3036-3046
  - [c17] Zhi Gang Liu, Matthew Mattina: Efficient Residue Number System Based Winograd Convolution. ECCV (19) 2020: 53-68
  - [c16] Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina: Rank and run-time aware compression of NLP Applications. SustaiNLP@EMNLP 2020: 8-18
  - [c15] Patrick Hansen, Alexey Vilkin, Yury Krustalev, James Imber, Dumidu S. Talagala, David Hanwell, Matthew Mattina, Paul N. Whatmough: ISP4ML: The Role of Image Signal Processing in Efficient Deep Learning Vision Systems. ICPR 2020: 2438-2445
  - [c14] Igor Fedorov, Marko Stamenovic, Carl Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, Paul N. Whatmough: TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids. INTERSPEECH 2020: 4054-4058
  - [c13] Ananda Samajdar, Jan Moritz Joseph, Yuhao Zhu, Paul N. Whatmough, Matthew Mattina, Tushar Krishna: A Systematic Methodology for Characterizing Scalability of DNN Accelerators using SCALE-Sim. ISPASS 2020: 58-68
  - [c12] Javier Fernández-Marqués, Paul N. Whatmough, Andrew Mundy, Matthew Mattina: Searching for Winograd-aware Quantized Networks. MLSys 2020
  - [i26] Chuteng Zhou, Prad Kadambi, Matthew Mattina, Paul N. Whatmough: Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation. CoRR abs/2001.04974 (2020)
  - [i25] Urmish Thakker, Paul N. Whatmough, Matthew Mattina, Jesse G. Beu: Compressing Language Models using Doped Kronecker Products. CoRR abs/2001.08896 (2020)
  - [i24] Javier Fernández-Marqués, Paul N. Whatmough, Andrew Mundy, Matthew Mattina: Searching for Winograd-aware Quantized Networks. CoRR abs/2002.10711 (2020)
  - [i23] Zhi Gang Liu, Paul N. Whatmough, Matthew Mattina: Systolic Tensor Array: An Efficient Structured-Sparse GEMM Accelerator for Mobile CNN Inference. CoRR abs/2005.08098 (2020)
  - [i22] Igor Fedorov, Marko Stamenovic, Carl Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, Paul N. Whatmough: TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids. CoRR abs/2005.11138 (2020)
  - [i21] Zhi Gang Liu, Matthew Mattina: Efficient Residue Number System Based Winograd Convolution. CoRR abs/2007.12216 (2020)
  - [i20] Dibakar Gope, Jesse G. Beu, Matthew Mattina: High Throughput Matrix-Matrix Multiplication between Asymmetric Bit-Width Operands. CoRR abs/2008.00638 (2020)
  - [i19] Zhi Gang Liu, Paul N. Whatmough, Matthew Mattina: Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration. CoRR abs/2009.02381 (2020)
  - [i18] Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina: Rank and run-time aware compression of NLP Applications. CoRR abs/2010.03193 (2020)
  - [i17] Colby R. Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough: MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers. CoRR abs/2010.11267 (2020)
2010 – 2019
- 2019
  - [c11] Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse G. Beu, Matthew Mattina, Robert D. Mullins: Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs. EMC2@HPCA/CVPR/ISCA 2019: 1-5
  - [c10] Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina: Run-Time Efficient RNN Compression for Inference on Edge Devices. EMC2@HPCA/CVPR/ISCA 2019: 26-30
  - [c9] Zhi Gang Liu, Matthew Mattina: Learning Low-precision Neural Networks without Straight-Through Estimator (STE). IJCAI 2019: 3066-3072
  - [c8] Dibakar Gope, Ganesh Dasika, Matthew Mattina: Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications. SysML 2019
  - [c7] Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas K. Venkataramanaiah, Jae-sun Seo, Matthew Mattina: FixyNN: Energy-Efficient Real-Time Mobile Computer Vision Hardware Acceleration via Transfer Learning. SysML 2019
  - [c6] Urmish Thakker, Igor Fedorov, Jesse G. Beu, Dibakar Gope, Chu Zhou, Ganesh Dasika, Matthew Mattina: Pushing the limits of RNN Compression. EMC2@NeurIPS 2019: 18-21
  - [c5] Igor Fedorov, Ryan P. Adams, Matthew Mattina, Paul N. Whatmough: SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers. NeurIPS 2019: 4978-4990
  - [i16] Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas K. Venkataramanaiah, Jae-sun Seo, Matthew Mattina: FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning. CoRR abs/1902.11128 (2019)
  - [i15] Zhi Gang Liu, Matthew Mattina: Learning low-precision neural networks without Straight-Through Estimator (STE). CoRR abs/1903.01061 (2019)
  - [i14] Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse G. Beu, Matthew Mattina, Robert D. Mullins: Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs. CoRR abs/1903.01521 (2019)
  - [i13] Dibakar Gope, Ganesh Dasika, Matthew Mattina: Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications. CoRR abs/1903.01531 (2019)
  - [i12] Urmish Thakker, Ganesh Dasika, Jesse G. Beu, Matthew Mattina: Measuring scheduling efficiency of RNNs for NLP applications. CoRR abs/1904.03302 (2019)
  - [i11] Igor Fedorov, Ryan P. Adams, Matthew Mattina, Paul N. Whatmough: SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers. CoRR abs/1905.12107 (2019)
  - [i10] Urmish Thakker, Jesse G. Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, Matthew Mattina: Compressing RNNs for IoT devices by 15-38x using Kronecker Products. CoRR abs/1906.02876 (2019)
  - [i9] Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina: Run-Time Efficient RNN Compression for Inference on Edge Devices. CoRR abs/1906.04886 (2019)
  - [i8] Urmish Thakker, Igor Fedorov, Jesse G. Beu, Dibakar Gope, Chu Zhou, Ganesh Dasika, Matthew Mattina: Pushing the limits of RNN Compression. CoRR abs/1910.02558 (2019)
  - [i7] Dibakar Gope, Jesse G. Beu, Urmish Thakker, Matthew Mattina: Ternary MobileNets via Per-Layer Hybrid Filter Banks. CoRR abs/1911.01028 (2019)
  - [i6] Patrick Hansen, Alexey Vilkin, Yury Khrustalev, James Imber, David Hanwell, Matthew Mattina, Paul N. Whatmough: ISP4ML: Understanding the Role of Image Signal Processing in Efficient Deep Learning Vision Systems. CoRR abs/1911.07954 (2019)
- 2018
  - [c4] Yuhao Zhu, Anand Samajdar, Matthew Mattina, Paul N. Whatmough: Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision. ISCA 2018: 547-560
  - [i5] Yuhao Zhu, Matthew Mattina, Paul N. Whatmough: Mobile Machine Learning Hardware at ARM: A Systems-on-Chip (SoC) Perspective. CoRR abs/1801.06274 (2018)
  - [i4] Yuhao Zhu, Anand Samajdar, Matthew Mattina, Paul N. Whatmough: Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision. CoRR abs/1803.11232 (2018)
  - [i3] Ananda Samajdar, Yuhao Zhu, Paul N. Whatmough, Matthew Mattina, Tushar Krishna: SCALE-Sim: Systolic CNN Accelerator. CoRR abs/1811.02883 (2018)
  - [i2] Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Matthew Mattina: Energy Efficient Hardware for On-Device CNN Inference via Transfer Learning. CoRR abs/1812.01672 (2018)
  - [i1] Franz Pernkopf, Wolfgang Roth, Matthias Zöhrer, Lukas Pfeifenberger, Günther Schindler, Holger Fröning, Sebastian Tschiatschek, Robert Peharz, Matthew Mattina, Zoubin Ghahramani: Efficient and Robust Machine Learning for Real-World Systems. CoRR abs/1812.02240 (2018)
2000 – 2009
- 2008
  - [c3] Shane Bell, Bruce Edwards, John Amann, Rich Conlin, Kevin Joyce, Vince Leung, John MacKay, Mike Reif, Liewei Bao, John F. Brown III, Matthew Mattina, Chyi-Chang Miao, Carl Ramey, David Wentzlaff, Walker Anderson, Ethan Berger, Nat Fairbanks, Durlov Khan, Froilan Montenegro, Jay Stickney, John Zook: TILE64 - Processor: A 64-Core SoC with Mesh Interconnect. ISSCC 2008: 88-89
- 2007
  - [j1] David Wentzlaff, Patrick Griffin, Henry Hoffmann, Liewei Bao, Bruce Edwards, Carl Ramey, Matthew Mattina, Chyi-Chang Miao, John F. Brown III, Anant Agarwal: On-Chip Interconnection Architecture of the Tile Processor. IEEE Micro 27(5): 15-31 (2007)
- 2006
  - [c2] Aamer Jaleel, Matthew Mattina, Bruce L. Jacob: Last level cache (LLC) performance of data mining workloads on a CMP - a case study of parallel bioinformatics workloads. HPCA 2006: 88-98
- 2002
  - [c1] Roger Espasa, Federico Ardanaz, Julio Gago, Roger Gramunt, Isaac Hernandez, Toni Juan, Joel S. Emer, Stephen Felix, P. Geoffrey Lowney, Matthew Mattina, André Seznec: Tarantula: A Vector Extension to the Alpha Architecture. ISCA 2002: 281-292
last updated on 2024-05-15 20:41 CEST by the dblp team
all metadata released as open data under CC0 1.0 license