


Dan Alistarh
Person information

- affiliation: IST Austria, Klosterneuburg, Austria
- affiliation (former): MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, USA
2020 – today
- 2023
- [j29] Dan Alistarh: Distributed Computing Column 86: A Summary of PODC 2022. SIGACT News 54(1): 105 (2023)
- [j28] Dan Alistarh, Alkida Balliu, Dimitrios Los, Sean Ovens: A Brief Summary of PODC 2022. SIGACT News 54(1): 106-112 (2023)
- [j27] Dan Alistarh, Faith Ellen, Joel Rybicki: Wait-free approximate agreement on graphs. Theor. Comput. Sci. 948: 113733 (2023)
- [c100] Nikita Koval, Dan Alistarh, Roman Elizarov: Fast and Scalable Channels in Kotlin Coroutines. PPoPP 2023: 107-118
- [c99] Alexander Fedorov, Diba Hashemi, Giorgi Nadiradze, Dan Alistarh: Provably-Efficient and Internally-Deterministic Parallel Union-Find. SPAA 2023: 261-271
- [i83] Elias Frantar, Dan Alistarh: SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot. CoRR abs/2301.00774 (2023)
- [i82] Ilia Markov, Adrian Vladu, Qi Guo, Dan Alistarh: Quantized Distributed Training of Large Models with Convergence Guarantees. CoRR abs/2302.02390 (2023)
- [i81] Eldar Kurtic, Elias Frantar, Dan Alistarh: ZipLM: Hardware-Aware Structured Pruning of Language Models. CoRR abs/2302.04089 (2023)
- [i80] Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks. CoRR abs/2302.04852 (2023)
- [i79] Denis Kuznedelev, Soroush Tabesh, Kimia Noorbakhsh, Elias Frantar, Sara Beery, Eldar Kurtic, Dan Alistarh: Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression. CoRR abs/2303.14409 (2023)
- [i78] Alexander Fedorov, Diba Hashemi, Giorgi Nadiradze, Dan Alistarh: Provably-Efficient and Internally-Deterministic Parallel Union-Find. CoRR abs/2304.09331 (2023)
- [i77] Eugenia Iofinova, Alexandra Peste, Dan Alistarh: Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures. CoRR abs/2304.12622 (2023)
- [i76] Mher Safaryan, Alexandra Peste, Dan Alistarh: Knowledge Distillation Performs Partial Variance Reduction. CoRR abs/2305.17581 (2023)
- 2022
- [j26] Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour: Dynamic Averaging Load Balancing on Cycles. Algorithmica 84(4): 1007-1029 (2022)
- [j25] Dan Alistarh: Distributed Computing Column 85 Elastic Consistency: A Consistency Criterion for Distributed Optimization. SIGACT News 53(2): 63 (2022)
- [j24] Dan Alistarh, Ilia Markov, Giorgi Nadiradze: Elastic Consistency: A Consistency Criterion for Distributed Optimization. SIGACT News 53(2): 64-82 (2022)
- [c98] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh: How Well Do Sparse ImageNet Models Transfer? CVPR 2022: 12256-12266
- [c97] Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, Dan Alistarh: The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models. EMNLP 2022: 4163-4181
- [c96] Elias Frantar, Dan Alistarh: SPDY: Accurate Pruning with Speedup Guarantees. ICML 2022: 6726-6743
- [c95] Ilia Markov, Hamidreza Ramezani-Kebrya, Dan Alistarh: CGX: adaptive system support for communication-efficient deep learning. Middleware 2022: 241-254
- [c94] Elias Frantar, Dan Alistarh: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning. NeurIPS 2022
- [c93] Dan Alistarh, Joel Rybicki, Sasha Voitovych: Near-Optimal Leader Election in Population Protocols on Graphs. PODC 2022: 246-256
- [c92] Anastasiia Postnikova, Nikita Koval, Giorgi Nadiradze, Dan Alistarh: Multi-queues can be state-of-the-art priority schedulers. PPoPP 2022: 353-367
- [c91] Trevor Brown, William Sigouin, Dan Alistarh: PathCAS: an efficient middle ground for concurrent search data structures. PPoPP 2022: 385-399
- [i75] Elias Frantar, Dan Alistarh: SPDY: Accurate Pruning with Speedup Guarantees. CoRR abs/2201.13096 (2022)
- [i74] Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh: Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD. CoRR abs/2203.06638 (2022)
- [i73] Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, Dan Alistarh: The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models. CoRR abs/2203.07259 (2022)
- [i72] Dan Alistarh, Joel Rybicki, Sasha Voitovych: Near-Optimal Leader Election in Population Protocols on Graphs. CoRR abs/2205.12597 (2022)
- [i71] Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh: QuAFL: Federated Averaging Can Be Both Asynchronous and Communication-Efficient. CoRR abs/2206.10032 (2022)
- [i70] Alexandra Peste, Adrian Vladu, Dan Alistarh, Christoph H. Lampert: CrAM: A Compression-Aware Minimizer. CoRR abs/2207.14200 (2022)
- [i69] Elias Frantar, Dan Alistarh: Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning. CoRR abs/2208.11580 (2022)
- [i68] Eldar Kurtic, Dan Alistarh: GMP*: Well-Tuned Global Magnitude Pruning Can Outperform Most BERT-Pruning Methods. CoRR abs/2210.06384 (2022)
- [i67] Shayan Talaei, Giorgi Nadiradze, Dan Alistarh: Hybrid Decentralized Optimization: First- and Zeroth-Order Optimizers Can Be Jointly Leveraged For Faster Convergence. CoRR abs/2210.07703 (2022)
- [i66] Denis Kuznedelev, Eldar Kurtic, Elias Frantar, Dan Alistarh: oViT: An Accurate Second-Order Pruning Framework for Vision Transformers. CoRR abs/2210.09223 (2022)
- [i65] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh: GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers. CoRR abs/2210.17323 (2022)
- [i64] Mohammadreza Alimohammadi, Ilia Markov, Elias Frantar, Dan Alistarh: L-GreCo: An Efficient and General Framework for Layerwise-Adaptive Gradient Compression. CoRR abs/2210.17357 (2022)
- [i63] Nikita Koval, Dan Alistarh, Roman Elizarov: Fast and Scalable Channels in Kotlin Coroutines. CoRR abs/2211.04986 (2022)
- [i62] Trevor Brown, William Sigouin, Dan Alistarh: PathCAS: An Efficient Middle Ground for Concurrent Search Data Structures. CoRR abs/2212.09851 (2022)
- 2021
- [j23] Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy: NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization. J. Mach. Learn. Res. 22: 114:1-114:43 (2021)
- [j22] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste: Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res. 22: 241:1-241:124 (2021)
- [j21] Dan Alistarh: Distributed Computing Column 81: Byzantine Agreement with Less Communication: Recent Advances. SIGACT News 52(1): 70 (2021)
- [j20] Dan Alistarh: Distributed Computing Column 82 Distributed Computability: A Few Results Masters Students Should Know. SIGACT News 52(2): 91 (2021)
- [j19] Dan Alistarh: Distributed Computing Column 83 Five Ways Not To Fool Yourself: Designing Experiments for Understanding Performance. SIGACT News 52(3): 60 (2021)
- [j18] Dan Alistarh: Distributed Computing Column 84: Perspectives on the Paper "CCS Expressions, Finite State Processes, and Three Problems of Equivalence". SIGACT News 52(4): 74-75 (2021)
- [j17] Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler: Breaking (Global) Barriers in Parallel Stochastic Optimization With Wait-Avoiding Group Averaging. IEEE Trans. Parallel Distributed Syst. 32(7): 1725-1739 (2021)
- [c90] Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh: Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees. AAAI 2021: 8209-8216
- [c89] Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh: Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent. AAAI 2021: 9037-9045
- [c88] Zeyuan Allen-Zhu, Faeze Ebrahimianghazani, Jerry Li, Dan Alistarh: Byzantine-Resilient Non-Convex Stochastic Gradient Descent. ICLR 2021
- [c87] Peter Davies, Vijaykrishna Gurunanthan, Niusha Moshrefi, Saleh Ashkboos, Dan Alistarh: New Bounds For Distributed Mean Estimation and Variance Reduction. ICLR 2021
- [c86] Foivos Alimisis, Peter Davies, Dan Alistarh: Communication-Efficient Distributed Optimization with Quantized Preconditioners. ICML 2021: 196-206
- [c85] Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh: Distributed Principal Component Analysis with Limited Communication. NeurIPS 2021: 2823-2834
- [c84] Giorgi Nadiradze, Amirmojtaba Sabour, Peter Davies, Shigang Li, Dan Alistarh: Asynchronous Decentralized SGD with Quantized and Local Updates. NeurIPS 2021: 6829-6842
- [c83] Janne H. Korhonen, Dan Alistarh: Towards Tight Communication Lower Bounds for Distributed Optimisation. NeurIPS 2021: 7254-7266
- [c82] Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh: AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. NeurIPS 2021: 8557-8570
- [c81] Elias Frantar, Eldar Kurtic, Dan Alistarh: M-FAC: Efficient Matrix-Free Approximations of Second-Order Information. NeurIPS 2021: 14873-14886
- [c80] Dan Alistarh, Rati Gelashvili, Joel Rybicki: Fast Graphical Population Protocols. OPODIS 2021: 14:1-14:18
- [c79] Dan Alistarh, Martin Töpfer, Przemyslaw Uznanski: Comparison Dynamics in Population Protocols. PODC 2021: 55-65
- [c78] Dan Alistarh, Peter Davies: Collecting Coupons is Faster with Friends. SIROCCO 2021: 3-12
- [c77] Dan Alistarh, Faith Ellen, Joel Rybicki: Wait-Free Approximate Agreement on Graphs. SIROCCO 2021: 87-105
- [c76] Alexander Fedorov, Nikita Koval, Dan Alistarh: A Scalable Concurrent Algorithm for Dynamic Connectivity. SPAA 2021: 208-220
- [c75] Dan Alistarh, Rati Gelashvili, Giorgi Nadiradze: Lower Bounds for Shared-Memory Leader Election Under Bounded Write Contention. DISC 2021: 4:1-4:17
- [c74] Dan Alistarh, Rati Gelashvili, Joel Rybicki: Brief Announcement: Fast Graphical Population Protocols. DISC 2021: 43:1-43:4
- [i61] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste: Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks. CoRR abs/2102.00554 (2021)
- [i60] Foivos Alimisis, Peter Davies, Dan Alistarh: Communication-Efficient Distributed Optimization with Quantized Preconditioners. CoRR abs/2102.07214 (2021)
- [i59] Dan Alistarh, Rati Gelashvili, Joel Rybicki: Fast Graphical Population Protocols. CoRR abs/2102.08808 (2021)
- [i58] Dan Alistarh, Faith Ellen, Joel Rybicki: Wait-free approximate agreement on graphs. CoRR abs/2103.08949 (2021)
- [i57] Ali Ramezani-Kebrya, Fartash Faghri, Ilia Markov, Vitaly Aksenov, Dan Alistarh, Daniel M. Roy: NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization. CoRR abs/2104.13818 (2021)
- [i56] Alexander Fedorov, Nikita Koval, Dan Alistarh: A Scalable Concurrent Algorithm for Dynamic Connectivity. CoRR abs/2105.08098 (2021)
- [i55] Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh: AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. CoRR abs/2106.12379 (2021)
- [i54] Elias Frantar, Eldar Kurtic, Dan Alistarh: Efficient Matrix-Free Approximations of Second-Order Information, with Applications to Pruning and Optimization. CoRR abs/2107.03356 (2021)
- [i53] Alexandra Peste, Dan Alistarh, Christoph H. Lampert: SSSE: Efficiently Erasing Samples from Trained Machine Learning Models. CoRR abs/2107.03860 (2021)
- [i52] Dan Alistarh, Rati Gelashvili, Giorgi Nadiradze: Lower Bounds for Shared-Memory Leader Election under Bounded Write Contention. CoRR abs/2108.02802 (2021)
- [i51] Anastasiia Postnikova, Nikita Koval, Giorgi Nadiradze, Dan Alistarh: Multi-Queues Can Be State-of-the-Art Priority Schedulers. CoRR abs/2109.00657 (2021)
- [i50] Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh: Distributed Principal Component Analysis with Limited Communication. CoRR abs/2110.14391 (2021)
- [i49] Ilia Markov, Hamidreza Ramezani-Kebrya, Dan Alistarh: Project CGX: Scalable Deep Learning on Commodity GPUs. CoRR abs/2111.08617 (2021)
- [i48] Nikita Koval, Dmitry Khalanskiy, Dan Alistarh: A Formally-Verified Framework for Fair Synchronization in Kotlin Coroutines. CoRR abs/2111.12682 (2021)
- [i47] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh: How Well Do Sparse Imagenet Models Transfer? CoRR abs/2111.13445 (2021)
- [i46] Dan Alistarh, Peter Davies: Collecting Coupons is Faster with Friends. CoRR abs/2112.05830 (2021)
- 2020
- [j16] Dan Alistarh: Distributed Computing Column 77 Consensus Dynamics: An Overview. SIGACT News 51(1): 57 (2020)
- [j15] Dan Alistarh: Distributed Computing Column 78: 60 Years of Mastering Concurrent Computing through Sequential Thinking. SIGACT News 51(2): 58 (2020)
- [j14] Dan Alistarh: Distributed Computing Column 79: Using Round Elimination to Understand Locality. SIGACT News 51(3): 62 (2020)
- [j13] Dan Alistarh: Distributed Computing Column 80: Annual Review 2020. SIGACT News 51(4): 73-74 (2020)
- [j12] Nezihe Merve Gürel, Kaan Kara, Alen Stojanov, Tyler M. Smith, Thomas Lemmin, Dan Alistarh, Markus Püschel, Ce Zhang: Compressive Sensing Using Iterative Hard Thresholding With Low Precision Data Representation: Theory and Applications. IEEE Trans. Signal Process. 68: 4268-4282 (2020)
- [c73] Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour: Dynamic Averaging Load Balancing on Cycles. ICALP 2020: 7:1-7:16
- [c72] Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph Lampert: On the Sample Complexity of Adversarial Multi-Source PAC Learning. ICML 2020: 5416-5425
- [c71] Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William M. Leiserson, Sage Moore, Nir Shavit, Dan Alistarh: Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks. ICML 2020: 5533-5543
- [c70] Vitaly Aksenov, Dan Alistarh, Janne H. Korhonen: Scalable Belief Propagation via Relaxed Scheduling. NeurIPS 2020
- [c69] Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya: Adaptive Gradient Quantization for Data-Parallel SGD. NeurIPS 2020
- [c68] Sidak Pal Singh, Dan Alistarh: WoodFisher: Efficient Second-Order Approximation for Neural Network Compression. NeurIPS 2020
- [c67] Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, Leqi Zhu: Brief Announcement: Why Extension-Based Proofs Fail. PODC 2020: 54-56
- [c66] Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler: Taming unbalanced training workloads in deep learning with partial collective operations. PPoPP 2020: 45-61
- [c65] Trevor Brown, Aleksandar Prokopec, Dan Alistarh: Non-blocking interpolation search trees with doubly-logarithmic running time. PPoPP 2020: 276-291
- [c64] Nikita Koval, Maria Sokolova, Alexander Fedorov, Dan Alistarh, Dmitry Tsitelov: Testing concurrency on the JVM with lincheck. PPoPP 2020: 423-424
- [c63] Dan Alistarh, Trevor Brown, Nandini Singhal: Memory Tagging: Minimalist Synchronization for Scalable Concurrent Data Structures. SPAA 2020: 37-49
- [c62] Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, Amirkeivan Mohtashami: The Splay-List: A Distribution-Adaptive Concurrent Skip-List. DISC 2020: 3:1-3:18
- [i45] Aleksandar Prokopec, Trevor Brown, Dan Alistarh: Analysis and Evaluation of Non-Blocking Interpolation Search Trees. CoRR abs/2001.00413 (2020)
- [i44] Dan Alistarh, Bapi Chatterjee, Vyacheslav Kungurtsev: Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent. CoRR abs/2001.05918 (2020)
- [i43] Dan Alistarh, Saleh Ashkboos, Peter Davies: Distributed Mean Estimation with Optimal Error Bounds. CoRR abs/2002.09268 (2020)
- [i42] Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert: On the Sample Complexity of Adversarial Multi-Source PAC Learning. CoRR abs/2002.10384 (2020)
- [i41] Vitaly Aksenov, Dan Alistarh, Janne H. Korhonen: Relaxed Scheduling for Scalable Belief Propagation. CoRR abs/2002.11505 (2020)
- [i40] Dan Alistarh, Martin Töpfer, Przemyslaw Uznanski: Robust Comparison in Population Protocols. CoRR abs/2003.06485 (2020)
- [i39] Dan Alistarh, Giorgi Nadiradze, Amirmojtaba Sabour: Dynamic Averaging Load Balancing on Cycles. CoRR abs/2003.09297 (2020)
- [i38] Dan Alistarh, Nikita Koval, Giorgi Nadiradze: Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers. CoRR abs/2003.09363 (2020)
- [i37] Sidak Pal Singh, Dan Alistarh: WoodFisher: Efficient second-order approximations for model compression. CoRR abs/2004.14340 (2020)
- [i36] Shigang Li, Tal Ben-Nun, Dan Alistarh, Salvatore Di Girolamo, Nikoli Dryden, Torsten Hoefler: Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging. CoRR abs/2005.00124 (2020)
- [i35] Vyacheslav Kungurtsev, Bapi Chatterjee, Dan Alistarh: Stochastic Gradient Langevin with Delayed Gradients. CoRR abs/2006.07362 (2020)
- [i34] Alex Shamis, Matthew Renzelmann, Stanko Novakovic, Georgios Chatzopoulos, Anders T. Gjerdrum, Dan Alistarh, Aleksandar Dragojevic, Dushyanth Narayanan, Miguel Castro: Fast General Distributed Transactions with Opacity using Global Time. CoRR abs/2006.14346 (2020)
- [i33] Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, Amirkeivan Mohtashami: The Splay-List: A Distribution-Adaptive Concurrent Skip-List. CoRR abs/2008.01009 (2020)
- [i32] Dan Alistarh, Janne H. Korhonen: Improved Communication Lower Bounds for Distributed Optimisation. CoRR abs/2010.08222 (2020)
- [i31] Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya: Adaptive Gradient Quantization for Data-Parallel SGD. CoRR abs/2010.12460 (2020)
- [i30] Zeyuan Allen-Zhu, Faeze Ebrahimian, Jerry Li, Dan Alistarh: Byzantine-Resilient Non-Convex Stochastic Gradient Descent. CoRR abs/2012.14368 (2020)
2010 – 2019
- 2019
- [j11] Dan Alistarh: Distributed Computing Column 76: Annual Review 2019. SIGACT News 50(4): 31-32 (2019)
- [c61] Nikita Koval, Dan Alistarh, Roman Elizarov: Scalable FIFO Channels for Programming via Communicating Sequential Processes. Euro-Par 2019: 317-333
- [c60] Chen Yu, Hanlin Tang, Cédric Renggli, Simon Kassing, Ankit Singla, Dan Alistarh, Ce Zhang, Ji Liu: Distributed Learning over Unreliable Networks. ICML 2019: 7202-7212
- [c59] Chris Wendler, Markus Püschel, Dan Alistarh: Powerset Convolutional Neural Networks. NeurIPS 2019: 927-938
- [c58] Dan Alistarh, Alexander Fedorov, Nikita Koval: In Search of the Fastest Concurrent Union-Find Algorithm. OPODIS 2019: 15:1-15:16
- [c57] Nikita Koval, Dan Alistarh, Roman Elizarov: Lock-free channels for programming via communicating sequential processes: poster. PPoPP 2019: 417-418
- [c56] Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler: SparCML: high-performance sparse communication for machine learning. SC 2019: 11:1-11:15
- [c55] Dan Alistarh, Giorgi Nadiradze, Nikita Koval: Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers. SPAA 2019: 145-154
- [c54] Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, Leqi Zhu: Why extension-based proofs fail. STOC 2019: 986-996
- [i29] Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Eric Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros G. Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim M. Hazelwood, Furong Huang, Martin Jaggi, Kevin G. Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konecný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Gordon Murray, Dimitris S. Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan R. Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric P. Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar: SysML: The New Frontier of Machine Learning Systems. CoRR abs/1904.03257 (2019)
- [i28] Vitaly Aksenov, Dan Alistarh, Petr Kuznetsov: Performance Prediction for Coarse-Grained Locking. CoRR abs/1904.11323 (2019)
- [i27] Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler: Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations. CoRR abs/1908.04207 (2019)
- [i26] Chris Wendler, Dan Alistarh, Markus Püschel: Powerset Convolutional Neural Networks. CoRR abs/1909.02253 (2019)
- [i25] Giorgi Nadiradze, Amirmojtaba Sabour, Aditya Sharma, Ilia Markov, Vitaly Aksenov, Dan Alistarh: PopSGD: Decentralized Stochastic Gradient Descent in the Population Model. CoRR abs/1910.12308 (2019)
- [i24] Dan Alistarh, Alexander Fedo