Mert Gürbüzbalaban
2020 – today
2024
- [j22] Xuan Zhang, Necdet Serhat Aybat, Mert Gürbüzbalaban: Robust Accelerated Primal-Dual Methods for Computing Saddle Points. SIAM J. Optim. 34(1): 1097-1130 (2024)
- [i32] Umut Simsekli, Mert Gürbüzbalaban, Sinan Yildirim, Lingjiong Zhu: Differential Privacy of Noisy (S)GD under Heavy-Tailed Perturbations. CoRR abs/2403.02051 (2024)
2023
- [j21] Rishabh Dixit, Mert Gürbüzbalaban, Waheed U. Bajwa: Boundary Conditions for Linear Exit Time Gradient Trajectories Around Saddle Points: Analysis and Algorithm. IEEE Trans. Inf. Theory 69(4): 2556-2602 (2023)
- [j20] Mert Gürbüzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu: Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize. Trans. Mach. Learn. Res. 2023 (2023)
- [c26] Anant Raj, Melih Barsbey, Mert Gürbüzbalaban, Lingjiong Zhu, Umut Simsekli: Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares. ALT 2023: 1292-1342
- [c25] Anant Raj, Lingjiong Zhu, Mert Gürbüzbalaban, Umut Simsekli: Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions. ICML 2023: 28578-28597
- [c24] Lingjiong Zhu, Mert Gürbüzbalaban, Anant Raj, Umut Simsekli: Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent. NeurIPS 2023
- [i31] Anant Raj, Lingjiong Zhu, Mert Gürbüzbalaban, Umut Simsekli: Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions. CoRR abs/2301.11885 (2023)
- [i30] Mert Gürbüzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu: Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD. CoRR abs/2302.05516 (2023)
- [i29] Lingjiong Zhu, Mert Gürbüzbalaban, Anant Raj, Umut Simsekli: Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent. CoRR abs/2305.12056 (2023)
- [i28] Rishabh Dixit, Mert Gürbüzbalaban, Waheed U. Bajwa: Accelerated gradient methods for nonconvex optimization: Escape trajectories from strict saddle points and convergence to local minima. CoRR abs/2307.07030 (2023)
2022
- [j19] Xuefeng Gao, Mert Gürbüzbalaban, Lingjiong Zhu: Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Nonconvex Stochastic Optimization: Nonasymptotic Performance Bounds and Momentum-Based Acceleration. Oper. Res. 70(5): 2931-2947 (2022)
- [j18] Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar, Umut Simsekli, Lingjiong Zhu: Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks. J. Mach. Learn. Res. 23: 220:1-220:96 (2022)
- [j17] Mert Gürbüzbalaban, Andrzej Ruszczynski, Landi Zhu: A Stochastic Subgradient Method for Distributionally Robust Non-convex and Non-smooth Learning. J. Optim. Theory Appl. 194(3): 1014-1041 (2022)
- [j16] Nurdan Kuru, S. Ilker Birbil, Mert Gürbüzbalaban, Sinan Yildirim: Differentially Private Accelerated Optimization Algorithms. SIAM J. Optim. 32(2): 795-821 (2022)
- [j15] Bugra Can, Saeed Soori, Necdet Serhat Aybat, Maryam Mehri Dehnavi, Mert Gürbüzbalaban: Randomized Gossiping With Effective Resistance Weights: Performance Guarantees and Applications. IEEE Trans. Control. Netw. Syst. 9(2): 524-536 (2022)
- [c23] Xuan Zhang, Necdet Serhat Aybat, Mert Gürbüzbalaban: SAPD+: An Accelerated Stochastic Method for Nonconvex-Concave Minimax Problems. NeurIPS 2022
- [c22] Baorun Mu, Saeed Soori, Bugra Can, Mert Gürbüzbalaban, Maryam Mehri Dehnavi: HyLo: A Hybrid Low-Rank Natural Gradient Descent Method. SC 2022: 47:1-47:16
- [i27] Bugra Can, Mert Gürbüzbalaban, Necdet Serhat Aybat: A Variance-Reduced Stochastic Accelerated Primal Dual Algorithm. CoRR abs/2202.09688 (2022)
- [i26] Mert Gürbüzbalaban, Yuanhan Hu, Umut Simsekli, Kun Yuan, Lingjiong Zhu: Heavy-Tail Phenomenon in Decentralized SGD. CoRR abs/2205.06689 (2022)
- [i25] Anant Raj, Melih Barsbey, Mert Gürbüzbalaban, Lingjiong Zhu, Umut Simsekli: Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares. CoRR abs/2206.01274 (2022)
- [i24] Mert Gürbüzbalaban, Yuanhan Hu, Lingjiong Zhu: Penalized Langevin and Hamiltonian Monte Carlo Algorithms for Constrained Sampling. CoRR abs/2212.00570 (2022)
2021
- [j14] Mert Gürbüzbalaban, Xuefeng Gao, Yuanhan Hu, Lingjiong Zhu: Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo. J. Mach. Learn. Res. 22: 239:1-239:69 (2021)
- [j13] Mert Gürbüzbalaban, Asuman E. Ozdaglar, Pablo A. Parrilo: Why random reshuffling beats stochastic gradient descent. Math. Program. 186(1): 49-84 (2021)
- [c21] Mert Gürbüzbalaban, Yuanhan Hu: Fractional moment-preserving initialization schemes for training deep neural networks. AISTATS 2021: 2233-2241
- [c20] Bugra Can, Saeed Soori, Maryam Mehri Dehnavi, Mert Gürbüzbalaban: L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method. CDC 2021: 2386-2393
- [c19] Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris C. Holmes, Mert Gürbüzbalaban, Umut Simsekli: Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections. ICML 2021: 1249-1260
- [c18] Mert Gürbüzbalaban, Umut Simsekli, Lingjiong Zhu: The Heavy-Tail Phenomenon in SGD. ICML 2021: 3964-3975
- [c17] Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Simsekli, Lingjiong Zhu: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms. NeurIPS 2021: 18774-18788
- [c16] Hongjian Wang, Mert Gürbüzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A. Erdogdu: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance. NeurIPS 2021: 18866-18877
- [i23] Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris C. Holmes, Mert Gürbüzbalaban, Umut Simsekli: Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections. CoRR abs/2102.07006 (2021)
- [i22] Saeed Soori, Bugra Can, Baourun Mu, Mert Gürbüzbalaban, Maryam Mehri Dehnavi: TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion. CoRR abs/2106.03947 (2021)
- [i21] Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Simsekli, Lingjiong Zhu: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms. CoRR abs/2106.04881 (2021)
- [i20] Bugra Can, Saeed Soori, Maryam Mehri Dehnavi, Mert Gürbüzbalaban: L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method. CoRR abs/2108.09365 (2021)
2020
- [j12] Mert Gürbüzbalaban, Asuman E. Ozdaglar, Nuri Denizcan Vanli, Stephen J. Wright: Randomness and permutations in coordinate descent methods. Math. Program. 181(2): 349-376 (2020)
- [j11] Necdet Serhat Aybat, Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar: Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions. SIAM J. Optim. 30(1): 717-751 (2020)
- [c15] Saeed Soori, Konstantin Mishchenko, Aryan Mokhtari, Maryam Mehri Dehnavi, Mert Gürbüzbalaban: DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate. AISTATS 2020: 1965-1976
- [c14] Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. ICML 2020: 8970-8980
- [c13] Saeed Soori, Bugra Can, Mert Gürbüzbalaban, Maryam Mehri Dehnavi: ASYNC: A Cloud Engine with Asynchrony and History for Distributed Machine Learning. IPDPS 2020: 429-439
- [c12] Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gürbüzbalaban, Stefanie Jegelka, Hongzhou Lin: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method. NeurIPS 2020
- [c11] Xuefeng Gao, Mert Gürbüzbalaban, Lingjiong Zhu: Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization. NeurIPS 2020
- [i19] Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise. CoRR abs/2002.05685 (2020)
- [i18] Mert Gürbüzbalaban, Yuanhan Hu: Fractional moment-preserving initialization schemes for training fully-connected neural networks. CoRR abs/2005.11878 (2020)
- [i17] Mert Gürbüzbalaban, Umut Simsekli, Lingjiong Zhu: The Heavy-Tail Phenomenon in SGD. CoRR abs/2006.04740 (2020)
- [i16] Mert Gürbüzbalaban, Andrzej Ruszczynski, Landi Zhu: A Stochastic Subgradient Method for Distributionally Robust Non-Convex Learning. CoRR abs/2006.04873 (2020)
- [i15] Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gürbüzbalaban, Stefanie Jegelka, Hongzhou Lin: IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method. CoRR abs/2006.06733 (2020)
- [i14] Mert Gürbüzbalaban, Xuefeng Gao, Yuanhan Hu, Lingjiong Zhu: Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo. CoRR abs/2007.00590 (2020)
- [i13] Nurdan Kuru, S. Ilker Birbil, Mert Gürbüzbalaban, Sinan Yildirim: Differentially Private Accelerated Optimization Algorithms. CoRR abs/2008.01989 (2020)
2010 – 2019
2019
- [j10] Mert Gürbüzbalaban, Asuman E. Ozdaglar, Pablo A. Parrilo: Convergence Rate of Incremental Gradient and Incremental Newton Methods. SIAM J. Optim. 29(4): 2542-2565 (2019)
- [c10] Bugra Can, Mert Gürbüzbalaban, Lingjiong Zhu: Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances. ICML 2019: 891-901
- [c9] Umut Simsekli, Levent Sagun, Mert Gürbüzbalaban: A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks. ICML 2019: 5827-5837
- [c8] Thanh Huy Nguyen, Umut Simsekli, Mert Gürbüzbalaban, Gaël Richard: First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise. NeurIPS 2019: 273-283
- [c7] Necdet Serhat Aybat, Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar: A Universally Optimal Multistage Accelerated Stochastic Gradient Method. NeurIPS 2019: 8523-8534
- [i12] Umut Simsekli, Levent Sagun, Mert Gürbüzbalaban: A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks. CoRR abs/1901.06053 (2019)
- [i11] Bugra Can, Mert Gürbüzbalaban, Lingjiong Zhu: Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances. CoRR abs/1901.07445 (2019)
- [i10] Necdet Serhat Aybat, Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar: A Universally Optimal Multistage Accelerated Stochastic Gradient Method. CoRR abs/1901.08022 (2019)
- [i9] Thanh Huy Nguyen, Umut Simsekli, Mert Gürbüzbalaban, Gaël Richard: First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise. CoRR abs/1906.09069 (2019)
- [i8] Saeed Soori, Bugra Can, Mert Gürbüzbalaban, Maryam Mehri Dehnavi: ASYNC: Asynchronous Machine Learning on Distributed Systems. CoRR abs/1907.08526 (2019)
- [i7] Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar, Umut Simsekli, Lingjiong Zhu: Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks. CoRR abs/1910.08701 (2019)
- [i6] Umut Simsekli, Mert Gürbüzbalaban, Thanh Huy Nguyen, Gaël Richard, Levent Sagun: On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks. CoRR abs/1912.00018 (2019)
2018
- [j9] N. Denizcan Vanli, Mert Gürbüzbalaban, Asuman E. Ozdaglar: Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods. SIAM J. Optim. 28(2): 1282-1300 (2018)
- [j8] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. SIAM J. Optim. 28(2): 1420-1447 (2018)
- [c6] Saeed Soori, Aditya Devarakonda, Zachary Blanco, James Demmel, Mert Gürbüzbalaban, Maryam Mehri Dehnavi: Reducing Communication in Proximal Newton Methods for Sparse Least Squares Problems. ICPP 2018: 22:1-22:10
- [i5] Necdet Serhat Aybat, Alireza Fallah, Mert Gürbüzbalaban, Asuman E. Ozdaglar: Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions. CoRR abs/1805.10579 (2018)
- [i4] Xuefeng Gao, Mert Gürbüzbalaban, Lingjiong Zhu: Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration. CoRR abs/1809.04618 (2018)
- [i3] Xuefeng Gao, Mert Gürbüzbalaban, Lingjiong Zhu: Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization. CoRR abs/1812.07725 (2018)
2017
- [j7] Julia Eaton, Sara Grundel, Mert Gürbüzbalaban, Michael L. Overton: Polynomial root radius optimization with affine constraints. Math. Program. 165(2): 509-528 (2017)
- [j6] Mert Gürbüzbalaban, Asuman E. Ozdaglar, Pablo A. Parrilo: On the Convergence Rate of Incremental Aggregated Gradient Algorithms. SIAM J. Optim. 27(2): 1035-1048 (2017)
- [j5] Nicola Guglielmi, Mert Gürbüzbalaban, Tim Mitchell, Michael L. Overton: Approximating the Real Structured Stability Radius with Frobenius-Norm Bounded Perturbations. SIAM J. Matrix Anal. Appl. 38(4): 1323-1353 (2017)
- [c5] Necdet Serhat Aybat, Mert Gürbüzbalaban: Decentralized computation of effective resistances and acceleration of consensus algorithms. GlobalSIP 2017: 538-542
- [c4] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: A double incremental aggregated gradient method with linear convergence rate for large-scale optimization. ICASSP 2017: 4696-4700
- [c3] Mert Gürbüzbalaban, Asuman E. Ozdaglar, Pablo A. Parrilo, Nuri Denizcan Vanli: When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent. NIPS 2017: 6999-7007
- [i2] Saeed Soori, Aditya Devarakonda, James Demmel, Mert Gürbüzbalaban, Maryam Mehri Dehnavi: Avoiding Communication in Proximal Methods for Convex Optimization Problems. CoRR abs/1710.08883 (2017)
2016
- [c2] N. Denizcan Vanli, Mert Gürbüzbalaban, Asuman E. Ozdaglar: Global convergence rate of incremental aggregated gradient methods for nonsmooth problems. CDC 2016: 173-178
- [i1] Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro: Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate. CoRR abs/1611.00347 (2016)
2015
- [j4] Mert Gürbüzbalaban, Asuman E. Ozdaglar, Pablo A. Parrilo: A globally convergent incremental Newton method. Math. Program. 151(1): 283-313 (2015)
2013
- [j3] Nicola Guglielmi, Mert Gürbüzbalaban, Michael L. Overton: Fast Approximation of the H∞ Norm via Optimization over Spectral Value Sets. SIAM J. Matrix Anal. Appl. 34(2): 709-737 (2013)
2012
- [j2] Mert Gürbüzbalaban, Michael L. Overton: Some Regularity Results for the Pseudospectral Abscissa and Pseudospectral Radius of a Matrix. SIAM J. Optim. 22(2): 281-285 (2012)
- [j1] Vincent D. Blondel, Mert Gürbüzbalaban, Alexandre Megretski, Michael L. Overton: Explicit Solutions for Root Optimization of a Polynomial Family With One Affine Constraint. IEEE Trans. Autom. Control. 57(12): 3078-3089 (2012)
2010
- [c1] Vincent D. Blondel, Mert Gürbüzbalaban, Alexandre Megretski, Michael L. Overton: Explicit solutions for root optimization of a polynomial family. CDC 2010: 485-488
last updated on 2024-10-07 21:17 CEST by the dblp team
all metadata released as open data under CC0 1.0 license