Eduard Gorbunov
Person information
- affiliation (PhD): Moscow Institute of Physics and Technology (MIPT), Russia
2020 – today
- 2024
- [j6] Yuriy Dorn, Nikita Kornilov, Nikolay Kutuzov, Alexander Nazin, Eduard Gorbunov, Alexander V. Gasnikov: Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits. Comput. Manag. Sci. 21(1): 19 (2024)
- [c27] Nikita Puchkin, Eduard Gorbunov, Nikolay Kutuzov, Alexander V. Gasnikov: Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems. AISTATS 2024: 856-864
- [c26] Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik: Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. AISTATS 2024: 1207-1215
- [c25] Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik: High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise. ICML 2024
- [i40] Nazarii Tupitsa, Samuel Horváth, Martin Takác, Eduard Gorbunov: Federated Learning Can Find Friends That Are Beneficial. CoRR abs/2402.05050 (2024)
- [i39] Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takác, Eduard Gorbunov: Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad. CoRR abs/2403.02648 (2024)
- [i38] Saveliy Chezhegov, Yaroslav Klyukin, Andrei Semenov, Aleksandr Beznosikov, Alexander V. Gasnikov, Samuel Horváth, Martin Takác, Eduard Gorbunov: Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed. CoRR abs/2406.04443 (2024)
- [i37] Viktor Moskvoretskii, Nazarii Tupitsa, Chris Biemann, Samuel Horváth, Eduard Gorbunov, Irina Nikishina: Low-Resource Machine Translation through the Lens of Personalized Federated Learning. CoRR abs/2406.12564 (2024)
- [i36] Eduard Gorbunov, Nazarii Tupitsa, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takác: Methods for Convex (L0,L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity. CoRR abs/2409.14989 (2024)
- 2023
- [c24] Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, Nicolas Loizou: Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. AISTATS 2023: 172-235
- [c23] Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel: Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. ICLR 2023
- [c22] Eduard Gorbunov, Adrien B. Taylor, Samuel Horváth, Gauthier Gidel: Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity. ICML 2023: 11614-11641
- [c21] Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik: High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance. ICML 2023: 29563-29648
- [c20] Sayantan Choudhury, Eduard Gorbunov, Nicolas Loizou: Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions. NeurIPS 2023
- [c19] Nikita Kornilov, Ohad Shamir, Aleksandr V. Lobanov, Darina Dvinskikh, Alexander V. Gasnikov, Innokentiy Shibaev, Eduard Gorbunov, Samuel Horváth: Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance. NeurIPS 2023
- [c18] Nazarii Tupitsa, Abdulla Jasem Almansoori, Yanlin Wu, Martin Takác, Karthik Nandakumar, Samuel Horváth, Eduard Gorbunov: Byzantine-Tolerant Methods for Distributed Variational Inequalities. NeurIPS 2023
- [i35] Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik: High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance. CoRR abs/2302.00999 (2023)
- [i34] Sayantan Choudhury, Eduard Gorbunov, Nicolas Loizou: Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions. CoRR abs/2302.14043 (2023)
- [i33] Nikita Fedin, Eduard Gorbunov: Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient. CoRR abs/2303.04560 (2023)
- [i32] Eduard Gorbunov: Unified analysis of SGD-type methods. CoRR abs/2303.16502 (2023)
- [i31] Yuriy Dorn, Nikita Kornilov, Nikolay Kutuzov, Alexander Nazin, Eduard Gorbunov, Alexander V. Gasnikov: Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits. CoRR abs/2305.06743 (2023)
- [i30] Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth: Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity. CoRR abs/2305.18285 (2023)
- [i29] Sarit Khirirat, Eduard Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik: Clip21: Error Feedback for Gradient Clipping. CoRR abs/2305.18929 (2023)
- [i28] Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel E. Dvurechensky, Alexander V. Gasnikov, Peter Richtárik: High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise. CoRR abs/2310.01860 (2023)
- [i27] Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik: Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. CoRR abs/2310.09804 (2023)
- [i26] Nikita Puchkin, Eduard Gorbunov, Nikolay Kutuzov, Alexander V. Gasnikov: Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems. CoRR abs/2311.04161 (2023)
- [i25] Nazarii Tupitsa, Abdulla Jasem Almansoori, Yanlin Wu, Martin Takác, Karthik Nandakumar, Samuel Horváth, Eduard Gorbunov: Byzantine-Tolerant Methods for Distributed Variational Inequalities. CoRR abs/2311.04611 (2023)
- [i24] Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov: Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences. CoRR abs/2311.14127 (2023)
- 2022
- [j5] Eduard Gorbunov, Pavel E. Dvurechensky, Alexander V. Gasnikov: An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization. SIAM J. Optim. 32(2): 1210-1238 (2022)
- [c17] Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel: Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity. AISTATS 2022: 366-402
- [c16] Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou: Stochastic Extragradient: General Analysis and Improved Rates. AISTATS 2022: 7865-7901
- [c15] Eduard Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin: Secure Distributed Training at Scale. ICML 2022: 7679-7739
- [c14] Peter Richtárik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, Eduard Gorbunov: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. ICML 2022: 18596-18648
- [c13] Eduard Gorbunov, Marina Danilova, David Dobre, Pavel E. Dvurechenskii, Alexander V. Gasnikov, Gauthier Gidel: Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise. NeurIPS 2022
- [c12] Eduard Gorbunov, Adrien B. Taylor, Gauthier Gidel: Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities. NeurIPS 2022
- [i23] Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard Gorbunov: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. CoRR abs/2202.00998 (2022)
- [i22] Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, Nicolas Loizou: Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. CoRR abs/2202.07262 (2022)
- [i21] Marina Danilova, Eduard Gorbunov: Distributed Methods with Absolute Compression and Error Compensation. CoRR abs/2203.02383 (2022)
- [i20] Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel: Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top. CoRR abs/2206.00529 (2022)
- [i19] Eduard Gorbunov, Marina Danilova, David Dobre, Pavel E. Dvurechensky, Alexander V. Gasnikov, Gauthier Gidel: Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise. CoRR abs/2206.01095 (2022)
- [i18] Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik: Federated Optimization Algorithms with Random Reshuffling and Gradient Compression. CoRR abs/2206.07021 (2022)
- [i17] Aleksandr Beznosikov, Boris T. Polyak, Eduard Gorbunov, Dmitry Kovalev, Alexander V. Gasnikov: Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey. CoRR abs/2208.13592 (2022)
- 2021
- [j4] Pavel E. Dvurechensky, Eduard Gorbunov, Alexander V. Gasnikov: An accelerated directional derivative method for smooth stochastic convex optimization. Eur. J. Oper. Res. 290(2): 601-621 (2021)
- [c11] Eduard Gorbunov, Filip Hanzely, Peter Richtárik: Local SGD: Unified Theory and New Efficient Methods. AISTATS 2021: 3556-3564
- [c10] Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik: MARINA: Faster Non-Convex Distributed Learning with Compression. ICML 2021: 3788-3798
- [c9] Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko: Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. NeurIPS 2021: 18195-18211
- [i16] Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik: MARINA: Faster Non-Convex Distributed Learning with Compression. CoRR abs/2102.07845 (2021)
- [i15] Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko: Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. CoRR abs/2103.03239 (2021)
- [i14] Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel E. Dvurechensky, Alexander V. Gasnikov: Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise. CoRR abs/2106.05958 (2021)
- [i13] Eduard Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin: Secure Distributed Training at Scale. CoRR abs/2106.11257 (2021)
- [i12] Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik: EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback. CoRR abs/2110.03294 (2021)
- [i11] Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel: Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity. CoRR abs/2110.04261 (2021)
- [i10] Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou: Stochastic Extragradient: General Analysis and Improved Rates. CoRR abs/2111.08611 (2021)
- [i9] Eduard Gorbunov: Distributed and Stochastic Optimization Methods with Gradient Compression and Local Steps. CoRR abs/2112.10645 (2021)
- 2020
- [j3] El Houcine Bergou, Eduard Gorbunov, Peter Richtárik: Stochastic Three Points Method for Unconstrained Smooth Minimization. SIAM J. Optim. 30(4): 2726-2749 (2020)
- [c8] Eduard Gorbunov, Filip Hanzely, Peter Richtárik: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. AISTATS 2020: 680-690
- [c7] Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtárik: A Stochastic Derivative Free Optimization Method with Momentum. ICLR 2020
- [c6] Eduard Gorbunov, Marina Danilova, Alexander V. Gasnikov: Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping. NeurIPS 2020
- [c5] Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik: Linearly Converging Error Compensated SGD. NeurIPS 2020
- [i8] Eduard Gorbunov, Marina Danilova, Alexander V. Gasnikov: Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping. CoRR abs/2005.10785 (2020)
- [i7] Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik: Linearly Converging Error Compensated SGD. CoRR abs/2010.12292 (2020)
- [i6] Eduard Gorbunov, Filip Hanzely, Peter Richtárik: Local SGD: Unified Theory and New Efficient Methods. CoRR abs/2011.02828 (2020)
- [i5] Marina Danilova, Pavel E. Dvurechensky, Alexander V. Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev: Recent Theoretical Advances in Non-Convex Optimization. CoRR abs/2012.06188 (2020)
2010 – 2019
- 2019
- [j2] Evgeniya A. Vorontsova, Alexander V. Gasnikov, Eduard A. Gorbunov: Accelerated Directional Search with Non-Euclidean Prox-Structure. Autom. Remote. Control. 80(4): 693-707 (2019)
- [j1] Evgeniya A. Vorontsova, Alexander V. Gasnikov, Eduard A. Gorbunov, Pavel E. Dvurechenskii: Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator. Autom. Remote. Control. 80(8): 1487-1501 (2019)
- [c4] Darina Dvinskikh, Eduard Gorbunov, Alexander V. Gasnikov, Pavel E. Dvurechensky, César A. Uribe: On Primal and Dual Approaches for Distributed Stochastic Convex Optimization over Networks. CDC 2019: 7435-7440
- [c3] Alexander V. Gasnikov, Pavel E. Dvurechensky, Eduard Gorbunov, Evgeniya A. Vorontsova, Daniil Selikhanovych, César A. Uribe: Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization. COLT 2019: 1374-1391
- [c2] Alexander V. Gasnikov, Pavel E. Dvurechensky, Eduard Gorbunov, Evgeniya A. Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford: Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives. COLT 2019: 1392-1393
- [i4] Konstantin Mishchenko, Eduard Gorbunov, Martin Takác, Peter Richtárik: Distributed Learning with Compressed Gradient Differences. CoRR abs/1901.09269 (2019)
- [i3] Eduard Gorbunov, Filip Hanzely, Peter Richtárik: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. CoRR abs/1905.11261 (2019)
- 2018
- [c1] Dmitry Kovalev, Peter Richtárik, Eduard Gorbunov, Elnur Gasanov: Stochastic Spectral and Conjugate Descent Methods. NeurIPS 2018: 3362-3371
- [i2] Pavel E. Dvurechensky, Alexander V. Gasnikov, Eduard Gorbunov: An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization. CoRR abs/1802.09022 (2018)
- [i1] Pavel E. Dvurechensky, Alexander V. Gasnikov, Eduard Gorbunov: An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization. CoRR abs/1804.02394 (2018)