Elnur Gasanov

I am a second-year PhD student in Optimization and Machine Learning at the Visual Computing Center (VCC) at King Abdullah University of Science and Technology, advised by Professor Peter Richtárik. My research interests include Stochastic Optimization, Machine Learning, and Randomized Linear Algebra.

I earned my bachelor's degree in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology.

Email  /  CV  /  LinkedIn  /  GitHub

profile photo
News

Talks and poster presentations

  • In November 2019, at the KAUST-Tsinghua-Industry workshop, I presented a poster based on our NeurIPS 2018 paper (poster here).
  • In June 2019, at DS3, I presented our poster "A New Randomized Method for Solving Large Linear Systems".
  • In June 2019, at LJK, I gave a talk on our new asynchronous delay-tolerant distributed algorithm.
  • In February 2018, at the Optimization and Big Data workshop, our team presented a poster on randomized linear algebra (poster here).

Publications

From Local SGD to Local Fixed-Point Methods for Federated Learning
Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms. In this work we consider the generic problem of finding a fixed point of an average of operators, or an approximation thereof, in a distributed setting. Our work is motivated by the needs of federated learning. In this context, each local operator models the computations done locally on a mobile device. We investigate two strategies to achieve such a consensus: one based on a fixed number of local steps, and the other based on randomized computations. In both cases, the goal is to limit communication of the locally-computed variables, which is often the bottleneck in distributed frameworks. We perform convergence analysis of both methods and conduct a number of experiments highlighting the benefits of our approach.
Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik
ICML 2020
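
A minimal NumPy sketch of the local-steps idea behind this paper, on toy affine contractive operators of my own choosing (the operators, step counts, and dimensions are illustrative assumptions, not the paper's setting): each device applies its own operator several times locally, then the server averages the iterates, so communication happens only once per round.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_devices, local_steps, rounds = 5, 4, 10, 50

# Toy local operators T_i(x) = A_i x + b_i, each a contraction,
# standing in for the computations done on each device.
ops = []
for _ in range(n_devices):
    M = rng.standard_normal((d, d))
    A = 0.4 * M / np.linalg.norm(M, 2)   # spectral norm <= 0.4
    b = rng.standard_normal(d)
    ops.append((A, b))

# Fixed point of the average operator, solved directly for reference.
A_bar = np.mean([A for A, _ in ops], axis=0)
b_bar = np.mean([b for _, b in ops], axis=0)
x_star = np.linalg.solve(np.eye(d) - A_bar, b_bar)

x = np.zeros(d)
for _ in range(rounds):
    local_iterates = []
    for A, b in ops:
        y = x.copy()
        for _ in range(local_steps):     # local iterations, no communication
            y = A @ y + b
        local_iterates.append(y)
    x = np.mean(local_iterates, axis=0)  # one communication: average the iterates

# With more than one local step the limit point need not coincide exactly
# with the fixed point of the average operator, which is part of what the
# paper analyzes; this just shows the communication pattern.
print("distance to fixed point of average operator:", np.linalg.norm(x - x_star))
```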

Stochastic Spectral and Conjugate Descent Methods
The state-of-the-art methods for solving optimization problems in big dimensions are variants of randomized coordinate descent (RCD). In this paper we introduce a fundamentally new type of acceleration strategy for RCD based on the augmentation of the set of coordinate directions by a few spectral or conjugate directions. As we increase the number of extra directions to be sampled from, the rate of the method improves, and interpolates between the linear rate of RCD and a linear rate independent of the condition number. We also develop and analyze inexact variants of these methods where the spectral and conjugate directions are allowed to be approximate only. We motivate the above development by proving several negative results which highlight the limitations of RCD with importance sampling.
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
NeurIPS 2018
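
A toy sketch of the augmentation idea on a random quadratic, not the paper's algorithm verbatim (the choice of which eigenvectors to add, the uniform sampling, and the problem sizes are my own simplifications): the direction set contains all coordinate vectors plus a few eigenvectors of the system matrix, and each step exactly minimizes the objective along the sampled direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_spectral, iters = 50, 5, 2000

# Random positive-definite quadratic f(x) = 0.5 x^T A x - b^T x.
Q = rng.standard_normal((n, n))
A = Q @ Q.T + 1e-2 * np.eye(n)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

# Direction set: coordinate vectors plus a few eigenvectors of A
# (the "spectral" directions; here those with the smallest eigenvalues).
eigvals, eigvecs = np.linalg.eigh(A)
directions = list(np.eye(n)) + list(eigvecs[:, :n_spectral].T)

x = np.zeros(n)
for _ in range(iters):
    d = directions[rng.integers(len(directions))]  # sample a direction uniformly
    g = A @ x - b                                   # gradient of f at x
    x = x - (d @ g) / (d @ A @ d) * d               # exact line search along d

print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))
```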

Creation of approximating scalogram description in a problem of movement prediction [in Russian]
The paper addresses the problem of thumb movement prediction from electrocorticographic (ECoG) activity. The task is to predict thumb positions from the voltage time series of cortical activity. Scalograms, generated by spatio-spectro-temporal integration of the voltage time series across multiple cortical areas, are used as input features to this regression problem. To reduce the dimension of the feature space, local approximation is used: every scalogram is approximated by a parametric model. The predictions are obtained with partial least squares regression applied to the local approximation parameters. Local approximation of scalograms does not significantly lower the quality of prediction while efficiently reducing the dimension of the feature space.
Elnur Gasanov, Anastasia Motrenko
Journal of Machine Learning and Data Analysis (in Russian)
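
A rough sketch of the pipeline's shape on synthetic data (the Gaussian-bump "scalograms", the polynomial surface as the parametric model, and all sizes are placeholder assumptions; the paper works with real ECoG-derived scalograms and its own approximation model): each time-frequency map is compressed to a handful of fitted coefficients, and partial least squares regression maps those coefficients to the target.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_freq, n_time, degree = 200, 16, 32, 2

# Synthetic stand-ins for scalograms (time-frequency maps); in the paper
# these come from wavelet transforms of ECoG voltage time series.
F, T = np.meshgrid(np.linspace(0, 1, n_freq), np.linspace(0, 1, n_time), indexing="ij")
targets = rng.uniform(-1, 1, n_samples)      # stand-in thumb positions
scalograms = np.stack([
    np.exp(-((F - 0.5 * (y + 1)) ** 2 + (T - 0.5) ** 2) / 0.05)
    + 0.1 * rng.standard_normal((n_freq, n_time))
    for y in targets
])

# Local approximation: fit each scalogram by a low-degree polynomial surface
# and keep only the fitted coefficients as features.
basis = np.stack([F.ravel() ** i * T.ravel() ** j
                  for i in range(degree + 1) for j in range(degree + 1)], axis=1)

def approx_params(S):
    coef, *_ = np.linalg.lstsq(basis, S.ravel(), rcond=None)
    return coef

X = np.array([approx_params(S) for S in scalograms])   # low-dimensional features

# Partial least squares regression on the approximation parameters.
pls = PLSRegression(n_components=3).fit(X[:150], targets[:150])
pred = pls.predict(X[150:]).ravel()
print("test correlation:", np.corrcoef(pred, targets[150:])[0, 1])
```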

This guy makes a cool webpage.