Enayat Ullah
    firstname@meta.com

I am a Research Scientist at Meta. I am interested in various theoretical and practical aspects of Machine Learning, Optimization and Differential Privacy.

Previously, I was a Ph.D. student in the Department of Computer Science at Johns Hopkins University, advised by Raman Arora. Before that, I graduated from the Indian Institute of Technology Kanpur with a Bachelor's and Master's degree in Mathematics and Computing, with minors in Computer Science and English Literature. At IIT Kanpur, I worked with Purushottam Kar, Debasis Kundu and Prateek Jain.

I have undertaken internships and visits at Google Research, with Peter Kairouz, Sewoong Oh and Christopher Choquette-Choo; Adobe Research, with Anup Rao, Tung Mai, and Ryan Rossi; the Institute for Advanced Study, Princeton, in the Special Year on Optimization, Statistics, and Theoretical Machine Learning program; and the Simons Institute for the Theory of Computing, in the Learning and Games program.

Publications/Preprints

Public-data Assisted Private Stochastic Optimization: Power and Limitations
with Michael Menart, Raef Bassily, Cristóbal Guzmán, Raman Arora
(Under Submission)

Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates
with Michael Menart, Raman Arora, Raef Bassily, Cristóbal Guzmán
Algorithmic Learning Theory (ALT), 2024

Optimistic rates for Multi-task Representation Learning
with Austin Watkins, Thanh Nguyen-Tang, Raman Arora
Neural Information Processing Systems (NeurIPS), 2023

Private Federated Learning with Autotuned Compression
with Christopher A. Choquette-Choo, Peter Kairouz, Sewoong Oh
International Conference on Machine Learning (ICML), 2023

From Adaptive Query Release to Machine Unlearning
with Raman Arora
International Conference on Machine Learning (ICML), 2023
Workshop on Updatable Machine Learning, 2022

Faster Rates of Convergence to Stationary Points in Differentially Private Optimization
with Raman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart
International Conference on Machine Learning (ICML), 2023

Differentially Private Generalized Linear Models Revisited
with Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart
Neural Information Processing Systems (NeurIPS), 2022
Theory and Practice of Differential Privacy (TPDP), 2022

Adversarial Robustness is at Odds with Lazy Training
with Yunjuan Wang, Poorya Mianjy, Raman Arora
Neural Information Processing Systems (NeurIPS), 2022

Clustering with Approximate Nearest Neighbour Oracles
with Harry Lang, Raman Arora, Vladimir Braverman
Transactions on Machine Learning Research (TMLR), 2022

Generalization Bounds for Kernel Canonical Correlation Analysis
with Raman Arora
Transactions on Machine Learning Research (TMLR), 2022

Machine Unlearning via Algorithmic Stability
with Tung Mai, Anup Rao, Ryan Rossi, Raman Arora
Conference on Learning Theory (COLT), 2021
Foundations of Responsible Computing (FORC), 2021

FetchSGD: Communication-Efficient Federated Learning with Sketching
with Daniel Rothchild, Ashwinee Panda, Nikita Ivkin, Vladimir Braverman, Joseph Gonzalez, Ion Stoica, Raman Arora
International Conference on Machine Learning (ICML), 2020

Communication-efficient Distributed SGD with Sketching
with Nikita Ivkin, Daniel Rothchild, Vladimir Braverman, Ion Stoica, Raman Arora
Neural Information Processing Systems (NeurIPS), 2019

Improved Algorithms for Time-Decay Streams
with Vladimir Braverman, Harry Lang, Samson Zhou
International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2019

Streaming Kernel PCA with \(\tilde O(\sqrt{n})\) Random Features
with Poorya Mianjy, Teodor V Marinov, Raman Arora
Neural Information Processing Systems (NeurIPS), 2018

Service
Program Committee
Theory and Practice of Differential Privacy (TPDP), 2024
Workshop on Updatable Machine Learning, 2022
Reviewer
ICML, NeurIPS, ICLR, AISTATS
