Enayat Ullah  
                  
                      firstname@meta.com 
          
I am a Research Scientist at Meta. I am interested in various theoretical and practical aspects of Machine Learning, Optimization, and Differential Privacy.
Previously, I was a Ph.D. student in the Department of Computer Science at Johns Hopkins University, advised by Raman Arora. Before that, I graduated from the Indian Institute of Technology Kanpur with Bachelor's and Master's degrees in Mathematics and Computing, with minors in Computer Science and English Literature.
At IIT Kanpur, I worked with Purushottam Kar, Debasis Kundu, and Prateek Jain.
I have undertaken internships and visits at Google Research, with Peter Kairouz, Sewoong Oh, and Christopher Choquette-Choo; Adobe Research, with Anup Rao, Tung Mai, and Ryan Rossi; the Institute for Advanced Study, Princeton, in the Special Year on Optimization, Statistics, and Theoretical Machine Learning program; and the Simons Institute for the Theory of Computing, in the Learning and Games program.
                Publications/Preprints 
Public-data Assisted Private Stochastic Optimization: Power and Limitations  
 (Under Submission) 
 
 
Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates  
Algorithmic Learning Theory (ALT), 2024 
 
 
Optimistic rates for Multi-task Representation Learning  
Neural Information Processing Systems (NeurIPS), 2023 
 
 
Private Federated Learning with Autotuned Compression  
International Conference on Machine Learning (ICML), 2023
 
 
From Adaptive Query Release to Machine Unlearning  
International Conference on Machine Learning (ICML), 2023
Workshop on Updatable Machine Learning, 2022
 
 
Faster Rates of Convergence to Stationary Points in Differentially Private Optimization  
International Conference on Machine Learning (ICML), 2023
 
 
Differentially Private Generalized Linear Models Revisited  
Neural Information Processing Systems (NeurIPS), 2022  
Theory and Practice of Differential Privacy (TPDP), 2022  
 
 
Adversarial Robustness is at Odds with Lazy Training
Neural Information Processing Systems (NeurIPS), 2022 
 
 
Clustering with Approximate Nearest Neighbour Oracles  
Transactions on Machine Learning Research (TMLR), 2022
 
 
Generalization Bounds for Kernel Canonical Correlation Analysis  
Transactions on Machine Learning Research (TMLR), 2022
 
 
Machine Unlearning via Algorithmic Stability  
Conference on Learning Theory (COLT), 2021,
 Foundations of Responsible Computing (FORC), 2021 
 
 
 
FetchSGD: Communication-efficient Federated Learning with Sketching
International Conference on Machine Learning (ICML), 2020
 
 
Communication-efficient Distributed SGD with Sketching  
Neural Information Processing Systems (NeurIPS), 2019 
 
 
Improved Algorithms for Time-Decay Streams  
  International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2019 
 
 
Streaming Kernel PCA with \(\tilde O(\sqrt{n})\) Random Features  
Neural Information Processing Systems (NeurIPS), 2018