Parthe Pandit

[Photo: Montecito, 2022]

Thakur Family Chair Assistant Professor

Center for Machine Intelligence and Data Science (C-MInDS)

Indian Institute of Technology, Bombay

email: pandit@iitb.ac.in

office: Kanwal Rekhi building (SIA-420)

contact: +91 22 2159 (IIT ext. 3776)

About Me

I am a core faculty member at C-MInDS, IIT Bombay.

My current research focuses on understanding the generalization behavior of interpolating machine learning models such as neural networks and kernel methods (Reproducing Kernel Hilbert Spaces, RKHS). I also develop algorithms for training kernel models for large-scale applications.

Prior to joining IIT Bombay, I was a Simons Postdoctoral Fellow at the Halıcıoğlu Data Science Institute (HDSI) at UC San Diego, where I primarily worked with Misha Belkin. Before that, I obtained a PhD in ECE and an MS in Statistics from UCLA, and a B.Tech + M.Tech in EE from IIT Bombay, with a minor in CS.

News

  • [Mar 2024] I have been awarded the INSPIRE Faculty Fellowship by DST.

  • [Mar 2024] Science published our paper on the mechanism of feature learning in neural networks.

  • [Jan 2024] Paper accepted at AISTATS 2024 titled 'On the Nyström Approximation for Preconditioning in Kernel Machines.'

  • [Nov 2023] Zhichao Wang presented our contributed talk at DeepMath 2023 based on our work 'Quadratic Approximation of Random Inner-Product Kernel Matrices.'

  • [Nov 2023] I am now with C-MInDS at IIT Bombay.

  • [Apr 2023] Paper accepted at ICML 2023 titled 'Toward large kernel models.'

  • [Apr 2023] Paper accepted at SIMODS journal titled 'On the inconsistency of kernel ridgeless regression in fixed dimensions.'

  • [Feb 2023] We have a new training algorithm for large kernel models with a linear memory footprint! See arXiv:2302.02605

  • [Dec 2022] New paper on feature-learning kernel machines that beat neural networks! See arXiv:2212.13881

  • [Sept 2022] Two papers accepted at NeurIPS! See arXiv:2208.09938 and arXiv:2207.06569

  • [Aug 2022] I gave talks at TIFR, IISc, Google AI, IIT Bombay, IISER Pune, and IIT Kanpur on the performance analysis of learning GLMs in high dimensions.

  • [July 2022] I am visiting the Simons Institute at UC Berkeley as part of the summer cluster on Deep Learning Theory.

  • [June 2022] Libin Zhu, Misha Belkin, and I wrote a note on the bilinear nature of bottleneck networks. See arXiv:2206.15058

  • [May 2022] Daniel Beaglehole, Misha Belkin, and I submitted a paper on the inconsistency of kernel interpolation in low dimensions. See arXiv:2205.13525