Aditya Somasundaram

as7458 [at] columbia [dot] edu

About


I'm Aditya Somasundaram, a graduate student at Columbia University. I'm interested in learning algorithms, building real intelligence, and making it affordable and accessible to the world. Someday, I hope to create Jarvis. I am currently advised by Prof. Micah Goldblum. I'm also an intern on Numenta's ML team, where I work on proprietary algorithms to make training LLMs both faster and cheaper.


Projects & Publications


Latest News

🎉 My research showing how to pretrain Large Language Models (LLMs) with small batch sizes and simple SGD (without momentum) has been accepted as a poster at NeurIPS 2025! Find our manuscript here. A huge shout-out to my mentor Prof. Micah Goldblum and my collaborators at the Wilson Lab at NYU Courant.



Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful

Martin Marek, Sanae Lotfi, Aditya Somasundaram, Andrew Gordon Wilson, Micah Goldblum

Advances in Neural Information Processing Systems (NeurIPS), 2025
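To illustrate the setting this paper studies, here is a minimal sketch (my own toy illustration, not the paper's code or experimental setup): vanilla SGD with no momentum buffer and no gradient accumulation, updating after every single example (batch size 1) on a small least-squares problem.

```python
# Toy illustration of the small-batch regime: plain SGD, no momentum,
# no gradient accumulation -- one parameter update per example.

def sgd_step(w, grad, lr):
    # Vanilla SGD update: w <- w - lr * grad (no momentum state kept).
    return [wi - lr * gi for wi, gi in zip(w, grad)]

def grad_single(w, x, y):
    # Gradient of 0.5 * (w . x - y)^2 for a single example (batch size 1).
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [err * xi for xi in x]

def train(data, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:  # batch size 1: update immediately per example
            w = sgd_step(w, grad_single(w, x, y), lr)
    return w

# Toy data generated from the true weights (2, -1).
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
w = train(data)  # converges close to (2, -1)
```

The point of the sketch is only the update rule: each example triggers its own weight update, with no accumulated gradient buffer and no velocity term.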

Writing


This website was built with ❤️