I am a first-year Master's student in the Institute for Computational and Mathematical Engineering (ICME) at Stanford University. Previously, I was a Research Fellow at Microsoft Research India, where my research focused on advancing the capabilities of Large Language Models (LLMs), particularly in the context of code understanding and generation. I worked with Dr. Aditya Kanade, Dr. Nagarajan Natarajan, and Dr. Abhijeet Awasthi in the AI4CODE team.
Broadly, my research interests span machine learning and its applications, with a focus on reinforcement learning, graph machine learning, and foundation models. I am interested in developing principled methods that improve model efficiency, reliability, and interpretability, as well as in applying ML techniques to real-world systems and decision-making problems. I am particularly drawn to work that combines theoretical insights with practical impact.
I earned my B.Tech. in Mathematics and Computing from IIT Goa, India, in 2023. For more details about my background, see my CV. If you'd like to discuss my work or research interests, feel free to get in touch.
Experience
-- Created a benchmark to evaluate the abilities of LLMs in generating functionally correct and optimized code.
-- Fine-tuned LLMs for code editing, achieving up to a 15–20% improvement, and built a pipeline for synthetic data generation using GPT-4 and Llama; presented this work at ICLR 2025 in Singapore.
-- Worked on improving the reasoning of multimodal LLMs.
-- Automated the process of optimally allocating drivers to Metro trains by formulating the scheduling constraints in the Gurobi (gurobipy) solver.
-- Restructured the problem as a max-flow formulation, reducing timetable preparation time from a few days to a few seconds.
-- Deployed the algorithm in Bengaluru Metro Rail Corporation Limited (BMRCL).
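The max-flow reformulation above can be illustrated with a minimal sketch: drivers and trains form a bipartite graph with unit capacities, and a maximum flow from source to sink yields a maximum assignment. This is a toy instance with hypothetical names, not BMRCL's actual model or constraints.

```python
from collections import deque

def max_bipartite_assignment(drivers, trains, can_drive):
    """Assign drivers to trains via max flow (Edmonds-Karp) on a
    source -> drivers -> trains -> sink network with unit capacities."""
    # Node numbering: 0 = source, 1..D = drivers, D+1..D+T = trains, last = sink.
    D, T = len(drivers), len(trains)
    S, K = 0, D + T + 1
    cap = [[0] * (K + 1) for _ in range(K + 1)]
    for i in range(D):
        cap[S][1 + i] = 1            # each driver covers at most one train
    for j in range(T):
        cap[1 + D + j][K] = 1        # each train needs at most one driver
    for i, j in can_drive:           # (driver index, train index) compatibility
        cap[1 + i][1 + D + j] = 1

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * (K + 1)
        parent[S] = S
        q = deque([S])
        while q and parent[K] == -1:
            u = q.popleft()
            for v in range(K + 1):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[K] == -1:
            break                    # no augmenting path: flow is maximum
        # All capacities are 1, so each augmenting path adds one assignment.
        v = K
        while v != S:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

    # Recover the assignment: a saturated driver->train edge leaves
    # residual capacity on its reverse edge.
    used = set(can_drive)
    assignment = {drivers[i]: trains[j]
                  for i in range(D) for j in range(T)
                  if (i, j) in used and cap[1 + D + j][1 + i] > 0}
    return flow, assignment

drivers = ["d1", "d2", "d3"]
trains = ["t1", "t2", "t3"]
can_drive = [(0, 0), (0, 1), (1, 1), (2, 2)]
flow, assignment = max_bipartite_assignment(drivers, trains, can_drive)
print(flow)        # 3
print(assignment)  # {'d1': 't1', 'd2': 't2', 'd3': 't3'}
```

In practice a production scheduler adds many more constraints (shift lengths, breaks, depot locations), which is where an integer-programming solver such as Gurobi complements the pure max-flow view.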
-- Contributed to the backend of the Questa compiler in C; optimized coverage calculations to achieve a 3x performance improvement.
-- Designed 50+ test cases, identifying and resolving 10+ JIRA issues and significantly improving system reliability.
NoFunEval: Funny How Code LMs Falter on Requirements Beyond Functional Correctness
Manav Singhal, , Abhijeet Awasthi, Nagarajan Natarajan, Aditya Kanade
COLM'24 PDF
Robust Learning of Diverse Code Edits
, Swayam Singh, Abhijeet Awasthi, Aditya Kanade, Nagarajan Natarajan
DL4C @ ICLR'25, ICML'25 PDF
Language Models' Factuality Depends on the Language of Inquiry
, Kumar Tanmay, Ayush Agrawal, Kumar Ayush, Hamid Palangi, Paul Pu Liang
arXiv Preprint PDF
PASS: Presentation Automation for Slide Generation and Speech
, Aarohi Bhand
arXiv Preprint PDF