We propose techniques to systematically resolve UnderEdit and OverEdit issues in model editing, improving both precision and generalization.
We investigate biases in human- versus AI-generated student summaries, proposing fairness metrics and improving reflection-generation systems.
We propose methods for fair interpretation of memes by jointly modeling image and text, focusing on bias mitigation across sensitive attributes.
We propose intent-focused semantic parsing and zero-shot out-of-domain detection strategies to enhance the robustness of spoken language understanding systems.
We introduce a smart stacking approach for intent-slot extraction in multi-intent spoken language understanding tasks, improving extraction granularity.
A modular and extensible LoRA fine-tuning framework for question-answering tasks with PEFT integration. Demonstrates parameter-efficient training with configurable LoRA parameters and structured evaluation metrics.
Tools: PyTorch, Transformers, PEFT, LoRA, Datasets, Pandas
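The core idea behind the framework's parameter-efficient training can be illustrated without the PEFT library itself: a frozen pretrained weight is augmented with a trainable low-rank update B·A. The sketch below is a minimal NumPy illustration of that idea (the layer sizes and rank are hypothetical, not the framework's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 8  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, zero-init so the update starts at 0

def forward(x):
    # LoRA forward pass: frozen projection plus the low-rank correction B @ A
    return x @ W.T + x @ (B @ A).T

# Only A and B are trained: r * (d_in + d_out) parameters
# instead of d_in * d_out for full fine-tuning.
trainable, frozen = A.size + B.size, W.size
print(trainable, frozen, trainable / frozen)
```

With rank 8 on a 512×512 layer, the trainable parameter count drops to about 3% of the full weight; PEFT's `LoraConfig` exposes the same rank/target-module knobs the framework makes configurable.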
Designed modular prompting strategies that let the agent reason over multi-step flight-search actions from dynamic browser observations and user goals, strengthening its temporal and spatial reasoning.
Tools: BrowserGym, Gradio, OpenAI GPT-4o, PyTorch
Built a pipeline to generate and visualize concept maps from Wikipedia by extracting entities and semantic relations using entity linking, word embeddings, and syntactic parsing.
Tools: PySpotlight, FastText, Stanford CoreNLP
Built a Streamlit web application for AI-powered text completion using Meta's Llama-3.2-1B model with automatic GPU/CPU detection and intuitive interface.
Tools: Streamlit, PyTorch, Transformers, Meta Llama-3.2-1B
Developed a web application that fetches real-time tweets based on user queries and classifies their sentiment (positive, negative, neutral) using a Naive Bayes classifier.
Tools: Tweepy, NumPy, Scikit-learn, Flask
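The classification step of this project can be sketched with scikit-learn's `MultinomialNB` on a tiny hypothetical training set (the real system fetches live tweets via Tweepy; the texts and labels below are illustrative only):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tweets standing in for the real training data
texts = [
    "I love this phone, great battery",
    "amazing service, very happy",
    "what a wonderful day",
    "terrible experience, very disappointed",
    "I hate the new update",
    "awful quality, waste of money",
    "the package arrived today",
    "meeting moved to 3pm",
]
labels = ["positive"] * 3 + ["negative"] * 3 + ["neutral"] * 2

# Bag-of-words features feeding a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["love the great battery life"]))
```

In the deployed app, the same pipeline sits behind a Flask endpoint that runs `model.predict` on tweets returned by the user's query.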
PhD in Computer Science
GPA: 3.5/5
August 2023 – April 2027
M.Tech in Computer Science
GPA: 8.82/10
July 2017 – May 2019
B.Tech in Computer Science
GPA: 6.82/10
June 2013 – June 2017
Graduate Research Assistant
August 2024 – Present
PA, USA
Lead NLP Engineer
June 2019 – August 2023
Bangalore, India
Machine Learning Intern
May 2018 – July 2018
Bangalore, India