We propose techniques to systematically resolve UnderEdit (edits that fail to take effect) and OverEdit (edits that unintentionally alter unrelated knowledge) in model editing, improving both precision and generalization.
We investigate biases in human-written versus AI-generated student summaries, proposing fairness metrics and improving reflection-generation systems.
We propose methods for fair interpretation of memes by jointly modeling image and text, focusing on bias mitigation across sensitive attributes.
We propose intent-focused semantic parsing and zero-shot out-of-domain detection strategies to enhance the robustness of spoken language understanding systems; a minimal illustration of zero-shot out-of-domain detection appears after this list.
We introduce a smart stacking approach for intent-slot extraction in multi-intent spoken language understanding tasks, improving extraction granularity.
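To make the zero-shot out-of-domain detection mentioned in the spoken language understanding summary above concrete, the sketch below shows a generic embedding-similarity baseline: an utterance is compared against embeddings of known intent descriptions and flagged as out-of-domain when no intent is similar enough. This is an illustrative assumption, not the proposed method; the `embed` placeholder, the intent descriptions, and the threshold value are all hypothetical.

```python
# Generic zero-shot out-of-domain (OOD) intent detection by thresholded
# similarity to intent-description embeddings. Illustrative only.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder encoder: deterministic pseudo-embedding per text.
    In practice, replace with a real sentence embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(256)


# Hypothetical intent inventory with natural-language descriptions,
# allowing new intents to be added without retraining (zero-shot).
INTENT_DESCRIPTIONS = {
    "book_flight": "reserve an airplane ticket between two cities",
    "play_music": "start playback of a song or playlist",
    "set_alarm": "schedule an alarm for a given time",
}


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def detect_intent(utterance: str, threshold: float = 0.35) -> str:
    """Return the closest known intent, or 'out_of_domain' when no intent
    description is similar enough to the utterance embedding."""
    query = embed(utterance)
    scores = {
        name: cosine(query, embed(desc))
        for name, desc in INTENT_DESCRIPTIONS.items()
    }
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_intent if best_score >= threshold else "out_of_domain"


if __name__ == "__main__":
    print(detect_intent("wake me up at 7 tomorrow"))
```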