RespAI Lab

Welcome to the RespAI Lab!


KIIT Bhubaneswar, India

Welcome to our research lab, led by Dr. Murari Mandal. At RespAI Lab, we focus on advancing large language models (LLMs) by addressing challenges related to long-context processing, inference efficiency, interpretability, and alignment. Our research also explores synthetic persona creation, regulatory issues, and innovative methods for model merging, knowledge verification, and unlearning.

Motto of RespAI Lab: Driving technical breakthroughs in AI through cutting-edge research and innovation, with a focus on solving complex challenges in LLMs and other generative models, and contributing this work to top-tier conferences (ICML, ICLR, NeurIPS, AAAI, KDD, CVPR, ICCV, etc.).

“When you go to hunt, hunt for rhino. If you fail, people will say it was very difficult anyway. If you succeed, you get all the glory.”

Ongoing Research at RespAI Lab

  • Addressing Challenges in Long-Context Processing for LLMs: Investigating solutions to performance bottlenecks, memory limitations, latency issues, and information loss when dealing with extended context lengths in large language models (LLMs).

  • Optimizing LLM Inference Efficiency: Developing strategies to reduce the computational cost of LLM inference, focusing on improving speed, reducing memory usage, and leveraging smaller models for complex tasks.

  • Interpretability and Alignment of Generative AI Models: Exploring the interpretability of generative AI models, aligning their outputs with human values, and addressing the issue of hallucinations in model responses.

  • Synthetic Persona and Society Creation: Creating and studying synthetic personalities, communities, and societies within LLMs, and analyzing the behaviors and dynamics of these synthetic constructs.

  • Regulatory Challenges in LLMs: Investigating regulatory concerns surrounding LLMs, including the implementation of unlearning techniques to comply with data privacy regulations and enhance model fairness (a minimal illustrative sketch of one unlearning baseline appears after this list).

  • Model Merging and Knowledge Verification: Developing methods for merging multiple models, editing model behavior, and verifying the accuracy and consistency of the knowledge they generate.
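
To make the unlearning theme above concrete, here is a minimal, illustrative sketch (not the lab's own method) of one common unlearning baseline: gradient ascent on a designated "forget" set combined with ordinary fine-tuning on a "retain" set. The model, data, loss weighting, and hyperparameters below are all placeholder assumptions for illustration only.

```python
# Minimal illustrative sketch of an unlearning baseline (assumptions only):
# ascend the loss on a "forget" batch while descending on a "retain" batch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for an already-trained model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Placeholder forget/retain data (random tensors, for illustration only).
x_forget, y_forget = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_retain, y_retain = torch.randn(32, 16), torch.randint(0, 4, (32,))

forget_weight = 0.1  # assumed trade-off coefficient

for step in range(100):
    opt.zero_grad()
    retain_loss = loss_fn(model(x_retain), y_retain)
    forget_loss = loss_fn(model(x_forget), y_forget)
    # Minimize the retain loss, maximize the forget loss (note the sign).
    (retain_loss - forget_weight * forget_loss).backward()
    opt.step()

print(f"final retain loss: {loss_fn(model(x_retain), y_retain).item():.3f}")
print(f"final forget loss: {loss_fn(model(x_forget), y_forget).item():.3f}")
```

In practice, unlearning methods differ substantially (teacher-student distillation, contribution dampening, agentic pipelines, etc.); this sketch only illustrates the forget/retain trade-off that such methods navigate.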

Recent News

Mar 20, 2025 Preprint and source code of “Guardians of Generation: Dynamic Inference-Time Copyright Shielding with Adaptive Guidance for AI Image Generation” are available!
Mar 17, 2025 RespAI Lab is offering “Introduction to Large Language Models” at KIIT Bhubaneswar in Spring 2025. Course website: https://respailab.github.io/llm-101.respailab.github.io
Feb 07, 2025 Preprint of “ReviewEval: An Evaluation Framework for AI-Generated Reviews” is available on arXiv.
Jan 20, 2025 Preprint of “ALU: Agentic LLM Unlearning” is available on arXiv.
Dec 17, 2024 Paper accepted in the main track of AAAI-2025, Philadelphia, Pennsylvania, USA [Acceptance Rate: 23.4%]. Congratulations Yash!
Oct 25, 2024 Preprint of “UnStar: Unlearning with Self-Taught Anti-Sample Reasoning for LLMs” is available on arXiv.
Oct 16, 2024 Vikram received PhD admission to the University of Cambridge.
Oct 10, 2024 Preprint of “ConDa: Fast Federated Unlearning with Contribution Dampening” is available on arXiv.
Sep 16, 2024 Ayush joins EPFL, Switzerland as a PhD candidate.
Sep 11, 2024 Preprint of “Unlearning or Concealment? A Critical Analysis and Evaluation Metrics for Unlearning in Diffusion Models” is available on arXiv.

Selected Publications

  1. Deep Regression Unlearning
     Ayush Kumar Tarun, Vikram Singh Chundawat, Murari Mandal, and 1 more author
     In Proceedings of the 40th International Conference on Machine Learning, 23–29 Jul 2023
  2. EcoVal: An Efficient Data Valuation Framework for Machine Learning
     Ayush K Tarun, Vikram S Chundawat, Murari Mandal, and 3 more authors
     23–29 Jul 2024
  3. Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks Using an Incompetent Teacher
     Vikram S Chundawat, Ayush K Tarun, Murari Mandal, and 1 more author
     In Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023