The field of Artificial Intelligence (AI) has witnessed tremendous growth, with AI models achieving remarkable performance across diverse tasks. However, the opaque nature of these models makes it difficult to understand, verify, or trust their decisions. This research project investigates Explainable AI (XAI) techniques, aiming to shed light on how AI models arrive at their predictions. The successful candidate will explore a range of XAI approaches, spanning both model-agnostic methods and model-specific techniques. Model-agnostic methods, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), explain individual predictions while treating the model as a black box: SHAP attributes a prediction to its input features using game-theoretic Shapley values, whereas LIME fits a simple, interpretable surrogate model around the prediction of interest. Model-specific techniques, by contrast, exploit knowledge of the model's internal architecture to generate explanations, for example gradient-based saliency maps for neural networks.
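To make the model-agnostic idea concrete, here is a minimal sketch using SHAP's KernelExplainer, which requires only a prediction function and a background sample; the dataset, model, and sample sizes below are illustrative assumptions, not part of the project itself.

```python
# Minimal sketch: a model-agnostic explanation with SHAP's KernelExplainer.
# The dataset and model are illustrative; any black-box predictor works.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# KernelExplainer needs only a prediction function and a background
# sample used to estimate the expected model output.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Explain a single prediction: one additive contribution per input
# feature, obtained without inspecting the model's internals.
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```

LIME's LimeTabularExplainer can be applied to the same black-box predict_proba function in much the same way, which is precisely what makes both methods model-agnostic.

The candidate will be involved in: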