Delving into the intersection of Artificial Intelligence, Machine Learning, and Computer Vision to create fair, reliable, and robust solutions for real-world challenges.
Bias mitigation is a crucial area of research aimed at making computer vision algorithms more equitable and fair. As AI systems are increasingly deployed in real-world applications, it becomes essential to identify and reduce biases that are inherent in datasets or arise during model training. These biases can lead to performance discrepancies across demographic groups, such as gender, race, or age, potentially causing harm or reinforcing existing inequalities. My work focuses on developing novel methods to identify, measure, and mitigate these biases in computer vision models. By ensuring that AI systems perform fairly and consistently for all groups, this research contributes to more trustworthy and inclusive AI technologies, promoting equity and reducing the risk of unintended consequences in deployment.
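As a small illustration of what "measuring bias" can mean in practice, the sketch below computes the demographic parity gap, the difference in positive-prediction rates between demographic groups. It is a standard fairness metric, not a method specific to my work, and the data, group labels, and function name are purely hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Absolute gap in positive-prediction rates across groups.

    A gap of 0 means every group receives positive predictions at the
    same rate; larger values indicate more disparate treatment.
    """
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical binary predictions for 8 samples from two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Group A positive rate is 0.75, group B is 0.25, so the gap is 0.5.
print(demographic_parity_gap(y_pred, groups))
```

Metrics like this make disparities quantifiable, which is the first step before any mitigation strategy can be evaluated.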
Deep learning models often face challenges when exposed to data that differs from what they were trained on; identifying such inputs is the task of Out-of-Distribution (OOD) detection. To address this, my research focuses on enhancing the ability of these models to identify and respond to distributional shifts in a reliable manner. In real-world applications, AI systems may encounter novel or unexpected inputs that can lead to uncertainty or errors in decision-making. By developing advanced techniques for detecting such shifts, this work helps ensure that models remain robust and can confidently navigate unfamiliar data. The goal is to improve the overall safety and reliability of AI systems, equipping them to perform effectively in diverse and unpredictable environments.
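A classic, lightweight baseline for OOD detection is the maximum softmax probability (MSP) score: if even the model's most likely class receives low probability, the input is flagged as likely out-of-distribution. The sketch below illustrates the idea with hypothetical logits and an illustrative threshold; it is a well-known baseline, not my proposed method.

```python
import torch
import torch.nn.functional as F

def msp_ood_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability (MSP) score for OOD detection.

    Returns the probability of the most likely class per input; a low
    score means the model is uncertain about all classes.
    """
    return F.softmax(logits, dim=-1).max(dim=-1).values

# Hypothetical logits: the first row is a confident in-distribution
# prediction, the second a flat, uncertain one.
logits = torch.tensor([[6.0, 0.5, 0.2],
                       [1.1, 1.0, 0.9]])
scores = msp_ood_score(logits)

threshold = 0.7  # in practice, tuned on held-out in-distribution data
print(scores < threshold)  # tensor([False, True]): second input flagged OOD
```

More advanced detectors refine this idea, for example by scoring in feature space rather than on softmax outputs, but the thresholding structure stays the same.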
How can we trust AI models when their predictions directly influence critical decisions? My research in uncertainty quantification aims to address this by developing techniques that assess model robustness through predictive confidence intervals. By quantifying uncertainty, we can better understand when a model’s predictions are reliable and when they should be treated with caution. This work is particularly important in high-stakes environments, where informed decision-making is crucial to ensure safety and positive outcomes. Through this research, I aim to enhance the transparency and trustworthiness of AI systems, making them more dependable in uncertain or unpredictable situations.
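As one concrete way to obtain predictive confidence intervals, the sketch below applies split conformal prediction, which wraps any point predictor in intervals with a target coverage level. The calibration residuals, test predictions, and helper name are hypothetical, shown only to illustrate the mechanism.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Split conformal prediction: turn point predictions into intervals.

    residuals_cal: |y - y_hat| on a held-out calibration set.
    Returns intervals with roughly (1 - alpha) marginal coverage,
    assuming calibration and test data are exchangeable.
    """
    n = len(residuals_cal)
    # Quantile level adjusted for the finite calibration set.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals_cal, q_level)
    return y_pred_test - q, y_pred_test + q

# Hypothetical calibration residuals and test-time point predictions.
rng = np.random.default_rng(0)
residuals_cal = np.abs(rng.normal(0.0, 1.0, size=500))
y_pred_test = np.array([2.3, 5.1, -0.4])

lo, hi = split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1)
print(np.stack([lo, hi], axis=1))  # 90% predictive intervals
```

The appeal of this family of methods is that the coverage guarantee holds regardless of the underlying model, which is exactly the kind of distribution-free reliability high-stakes settings call for.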