Host Abbhinav (123 of AI) and co-host Debayan (Microsoft) are joined by AI researcher Kritika Prakash, now pursuing her Ph.D. at the University of Chicago.
The discussion revolves around Kritika's academic journey before and at IIIT-Hyderabad, differential privacy in AI, and her thoughts on the need for regulation to ensure a machine learning model's trustworthiness. We discuss privacy issues in machine learning, including the challenges of protecting textual data and the impact of large language models on a user's privacy.
“I did have exposure... where I can appreciate... a lot of South Indian cultures.”
“I found electronics too hard... shifted to computer science.”
“Repeating an extra year: It felt like I am not really losing time... just learning more.”
“Talking to him [father]... understanding his perspective on things... He urged me... go for this risky thing.”
“I got to do computer science, but for my first 3 semesters I was just so bored.”
“Discovered that there is this element of strategy & games that you can work with.”
“Explored various research areas before delving into differential privacy.”
“You might not have anything to hide. But you do have something to protect.”
“Adding noise during training doesn't just reduce your accuracy; it actually helps improve generalization.” (sketched in the first example below)
“The smaller the epsilon value, the tighter or stronger the [differential] privacy.” (see the second sketch below)
“Real-world applications include healthcare research... where preserving individual [data] privacy is crucial.”
“Differential Privacy is not going to cover all kinds of cases; it's what we are looking at because this is the worst-case guarantee.”
“The text itself doesn't have clear distinctions between people's data and the internet's data; it's a huge mess just because that's how the text domain is.”
“Privacy will always be important no matter where the machine learning field is headed.”
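The noise-during-training quote refers to differentially private training in the spirit of DP-SGD: clip each example's gradient so no single record can dominate, then add Gaussian noise before the update. Here is a minimal NumPy sketch; the squared-error loss, clipping norm, and noise multiplier are illustrative assumptions, not values from the episode.

```python
# Minimal sketch of noisy training in the spirit of DP-SGD:
# per-example gradient clipping + Gaussian noise on the summed gradient.
import numpy as np

rng = np.random.default_rng(0)

def noisy_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One gradient step with per-example clipping and Gaussian noise."""
    grads = []
    for xi, yi in zip(X, y):
        g = 2 * (xi @ w - yi) * xi          # per-example gradient of squared error
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / clip_norm)  # bound each example's influence
        grads.append(g)
    g_sum = np.sum(grads, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (g_sum + noise) / len(X)

# Toy data: the noise limits how much the model can memorize any one example,
# which is also why it can act as a regularizer.
X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=32)
w = np.zeros(3)
for _ in range(200):
    w = noisy_sgd_step(w, X, y)
print(w)
```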
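The epsilon quote is easiest to see with the classic Laplace mechanism: the noise scale is sensitivity/epsilon, so a smaller epsilon means more noise and a stronger privacy guarantee. A toy sketch, assuming a simple count query with sensitivity 1:

```python
# Laplace mechanism: release a count with noise of scale sensitivity/epsilon.
# Smaller epsilon -> larger noise scale -> stronger privacy, as the quote notes.
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(data, epsilon, sensitivity=1.0):
    """Differentially private count; adding or removing one record
    changes the true count by at most `sensitivity`."""
    true_count = len(data)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

records = list(range(100))  # 100 individuals (toy data)
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_count(records, eps))
# eps=0.1 gives very noisy answers (strong privacy); eps=10.0 stays close to 100.
```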
123 of AI’s official website: https://www.123ofai.com/contact