Interpretability in machine learning is important because it helps build trust when machines are relied on for critical tasks. Last week, DSI's Director of Graduate Studies, Andras Zsom, traveled to Boston for the Open Data Science Conference to lead a workshop on interpretability. The annual conference offers lectures, training, and networking opportunities in data science. Seventeen data science master's students from Brown also attended, most of whom volunteered at the conference.
In the introductory workshop, Professor Zsom explained that it is often not enough for a machine learning model to provide predictions alone. When a model impacts human lives, it is crucial to understand how and why its predictions were generated. For example, if a model predicts that a patient has cancer, the doctor needs to be able to explain to the patient why the model arrived at that prediction. He also reviewed various tools and methods for generating model explanations and discussed how they can improve trust in machine learning.
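The article does not name the specific tools covered in the workshop, but one widely used explanation technique is permutation feature importance: shuffle one feature at a time and measure how much the model's performance degrades. A minimal sketch using scikit-learn, on a cancer-diagnosis dataset echoing the example above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a classifier on a breast-cancer diagnosis dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy:
# features whose shuffling hurts the model most are the ones it
# relies on, which is one way to explain its predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Unlike importance scores baked into a specific model class, this approach treats the model as a black box, so the same explanation method applies to any fitted estimator.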