The Machine Learning (ML) community has produced incredible models for making highly accurate predictions and classifications across many fields. However, the adoption of these models in settings outside of the ML community faces a key barrier: interpretability. If users cannot understand why a decision has been made by a machine, they cannot reasonably be expected to act on that decision with full confidence, particularly in a high-stakes environment such as medicine. Therefore, making the decisions of "Black-Box" models more transparent is of vital importance.

This GitHub repository aims to act as a home for interpretability methods, where the state-of-the-art models can be found for every application. All the linked van der Schaar Lab repositories on this page are pytorch compatible. Pytorch versions of the other methods are available in public libraries, such as Captum.

This video is a quick introduction to our Interpretability Suite. It discusses why ML interpretability is so important and shows the array of different methods developed by the van der Schaar Lab.

The Interpretability Suite provides a common python interface for the following interpretability methods: SimplEx, Dynamask, shap, and Symbolic Pursuit. Each of these methods is also included in our user interface.

Figure 1: The Interpretability Suite User Interface Landing page

This user interface not only demonstrates the methods and how they are used on example datasets, but it also gives the user the ability to upload their own explainer to visualize the results. This means that you can save your explainer and give the file to a less python-literate colleague, and they can see the results for themselves simply by drag-and-dropping it into the Upload your own Explainer tab. To guarantee compatibility with the app, please create your explainers using the interpretability_suite_app branch.

Figure 2: An example of the Upload your own Explainer tab on the user interface, from the SimplEx page

Explainers

Different model architectures can require different interpretability models, or "Explainers". Below are all the explainers included in this repository, with links to their source code and the papers that introduced them. SimplEx, Dynamask, shap, and Symbolic Pursuit have a common python interface implemented for them for ease of use (see the Interface section below, as well as Implementation and Notebooks). Any of the other methods can also be implemented by using the code in the GitHub column of the table below.

SHAP: SHAP Source Code (pytorch implementation: Captum GradientShap)
Integrated Gradients: Integrated Gradient Source Code (pytorch implementation: Captum Integrated Gradients)
LIME: LIME Source Code (pytorch implementation: Captum Lime)

Figure 3 shows a flowchart to help with the process of selecting the method that is most appropriate for your project.

Figure 3: Interpretability Method selection flowchart

Interface

This repository includes a common python interface for the following interpretability methods: SimplEx, Dynamask, shap, and Symbolic Pursuit. The interface provides the same methods for each of them, so that you can use the same python calls in your scripts to set up an explainer for any of these interpretability methods:

init: Instantiate the class of explainer of your choice.
fit: Performs any training for the explainer (this is not required for the shap explainers).
explain: Provides the explanation of the data provided.
summary_plot: Visualizes the explanation.
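To make the shape of this shared interface concrete, below is a minimal sketch of a base class with the four methods listed above, followed by the call pattern a script would follow. The class, module, and argument names here are illustrative assumptions, not the repository's actual code; consult the repository for the real signatures.

```python
from abc import ABC, abstractmethod


class Explainer(ABC):
    """Sketch of the common explainer interface (names are illustrative)."""

    def __init__(self, model):
        # init: wrap the trained model you want to explain
        self.model = model

    def fit(self, X_train):
        # fit: perform any training the explainer itself needs;
        # per the description above, the shap explainers skip this step
        return self

    @abstractmethod
    def explain(self, X):
        # explain: produce the explanation for the supplied data
        raise NotImplementedError

    @abstractmethod
    def summary_plot(self, explanation):
        # summary_plot: visualize the explanation
        raise NotImplementedError


# The value of the shared interface is that every script follows the same
# four calls, whichever interpretability method sits behind the explainer:
#
#   explainer = SomeExplainer(model)          # init
#   explainer.fit(X_train)                    # fit (no-op for shap)
#   explanation = explainer.explain(X_test)   # explain
#   explainer.summary_plot(explanation)       # summary_plot
```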