White Paper
Why Human-interpretable Models are a Myth and that’s Totally Okay
Written by Jay Budzik

Or: How I stopped worrying and started trusting Isaac Newton

Model interpretability isn't real

Whether you fear or favor artificial intelligence, high-stakes credit decisions such as who gets a car loan or mortgage are increasingly dependent on predictive algorithmic models to make the right call. As humans, we deserve to understand these models so we can trust them. Fortunately, machine learning research has significantly advanced our understanding of how models make decisions by applying rigorous mathematics under the umbrella