David Shipley
Jun 13, 2024


I thought your article on how we can understand what’s going on inside machine learning models was really spectacular. You started with simple models such as decision trees and moved us all the way to the dense, tangled world of neural networks, which really made me think harder about it all.

Understanding these challenges certainly isn’t easy, but you’ve shown there’s a solid way to break them down. When it comes to explaining how these machine learning models think, it pays to tackle the problem from a few different angles to reduce blind spots and make sure everything’s crystal clear. Start by bringing advanced tools like SHAP, LIME, and even attention mechanisms into your workflow to peel back the layers of your model’s decisions. On top of that, it would be brilliant to have a checklist or some sort of routine check-up using these tools to continuously confirm that the model’s logic is on point and makes sense when measured against reality and against what we expect it to know.
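For example, here is a minimal sketch of what the SHAP part of that routine check-up could look like, assuming a scikit-learn tree model and the shap package; the diabetes dataset is just an illustrative stand-in for your own data:

```python
# A minimal sketch, assuming a scikit-learn tree model and the shap
# package; the diabetes dataset is just an illustrative stand-in.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train any tree-based model on tabular data.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# measuring how much each one pushes the output away from the baseline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view for the check-up: which features drive the model on
# average, and do they match what we expect it to know?
shap.summary_plot(shap_values, X)
```

Re-running something like this on a schedule and comparing the top-ranked features against domain expectations is one simple way to keep confirming the model’s logic over time.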

And note: for applications where the stakes are really high, putting together a plan to have people keep an eye on what the models decide is extremely important.
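One way to wire that human oversight in is a simple review gate. This is only a sketch, assuming a classifier with predict_proba; the 0.8 threshold and the shape of the review record are purely illustrative:

```python
# A minimal sketch of a human-review gate, assuming a scikit-learn style
# classifier with predict_proba; the 0.8 threshold is illustrative only.
def route_prediction(model, x, threshold=0.8):
    """Act on confident predictions; flag the rest for a human reviewer."""
    proba = model.predict_proba([x])[0]
    confidence = float(proba.max())
    if confidence < threshold:
        # High-stakes, low-confidence cases are not acted on automatically.
        return {"decision": None, "needs_human_review": True,
                "confidence": confidence}
    return {"decision": int(proba.argmax()), "needs_human_review": False,
            "confidence": confidence}
```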

There is a real risk of false confidence dressed up as self-improvement. By layering these technical strategies on top of some solid rules and human checks, you can be far more certain of getting the impressive gains while staying comfortable, safe, and accountable.

If you’re interested, feel free to check out my stories too!
