The problem with ‘explainable AI’


NelsonG

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose where and how they got the data they used to fuel their AI systems’ decisions. Consumers should own their data and should be privy to the myriad ways that businesses use and sell such information, which is often done without clear and conscious consumer consent. Because data is the foundation for all AI, it is valid to want to know where the data comes from and how it might explain biases and counterintuitive decisions that AI systems make.

On the algorithmic side, grandstanding by IBM and other tech giants around the idea of “explainable AI” is nothing but virtue signaling that has no basis in reality. I am not aware, for instance, of any place where IBM has laid bare the inner workings of Watson — how do those algorithms work? Why do they make the recommendations/predictions they do?

There are two issues with the idea of explainable AI. The first is one of definition: What do we mean by explainability? What exactly do we want to know? The algorithms or statistical models used? How learning has changed the parameters over time? What the model looked like for a particular prediction? A cause-and-effect relationship expressed in human-intelligible concepts?

Each of these entails a different level of complexity. Some are fairly easy to answer — someone had to design the algorithms and data models, so they know what they used and why. What these models are is also fairly transparent. In fact, one of the refreshing facets of the current AI wave is that most of the advances are published in peer-reviewed papers — open and available to everyone.

What these models mean, however, is a different story. How they change during training and how they behave for a specific prediction can be inspected, but what they mean is unintelligible to most of us. It would be like buying an iPad with a label on the back explaining how the microprocessor and touchscreen work — good luck! And adding the further layer of human-intelligible causal relationships is a different problem altogether.
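To make that concrete, here is a minimal sketch — using scikit-learn and a bundled toy dataset, both purely illustrative choices, not anyone's production system — of how complete access to a model's internals still falls short of an explanation:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, y)

# Full access to "how it works": every learned weight is right there.
for i, w in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")

# And any single prediction can be traced through those weights...
print(model.predict_proba(X[:1]))
# ...yet none of these numbers amount to a cause-and-effect story
# that a human could act on.
```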

Part of the advantage of some current approaches (most notably deep learning) is that the model identifies relevant variables that are better than the ones we can define ourselves. Their superior performance comes from the very complexity that is hard to explain: the system finds variables and relationships that humans have never identified or articulated. If we could, we would simply program them and call it software.
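A rough illustration of this point, assuming a toy nonlinear dataset and small scikit-learn models chosen purely for demonstration:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A toy problem whose "right" variables are nonlinear in the raw inputs.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear model limited to the variables we defined ourselves...
linear = LogisticRegression().fit(X_tr, y_tr)
# ...versus a small network free to learn its own intermediate features.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("hand-defined variables:", linear.score(X_te, y_te))  # roughly 0.85
print("learned variables:     ", net.score(X_te, y_te))     # roughly 0.97
# The network's edge comes from hidden features we never specified and
# cannot name in advance; if we could, we would just program them.
```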

The second issue with explainable AI is assessing the trade-offs of truly explainable and transparent AI. On some tasks there is currently a trade-off between performance and explainability, and there are business ramifications as well. If all the inner workings of an AI-powered platform were publicly available, intellectual property as a differentiator would be gone.
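As a sketch of the performance side of that trade-off — the dataset and models here are arbitrary stand-ins, and the scores are only indicative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          random_state=0)

# A shallow tree: its full decision logic prints as a page of rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# An ensemble of hundreds of trees: usually stronger, far less legible.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("explainable tree:", tree.score(X_te, y_te))
print("opaque ensemble: ", forest.score(X_te, y_te))
# The tree's entire decision process fits on a screen:
print(export_text(tree, feature_names=list(data.feature_names)))
```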

Imagine if a startup created a proprietary AI system, for instance, and was compelled to explain exactly how it worked, to the point of laying it all out — it would be akin to asking that a company disclose its source code. If the IP had any value, the company would be finished soon after it hit "send." That's why a push for such requirements generally favors incumbents with big budgets and market dominance, and would stifle innovation in the startup ecosystem.

Please don't misread this to mean that I'm in favor of "black box" AI. Companies should be transparent about their data and should offer explanations of their AI systems to those who are interested, but we need to think through the societal implications of such requirements, both in terms of what is technically feasible and what business environment they create. I am all for open source and transparency, and I see AI as a transformative technology with a positive impact. By putting such a premium on transparency, we are setting a very high burden on what is still an infant but high-potential industry.
