The Dilemma Of Unexplainable Artificial Intelligence

Bill Franks / 4 min read.
July 24, 2017

Artificial intelligence has quickly become one of the hottest topics in analytics. For all its power and promise, however, the opacity of AI models threatens to limit AI's impact in the short term. The difficulty of explaining how an AI process gets to an answer has been a topic of much discussion. In fact, it came up in several talks in June at the O'Reilly Artificial Intelligence Conference in New York. There are a couple of angles from which the lack of explainability matters, some cases where it doesn't, and some work being done to address the issue.

AI Explainability From the Analytics Perspective

From a purely analytical perspective, not being able to explain an AI model doesn't matter in every case. To me, the issue of explainability is very similar to the classic problem of multicollinearity within a regression model. I recall having the distinction between (1) prediction and (2) point estimation drilled into my head in graduate school.

If the main goal of a model is to understand which factors influence an outcome, and to what extent, then multicollinearity is devastating. Variables that are inter-correlated will have very unstable individual parameter estimates even when the model's predictions are consistent and accurate. Conceptually, the correlated variables are almost randomly assigned importance, and running the model on one subset of data can lead to very different parameter estimates than another subset. Obviously, this is not good, and we spent a lot of time learning how to handle such data so that we could get an accurate answer and also explain it. The point is that multicollinearity made it very hard to pinpoint the drivers of a model, even if the model was extremely accurate.
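To make that concrete, here is a minimal Python sketch (my own illustration, not from the original article) of the multicollinearity problem: two nearly identical predictors produce coefficients that swing wildly between two halves of the data, yet the two fitted models make almost the same predictions.

```python
# Minimal sketch: unstable coefficients, stable predictions under multicollinearity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)     # x2 is almost a copy of x1
y = 3 * x1 + rng.normal(scale=0.5, size=n)   # the true signal comes only from x1
X = np.column_stack([x1, x2])

# Fit the same model on two halves of the data.
m1 = LinearRegression().fit(X[:n // 2], y[:n // 2])
m2 = LinearRegression().fit(X[n // 2:], y[n // 2:])

# The individual coefficients typically differ a lot between the halves...
print("coefficients, first half: ", m1.coef_)
print("coefficients, second half:", m2.coef_)

# ...yet the predictions of the two models agree closely, because only the
# (stable) sum of the two coefficients matters when x1 and x2 move together.
print("max prediction difference:", np.abs(m1.predict(X) - m2.predict(X)).max())
```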

This is very much like artificial intelligence. You may have an AI process that is performing amazingly well, yet accurately teasing out what factors are driving that performance is difficult. As I'll discuss later, there is work being done to help address this. But AI models leave one in a similar spot to the one multicollinearity did: a great set of predictions whose root drivers can't be well explained or specified.

Notice, however, that this issue only matters if you need to explain how the answer is derived. Multicollinearity is not a problem if all you care about is getting good predictions. If the individual parameters don’t matter, then model away. The same is true with AI. If you only care about predicting who will get a disease, or which image is a cat, or who will respond to a coupon, then the opacity of AI is irrelevant. It is important, therefore, to determine up front if your situation can accept an opaque prediction or not.

What’s Being Done to Make AI More Explainable?

As one would expect, there are a lot of smart people working to develop ways to help determine what's really driving an AI model under the hood. One of the most interesting examples I've come across is a process known as Local Interpretable Model-Agnostic Explanations (LIME). What LIME does is make slight changes to the input data in order to see what the impact on the predictions ends up being. Repeat this many times and, eventually, you get a good feel for what is really driving a model. See the picture below, from the LIME article, to get an understanding of what we're talking about.


[Image from the LIME article: regions of a frog photo highlighted according to how strongly they influence the model's prediction]

In this case, you can see that the upper face and eyeball region at the top of the image has a strong influence on the model's determination that this is a frog. Some of the other information in the picture actually causes the model to do worse. For instance, the heart being held in its hand certainly wouldn't be typical of a frog.

While this example focuses on image recognition, a very similar process could be used for a problem based on classic structured data. For example, the difference in predicted response probability for a customer could be examined as input variables are perturbed in different combinations. Certainly, this isn't quite as satisfying as a classic parameter estimate, but it does take AI a long way towards being understood.
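To sketch what that might look like in practice, here is a rough, hand-rolled Python example. It is my own illustration, not the actual LIME algorithm (which fits a local, interpretable surrogate model around the point being explained), and the model and data in it are hypothetical stand-ins: we perturb one customer's inputs a feature at a time and watch how the predicted response probability moves.

```python
# Rough sketch of the perturbation idea on tabular data (not the real LIME algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for an opaque "response" model trained on synthetic customer data.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

customer = X[0]                                    # the record we want to explain
base_prob = model.predict_proba([customer])[0, 1]  # baseline response probability

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    # Perturb feature j many times while holding the others fixed,
    # then measure the average shift in the predicted probability.
    perturbed = np.tile(customer, (200, 1))
    perturbed[:, j] += rng.normal(scale=X[:, j].std(), size=200)
    shift = np.abs(model.predict_proba(perturbed)[:, 1] - base_prob).mean()
    print(f"feature {j}: average probability shift = {shift:.3f}")
```

Features whose perturbation barely moves the probability are, at least locally, not driving the prediction for that customer.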

Note also one very important point about LIME: it is model-agnostic. LIME really has nothing directly to do with AI and doesn't know what AI is or does. It is simply a way to take a predictive algorithm and test how different data causes its outputs to change. Therefore, it can be applied to any situation where there is a need to add transparency to an opaque process. It can even be used in situations where firm parameter estimates do exist, in order to validate how well it works.

The Problems That Won’t Go Away

No matter how neat LIME might seem, it isn't good enough to pass muster with laws and regulations. In many cases, such as credit scoring and clinical trials, amazing predictions mean nothing in the absence of a clear explanation of how those predictions are achieved. As a result, we'll have to examine our laws and our ethical guidelines to determine how they might be altered to allow AI to be utilized effectively while still keeping the proper checks and balances in place. We are certain to have AI capable of solving very valuable problems sooner than we'll be allowed to actually put those models to use. It will be necessary to find the right balance of laws, ethics, and analytics power so we can make progress. But that's a topic for another blog!

For now, if you’re considering using AI as it exists today, just make sure that what you really care about is simply a solid model that predicts well. If you actually have to be able to explain how the model works and what drives it, you should stick to more traditional methods for the foreseeable future.

Originally published by the International Institute for Analytics

Categories: Artificial Intelligence
Tags: analytics, Artificial Intelligence, Big Data

About Bill Franks

Bill Franks is an internationally recognized chief analytics officer who is a thought leader, speaker, consultant, and author focused on analytics and data science. Franks is also the author of Winning The Room, 97 Things About Ethics Everyone In Data Science Should Know, Taming The Big Data Tidal Wave, and The Analytics Revolution. His work has spanned clients in a variety of industries, ranging in size from Fortune 100 companies to small non-profit organizations. You can learn more at https://www.bill-franks.com.
