Psychology Magazine

Teaching A.I. to Explain Itself

By Deric Bownds @DericBownds
An awkward feature of the artificial intelligence, or machine learning, algorithms that teach themselves to translate languages, analyze X-ray images, evaluate mortgage loans, judge the probability of behaviors from faces, and so on, is that we are unable to discern exactly what they are doing as they perform these functions. How can we trust these machines unless they can explain themselves? This issue is the subject of an interesting piece by Cliff Kuang. A few clips from the article:
Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand.
A decade in the making, the European Union’s General Data Protection Regulation finally goes into effect in May 2018. It’s a sprawling, many-tentacled piece of legislation whose opening lines declare that the protection of personal data is a universal human right. Among its hundreds of provisions, two seem aimed squarely at where machine learning has already been deployed and how it’s likely to evolve. Google and Facebook are most directly threatened by Article 21, which affords anyone the right to opt out of personally tailored ads. The next article then confronts machine learning head on, limning a so-called right to explanation: E.U. citizens can contest “legal or similarly significant” decisions made by algorithms and appeal for human intervention. Taken together, Articles 21 and 22 introduce the principle that people are owed agency and understanding when they’re faced by machine-made decisions.
To create a neural net that can reveal its inner workings...researchers...are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks. Others involve psychological insight: One team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision. (The idea is partly inspired by psychological studies of real-life experts like firefighters, who don’t clock in for a shift thinking, These are the 12 rules for fighting fires; when they see a fire before them, they compare it with ones they’ve seen before and act accordingly.) Perhaps the most ambitious of the dozen different projects are those that seek to bolt new explanatory capabilities onto existing deep neural networks. Imagine giving your pet dog the power of speech, so that it might finally explain what’s so interesting about squirrels. Or, as Trevor Darrell, a lead investigator on one of those teams, sums it up, “The solution to explainable A.I. is more A.I.”
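The Rutgers idea above — justify a decision by retrieving the training example that best supports it — can be sketched in a few lines. This is a toy illustration only, not the Rutgers team's actual system: their work involves deep networks, while here a nearest-neighbor lookup over labeled points stands in for the "sift through its data set" step, and all names (`explain_by_example`, the sample data) are hypothetical.

```python
# Toy sketch of example-based explanation: after a classifier reaches a
# decision, find the training example with the same label that lies
# closest to the input -- "here is a past case like yours."

def distance(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def explain_by_example(query, decision, training_set):
    """Return the (features, label) pair from training_set that shares
    the decision's label and is nearest to the query."""
    same_label = [(x, y) for x, y in training_set if y == decision]
    return min(same_label, key=lambda pair: distance(pair[0], query))

# Hypothetical usage: explain a "safe" loan decision by its closest precedent.
train = [([0.1, 0.2], "safe"), ([0.9, 0.8], "risky"), ([0.2, 0.1], "safe")]
example, label = explain_by_example([0.12, 0.18], "safe", train)
```

The appeal, as the firefighter analogy suggests, is that the explanation takes the form experts already use: a comparison to a remembered case rather than a list of rules.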
... a novel idea for letting an A.I. teach itself how to describe the contents of a picture...create two deep neural networks: one dedicated to image recognition and another to translating languages. ...they lashed these two together and fed them thousands of images that had captions attached to them. As the first network learned to recognize the objects in a picture, the second simply watched what was happening in the first, then learned to associate certain words with the activity it saw. Working together, the two networks could identify the features of each picture, then label them. Soon after, Darrell was presenting some different work to a group of computer scientists when someone in the audience raised a hand, complaining that the techniques he was describing would never be explainable. Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.
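The "lash two networks together" structure described above can be made concrete with a deliberately tiny sketch. This is not Darrell's system — his work pairs trained deep networks for vision and language — but a hand-rolled stand-in, assuming a made-up two-stage pipeline in which one function plays the recognition net (pixels to features) and a second watches only that first net's output and attaches words to it. Every name and threshold here is invented for illustration.

```python
# Two-stage sketch of the "one net does the task, one net describes it"
# pattern: the decoder never sees the image, only the encoder's activity.

def encoder(image):
    """'Recognition' net: collapse raw pixel values (a flat list of
    brightnesses in [0, 1]) into a two-element feature vector --
    the mean brightness of each half of the image."""
    half = len(image) // 2
    left = sum(image[:half]) / half
    right = sum(image[half:]) / (len(image) - half)
    return [left, right]

VOCAB = ["dark", "bright"]

def decoder(features):
    """'Description' net: map the first net's activity to words.
    A fixed threshold stands in for learned association weights."""
    return [VOCAB[1] if f > 0.5 else VOCAB[0] for f in features]

def caption(image):
    # the two networks "lashed together": recognize, then describe
    return " ".join(decoder(encoder(image)))
```

The point of the structure, and of Darrell's quip, is that the describing network only needs access to the task network's internal activity — so in principle the same move bolts a narrator onto any existing model.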
