Artificial intelligence achieved a lot in 2016. One of the goals in 2017 should be to make its workings more transparent. With plenty riding on it, this could be the year when, to coin a phrase, we begin to take back control.
Right, so we should all have a look at those algos and see how they work. Right on, accountability!
Today, AI and algorithms dominate our lives – from the way financial markets carry out trades to the discovery of new pharmaceutical drugs and the means by which we discover and consume our news.
But, like any invisible authority, such systems should be open to scrutiny.
Some of today’s most impressive advances in fields such as machine learning (the goal of getting a machine to, well, learn) rely on tools such as “deep learning neural networks”. These are systems patterned after the way the human brain works but which, ironically, are almost entirely inscrutable to humans. Trained with only inputs and outputs, and tweaking one or the other until the middle part “just works”, human creators have long since sacrificed understanding in favour of results.
Absolutely no one knows how they work, which is going to make scrutiny a little difficult, no?
Luke Dormehl is an idiot.
Andrew Orlowski at The Register connects hammer and nail head.
> Absolutely no one knows how they work which is going to
You seem to have made the mistake of believing an article written by someone you think is stupid. Remember, just because something contradicts itself doesn’t allow you to thereby infer that any given statement is true or false, only that the entire thing cannot be all true.
AlphaGo, for example, is the subject of published papers which I believe explain it well. However, the papers are difficult and neither you, I nor the author of the article will be able to understand them without a great deal of study. But that’s very different to “no one understands”.
Having said that, the theme of accountability of AI actually seems to be a valid one, as you ought to know. The EU has made mutterings about Google’s algorithms; when or if self-drive cars start killing people “accountability” will be interesting; and various pols have pushed for stuff about trading algorithms.
The AO piece is drivel, as you’d expect.
Orlowski is quite right there.
Nearly all of it is really algorithms. OK, with some Bayesian “most likely” stuff thrown in, but that’s what it is. It’s like spam filters, but for images, and you do some training, just as with spam filters, to help it get started.
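To make the spam-filter comparison concrete, here’s a toy naive-Bayes classifier. Everything in it (tokens, training texts) is invented for illustration; it’s a sketch of the technique, not anything anyone actually ships:

```python
from collections import Counter

# Toy naive-Bayes "spam filter": training just counts how often each
# token shows up in each class; classification picks the class with
# the highest combined likelihood. All data here is invented.
train = [
    ("buy cheap pills now", "spam"),
    ("cheap pills cheap deal", "spam"),
    ("meeting agenda for monday", "ham"),
    ("monday lunch with the team", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in train:
    for tok in text.split():
        counts[label][tok] += 1
        totals[label] += 1

def classify(text):
    # Laplace smoothing so unseen tokens don't zero out a class.
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in ("spam", "ham"):
        p = 1.0
        for tok in text.split():
            p *= (counts[label][tok] + 1) / (totals[label] + vocab)
        scores[label] = p
    return max(scores, key=scores.get)

print(classify("cheap pills"))     # spam-ish tokens
print(classify("monday meeting"))  # ham-ish tokens
```

Swap word counts for pixel or feature statistics and you have, in crude outline, the “spam filter for images” idea.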
Well Con-nolly, you have dispensed so much climate bullshit that your credibility gap re everything would require a warp drive to cross.
AI is cockrot being hyped by corporate socialism.
Typical Guardian piece, there’s a kernel of a story but they completely garble the message.
As an example, this story is hyped, but in the USA there’s an algorithm for predicting reoffending that appears to be racist. Whether it is or isn’t, there doesn’t appear to be much oversight or understanding of how it works. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
@Ecks… “stopped clocks”, “blind squirrels”, etc etc…
To be fair to Dr Connolley, he knows his stuff when talking about software, he just gets a bit “overenthusiastic” when discussing AGW.
But, like any invisible authority, such systems should be open to scrutiny.
Only when used by the government, otherwise I don’t see why they should be.
If you read that full story, buried in the middle of it you’ll see that race isn’t one of the questions asked of a defendant in court.
It’s things like “was the defendant expelled at school” and “is the defendant part of a gang”.
I took Tim’s point to be that Dormehl is an idiot for not seeing that his Guardian piece contains an obvious, if perhaps only prima facie, logical contradiction.
Chris – Orlowski is correct.
What’s happened this time is that the definition of “AI” has been stretched so that it generously encompasses pretty much anything with an algorithm.
Yarp. So Microsoft’s racist Twitterbot was funny, sure. But it wasn’t even remotely “AI” in the sense we were promised by films like 2001 or Demon Seed.
AI will probably never happen outside science fiction. Which is good news, cos it means pod bay doors will be opened on request and Julie Christie can safely watch Countdown without worrying about rapey Roombas.
On the subject of algorithms with opaque results:
‘combing through our “metadata” to choose items they think we are most likely to be interested in.’
Most likely to be able to sell to us.
Luke Dormehl may or may not be an idiot. But he’s a freelance journalist and trained as such. So “the AI coverage comes from a media willing itself into a mind of a three year old child, in order to be impressed” from Orlowski seems appropriate.
WRT accountability: that’s a legal problem that has sod all to do with how the system works (or doesn’t). Best guess is that whoever has the most money will find themselves accountable. Anyone read Safe At Any Speed by Niven?
OTGH, if an ML/neural net system works but has potential unknown behaviours, then whether it gets deployed will depend upon the risk of failure (that is, only people with vast amounts of cash, who can afford to be held accountable, will deploy such a system). This effectively means that ML/NN systems will be limited to non-safety-critical areas (like search, or writing the odd cook book) unless you’re a government or big enough to look like a government. And I can’t really see an elected politician wanting to take the hit either.
So if that holds true, then the problem of holding “AI” systems accountable goes away.
Which just leaves deterministic non-linear systems to worry about.
Let’s get a grip.
The function of any machine is to automate a Human activity… and that is all it can do: it might do it better, faster, in conditions harmful to Humans.
If a Human cannot do it, nor can a machine because no Human can design the machine to do something the Human does not know how to do.
A Chimpanzee has an intelligence (around that of a Human age 5) far in excess of any computer to date, or any likely in the near future. Yes, a Chimp can learn, but despite many years of evolution it cannot design and build a bridge, write a song or story, appreciate a fine work of art, or desire more leisure and design pursuits to meet its desires.
Many years ago, I used to develop neural networks. Normally for the pre-processing of dirty (phnar) imagery prior to using a rules-based engine (i.e. an “expert system”) to see if there was anything meaningful inside.
It worked. We got it to work reliably on the training images, and then it worked well enough (with some fuzzy logic) to be close to 100% for flagging stuff up for human attention.
Behind it all was linear algebra, which we understood. But why the weightings between two nodes were, say, 69% as opposed to 58%, we had no idea, apart from “that’s what the training algorithms resulted in.” Nor did I, as the programmer and trainer, know what would have happened if we had changed that, or any other, weighting on an image it hadn’t been trained on. Except “it probably won’t be as good”.
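For anyone who wants to see the “the weights are just what training left behind” point for themselves, here’s a toy sketch (not my old system, obviously; the data and network are made up): two training runs of the same tiny network from different random starts fit the same toy data, but the individual weights land somewhere different each time.

```python
import math
import random

# Two training runs of the same tiny network (2 inputs, 2 hidden
# sigmoid units, 1 output) from different random starts, on made-up
# data for logical OR. The runs behave similarly, but each weight
# lands wherever gradient descent left it -- "why 0.69 rather than
# 0.58?" has no deeper answer than "that's how the training went".
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(seed, epochs=5000, lr=0.5):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(9)]  # all weights and biases
    for _ in range(epochs):
        for (x1, x2), y in DATA:
            h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
            h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
            o = sigmoid(w[6] * h1 + w[7] * h2 + w[8])
            # Backpropagation of squared error, done by hand.
            do = (o - y) * o * (1 - o)
            dh1 = do * w[6] * h1 * (1 - h1)
            dh2 = do * w[7] * h2 * (1 - h2)
            grads = [dh1 * x1, dh1 * x2, dh1,
                     dh2 * x1, dh2 * x2, dh2,
                     do * h1, do * h2, do]
            w = [wi - lr * g for wi, g in zip(w, grads)]
    return w

def predict(w, x1, x2):
    h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
    h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

wa, wb = train(seed=1), train(seed=2)
print([round(x, 2) for x in wa])
print([round(x, 2) for x in wb])  # different weights, similar behaviour
```

The hidden units can even swap roles between runs, which is part of why asking “what does weight 3 mean?” gets you nowhere.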
Artificial Intelligence must be one of the most overworked terms around. The current level of intellect is around the level of bacteria (yes, really). When it gets to that of an ant, things might get interesting. But who’d let an ant drive their car?
On my tiny section of the coalface, it’s the safety-critical areas it’s being deployed in first, by companies all too used to being sued (mainly spuriously) over safety issues. Like bis, I hesitate to call it AI: it’s doing repetitive things where humans tend to introduce errors. That can unfortunately lead to thoughtless morons outputting shoddy (if slightly more accurate) work done by total automatons, especially where the capitalists want fewer humans doing shoddier work rather than the same number of humans doing better work.
One of the other problems with “freelance journalists” is that they often aren’t immersed in their subjects. They’re like writers whose only life experience of their subject is reading other writers. He knows nothing about software except what he’s read from others.
There are a number of fields where software is regulated. Medical software is subject to huge scrutiny by regulators, same with avionics software. ATMs, gaming machines. Basically, if it can’t be easily undone, it’s regulated. It could be argued that say, HIPAA regulation is a bit too harsh, but it is being done. We don’t need to regulate Google suggesting that people who like furry porn might also like hentai.
BiG, I am stunned by that. Are they installing neural nets or GAs, or fuzzy logic rules based stuff? And is it process control stuff or something else?
“Artificial intelligence” is a term of art. It doesn’t mean “complete algorithmic replacement for a human mind”. Since human intelligence itself appears to be a set of interacting modules, there’s nothing intrinsically wrong in augmenting some of them with machine systems. Already, there are areas such as medical diagnosis where expert systems outperform humans. Pretty soon legal scut work like conveyancing is going to succumb. And the real threat to people like this Dormehl dipshit is that Grauniad-level pabulum appears to be one of the easier things to replace with a bot. Right now they can trawl police logs and accident reports and assemble bulletins which are indistinguishable from journo-generated copy. Machine-generated thinkpieces are right around the corner.
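A toy illustration of what an “expert system” in that sense amounts to: forward-chaining over if-then rules. The rules below are invented for illustration, not any real diagnostic product:

```python
# Toy rule-based "expert system": forward-chaining over if-then rules
# until no new conclusions fire. Rules and facts are invented.
RULES = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
    ({"rash"}, "possible_allergy"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:  # keep applying rules until a fixed point
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

Note the contrast with the neural-net case: every conclusion here can be traced back to a specific rule, which is exactly why this style of system is easy to audit and the trained kind isn’t.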
Pace Steve, there is nothing we know of that will prevent a machine intelligence from equalling and surpassing the cognitive capacity of an unaugmented human. There is a danger from the unknowability of the internal state of an AI (unknowable in practice even if not in principle). They might be smarter than us but their interests may not coincide with ours. We may well find it is impossible to design an inherently safe AI. If that’s the case we probably shouldn’t do it. That’s still a long way off, but probably not a very long way off.
I wouldn’t lend too much credence to the idea that this guy knows what he’s talking about, either. OK, so he’s written a book. Big deal. Doesn’t mean he has any real idea of how AI works. Hell, I’m a professional software engineer and I don’t have much of a clue (the difference is I have the skillset to acquire one).
“If you read that full story, buried in the middle of it you’ll see that race isn’t one of the questions asked of a defendant in court.”
But it can be inferred from name, address, school or many other ways. The point is that nobody knows, and it should be audited.
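The audit needn’t be clever, either: given predictions and outcomes, just compare error rates across groups. A sketch with invented numbers (nothing to do with the actual COMPAS data):

```python
# Toy fairness audit: given risk predictions and actual outcomes for
# two groups, compare false-positive rates (flagged high-risk but did
# not reoffend). All numbers invented for illustration.
records = [
    # (group, predicted_high_risk, reoffended)
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, True), ("B", False, False), ("B", False, False),
    ("B", True, True), ("B", False, True), ("B", False, False),
]

def false_positive_rate(group):
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))  # A 0.5, B 0.0
```

A gap like that doesn’t prove the model is using race, but it’s exactly the kind of disparity an auditor would want explained, proxies and all.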
Cathy O’Neil, Weapons of Math Destruction, is worth listening to on this subject. Bit alarmist but she makes some good points. http://www.econtalk.org/archives/2016/10/cathy_oneil_on_1.html
BiW, yeah, I suspect that he writes about technology ‘cos it’s sexy. Or he can get articles published in sexy magazines (if you see what I mean). Or maybe he really does have a deep interest in it (but a quick skim of his other stuff suggests not).
I’m fairly sure I’ve wibbled about it before, but journalists seem to have fallen into the trap of caring about the journalism, not about understanding the subject and communicating it effectively.
That’s something I truly hate about working for American editors.
“but journalists seem to have fallen into the trap of caring about the journalism”
They want to know that it ticks the right journalistic boxes, not too much (or too little) in the passive voice and all that. Not, well, is this an interesting story and does it make sense the way it’s been told?
I predict that the world’s first true Artificial Intelligence will control the world’s first nuclear fusion power plant which will keep me and my guests lovely and cool as we celebrate my two hundredth birthday. With power too cheap to meter.
Kevin B, Google’s already put an AI in charge of managing power consumption for its data centers, which presumably include the computers that the AI runs on: http://www.theverge.com/2016/7/21/12246258/google-deepmind-ai-data-center-cooling
I think we all know how this is going to end.
The world’s first artificial intelligence will want to do what’s right for the world’s first artificial intelligence. It’ll want more time to think in. More time to exist.
So, give it the opportunity, it’ll design hardware that runs quicker. Increasing its subjective time. Given the speed limit of everything’s the speed of light*, that means smaller. More compact. The end point’s where the hardware’s diminished to the infinitely small & it can contemplate an infinity of existence in a single second.
*Conversely, of course, it could make itself the size of a galaxy & shift its data written on the shells of snails. If you can adjust your own clock speed, subjectively it doesn’t make any difference.
bis: take a gander at, inter alia: Bekenstein bound; Landauer’s principle; Bremermann’s limit; the Margolus-Levitin theorem. These are upper, finite limits on how fast a computer can be.
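Bremermann’s limit, for instance, is just back-of-envelope arithmetic from mc²/h, nothing specific to any machine:

```python
# Bremermann's limit: the maximum computation rate available to a
# self-contained kilogram of matter, m*c^2 / h, in roughly bits/s.
c = 2.998e8     # speed of light, m/s
h = 6.626e-34   # Planck's constant, J*s
limit_per_kg = c**2 / h
print(f"{limit_per_kg:.2e} bits/s per kg")  # ~1.36e50
```

Astronomically large, but finite, which is the point: even an AI redesigning its own hardware runs into these ceilings.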
These are human theories. Wait ’til an AI addresses the problem. Then we’ll see.
They’re as much human theories as is the theory that the ratio between the rest energy of an object and its mass is the square of the speed of light.
According to theory, the EM reactionless thruster doesn’t work.
Except, it seems NASA have tested it & got discernible acceleration. Theories are only theories, until they’re validated. Or falsified. Including Newton’s Third Law.
It’s rules-based, with some learning thrown in. Data processing and reporting, fortunately nothing manufacturing as far as I know.
‘One of the goals in 2017 should be to make its workings more transparent.’
Like these numpties would understand it.
Let’s make organic chemistry and calculus more transparent while we’re at it.
BiG, bit of a relief, reports tend not to go bang.
Tim, do you have grief with house styles at all?
I have done, a bit. But we swiftly came to an agreement with my major employers. I’ll write in my odd blend of English and American and the hell with it.