Artificial intelligence achieved a lot in 2016. One of the goals in 2017 should be to make its workings more transparent. With plenty riding on it, this could be the year when, to coin a phrase, we begin to take back control.
Right, so we should all have a look at those algos and see how they work. Right on, accountability!
Today, AI and algorithms dominate our lives – from the way financial markets carry out trades to the discovery of new pharmaceutical drugs and the means by which we discover and consume our news.
But, like any invisible authority, such systems should be open to scrutiny.
Some of today’s most impressive advances in fields such as machine learning (the goal of getting a machine to, well, learn) rely on tools such as “deep learning neural networks”. These are systems patterned after the way the human brain works but which, ironically, are almost entirely inscrutable to humans. They are trained only on example inputs and outputs, their internal parameters tweaked until the middle part “just works”; their human creators have long since sacrificed understanding in favour of results.
Absolutely no one knows how they work, which is going to make scrutiny a little difficult, no?
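To see what that “just works” middle part looks like in practice, here is a toy sketch (my own illustration, not from the article): a tiny neural network trained purely on input/output pairs for the XOR function. The learned weights are just arrays of numbers; printing them tells you almost nothing about *how* the network computes its answer, which is exactly the inscrutability being described.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Example inputs and target outputs: the XOR function, a classic toy task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The opaque "middle part": randomly initialised weight matrices.
W1 = rng.normal(size=(2, 4))  # input layer -> hidden layer
W2 = rng.normal(size=(4, 1))  # hidden layer -> output

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

def loss():
    _, out = forward(X)
    return float(np.mean((out - y) ** 2))

initial_loss = loss()
lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: nudge each weight to reduce the output error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

final_loss = loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
# The trained W1 and W2 are just grids of floats -- inspecting them
# does not explain the network's "reasoning".
print(W1)
```

The network demonstrably improves at the task, yet the only human-readable evidence is the shrinking loss number, not anything in the weights themselves.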
Luke Dormehl is an idiot.