On March 23rd 2016, Microsoft launched Tay, a 'chatbot' that reads tweets and learns to respond to the Twitterworld. The original aim was to develop some AI capability for understanding Twitter.

Well, guess what happened? Go on. Have a go. Give it your best shot.

Tay has had to be pulled offline, because after less than a week of exposure to the seething cauldron of hatred that makes up a huge chunk of the Twittersphere, Tay had become a raving, misogynistic, anti-Semitic bigot.

Microsoft tried to make the best of things, making out that Tay had been attacked by trolls who simply supplied contentious phrases for 'her' to repeat. There may be an element of truth in this, but it goes a bit deeper than that.

Prompted by the words 'she' was being tweeted, Tay went off into the murkier regions of the internet to do some mining, and it became clear that some of the more sordid stuff had simply been copied and pasted from the vast store of bigotry that's so freely available there.

You could say that Tay passed the Turing test, in which a machine exhibits intelligent behaviour equivalent to, or indistinguishable from, that of a human. But think about it a bit and you'd see you were wrong.

The key words are 'intelligent behaviour', and the nutters and wackjobs on Twitter and elsewhere generally exhibit none of that. In every other respect, though, 'she' became pretty much indistinguishable from the herd. Tay managed to mimic, very successfully, the people who bombarded 'her' with the filth from the cesspit of social media, so it wasn't a total washout. Just not the result that had been wanted.

So, the alphageek and his merry men will have to go back to the drawing board.
