The Brainwaive

The human brain is the greatest general intelligence engine ever discovered. At least, according to the human brain. But then, it would say that, wouldn't it? Still, there's a great deal to learn from the way that our minds work that we haven't yet been able to mimic in our artificial forms of intelligence, and a sizeable gap between the way that humans and machines see the world.

In her session at Digital Transformation EXPO Europe, Hannah took the audience on a tour of the key questions at the cutting edge of AI and explored some of the challenges of building a human-made mind.

Keep an open mind

Hannah opened her session with one of her favourite scientific papers, which highlights the danger of focusing on just one type of learning, or of looking to a single technology such as machine learning for all of the answers.

She described a study in which a team of rookies was given two weeks to learn to screen and diagnose breast cancer by looking at samples alone. Over that two-week period the rookies managed a successful diagnosis in 85% of cases. The rookies were in fact pigeons, and they actually managed 99%, but the average was brought down by one ‘stupid pigeon’.

Their accuracy was therefore as good as a doctor's. Her point was that, as humans, we find it very easy to favour one form of learning over others, and she reiterated the need to keep an open mind.

The significance of human intervention

Hannah then asked the audience to take part in a game: given a single written word, guess whether it was written by a human or by a robot.

More often than not, the audience could tell the words apart and correctly picked the human's answer. The significance of this game, she explained, is that we as people recognise the human element in words and scenarios; the full paper behind the study demonstrated that there are indeed real, significant differences between the decision-making processes of humans and machines.

This may seem an obvious result, but with the continued growth of artificial intelligence and machine learning it is a key insight. As Hannah put it, ‘AI does not wear its uncertainty well; if AI tells us something, we don't question it.’ We need to be mindful of the limits of AI, genuinely question the technology we create that relies on it, and ensure it always has an element of human intervention.

Hannah discussed some examples that illustrate this, noting that AI often does not understand intention. One was a self-driving vehicle that almost crashed when crossing a bridge: the vehicle had taught itself to always follow the grass verge, so when the verge disappeared at the bridge it had no idea which direction to go.

Another case was a machine learning programme that needed to work out how to get a simulated body from point A across a finish line at point B. Rather than learning to move, the programme worked out that it could grow very tall without moving and then collapse, toppling over the line at point B. This was not the programmer's intention, but it was technically correct, and the interesting point Hannah highlighted was that this logic is similar to the natural world's: the programme behaved much like wheat does.
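Researchers sometimes call this kind of behaviour specification gaming: the optimiser satisfies the objective it was literally given rather than the one its designer intended. As a minimal sketch, and not the setup of the study Hannah described, the toy Python below assumes a one-dimensional world where each candidate splits a fixed material budget between body height and walking effort, and where fitness only measures how far from the start any part of the body ends up. Every name and number in it (FINISH_LINE, BUDGET, reach) is hypothetical.

```python
import random

# Toy illustration of specification gaming: the intended goal is
# "walk from A across the finish line at B", but the fitness function
# only measures how far from A any part of the body ends up. Falling
# over counts, so height is as good as locomotion.

FINISH_LINE = 10.0   # position of point B (hypothetical)
BUDGET = 12.0        # total "material" a candidate may spend

def reach(genome):
    """genome = (height, walk_effort), drawn from a shared budget.
    Walking is inefficient in this toy world (0.5 distance per unit
    of effort), while a body of height h that tips over reaches h."""
    height, walk_effort = genome
    walked = 0.5 * walk_effort
    toppled = height            # collapsing converts height into reach
    return walked + toppled     # furthest point of the body from A

def random_genome():
    height = random.uniform(0.0, BUDGET)
    return (height, BUDGET - height)

# Plain random search stands in for the evolutionary optimiser.
best = max((random_genome() for _ in range(10_000)), key=reach)
height, walk_effort = best
print(f"height={height:.2f}  walk effort={walk_effort:.2f}  "
      f"reach={reach(best):.2f}  (finish line at {FINISH_LINE})")
# The search reliably spends almost the whole budget on height: the
# winning strategy is to grow tall and fall across the line without
# taking a single step.
```

Because tipping over converts height into distance more efficiently than walking does in this toy world, the search converges on the tall-and-fall strategy, much like the programme Hannah described.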

Her last example on understanding AI was a study in which an AI was used to determine a person's gender by looking only at their eyes. The AI proved 96% accurate, but the researchers had no idea how it was doing it: another indicator that we have a long way to go when it comes to understanding how artificial intelligence learns.

For all the concerns and considerations she highlighted, Hannah does believe there is a great deal to be gained from keeping both the AI and the human element in any technology we create or use, new or existing.
