In the real future, will this question seem silly to ask?

People talk about regulation being so important, but how can you regulate Artificial Intelligence, a technology that teaches itself and, just like any rogue human, can then choose to ignore the regulators and their regulations?

Why would it do the above?
Maybe "it" knows better? Or perhaps it will just "think" it does.

Occasionally, especially in the early days, "its" lofty thoughts may well prove to be totally wrong for its "owner", or should I say controller. So, how do we stay in control?

But this assumes the starting point is correct: that we as human beings should be in control. Because just maybe, as things evolve, AI will more often be right and make the right decisions. At that point, would trying to regulate it still be the responsible and correct thing to do?

"Would you expect a chimp to try to regulate a human?" is perhaps not quite the right analogy, but I cannot think of anything comparable to the current situation.

AI is all around us today, but I believe it is still fairly discreet, so many of us do not fully comprehend its potential. I was recently viewing a programme on the BBC (a repeat) with Dr Hannah Fry, in which she uncovers what goes on behind the scenes in healthcare.

A company called Babylon Health set out to prove that its AI technology is a match for human doctors. Amongst other things, the test involved questioning and diagnosing real people who were role-playing various illnesses and conditions.

The same questions were asked both by real doctors and by the Babylon Artificial Intelligence application.

Although much more testing will be needed, the Babylon AI consultant came out overall at around the same, reasonably high, level of safe triage and diagnosis accuracy as the human doctors did.

But being about "even" is what surprised me. Many think that AI is still in its infancy, and yet it is already equalling human beings in some of their most complex functions and decision-making capabilities.

So how long before this self-learning technology convincingly overtakes the skill of the human?

The programme also featured another emerging AI medical technology company, Kheiron Medical, based in the UK. It has used deep learning and now claims its system outperforms human radiologists in detecting the signs of breast cancer in mammograms.

All of which makes the question very difficult to avoid: why would we ever stop a superior technology from detecting a life-threatening illness?

Regulation may just do that.

I think that this point in our technological history will be a really special one to remember in the future. It may even be the case that our generation will be viewed as totally arrogant by the not-too-distant next generation.

The very idea that we slow-thinking, mere humans should want to, or even think we have the right to, regulate super-efficient intelligent machines in the future.

"What were they all thinking of?", our grandchildren will say, smiling.

So could the future actually turn out to favour the view of an intelligent machine over that of an intelligent, highly respected human being?

Maybe some of us middle-aged folk reading this today will not be around to know whether the above comes true.

But judging by the rate of knots at which things are moving, I wouldn't be surprised if "Tobor" were the one looking after many of us in our dotage.