More information on AI

Realistically, we could have a long way to go: AI has been an emerging field with an idealistic future since its conception. In his book Superintelligence, Nick Bostrom of the Oxford Martin School’s Future of Humanity Institute cites expert surveys putting the probability of reaching human-level machine intelligence at 10% by 2022, 50% by 2040 and 90% by 2075.

The sample sizes were small, and opinion in the field seems more varied, ranging from impossible to very probable, which is perhaps why we find extremely erudite people warning us of what may lie ahead.

Beyond that point, superintelligence could arrive quite quickly: an intelligence explosion leading to exponential growth in the level of machine intelligence.
These time frames are worth bearing in mind if we are to consider the dangers, especially as AI will come into its own long before it reaches anything near superintelligent levels of power.

Early signs of our insignificance in the face of AI could come in the form of it taking over jobs currently done by humans. One Oxford Martin study estimates that up to 47% of jobs in the US are at risk from automation.
If machines start pushing people out of jobs, it won’t be the machines making the products that buy those products, at least not yet.

Such a system would require a paradigm shift in the way we structure our society and the roles we define for ourselves in the face of such automation. However, it is always important to weigh the benefits and risks of artificial intelligence.

Automation in warfare is another concern raised by the increasing use of AI. A report by Human Rights Watch stated that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes”. Lethal autonomous weapons have been raised as a concern by the UN, as they carry both legal and moral implications for the conduct of warfare.

So whilst we may be a long way off producing superintelligent artificial intelligence, the likes of Gates, Hawking and Musk are probably right to warn us of the implications of developing such technology.
Regulatory oversight may be one good idea among many, because once superintelligence has been produced it will be impossible to put back.

I wouldn’t feel too safe in an AI-powered Google car driven by something like KITT from Knight Rider harbouring bad intentions against me, or if the most intelligent computers turn out to be as miserable as Marvin from The Hitchhiker’s Guide to the Galaxy. We just don’t know.