If AI Out-Smarts Humans, What Then?

What happens when Artificial Intelligence (AI) starts to think for itself? How will it change human civilization? Can you imagine?

Dee Wilson
5 min read · Apr 23, 2023
Elon Musk — Exit — Image composition by Author

While some experts believe that such a scenario is unlikely or even impossible, Elon Musk argues that it is a real possibility in the future. He is confident that Artificial Intelligence (AI) has the potential to become more intelligent than humans. It sounds crazy — but what if he is correct? What happens when superior artificial brain power outpaces our thinking abilities and manages to dethrone us?

Wow — let’s slow down a bit. Could such a phenomenon escalate into a future where we’d be forced to hand over the wand to a technological singularity, after which anything can happen? We might already live in the Twilight Zone of gender ideology rather than biology, but at least we can still argue about it. Not so if we’re no longer the most competent beings on Earth, dominant over the animal and plant kingdoms, the oceans, liberties, elections, free speech, clean air, or free will. Will humanity end up in a dystopian future, reduced to the role of “useful things,” like the cattle in our fields and the chickens in the barn? Could we be consumed? It seems ridiculous, but is it?

So what is a technological singularity?

A technological singularity occurs when artificial intelligence (AI) rapidly surpasses human intelligence.

In the context of technology, the singularity is associated with the creation of superintelligent machines that can improve themselves and make further technological advancements at an exponential rate, far surpassing the capabilities of human beings. The term refers to a point of rapid and profound change beyond which it is impossible to predict future events.
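To make the “exponential rate” idea concrete, here is a minimal toy sketch. Every number in it — the starting capability, the growth factor, and the “human baseline” — is an arbitrary assumption for illustration, not a real measurement. It only shows how a system that improves itself by a fixed multiple each cycle overtakes a fixed benchmark after a handful of cycles:

```python
# Toy illustration only: all numbers are arbitrary assumptions, not measurements.
HUMAN_BASELINE = 100.0  # hypothetical fixed benchmark of human capability
capability = 1.0        # hypothetical starting capability of the AI
growth_factor = 2.0     # assume each self-improvement cycle doubles capability

cycle = 0
while capability <= HUMAN_BASELINE:
    capability *= growth_factor  # the system improves itself
    cycle += 1
    print(f"cycle {cycle}: capability = {capability:.0f}")

print(f"Surpasses the baseline after {cycle} cycles.")
```

The point of the toy is the shape of the curve: the fixed benchmark never moves, while the compounding process blows past it quickly and keeps accelerating.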

And indeed, the potential development of superintelligent AI raises questions about the nature of humanity and our place in the universe.

If we create beings more intelligent and capable than ourselves, what does that say about our abilities and limitations?

Are we willing to cede control to machines that are beyond our comprehension? And what role will we play in a world where machines can think and outsmart us? How can we turn them off?

As AI becomes more intelligent and capable, we must grapple with questions of morality and ethics, such as the potential impact of AI on privacy, freedom, and equality. This thinking goes beyond the potential impact of AI on the labor market. Automation is already taking place, and economic disruption from job losses is inevitable if new job creation can’t keep up with its pace. This, we can deal with.

More important is how humans can stay in charge of AI and reap the fruits of its significant advances in science, medicine, and technology.

There are so many questions we need answers to. But I’m sure it’s unwise to ignore Elon Musk’s warning. However you feel about the man, he is a visionary we should listen to.

To paraphrase Elon Musk in my own words: “I hope to build an AI that is not manipulated in its thinking or output, that adheres to the truth, and that, overall, will embrace humanity.”

We can only hope that Elon builds in the Reset button as well.

Isaac Asimov comes to mind:

Isaac Asimov’s “Three Laws of Robotics”
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

First introduced in Asimov’s 1942 short story “Runaround.”
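As a thought experiment only, the strict precedence of the Three Laws could be sketched as ordered checks. The Action fields and the allow() function below are hypothetical illustrations I am inventing for this sketch; they are not part of any real robotics or AI framework:

```python
# Thought-experiment sketch: the Three Laws as ordered checks.
# The Action fields and allow() are hypothetical, invented for illustration.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False           # would executing this action injure a human?
    inaction_harms_human: bool = False  # would *not* acting allow a human to be harmed?
    ordered_by_human: bool = False      # was this action ordered by a human?
    risks_robot: bool = False           # does it endanger the robot's own existence?


def allow(action: Action) -> bool:
    # First Law: never injure a human, and do not permit harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # acting is required to prevent harm
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence unless that conflicts with the first two.
    return not action.risks_robot


print(allow(Action(ordered_by_human=True)))                    # True: obey the order
print(allow(Action(ordered_by_human=True, harms_human=True)))  # False: First Law wins
```

Even this toy makes Asimov’s own point: the interesting cases are the ambiguous ones, where “harm” is not a clean boolean.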

The “Three Laws of Robotics” are a fictional construct designed to explore ethical and moral issues in science fiction, and they are not implemented in today’s AI systems. While it is possible to program ethical rules into AI systems, there is no consensus yet on what those rules should be or how they should be implemented.

One challenge of implementing ethical rules in AI is that different cultures and individuals have different moral priorities, so determining whose ethical framework should serve as the basis for the rules is difficult. Additionally, there is always the risk that an AI could misinterpret ethical principles, leading to unintended consequences.

Instead of attempting to implement a set of specific rules, many researchers are focusing on developing ethical frameworks and decision-making models that can guide the behavior of AI systems.

These frameworks prioritize values such as fairness, transparency, and accountability and aim to ensure that AI is used to benefit humanity while minimizing harm. However, these frameworks are still in the early stages of development, and much work must be done to ensure they are practical and widely adopted.
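As a rough illustration of what such a framework might look like in practice, a decision model could require every automated decision to pass explicit value checks and be recorded for later review. The checks, the threshold, and the audit log below are hypothetical assumptions of mine, not an established or adopted standard:

```python
# Hypothetical sketch of a value-guided decision gate with an audit trail.
# The checks and the 0.8 threshold are illustrative assumptions, not a standard.
from datetime import datetime, timezone

audit_log = []  # accountability: every decision is recorded and reviewable


def decide(proposal: dict) -> bool:
    checks = {
        "fairness": proposal.get("disparate_impact", 1.0) >= 0.8,  # assumed threshold
        "transparency": bool(proposal.get("explanation")),         # a human-readable reason exists
        "harm": proposal.get("expected_harm", 0.0) <= 0.0,         # no anticipated harm
    }
    approved = all(checks.values())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "proposal": proposal.get("name", "unnamed"),
        "checks": checks,
        "approved": approved,
    })
    return approved


# Usage: a hypothetical loan-screening decision with an explanation attached.
print(decide({
    "name": "loan_screening",
    "disparate_impact": 0.9,
    "explanation": "approved based on income-to-debt ratio",
    "expected_harm": 0.0,
}))
print(audit_log[-1]["checks"])
```

The design choice the sketch tries to show is that fairness, transparency, and accountability become concrete only when each is tied to something checkable and logged, which is exactly the part researchers are still working out.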

Elon Musk warns us to be vigilant about AI:

In an interview, Elon Musk recalled a conversation with Google co-founder Larry Page. For contemplating AI’s potential to harm humanity, Musk was called a speciesist by Page, who was eager to create the most brilliant, self-learning AI possible.

Yes, I’m human; that makes me a speciesist.

Calling for a pause in AI development and citing “profound risks to society,” Elon Musk, tech leaders, researchers, and others signed an open letter urging a halt to the development of superintelligent machines. Read here.

Another concern is that AI could become so advanced that it controls the world’s financial systems, manipulates elections, and even causes global conflicts.

Musk advocates for regulations to prevent the development of superintelligent AI.

However, Musk is not entirely pessimistic about the future of AI. Instead, he advocates for developing advanced AI systems that are aligned with human values and can be used to improve people’s lives. He also stresses the importance of establishing ethical standards and guidelines for developing and deploying AI systems.

Elon Musk believes that AI can revolutionize many aspects of human life but also presents significant risks and challenges that must be addressed.

I’d love to read your comments on how you think the future will evolve with AI and how it can help humans prosper… :)


Dee Wilson

In my photography or writing, I focus on the sharpest image possible. I use my macro lens when learning about Blockchain, Web3 — Future — Science — Technology…