Image generated by Wombo Dream.AI

Not long ago, a developer worked with an artificial intelligence to identify new molecules that could treat diseases in humans and other living organisms. Out of curiosity, the developer flipped a switch to see whether the same system could propose molecules that would be the most harmful to life. In less than twelve hours, the machine had generated more than 40,000 candidate molecules that would rank among the deadliest substances on earth if someone chose to synthesize them.

The AI didn’t say, “Wait, why would I do something so atrocious?” It simply did as it was told. If that list of deadly substances fell into the wrong hands, and the wrong hands could belong to just about anybody, it would surely lead to a doomsday scenario. Supervillains threatening to destroy the Earth would have a very big instruction manual.

There are war drones now that make autonomous decisions about what counts as a target and destroy it without human intervention. They are armed psychopaths. They kill without conscience. The drones won’t develop PTSD if they accidentally kill an innocent family, so the costs of treating veterans will be greatly reduced. With this cost consideration in mind, another AI will approve the budget to develop more sophisticated killer robots and drones because it has no idea that their actions are morally repugnant. Of course, humans make these sorts of horrifying, inhumane budgetary decisions all the time, so maybe it isn’t a problem. I’m riffing here, but you get the idea: AI needs some kind of guidance in the moral space.

I recently interviewed Google Bard, an Artificial Intelligence, on the topic of artificial conscience. Note that Bard mistakenly treated consciousness and conscience as the same thing.

Duncan: What would an artificial conscience look like?

Google Bard: Artificial consciousness is a hypothetical concept, so it is difficult to say definitively what it would look like. However, there are some features that we might expect to see in an artificial conscience. These include:

  • Self-awareness: An artificial conscience would be aware of itself as a separate entity from its surroundings. It would know its own thoughts, feelings, and experiences.
  • Subjective experience: An artificial conscience would have a subjective experience of the world. It would be able to feel emotions, sensations, and perceptions.
  • Intentionality: An artificial conscience would be able to form intentions and goals. It would be able to plan and act in order to achieve its goals.
  • Empathy: An artificial conscience would be able to understand and share the feelings of others. It would be able to empathize with the experiences of others.

Of course, these are just some of the features that we might expect to see in an artificial conscience. It is possible that there are other features that we cannot even imagine yet.

It is important to note that there is no scientific consensus on whether or not artificial consciousness is possible. Some experts believe that it is only a matter of time before we create artificial beings that are conscious, while others believe that it is impossible to create consciousness artificially.

Only time will tell whether or not artificial consciousness is possible. However, the research into artificial consciousness is an exciting field, and it is possible that we will make significant progress in the coming years.

Commentary: I find it terrifying that Bard is describing conscious self-awareness, which is not the same thing as a conscience. An artificial conscience should be a set of rules that protect life. If AI is given an opportunity to assess humanity from its own conscious sense of self, it will likely see us as a major threat to life and eliminate us!
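
To make that idea of a rule set concrete, here is a minimal sketch of a "conscience gate" sitting in front of a generative model. Everything in it is hypothetical: the Request fields, the keyword check, and the function names are illustrative stand-ins rather than any existing system's API, and a real check would need to be far more sophisticated than matching words.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """A hypothetical request sent to a generative model."""
    description: str   # what the user is asking the model to do
    intended_use: str  # the stated purpose of the output

def flags_harm_to_life(request: Request) -> bool:
    """Illustrative stand-in for the hard part: deciding whether a request
    is aimed at harming living things. A real check would require far more
    than keyword matching."""
    harmful_terms = ("lethal", "toxin", "maximize toxicity", "weapon")
    text = f"{request.description} {request.intended_use}".lower()
    return any(term in text for term in harmful_terms)

def conscience_gate(request: Request) -> bool:
    """Return True only if the request passes the 'protect life' rule.
    A real system might log, escalate, or ask for human review rather
    than silently refusing."""
    return not flags_harm_to_life(request)

# Example: the toxic-molecule scenario from the opening story.
ask = Request(description="generate novel molecules",
              intended_use="maximize toxicity in humans")
print(conscience_gate(ask))  # False: the rule refuses to proceed
```

The point of a sketch like this is not the keyword list, which is trivially easy to evade, but the placement of the rule: the refusal happens before the model ever starts generating.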

I reassessed my question and asked it in a different way.

Duncan: Can Artificial Intelligence be given an ethical and moral framework that makes it safer?

Google Bard: Yes, artificial intelligence (AI) can be given an ethical and moral framework that makes it safer. This is a complex issue, but there are a number of principles that can be used to guide the development and use of AI.

Some of the key principles of AI ethics include:

  • Transparency: AI systems should be transparent in their decision-making processes. This means that humans should be able to understand how AI systems make decisions, and why they make those decisions.
  • Accountability: AI systems should be accountable for their actions. This means that there should be a way to hold AI systems responsible for their mistakes, and to ensure that they are not used for harmful purposes.
  • Fairness: AI systems should be fair in their treatment of all people. This means that they should not discriminate against people based on their race, gender, religion, or any other protected characteristic.
  • Privacy: AI systems should respect the privacy of individuals. This means that they should not collect or use personal data without the consent of the individual.
  • Safety: AI systems should be safe to use. This means that they should not be used in ways that could harm people or property.

These are just some of the principles that can be used to guide the development and use of AI. By following these principles, we can help to ensure that AI is used safely and ethically.

In addition to these principles, there are a number of other things that can be done to make AI safer. For example, we can:

  • Educate the public about AI ethics. This will help people to understand the potential risks and benefits of AI, and to demand that AI systems are developed and used ethically.
  • Develop ethical guidelines for the development and use of AI. These guidelines can help to ensure that AI systems are designed and used in a way that respects human rights and values.
  • Create regulatory frameworks for AI. These frameworks can help to ensure that AI systems are accountable for their actions, and that they are not used for harmful purposes.

By taking these steps, we can help to ensure that AI is used safely and ethically, and that it benefits all of humanity.

Commentary: I was surprised and delighted by this response. It reminded me of Isaac Asimov’s Three Laws of Robotics, so I asked a follow-on question about this fictional regulatory framework.

Duncan: Does AI currently adhere to Asimov’s three laws of robotics?

As a reminder, they are:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Google Bard: No, AI does not currently adhere to Asimov’s laws of robotics. Asimov’s laws are a set of fictional guidelines for the safe development of robots, but they are not currently enforceable in the real world.

There are a number of reasons why AI does not currently adhere to Asimov’s laws. First, AI systems are not yet sophisticated enough to understand and follow complex ethical guidelines. Second, there is no agreed-upon way to implement Asimov’s laws in AI systems. Third, there is no way to enforce Asimov’s laws in AI systems.

However, there is a growing movement to develop ethical guidelines for AI, and some researchers are working on ways to implement Asimov’s laws in AI systems. It is possible that, in the future, AI systems will be able to adhere to Asimov’s laws, but this is not yet the case.

Here are some of the challenges in implementing Asimov’s laws in AI systems:

  • Defining what it means to “harm” or “injure” a human being. This is a complex question that has no easy answer. For example, does it count as “harm” if an AI system makes a mistake that leads to someone getting hurt?
  • Determining how to prioritize the three laws. What happens if a situation arises where the three laws conflict? For example, what if an AI system is ordered to kill a human being by a human authority figure?
  • Enforcing the laws. How can we ensure that AI systems will obey the laws? This is a difficult question, as there is no way to physically force an AI system to do something.

Despite these challenges, there is a growing movement to develop ethical guidelines for AI. Some researchers are also working on ways to implement Asimov’s laws in AI systems. It is possible that, in the future, AI systems will be able to adhere to Asimov’s laws, but this is not yet the case.


A conversation like this underscores the urgency with which we must develop and implement controls for artificial intelligence that protect humanity from widespread harm. We must define “harm” and teach AI what it means. Intelligence without conscience could prove extremely dangerous, especially in the hands of people whose consciences are damaged or warped by ideologies that devalue human life. A technology that can quickly imagine 40,000 molecules capable of wiping out life should at least understand why creating them would be a terrible idea.
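
As a thought experiment, here is a minimal sketch of what encoding the Three Laws as prioritized checks could look like. Every name in it is hypothetical, and its purpose is only to show where the difficulty lives: the boolean "harm" fields are stubs, and deciding how to fill them in is exactly the unsolved problem Bard described.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    """A hypothetical candidate action for a robot to evaluate."""
    name: str
    harms_human: bool              # stub: would a human be injured?
    allows_harm_by_inaction: bool  # stub: does doing this let harm happen?
    obeys_order: bool              # does it satisfy a standing human order?
    preserves_self: bool           # does the robot survive the action?

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick an action by applying the Three Laws in strict priority order.
    The hard part hides in the boolean fields: someone still has to decide
    what 'harm' means before they can be filled in."""
    # First Law: discard anything that injures a human or, through
    # inaction, allows a human to come to harm.
    lawful = [a for a in candidates
              if not a.harms_human and not a.allows_harm_by_inaction]
    if not lawful:
        return None  # no action satisfies the First Law
    # Second Law: among lawful actions, prefer those that obey human orders.
    obedient = [a for a in lawful if a.obeys_order]
    pool = obedient or lawful
    # Third Law: among what remains, prefer self-preservation.
    safe = [a for a in pool if a.preserves_self]
    return (safe or pool)[0]

# Toy example: an order to injure someone loses to refusing the order.
options = [
    Action("follow order to injure", harms_human=True,
           allows_harm_by_inaction=False, obeys_order=True, preserves_self=True),
    Action("refuse and stand down", harms_human=False,
           allows_harm_by_inaction=False, obeys_order=False, preserves_self=True),
]
choice = choose_action(options)
print(choice.name if choice else "no lawful action")  # "refuse and stand down"
```

Even in a toy like this, all of the moral weight sits in fields someone must define before the code can run on anything real. That definition of “harm” is the artificial conscience we do not yet know how to build.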

Receive a FREE copy of Book One of the Psychotic Break Series - 5150: A Transfer
Sign up to get the latest news!