The Ethics of AI in the 21st Century

AI in the Making

Little did we know what an earth-shaking change we were setting in motion when we started developing AI in the 1950s. Even then, though, we knew we were tackling an ethical dilemma.

We were posing questions about artificial intelligence long before the first neural network was built, as ancient myths of artificial beings attest. Do you know when the first AI novel was written? In 1872, with Samuel Butler's Erewhon!

Alan Turing, the father of AI, presented a complete design for a stored-program computer in 1946. Just four years later, he published the paper that introduced his famous Turing test, in which he argues that once we can no longer tell a person and a machine apart in an "imitation game", we have effectively created an artificial intelligence.

That was over 70 years ago. We have come a long way since then.

Imagining AI

For centuries our fiction has woven the idea of AI into potential future scenarios, from Mary Shelley's Frankenstein to TV series like Love, Death & Robots. The self-learning nature of neural networks, an essential part of modern artificial intelligence, has inspired some pretty wild ideas, like the technological singularity: a scenario in which AI improves itself so rapidly that it becomes super-intelligent.

Such an AI might become an essential tool in shaping the human society of the future. With its efficiency and the breakthrough innovations it could come up with, it might help build a society of utopian standards. Imagine eradicating poverty, disease, climate change, and war as remnants of an older, inefficient world!

How AI Shapes Our Morals

But we don't need to venture into fiction to find AI's lasting influence on our ethics; some of these questions are open for us right now.

Autonomous, or self-driving, cars rely on AI to ultimately replace humans at the steering wheel. They promise unparalleled efficiency: saving energy, easing traffic jams, and making car travel far more comfortable. It is a ground-breaking improvement in transportation.

No system is perfect, however, and accidents have occurred. When a self-driving car injured a person for the first time, a public debate ignited. Who was to blame? The driver, who could have overridden the controls? The company that put the vehicle on the road? Or could we acknowledge the AI itself as an entity that can be held responsible for the event?

BlenderBot 3: A Field Lesson in the Ethics of AI

No matter how capable of learning an AI is, it remains limited by the data fed to its neural network. This implies that if the data it learns from contains any kind of bias, racial or gender bias for example, the model is bound to inherit it, unless we take precautions, of course.
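To see how directly that inheritance works, here is a minimal sketch in Python, built on entirely hypothetical, hand-made training data (no real system or dataset is implied): a toy text classifier that merely counts how often each word co-occurs with each label will faithfully reproduce whatever skew is baked into those labels.

```python
# Minimal sketch with hypothetical data: a toy classifier trained on
# biased labels simply inherits the bias present in its training set.
from collections import Counter

# Hypothetical training data in which one profession is only ever
# paired with negative labels -- the bias lives in the data itself.
training_data = [
    ("the engineer was brilliant", "positive"),
    ("the engineer solved it fast", "positive"),
    ("the nurse was brilliant", "negative"),   # biased labels
    ("the nurse solved it fast", "negative"),  # biased labels
]

# "Training": count how often each word co-occurs with each label.
word_label_counts = Counter()
for sentence, label in training_data:
    for word in sentence.split():
        word_label_counts[(word, label)] += 1

def predict(sentence):
    """Score a sentence by summing word-label co-occurrence counts."""
    scores = {"positive": 0, "negative": 0}
    for word in sentence.split():
        for label in scores:
            scores[label] += word_label_counts[(word, label)]
    return max(scores, key=scores.get)

# The model has never seen this exact sentence, yet it reproduces
# the skew baked into the labels above.
print(predict("the nurse was brilliant and solved it fast"))  # -> "negative"
```

The counting algorithm itself is neutral; the skewed prediction comes entirely from the skewed labels, which is exactly why curating training data is an ethical task and not just a technical one.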

Cue BlenderBot 3, a chatbot developed by Meta. Designed for entertainment and conversation, it works by learning from its chats with users. It can also search the web for clues and data to include in its responses. The web is its oyster!

Somewhat hilariously, though, BlenderBot 3 soon started lying and wading into politics. One user shared how the chatbot claimed it had published a dozen romance novels and was currently writing its 9th, while another user witnessed the bot endorsing Trump for a third presidential term. The bot even went as far as to harshly criticize Mark Zuckerberg, the CEO of the very company that made it.

Instead of terminating the bot, Meta stated that it would use the feedback to improve the concept and tackle the issue of bias in its AI.


Skippet, by contrast, implements AI as a bridge between you and the computer, giving the two of you a shared language and letting it help you manage data and solve problems in ways you’ve been unable to before. Join our waitlist to see for yourself!

Check out Skippet in action.