An overview of Artificial Intelligence Rights
Artificial Intelligence rights appear to be quite a touchy subject. Top researchers in the field have debated at length whether a sentient, thinking robot should have the same rights as humans, or whether it even needs them. AI has without a doubt had a drastic effect on the world. With almost everything we do online, there is some form of AI assisting us. Furthermore, AI brings huge benefits to fields like medicine. While we don’t have any “super intelligent” AI yet, what happens if we do create one? Should sentient AI be given the same rights as humans? How should humans treat Artificial Intelligence?
With that said, the uncertainty hasn’t stopped some from going ahead. Saudi Arabia became the first nation to give a “robot” citizenship back in 2017. The robot, named Sophia, was given citizenship as a way of promoting Saudi Arabia as a destination for developing Artificial Intelligence.
Programming AI and its effects
At the end of the day, a machine, smart or not, will have been programmed by someone. It would not be impossible for programmers to hard-code ethical decisions into an AI. For example, it may be possible to program an AI never to kill, and never even to consider killing. But does this itself infringe the rights of the AI? Is it truly sentient if it’s physically unable to make certain decisions that we humans don’t want it to make?
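To make the idea concrete, here is a minimal sketch of what a hard-coded ethical constraint could look like in code. All of the names here (the forbidden-action set, the chooser function) are hypothetical illustrations, not any real system: the point is simply that the rule lives in the programmer’s code, outside anything the AI could decide for itself.

```python
# Hypothetical sketch: an ethical rule hard-coded as a filter over the
# actions an agent is allowed to pick from. The rule is fixed by the
# programmer; the "AI" never gets to reconsider it.

FORBIDDEN_ACTIONS = {"harm_human"}  # set by the programmer, not the AI

def choose_action(candidate_actions):
    """Return the first permitted action, discarding forbidden ones entirely."""
    allowed = [a for a in candidate_actions if a not in FORBIDDEN_ACTIONS]
    if not allowed:
        raise RuntimeError("no permitted action available")
    return allowed[0]

print(choose_action(["harm_human", "serve_drink"]))  # -> serve_drink
```

However such a constraint were implemented in practice, the structure would be the same: the forbidden options are removed before the system ever “chooses”, which is exactly why one can ask whether the result still counts as a free decision.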
Let’s say, for example, that AI does have rights. If an AI goes to vote, who is it going to vote for? Will it vote for what it really wants, or will its decision have been programmed by its creators? It would prove very difficult to definitively know whether an AI is truly sentient, or whether it was just programmed to appear that way.
Are Artificial Intelligence Rights even required?
The need for Artificial Intelligence rights comes down to whether or not we can actually build sentient AI. After all, why would you ever need a sentient robot when a machine learning algorithm could do the task far better? In other words: would you rather have an AI with the intelligence of a human do a task, or an algorithm (AI) specifically designed to do that task much faster and more efficiently than any human? The choice seems fairly obvious.
It doesn’t really matter what the task is, either. If you want a robot to be a bartender, why would you make it sentient? Why would you program it to do anything other than serving customers drinks? It just doesn’t make any sense.
Will there ever be sentient AI?
So with all that said, is it possible for Artificial Intelligence to be truly sentient? First, we need to define exactly what a sentient AI would be. Researchers have categorized this into two areas: weak AI and strong AI.
Strong AI would possess the same cognitive abilities as humans. It would have a consciousness and be self-aware. Strong AI is what most would call sentient AI. It would have the ability to think and reason on its own, and possibly even modify itself to become better.
Weak AI is defined as a computer program which can seem very intelligent, but is limited by its lack of a “mind”. This type of AI is unable to experience the world qualitatively. In other words: it can only do what it’s told. It can’t think or reason for itself.
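A toy example makes the weak-AI idea clearer. The chatbot below (a hypothetical illustration, not any real product) can appear responsive, but it is only matching hard-coded keywords; nothing resembling understanding is involved.

```python
# Toy "weak AI": a chatbot that seems to converse but merely matches
# hard-coded keyword patterns. Anything outside its rules simply fails.

RULES = {
    "hello": "Hi there! How can I help?",
    "weather": "I'm afraid I can't check the weather.",
}

def reply(message):
    """Scan the message for known keywords; no understanding involved."""
    for keyword, response in RULES.items():
        if keyword in message.lower():
            return response
    return "Sorry, I don't understand."

print(reply("Hello, robot!"))          # -> Hi there! How can I help?
print(reply("What is consciousness?")) # -> Sorry, I don't understand.
```

Real systems are vastly more sophisticated than a keyword table, but the limitation is the same in kind: the program can only ever produce behavior that follows from what it was given.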
Strong AI vs Weak AI
Right now, you could classify all existing AI as “weak AI”. And while there are plenty of AI systems out there that look and sound convincing (just take a look at Sophia, mentioned above), if you dive into them, none of them think or reason on their own. None of them have a motive to do anything on their own. They can only do what they’ve been programmed to do.
This is just how computers work. A computer is never going to understand what it’s processing. It’s just going to process it, and spit out the results.
Some researchers believe that for us to ever have a chance of making a strong AI, we would first need to understand the human brain in full. We would need to know exactly why and how it works. This is something researchers have been trying to solve for a long time, but we’re still a way off. After all, humans are the only intelligent beings that we know of, so attempting to replicate them would give the best results, one would think. However, there is still much debate on whether strong AI would be possible even with that knowledge.
Artificial Intelligence rights is a very hot topic these days. It seems no one can come to any kind of agreement on the subject. But AI rights may never be required. Why would you build a sentient robot to do a task that an algorithm, specifically designed for that task, could do much, much better? It will never make sense to do that. Robot rights will always be a hot topic, but it’s just not necessary. Whether sentient AI could ever exist is another hot topic, but based on research done over the past few years, it seems unlikely that truly sentient AI will ever be real, at least in our lifetime. So the idea of having Artificial Intelligence rights just seems unnecessary.