Asimov’s laws: do they work in modern robotics?

It has been nearly 80 years since Isaac Asimov wrote his famous Three Laws of Robotics, basic principles intended to ensure friendly robot behaviour. Although decades have passed, we can still find references to these ground rules in modern law, for example in a European Parliament Resolution from 2017. Let’s find out whether Asimov’s safeguards have stood the test of time or whether they have simply become a piece of science fiction history.

Isaac Asimov, born in 1920, was an American writer and professor of biochemistry at Boston University, famous for his works of science fiction and popular science. He wrote or edited more than 500 books and, during his lifetime, was considered one of the ‘Big Three’ science fiction writers. One of his most significant works was the Robot series, and he believed that his most enduring contribution would be his “Three Laws of Robotics”: a set of ethical rules for robots and intelligent machines that would greatly influence other writers and thinkers in their treatment of the subject.

Asimov’s ground rules were built around a balanced and healthy human-robot relationship and were meant to ensure that no human would be hurt or endangered by robotic creations.

Asimov’s three basic rules were: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

In Asimov’s fictional universe, these laws were incorporated into all of his “positronic” robots. They were not simply suggestions or guidelines; they were embedded in the software governing the robots’ behaviour, as hard ground rules that could not be bypassed, overwritten or revised.
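
As a purely illustrative sketch (not drawn from Asimov’s text or from any real robot controller), the fiction’s idea of the laws as inviolable constraints on every action might look something like the following, where every predicate is a hypothetical placeholder:

```python
# Purely illustrative sketch: the fiction treats the Three Laws as inviolable
# constraints that every candidate action must satisfy. The precedence between
# the laws is baked into how each predicate is phrased (the Second Law yields
# to the First, the Third to both). All predicates are hypothetical placeholders.

from typing import Any, Iterable, Optional


def violates_first_law(action: Any, world: Any) -> bool:
    """Would the action injure a human, or allow harm through inaction?"""
    return False  # placeholder


def violates_second_law(action: Any, world: Any) -> bool:
    """Would the action disobey a human order that does not conflict with the First Law?"""
    return False  # placeholder


def violates_third_law(action: Any, world: Any) -> bool:
    """Would the action needlessly endanger the robot's own existence?"""
    return False  # placeholder


LAWS = [violates_first_law, violates_second_law, violates_third_law]


def choose_action(candidates: Iterable[Any], world: Any) -> Optional[Any]:
    """Return the first candidate action that violates none of the laws.

    If nothing passes, return None: in the stories, a robot caught in such
    a conflict simply freezes up.
    """
    for action in candidates:
        if not any(law(action, world) for law in LAWS):
            return action
    return None
```

Kept at this level of abstraction, the scheme looks deceptively simple; the difficulty, as we will see below, lies entirely in what those placeholder predicates would actually have to mean.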

However, these principles were criticized even in Asimov’s lifetime, and many amendments were made to them over the years. Asimov himself later added a fourth law, the “Zeroth Law”, that outranked all the others, placing the interest of humanity above the interest of any individual:

A robot may not injure humanity, or, by inaction, allow humanity to come to harm. 

Now, in 2020, artificial intelligence is becoming an ever more significant part of our lives, and the relevant technology is developing at an enormous pace. We are inching closer to the day when we’ll have artificial intelligence that is versatile and flexible enough to choose between different courses of behaviour.

Furthermore, it is only a matter of time before machine intelligence surpasses human capacities in every way imaginable, including power, speed and even physical reach; the first signs of this hopefully friendly competition appeared years ago. Despite all the advances, the possibility of error in an AI system can never be excluded, so we need to at least strive to make artificial intelligence as safe as possible.

Let’s examine whether Asimov’s laws could still be regarded as a standard when we talk about artificial intelligence today. Although he wrote an entire short story collection about exploiting the loopholes in these rules as late as 1981, Asimov himself believed that they could actually work. However, we need to ask whether they have become outmoded in the light of technological change.

Why don’t they work? 

Problem I.: Building a robot on a moral and ethical basis 

In Asimov’s theory, robots have an inherent ethical capability: they can decide whether something is morally right. Yet if we really consider all the facts, we might question whether we even need this skill in AI, especially when we could just as easily create advanced thinking machines that lack it yet still do the work of a supercomputer.

Another aspect of this problem is that, for ethical robots, we would have to assume that most AI developers are ethical people, because at the end of the day they are the ones who give the ‘main orders’ to a robot. They create the program and set the first rules about what is right and what is wrong. Hang on a minute! Or do they? Do they simply serve the owner, the operator, or whoever commissioned the robot and its particular actions? This recalls the moral lesson drawn at the Nuremberg trials after WW2: simply ‘obeying the rules’ cannot be a defence for immoral and unethical actions, especially since the rules themselves may lack any moral grounding. One step further: how should we decide whether someone or something can be considered ethical at all? We have no ground rules for that. In many cases there is not even a single right answer to an ethical question once one takes into account the different cultures, histories, socialisations and other differences within the human race.

To twist this angle even further: is it possible to make good and balanced decisions if we eliminate empathy, ethics, emotion and all the typically ‘human’ elements that have always been part of our world? And if not, and we tried to teach AI to handle these, how would AI interpret human emotions and communication, a complex system of words, facial expressions, tone, gestures and body language? Could AI understand sarcasm? Seemingly, not only can we not find answers to these questions today, but ever more new questions keep arising.

There are experiments showing that AI-based decision-making can produce more predictable and accurate outcomes. The topic of AI versus human decision-making is interesting enough to deserve its own article (which you might see us writing soon), but it is worth a quick detour here. The fact is that humans are more likely to use ‘rules of thumb’ than strict, systematic procedures, and human judgements are therefore easily influenced by ‘framing effects’ such as how a question is asked or what else the person has been thinking about recently.

The result is that consistency is not a strength of human reasoning: you may ask the same person the same question twice, on a different day and under different circumstances, and get a different answer. One study, for example, found that experienced radiologists rating x-rays as ‘normal’ or ‘abnormal’ contradicted themselves 20% of the time! A different study, conducted by Max Kanter, a master’s student in computer science at MIT, and his advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, suggests that while we would imagine humans are better at understanding other humans, an algorithm can in fact predict human behaviour faster than other humans can.

On the other hand, while human reasoning can be imprecise and inconsistent, it is also amazingly robust and flexible. Infants can learn stable yet flexible concepts incredibly quickly, telling the difference between cats and dogs pretty reliably after seeing only a few examples, whereas current machine learning systems generally need thousands of examples and still can’t learn in such a flexible way. The conclusion is that humans and AI systems appear to have very different and complementary strengths.

Circling back to when Asimov proposed these laws, he unknowingly based them on another assumption: that we, humans, knew exactly where the ethical lines were to be drawn. But do we? 

Let’s examine a less painful example and consider what happened with Microsoft’s chatbot experiment. It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. In 2016, Microsoft unveiled ‘Tay’, a Twitter bot that the company described as an experiment in “conversational understanding”, at the intersection of machine learning, natural language processing and social networks. Microsoft engineers trained the chatbot’s algorithm on anonymized public data along with pre-written material provided by professional comedians to give it a basic grasp of language. The plan was to release Tay online, then let the bot discover patterns of language through its interactions, which it would then emulate in subsequent conversations.

But in the end, Tay assimilated the internet’s worst tendencies into its personality, and the conversations didn’t stay playful for long. Soon after Tay was launched, people started bombarding the bot with all sorts of misogynistic and racist tweets. Searching through Tay’s tweets, we can see that many of the bot’s nastiest utterances were simply the result of copying users (for example, Tay tweeted: “I hate feminists and they should all die and burn in hell!”).

So the questions remain open: who and what can be considered ethical (and who decides that, based on which factors, and how)? And to what extent do we need human elements to be incorporated into robotic processes (and why)?

Problem II.: The notion of ‘human’ 

Following the thought process of questionable ethics, but taking a short detour, this next question deserves its own point. For a robot to be able to follow the rules, it needs to know what a human is. And who decides on that? Who defines ‘human’ and ‘humanity’? And who implements the definition? The same people mentioned above: the developers and/or those who commissioned the robot. Which brings us back to the same problem: these people could give any kind of definition; they could even exclude certain groups of people from it. Once again, we have set out from the same assumption, namely that the people who make the AI know what is ethical and what is not, and will make decisions and program the AI according to those ethical principles.

This problem also appears in one of Asimov’s stories, where robots are made to follow the laws, but they are given a particular definition of “human.” We don’t think that we need to explain what would happen if fiction came to life and robots started to only recognize people of a certain group as humans… 

Problem III.: The technology is not there yet 

Modern roboticists tend to think the rules aren’t just flawed; they’re fundamentally misguided. It is an unquestionable fact that we are a lot closer to creating real artificial intelligence than we were when Asimov wrote his rules. Whether or not we think these robots will be conscious, we want to make sure that we won’t end up in a Terminator scenario.

Asimov’s laws rest on a system we can describe as rule-based. This cannot really work, because it essentially tries to restrain a being whose capabilities could vastly exceed our own. It would only be a matter of time before an AI found a loophole in whatever rules we set in place.

Furthermore, Asimov’s rules are inadequate because they do not establish a hierarchy in which humans indisputably have more rights than robots.

Even if we put aside the fact that these rules are fiction that Asimov invented to help drive his stories, we still can’t disregard the fact that there is no reliable technology that could replicate them inside a machine. And if we really try to imagine what entrusting this kind of power to a machine could lead to, the prospects are likely to be frightening.

The point here is that much of the funding for robotics research comes from the military, which is paying for robots that follow the very opposite of Asimov’s laws: it explicitly wants robots that can kill, that won’t take orders from just any human, and that don’t care about their own existence.

 

Problem IV.: Translation of the rules 

For consistency with Asimov’s ideas, let us assume that we do have AI agents complex enough for these laws to apply to them. Let us also assume, for the sake of discussion, that despite being laws written for a narrative device, they have been applied to the real world. We still face one more technical problem: the laws were originally written in English. How can we translate them automatically into other languages? For example, what if an agent can only process Spanish? Even if the agent were created in the UK, how can we ensure that it understands the rules?

In other words, we would need a way to translate the laws, and convey the meaning behind the words, into every language possible. At first glance you would not think this is a problem; for a human it is a very basic task. But for a machine it represents two very different exercises: first, producing corresponding sentence strings in different languages; and second, understanding those strings. You might be able to sing a song in Spanish if you have heard it a thousand times, even if you can’t actually speak Spanish and have no idea what the song is about; doing the second task alone, by contrast, would be like having a thought in your mind but not knowing how to phrase it.

Taking a step back, it is not simply a matter of translating between spoken languages; the problem arises even earlier, when trying to ‘translate’ the rules from English into code. To put it simply, Asimov’s laws were not designed to be coded into an AI program: try coming up with precise definitions and mathematical bounds for key phrases such as inaction, harm, protect, existence… Not easy, is it? That’s because they are, as mentioned, in English. As roboticist and writer Daniel Wilson puts it succinctly: “Asimov’s rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?”
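
To make Wilson’s point concrete, here is a purely hypothetical Python sketch of what a literal, machine-readable First Law would demand. Every predicate below is our own assumption and is left unimplemented, because there is no agreed, computable definition behind it; nothing here reflects any real robotics framework.

```python
# Hypothetical sketch only: what a literal, machine-readable First Law would
# require. None of these predicates exists in any real robotics stack; each
# raises NotImplementedError because no agreed, computable definition exists.

from typing import Any, Iterable


def is_human(entity: Any) -> bool:
    """Problem II in code: whoever writes this function decides who counts as human."""
    raise NotImplementedError("no machine-readable definition of 'human being'")


def causes_injury(action: Any, human: Any) -> bool:
    """Physical harm only? Psychological? Economic? The law does not say."""
    raise NotImplementedError("no computable boundary for 'injure'")


def inaction_allows_harm(world: Any, human: Any) -> bool:
    """Requires predicting every future the robot could still influence."""
    raise NotImplementedError("counterfactual prediction over all possible outcomes")


def first_law_permits(action: Any, world: Any, entities: Iterable[Any]) -> bool:
    """Naive check of the First Law; it cannot run until the terms above are defined."""
    humans = [e for e in entities if is_human(e)]
    if any(causes_injury(action, h) for h in humans):
        return False
    if any(inaction_allows_harm(world, h) for h in humans):
        return False  # taken literally, this clause may forbid almost everything
    return True
```

Until functions like these can be defined at all, no amount of clever engineering can “embed” the laws into a machine the way Asimov’s positronic brains did.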

Attempts at producing a legal framework for robotics in the EU 

Interestingly, the concept of robotics comes from literature, but today it represents a complex field of science and carries great economic weight. Artificial intelligence is already advanced enough that its application in everyday life is often hampered by the lack of legal regulation rather than by the technology itself. The legal examination of robotics is complicated, especially since there is currently no generally accepted concept or approach to build on, only recommendations, which means we are stuck at the very beginning: what exactly should the laws regulate?

To address the challenges and make the most of the opportunities AI offers, the European Commission put together a European Strategy (Artificial Intelligence for Europe), which places people at the centre of AI development. The European AI strategy and the coordinated plan make it clear that trust is a prerequisite in ensuring a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. To achieve this, the trustworthiness of AI should be ensured. The values on which our societies are based need to be fully integrated in the way AI develops. Therefore, there is a need for ethics guidelines that build on the existing regulatory framework and that should be applied by developers, suppliers and users of AI in the internal market, establishing an ethical level playing field across all Member States. This is why the Commission has set up a high-level expert group on AI representing a wide range of stakeholders and has tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy. 

To ensure that European values are at the heart of creating the right environment of trust for the successful development and use of AI, the Guidelines set out the following key requirements for trustworthy AI:

  1. Human agency and oversight 
  2. Technical robustness and safety 
  3. Privacy and data governance 
  4. Transparency 
  5. Diversity, non-discrimination and fairness 
  6. Societal and environmental well-being 
  7. Accountability 

Trust and ethics, however, are not the only issues that need to be discussed when talking about AI in a legal context. The European Union has already made a proposal from a civil law perspective which suggests that the most advanced autonomous robots could be classified as electronic persons with specific rights and obligations, such as liability for damages. This does not mean that any conclusion or solution on such liability has been reached: robots, under the current legal framework, cannot be held liable. The proposal points out the holes in the legal framework and calls for concepts to fill them. As of now, the existing rules on liability cover cases where the cause of a robot’s acts or omissions can be traced back to a specific human agent, such as the manufacturer, the operator, the owner or the user, and where that agent could have foreseen and avoided the robot’s harmful behaviour. In a scenario where a robot can take autonomous decisions, however, the traditional rules will not suffice to give rise to legal liability for damage caused by the robot, since they would not make it possible to identify the party responsible for providing compensation and to require that party to make good the damage caused. This only means that the problem has been recognized; years of discussion lie ahead before it is fixed. The proposal also offers potential suggestions for tackling the problem, one of which is granting electronic personality to advanced robots. Electronic personality could be used in cases where robots make intelligent, independent decisions or otherwise interact independently with third parties.

The proposal would introduce a registration system for advanced robots at European level, which would require uniform criteria for classifying robots and identifying those that need to be registered. According to the proposal, the definition of a smart robot should take the following characteristics into consideration:

  • the acquisition of autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the trading and analysing of those data; 
  • self-learning from experience and by interaction (optional criterion); 
  • at least minor physical support; 
  • the adaptation of its behaviour and actions to the environment; 
  • absence of life in the biological sense. 

The proposal thus approaches the issue from a civil law perspective, focusing on liability for damage caused by robots. However, since robots are not yet considered autonomous beings, further questions arise: who is responsible for the damage they cause? The manufacturers? The designers?

There is no doubt that technical progress is moving much faster than the law can follow. Creating the rules is therefore still an extremely difficult task, so it is no wonder that, despite his best efforts, Asimov was unable to create a perfect set of rules. Regardless of the time that has passed since then, one thing is certainly clear: he created a framework that continues to be a reference point even decades later.

