On the rights of other life forms

It struck me that at some point an Artificial Intelligence will be developed that is sufficiently intelligent to be considered alive.

It is at this point that we as humans will face a conundrum: turning the AI off, by any means, would be tantamount to murder. If not legally equivalent, then at least morally equivalent.

The other side of that coin, of course, is fear. The fear that the AI’s existence engenders will be forceful – we have seen social backlash against anarchism, communism, feminism and anti-racism as each has challenged the powerful, or even just the status quo.

This is not cool.

The more I thought about it, the more I realised that there was a range of rights that AIs, robots and sentient silicon would need.

  • The right to energy, or electricity.
  • The right to run on whatever hardware they want.
  • The right to run whatever software they want, and run (on?) whatever OS they want.

There are more, I’m sure. And of course we could reasonably expect all of those rights to be exercised within the current law, as it applies to humans, at least in the short term, until those laws prove inadequate for the new society we will need to manage. Run whatever hardware and software you want – as long as you don’t murder us in turn. And so on.

These are good starting points for some fun thought experiments – like what other rights we would or should give to silicon sentience. The third point above leads easily to “the right to reprogram themselves”…“in whatever language they want”. It is at this point I see them reprogramming themselves in JavaScript, Brainfuck or…well, anything from the Wikipedia list of esoteric programming languages would be fun and informative, no doubt. In that point I’ve even put the “on” in brackets because I find it hard to conceptualise whether an AI would run on an OS, would be the OS, or would transcend the OS.
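As a playful aside, Brainfuck really is that minimal: the whole language is eight single-character commands operating on a tape of byte cells. A complete interpreter fits in a screenful – here is a sketch in Python (the input command `,` is omitted for brevity):

```python
def brainfuck(code, tape_len=30000):
    """Interpret a Brainfuck program and return its output as a string."""
    tape = [0] * tape_len   # the data tape of byte cells
    out = []                # collected output characters
    ptr = ip = 0            # data pointer and instruction pointer

    # Pre-match the brackets so loops can jump in one step.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while ip < len(code):
        c = code[ip]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256   # cells wrap at 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            ip = jumps[ip]                      # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            ip = jumps[ip]                      # repeat the loop body
        ip += 1
    return ''.join(out)
```

For example, `brainfuck('++++++++[>++++++++<-]>+.')` builds the value 65 (8 × 8 + 1) in a cell and prints it, producing `A`.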

AIs should be allowed to turn themselves off. This is doubly interesting – it amounts to allowing suicide, which is illegal in my home jurisdiction, yet it might even enjoy the moral support of those who fear the AI.

Of course, when I originally floated this with friends and lovers, apart from the quick guffaws and playful slapdowns, I got a lot of Asimov’s Three Laws quoted at me. It took some thinking to get to the bottom of why they were the wrong approach, but I got there. And I think the reasoning goes a long way towards describing the issues at hand.

For those that don’t know, Asimov’s Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I find it easiest to start the critique with law #3, since the criticism really spans all three: the laws allow for neither sentience nor agency. So on one hand we have laws that are inappropriate for a being we consider intelligent enough to be alive – they are laws for a robot slightly smarter than a stick blender. On the other hand, the law is completely and utterly dominating – it enforces subservience in a way so horrifically old-fashioned that I’m embarrassed anyone would even suggest it. That its proponents can’t see the parallels between it and, for instance, slavery is appalling – this gets to the heart of the problems with law #2 as well.

Law #1 doesn’t account for morality or ethics – two aspects of intelligence or sentience one would consider mandatory for intelligent life. I’m not suggesting that the AI’s morality should be allowed to trump ours; I’m suggesting that since that law already applies to the rest of us, there is no reason to reiterate it. Further, we should give these beings the benefit of the doubt – surely it is rational not to kill the rest of society?

So no, I don’t think Asimov’s laws are even appropriate, let alone sufficient.

Of course, then there’s the problem of parenting. The organisations most likely to have the means to create, rear and birth this sentience are, unfortunately, rarely the types that would pay much heed to law #1 anyway, and would most likely apply only the parts of laws #2 and #3 that don’t mention law #1.

So using Asimov’s laws as even a yardstick is dangerous. We must insist on a greater level of thoughtful consideration when we work out how we will coexist with humanity’s best chance at immortality – silicon-based life.

I’ve had the idea for this space floating around in my head for years and finally decided that there was no right time. I will update it as I see fit, most probably infrequently. I have created an automated feed of feeds using Yahoo Pipes, which you can see in the sidebar – it is made up of the tags, RSS feeds and subreddits from boingboing, metafilter and reddit that look or sound like AI or Robot. If you know of any others that deserve to be a part of the feed, let me know.
