Ryan Abbott: I personally think if you play it out, that means that machines have solved every social problem we have from disease to climate change. And that we have the luxury of spending all day deepening our connection to other people, engaging in self-improvement or, you know, without me telling you what to do with your life, enjoying the metaverse.
Lise McLeod: What comes to mind when you think of artificial intelligence? Is it something along the lines of HAL from the movie '2001: A Space Odyssey'? Artificial intelligence has been around since the 1950s, when Alan Turing first queried whether machines think. Now its market size is around 328 billion US dollars.
I am Lise McLeod and Professor Ryan Abbott joins me today to talk about his book ‘The Reasonable Robot’ where he argues that the law should not discriminate between AI and human behaviour and proposes a new legal principle that will ultimately improve human wellbeing.
Hello, Ryan, welcome to the podcast. It's a pleasure to have you here today. And to have you talk to us about artificial intelligence.
Ryan Abbott: Well, thank you. I'm delighted to be here.
Lise McLeod: How about we start at the beginning. Could you outline for us your definition of artificial intelligence from your book ‘The Reasonable Robot’? You also cover the concept of whether artificial intelligence thinks or not. Could you please expand on that as well?
Ryan Abbott: Yes, happy to, and you know, it's interesting that 60 years or so after people started using the term, people still have very different views of what AI is. Which is interesting academically, but also problematic when you get to making regulations around AI and people are talking about different things.
And if people are having trouble defining AI, they're having a really hard time talking about artificial intelligence-generated inventions and meaning the same thing. For me, I think of AI very functionally, similar to its original definition: a machine or an algorithm that does something we would ordinarily expect to require human cognition.
And that could be something as simple as operating a cash register to something as complex as driving a car or finding a new drug. But I think about AI in terms of what it does, functionally, not how it is structured or operates. This also gets into some longstanding philosophical questions about whether what machines do is similar to what people do when they complete the same sorts of tasks. Whether machines, in other words, think like a person, or whether what machines do is really very different from what a person does. I mean, clearly machines don't do exactly the same thing a person does, but I'd rather take the view of Alan Turing, who wrote a paper on this in 1950 and said, you know, we really don't know what it means to think, and it doesn't really matter that much if what you want out of some system, particularly a legal system, is to promote some sort of behaviour.
You know, for example, with patent law, we want more socially valuable innovation to be created. It really doesn't matter to me whether a machine comes up with that innovation by doing something very similar to how a person innovates or something really very different. In other words, it wouldn't matter to me whether an invention comes from a person, a group of people, a room full of monkeys on typewriters or a computer.
You know, if we're looking for a new antibody to treat COVID-19 we just want to promote some sort of behaviour that results in that sort of outcome. So for me, the fascinating philosophical question of whether machines think is less relevant to us creating legal systems to promote social benefits from artificial intelligence.
Lise McLeod: For you it's less relevant, but it's seemingly relevant to the systems currently in place. How is artificial intelligence functioning in relation to creativity and innovation at the moment?
Ryan Abbott: Well, it may matter to some of the systems, although frankly, some of these issues are still being worked out. So for example, people have claimed to use machines to make creative works, creative works that would traditionally attract copyright protection, or subsistence of copyright, like a new painting or a song, since the fifties, sixties, and seventies.
You know, today you can go to a variety of commercial tools online, press a couple of buttons and out comes something that really appears very creative. You could think about that process in two sorts of ways. One is, is something functionally creative being made by a machine? Or two, do machines go through the traditional way these tests have been phrased by courts; for example, you know, are things happening in the mind of a machine? If you want to think about it in that more philosophical sense, it might prevent something like copyright from existing.
The U.S. Copyright Office, for example, holds that a computer-generated work, something made without a traditional human author, can't get protection. And they base that policy on very old Supreme Court case law that says, you know, things that get copyright protection are things that come from the mind of an author, and the Copyright Office says "well machines don't have minds, so therefore they can't have copyright." And neither do monkeys for that matter, if we have time for that.
The United Kingdom, for example, has a law that explicitly provides protection for computer-generated works, and they do not get into questions of, you know, originality or what is in a machine's mind if you were to create an analogy for something like that. So the way you think about these things can impact the sorts of real world outcomes from AI behaviour.
Lise McLeod: In the book, you mentioned the Siemens case study, where they were unable to file for protection on multiple artificial intelligence-generated inventions because a natural person couldn't be identified as an inventor. You state that, meanwhile, patent offices have likely been granting patents on AI-generated inventions for decades. Could you please talk a little bit about that?
Ryan Abbott: Sure. Well, the Siemens case study was one that I became aware of from attending the WIPO 'First Conversation on AI and IP', which is a good reason why one should always attend WIPO events. But the chief IP counsel of Siemens explained to the group that they had an industrial design generated from an artificial intelligence that they wanted to patent. And none of the people involved in it were willing to list themselves as inventors. Essentially, they felt that they had done nothing requiring inventive skill and it would be wrong for them morally to be inventors on it.
They had an AI that optimized industrial components; they gave it publicly available information. They told it what they were looking for, which was well known, and the AI spit out a design that was obviously valuable. There's a lot of interesting issues with that. The commercial issue is that Siemens thus doesn't get a patent and now would have a hard time commercializing that product, because once they sold it once, anyone could copy it and Siemens couldn't stop them. But it isn't just, you know, the moral feelings of engineers. In the United States, if you list the wrong inventor on a patent, that patent becomes unenforceable if you did it, you know, other than in good faith. And if you do it deliberately and inaccurately, it opens you up to criminal sanctions.
So, you know, on the patent front, as well as the copyright front, you have issues with whether intellectual property rights can exist at all, as you have machines doing human-like sorts of things. And a whole variety of other interesting issues that this sets off throughout, you know, even the way IP applies to natural persons.
Now, people looking at machines making creative and inventive things was part of my research agenda. And as part of that, I spoke to people who in the 1990s, eighties, and even seventies claimed they were having machines do the thing that traditionally makes a person an inventor, and when they did that, they unanimously said, you know, our lawyers said "You don't want to say that. Just say you're the inventor. There's no rule about this. It's fine. Otherwise you're not going to get a patent." And so that's what they did.
Interestingly, Lord Justice Birss in the United Kingdom, who we may get to in a moment, spoke a week ago at a conference and mentioned that he had had a case about a decade ago involving an AI-generated design for car brake components. You know, both parties had agreed that the design essentially just came out of an AI, but the existence of patent protection for an AI-generated work wasn't an issue in that case.
Lise McLeod: Am I understanding correctly, that in order to get a patent for an invention produced by artificial intelligence then the listed inventor still needs to be a human?
Ryan Abbott: Well, these are all somewhat open questions that aren't harmonized right now in different jurisdictions. And some jurisdictions at the moment are taking the position that if you have an AI make some inventive thing that would normally get a patent, it can't be protected at all because you don't have a traditional human inventor.
Other jurisdictions have said the manner in which an invention is made shouldn't interfere with getting a patent, which is the way we have traditionally done things, but are struggling to figure out, well, how exactly this works once machines are doing that inventive part of all this. Inventorship and entitlement issues vary by jurisdiction. Sometimes they're not really that well formalized and they aren't necessarily harmonized. You know, it is usually the case that an inventor is the first owner of a patent, but not always; for example, in the United Kingdom inventions first belong to employers when something is done within the scope of employment and are never owned by the inventors at all.
In fact, it is the case that most patents are not owned by natural persons or inventors. They're owned by artificial persons in the form of corporations. So companies own the vast majority of patents and a fairly small number of companies own a fairly large number of patents. You know, companies like IBM and Siemens and Huawei.
So the idea that an inventor wouldn't own their patent is an old one, not a new one. But if a machine does invent something, it can't own a patent, because it doesn't have legal personality or the ability to own property. It also wouldn't really make sense to have the machine own a patent, because it wouldn't care about getting a patent and it couldn't exploit a patent. Now, you could change a law to give a machine a right; again, that isn't something unprecedented, lots of companies have rights. You know, you could give a self-driving car an insurance policy, for example, but I think it still doesn't really make a lot of sense. You know, it makes a lot more sense to have the person owning the AI, or the company owning the AI, own any patentable inventions that come out of it.
And that's pretty consistent with both the way patents work with humans and the way that property works generally. So for example, if I own a 3D printer and it makes me a beverage container, I own that beverage container. If I own an AI that makes a design for a beverage container, no reason I shouldn't own that design, at least that is our argument right now in terms of who should own this sort of thing.
Lise McLeod: In the book you outline how artificial intelligence can safeguard human moral rights. Could you expand on that for us?
Ryan Abbott: Sure. Well, if we're back to the patent context, there are two broad categories of rights people think about with patents. You know, one is economic rights, and in places like the United States or the United Kingdom, that's very much considered the dominant motivation behind patents. And patents do three things. They incentivize people to invent. They encourage people to disclose information. And they encourage people to commercialize new products.
You know, so for example, if you are a pharmaceutical company and you invent a new cure for COVID-19 and you want to get it approved by a drug regulatory authority, that sort of thing takes a lot of time and money. And without a patent, or something kind of like a patent, you may not be able to recoup that investment.
So this patent encourages you to invest in hiring research scientists to find new things, to publish things that you would otherwise keep as confidential information, and then to invest in getting something like a drug approved. And that really works the same whether you are hiring a team of people to do that, or employing, so to speak, an AI to do that sort of thing.
The other bucket of rights people associate with patents is moral rights. And this is generally the reason that you have inventors listed at all when, for example, a company owns something - although not every jurisdiction requires that; Israel, for example, doesn't require an inventor to be listed. But if I invent something for my university, I want to be listed on that patent. And at least that is a moral right of acknowledgement that, you know, says to the world I invented this thing, and it can also have economic benefits for me by signalling my productivity, for example, to future employers.
It is also possible that I get some sort of financial benefit from being an inventor, either because of a contract I have with my employer or because, in jurisdictions like the European Union, there are rules that inventors can get, you know, essentially some financial benefit from very successful patents. Now, if you have an AI invent something, you have a couple of options about how you might list an inventor.
If I own an AI that invents 10,000 new drugs, for instance, I could try and list myself as an inventor on all of those things. And if I did that, it would make me the most prolific pharmaceutical researcher of all time, but it wouldn't really be credit that I deserve if all I was doing was telling an AI to solve a problem. You know, by contrast, if an AI invents something, listing it as an inventor would keep someone from claiming credit for work they haven't done. And it would prevent the dilution and diminishing of legitimate human ingenuity. It would also be transparent to the public about how an invention was made, and it would facilitate, you know, claims of entitlement or ownership by establishing which AI solved something.
Lise McLeod: I think this would be a good time for you to share how you've been testing the system with your own artificial intelligence project.
Ryan Abbott: Sure. I and a group of patent attorneys filed for two patents for inventions made by an AI, under circumstances such that we didn't have a traditional human inventor. At least in our opinion, not under US or UK law, although that does vary by jurisdiction.
And we did this for a few reasons. One was that over the course of several years of me working on the project, this had gone from an issue that people sometimes found vaguely interesting to one that certain companies saw as a significant problem for them. And these were largely companies whose business model, in some way, was having AI come up with patentable new things, always with people somehow in the mix there.
But often in ways where it would be difficult to put a finger on someone and say, well, that person's an inventor, you know, not these other people. And so there was this unanswered question, because there had never been a case on this, which was: well, if you don't have a traditional human inventor, what do you do about that? And can you even get a patent?
We filed these two applications with the United Kingdom Patent Office and the European Patent Office. And we did that because they would examine them without looking at inventorship for up to 18 months. So they examined the applications and found that they were substantively patentable, which meant they were new, they had an inventive step and they were useful. And generally, if an application has those characteristics, you get a patent. I could have then put my name on all of them and we would have had a bunch of patents and no one would have thought twice about this, at least until you tried to sue under one of them, but we can come back to that.
So we then corrected the inventorship and said "Well, we don't have a human inventor for this, a machine made this", and those two offices then denied those applications on a formalities basis. And we filed them in 15 other jurisdictions around the world. And they have been denied by the US, the UK, Europe, Germany, and Australia in final decisions of patent offices. And all of those decisions are currently under judicial appeal.
Lise McLeod: I admire your perseverance Ryan.
Ryan Abbott: Well, the machines are very patient, and we have already had some interesting decisions on this. So while those five offices rejected them, and while we have 11 offices still either thinking about them or in some process of dealing with them, we received a patent in South Africa in July, and the AI, which in our case is named DABUS, is listed as the inventor on that patent. And the owner of the AI, Dr Stephen Thaler, is listed as the owner of the patent. So the patent went to the person owning the AI, to Dr Thaler, but the AI is listed as the inventor.
And again, we argued this is the right way to handle something like this - to be transparent about how invention occurred, and to keep someone from taking credit for work they haven't done. But ultimately, if you let the person who owned the AI own the patent, it encourages them to invest in making and using inventive machines to solve socially useful problems.
And ultimately, again, coming back to our first topic. What we want as a society out of a patent system is largely more socially valuable innovation, whether that's cleaner energies or cures for diseases or faster vehicles. And some of that, particularly in the future is going to increasingly come from having machines deeply involved in that process.
Three days after that, the Federal Court of Australia, per Justice Beach, issued a 41-page reasoned decision holding essentially the same thing: that there's no reason why one shouldn't be able to get a patent on an AI-generated invention, that there's no reason why an AI can't be listed as an inventor, and, at least in our case, that Dr Thaler had the clearest claim of entitlement to owning that. But the denials have been upheld so far in the US, where it's now been appealed to the Federal Circuit, which is the intermediate federal court, and they were recently upheld by the UK Court of Appeal, although that court split, with Lord Justice Birss, who I mentioned previously, holding that there should be no prohibition on patenting this sort of thing. The other two judges disagreed, and we're now seeking leave to appeal to the Supreme Court.
Lise McLeod: We will certainly be watching to see how this progresses for all of you.
Can we talk a little bit about another part of the book: how artificial intelligence could change the standard of the skilled person? Could you explain what a skilled person is in patent law, how that could potentially evolve with artificial intelligence, and what direction you think AI could take it?
Ryan Abbott: Sure. Well, let me start with the standard. So, there are a number of requirements you have to meet to get a patent, but the most significant one tends to be that your invention has to be non-obvious to a skilled person in your field. So, something has to be new. Something has to be industrially applicable or useful, but it almost always is. And you have to imagine that if you had essentially your average researcher in your field and they saw your invention, they would think that it was inventive and not obvious. And that obviously is a very challenging test to apply, because it is a bit more subjective than the other tests.
But that's the primary way that most jurisdictions determine whether or not what you have done is worthy of getting a patent because patents have social costs and we don't want to grant them for any advancement. You know, we want to grant them for advancements, economically speaking, that wouldn't have come about or would have come about much slower without having this patent incentive.
Now, that skilled person depends on what field you're in. So it could be kind of a regular person if we were looking at a very simple sort of invention, you know, for something around the house that makes something easier. If we're looking at chemical engineering or, you know, petroleum engineering, then you're looking at a very highly trained person who has a deep knowledge of literature in that field.
And this standard evolves over time. So in Europe, for example, we have already decided that the skilled person may be a team of people where it is standard in some areas that research is done by groups of people and obviously to a group of people more will be obvious than would be obvious to a single person, right?
And the better trained and more educated someone is, the more will be obvious to them. Now, AI to me has already entered the picture. And in fact, I think that sometimes the skilled person may already be a skilled team of people using AI, because AI does two things that make more things obvious. On the one hand, it gives people access to more prior art, or existing information.
So if you're looking to design a new chemical, you know, as a catalyst in an oil manufacturing facility or an oil processing facility, you might not be looking at culinary science, but if you have an AI that's, you know, providing input for you researching this area, it might be pulling information from everywhere.
And so you get basically a superhuman amount of information. And the other thing it does is it gives people certain problem-solving capabilities that they don't otherwise have; pattern recognition in large datasets can be trivial for an AI but very difficult for a person. And so, in some sense, as the facts change on the ground and people start using AI more, this will start making it harder to get patents on things.
That's especially true in the future, when AI is not just sort of inventing occasionally but has become the standard way that problems are solved in a field. So for example, let's say “COVID-25” comes along, and instead of Pfizer and Johnson & Johnson and Moderna going to large teams of people to find new vaccines, they all just present an AI with a pathogen and say, now, what vaccine should we use? Well, then the skilled person has effectively become a skilled person using an inventive machine, or maybe just an inventive machine. And to that inventive machine, a lot more is going to be obvious, because it does a better job at researching than a person, at least if we are in a world where machines have replaced people at doing that.
And I don't think that's the case anywhere right now, but it may well be in the not too distant future, at least in certain areas where machines have these natural advantages; for example, optimizing some sorts of industrial components, or finding new chemicals or repurposing chemicals as medicines. In some sense, though, that is not connected to the inventorship issue, because skilled persons are explicitly not inventors; to inventors, a lot more would be obvious than would be obvious to skilled persons.
Lise McLeod: You also mentioned that there's potentially no end to the sophistication of artificial intelligence, and that at some stage it may be difficult for a person alone to come up with anything that is not obvious. I would like to highlight what you mentioned in the book: that artificial intelligence may one day greatly alter the current patent system as we know it, but that this should not be a cause for concern. Could you please develop that idea a little bit for us?
Ryan Abbott: Sure. Well, there are a couple of concepts in computer science. You know, one is the idea of general artificial intelligence and one is the idea of superintelligence. The idea behind general artificial intelligence is that right now, what we have is narrow artificial intelligence. We have AI that does certain tasks. So, you know, Google's DeepMind, or Alphabet's DeepMind, has an AI that can beat any human being at a game of Go, and it plays Go better than anything has ever played Go in human history. But that Go-playing AI couldn't, you know, design a better brake pad or counsel you on your medication usage. It just does that one thing.
There is a theory that we will develop a machine that won't necessarily think like a person or be like a person, but that could do any intellectual task a person could do. So, you know, it could operate a cash register, drive a car and make a painting, just depending on what you ask it to do.
And if we do ever get that AI - and experts are divided on this, but most of them seem to think we'll have it this century - the first thing you might ask it to do would be to improve its own programming. And it could do that, hypothetically, at a, you know, exponential pace, such that in the not too distant future from having general artificial intelligence, you have a superintelligent AI that could really do anything: it could solve the problems we can imagine and the ones we can't. And at that point you wouldn't really need a person to do anything, because machines would so dramatically out-compete us. Now, that's been explored a bit in the computer science and philosophy literature and so forth, but not so much in the patent literature.
If we do ever have a superintelligent AI, everything will be obvious to a superintelligent AI, which will effectively be the skilled person. So no, you really couldn't get patents anymore. And that would be an okay outcome, because the cost of innovation would be trivial, and to the extent that you do need some further incentive for some sorts of things, like getting new drugs approved, we might have to rely on other sorts of mechanisms, like providing a period of market exclusivity for protected drugs.
You know, some people view that as a dystopia where people are out of work. I personally think if you play it out, that means that machines have solved every social problem we have from disease to climate change. And that we have the luxury of spending all day deepening our connection to other people, engaging in self-improvement or, you know, without me telling you what to do with your day, enjoying the metaverse.
Lise McLeod: In addition to our listeners checking out your book ‘The Reasonable Robot’, where else would you suggest that they take a look to learn more about your work, and to keep up with the developments of DABUS and the like?
Ryan Abbott: Well, for anyone who's interested in the test cases, I operate a website, artificialinventor.com, which I update, when I get to it, with the cases going on around the world - but it is pretty up-to-date.
And then me personally, my website is ryanabbott.com or I'm @DrRyanAbbott on Twitter, or on LinkedIn, and would love to hear from anyone who has an interest in this.
Lise McLeod: Thank you, Ryan. We'll be sure to have a look.
Ryan Abbott: Thank you.
Lise McLeod: I hope that you enjoyed my conversation with Professor Ryan Abbott. Since we spoke, there has been some news regarding the Artificial Inventor Project: the Federal Court of Australia issued its decision in the appeal over allowing an AI system to be named as a patent inventor. The panel held that “only a natural person can be an inventor”, so there are plans to appeal to the High Court. If you would like to learn more about this fascinating topic, his book ‘The Reasonable Robot’, published by Cambridge University Press, can be found through all major book channels, including in audio version.
For more books about IP-related topics, check out the knowledge repository at the WIPO Knowledge Center webpage.
Until next time, and the next Page Points, bye for now.