Geoffrey Hinton, Godfather of AI, speaking on Centre Stage during day two of Collision 2023 at Enercare Centre in Toronto, Canada.

How the Godfather of AI Believes We Should Handle AI Counterfeit Content

Featured image credit: Ramsey Cardy / Collision via Sportsfile, CC BY 2.0, via Wikimedia Commons

Artificial intelligence technology can provide many benefits to people in the modern world, but it can just as easily be used to cause harm. Geoffrey Hinton, the godfather of AI, has some ideas on how that harm should be dealt with – in terms of AI counterfeit content, at least. And he’s not the only one. Legislative bodies, at both the national and state levels, are also looking into ways to regulate and deal with the consequences of this technology. Unfortunately, the machine learning industry is growing quickly, and counteracting all the harm it could cause will be difficult.


Content created with AI has already sparked lawsuits over copyright infringement from artists, writers, and content creators, but other problems are looming in this new wave of artificial intelligence as well. Misuse of facial recognition software, illegal swaying of voting populations, discrimination, large-scale job loss, and counterfeit content are only a small handful of the issues we’ll likely face. And, while a full committee and detailed plans would be needed to combat each issue individually, Hinton offered his advice on how we should handle the AI counterfeit content problem in a recent interview. At least that’s one thing to check off the list.

An AI bot, the kind of tool that could easily produce AI counterfeit content
Image by Alexandra_Koch from Pixabay

Geoffrey Hinton’s ideas for penalizing AI counterfeit content

Since stepping away from Google’s artificial intelligence research team to speak openly about the dangers of AI and machine learning, Geoffrey Hinton has become a sort of media sensation. He’s appeared in interviews for most major news outlets, and we want to assure you that this isn’t just a fad. Hinton has worked with AI since the ‘70s, and his ideas have shaped machine learning programs into their modern form. His expertise in the field has earned him a Turing Award, one of the highest honors in the realm of computer science.

Speaking with Amanpour and Company in a 2023 interview, Hinton said he wants AI counterfeit content treated on the same level as counterfeit currency. He fears too much counterfeit content could skew our shared sense of what’s true and what isn’t, like a kind of information gaslighting. “You go to jail for ten years if you produce a video with AI and it doesn’t say it was made with AI,” Hinton says. “That’s what they do for counterfeit money, and this is as serious a threat as counterfeit money.” Hinton adds that knowingly passing along counterfeit content should fall into the same category.
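To make the labeling requirement concrete, here’s a toy Python sketch of how a mandatory “made with AI” disclosure might travel alongside a media file as signed sidecar metadata, loosely in the spirit of provenance efforts like C2PA. Everything here – the key, the sidecar naming, and the attach_ai_disclosure/verify_ai_disclosure functions – is invented for illustration; this is not any real standard or library.

import hashlib
import hmac
import json
from pathlib import Path

# Stand-in for a real signing credential held by the generator (hypothetical).
SIGNING_KEY = b"demo-signing-key"


def attach_ai_disclosure(media_path: str, generator: str) -> Path:
    """Write a sidecar JSON label declaring that the file was AI-generated."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    label = {
        "content_sha256": digest,  # ties the label to this exact file
        "ai_generated": True,
        "generator": generator,
    }
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    sidecar = Path(media_path + ".ai-label.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar


def verify_ai_disclosure(media_path: str) -> bool:
    """Check that an untampered disclosure exists and matches the file's bytes."""
    sidecar = Path(media_path + ".ai-label.json")
    if not sidecar.exists():
        return False  # no label at all: the case Hinton wants penalized
    label = json.loads(sidecar.read_text())
    signature = label.pop("signature", "")
    payload = json.dumps(label, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return hmac.compare_digest(signature, expected) and label.get("content_sha256") == digest

Real provenance schemes are far more involved (public-key signatures, labels embedded in the file itself, chains of edits), but the basic shape is the same: the disclosure is cryptographically bound to the exact content, so stripping or altering it is detectable.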

The logic behind his idea

Honestly, this makes sense to some degree. Beyond the possible spread of misinformation, selling counterfeit work as original is comparable to taking money out of the hands of the people who actually make content – especially if you’re attempting to pass it off as someone else’s original piece, squeeze creators out of a market, or maliciously flood a market to drop the value of a creator’s work. Of course, despite Hinton’s views, not much is being done to stop these things.


An important note: Hinton believes Google is behaving responsibly. He says he simply stepped away so he wouldn’t have to worry about saying something that might affect the company in one way or another.

“AI Ethics” typed out on a typewriter – the main concern with AI counterfeit content
Image by Markus Winkler from Pixabay

Current legislative efforts for regulating AI in the United States

The current legislative efforts to regulate AI in the US look nothing like what Hinton suggests. What did you expect? There are multiple unsettled lawsuits against generative AI companies for copyright infringement, and until those play out, we likely won’t have a legal precedent for this sort of thing. Remember that these companies are owned by fairly rich individuals, and as the United States has shown many times, it can be difficult to regulate highly profitable markets – even when they’re allegedly stealing intellectual property.


Of course, this doesn’t mean nothing is happening. According to the National Conference of State Legislatures at the time this piece was written, the National Institute of Standards and Technology is trying to put together standards that would hold AI companies accountable at a federal level if their machine learning technologies prove untrustworthy. The White House is likewise working on an “AI Bill of Rights,” which is a phrase we never thought we’d write. The plan is supposed to address, among other things, AI used for discriminatory practices and the implementation of opt-out policies.


At face value, the AI Bill of Rights doesn’t look like the most comprehensive plan, but the regulations will likely develop further with time. It’s still early in the AI game. We’re sure there’ll be plenty of hits and misses along the way. As long as one of those misses isn’t an A.W.O.L. AI that destroys humanity, it’ll probably get better.

An AI face drawn in green energy
Image by Techmanic from Pixabay

Individual states are also trying to legislate AI

Some states seem to move faster than the federal government, but that’s just how it goes sometimes. Several states already have some AI regulation – with “some” being an important qualifier. As the National Conference of State Legislatures explains, 18 states put AI regulations into effect in 2023. Granted, almost all of these are minor and don’t do much to rein in machine learning. This likely has to do with how new AI consumer technology is, with most of it having gained popularity in the same year the legislation was put into place. We’ll likely see more regulations soon.


As the Electronic Privacy Information Center shows, the majority of these state regulations concern which information AI is allowed to retain – information gathered from the public. That’s a good step since, much like data mining, AI could very easily pass personal information gleaned from private interactions back to its owning company. That information could then be sold or used for marketing purposes. Ever wonder how you get so many spam phone calls? A lot of that comes from information you didn’t even know you gave away being sold off or collected for spam purposes.

These data protection regulations seem rather difficult to enforce, since they would require consistently monitoring vast amounts of data. With the number of wrongful data collection lawsuits on the rise, we wouldn’t necessarily put all of our trust in these companies – regulations or not.

The European Parliament, where the AI Act was written
Image by Erich Westendarp from Pixabay

Europe was the first to pass legislation regulating AI

Europeans are always beating us Americans to the punch, it seems, and they’ve done it once again by passing comparatively comprehensive AI legislation before the US could get its shit together. The AI Act sorts machine learning systems into levels of risk – minimal, limited, high, and unacceptable – and different regulations apply based on the level of risk the EU assigns. So, if a system is deemed high-risk, it’ll allegedly be monitored throughout the life of the system. These high-risk systems include programs that interact with infrastructure and certain manufactured goods.

Lower-risk AI systems might have to be inspected before launch, but unless an issue is raised with them – such as the counterfeit content problem Hinton is worried about – they most likely won’t have to be inspected at all. These include generative content systems. It would seem the little guy (creators) might still be out of luck.

At the end of the day, it’s the human element using generative content that’s the real issue, and it would be difficult for any government to monitor the constant stream of content these programs produce. Honestly, it isn’t feasible to do on the program side, since no one knows what the content is being used for until it’s actually put to use. This is likely why Hinton suggests such harsh penalties for those who make AI counterfeit content for distribution. Wouldn’t it be nice if legislation followed his advice? It could, after all, help ensure the survival of your favorite media creators.
