In spite of sky-high costs and little in the way of profits, generative AI systems continue to proliferate. The Trump administration has called for a national AI Action Plan to guide America’s burgeoning AI industry, and OpenAI was happy to use that as an opportunity to decry the negative effect of copyright enforcement on AI development. Google has also released its policy proposal, which agrees with OpenAI on copyright while also pressing the government to back the AI industry with funding and policy changes.
Like OpenAI, Google has been accused of piping copyrighted data into its models, but content owners are wising up. Google is fighting several lawsuits, and the New York Times’ lawsuit against OpenAI could set a precedent that AI developers are liable for using copyrighted training data without permission. Google wants to avoid that. It calls for “balanced copyright rules,” but its preference doesn’t seem all that balanced.
The dearth of available training data is a well-known problem in AI development. Google claims that access to public, often copyrighted, data is critical to improving generative AI systems. Google wants to be able to use publicly available data (free or copyrighted) for AI development without going through “unpredictable, imbalanced, and lengthy negotiations.” The document claims that such uses of copyrighted material will not significantly impact rightsholders.
According to Google’s position, the federal government’s investment in AI should also extend to modernizing the nation’s energy infrastructure. Google says AI firms need more reliable power to keep training models and running inference. The company projects that global data center power demand will rise by 40 gigawatts between 2024 and 2026, and it claims the current US energy infrastructure and permitting processes are not up to the task of supplying the AI industry.
If the government truly supports AI, according to Google, it will also begin implementing these tools at the federal level. Google wants the feds to “lead by example” by adopting AI systems with a multi-vendor approach that focuses on interoperability. It hopes to see the government release data sets for commercial AI training and help fund early-stage AI development and research. It also calls for more public-private partnerships and greater cooperation with federally funded research institutions, through initiatives like government-funded competitions and prizes for AI innovation.
Google’s position on AI regulation: Trust us, bro
If there was any doubt about Google’s commitment to move fast and break things, its new policy position should put that to rest. “For too long, AI policymaking has paid disproportionate attention to the risks,” the document says.
Google urges the US to invest in AI not only with money but with business-friendly legislation. The company joins the growing chorus of AI firms calling for federal legislation that clarifies how they can operate. It points to the difficulty of complying with a “patchwork” of state-level laws that impose restrictions on AI development and use. If you want to know what keeps Google’s policy wonks up at night, look no further than California’s vetoed SB-1047, which would have imposed safety requirements on AI developers.
According to Google, a national AI framework that supports innovation is necessary to push the boundaries of what artificial intelligence can do. Taking a page from the gun lobby, Google opposes attempts to hold the creators of AI models liable for the way those models are used. Generative AI systems are non-deterministic, the argument goes, making it impossible to fully predict their output. Google wants clearly defined responsibilities for AI developers, deployers, and end users, though it would prefer that most of those responsibilities fall on others. “In many instances, the original developer of an AI model has little to no visibility or control over how it is being used by a deployer and may not interact with end users,” the company says.
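To see why the non-determinism claim holds in practice: a generative model produces a probability distribution over possible next tokens, and the system samples from that distribution, so the same prompt can yield different outputs on different runs. Here is a minimal, purely illustrative Python sketch; the tokens and probabilities are invented for demonstration and stand in for a real model’s output layer:

```python
import random

# Hypothetical next-token distribution a language model might emit
# after some prompt. Real models produce a distribution like this at
# every generation step, over a vocabulary of many thousands of tokens.
next_token_probs = {"license": 0.35, "train": 0.40, "negotiate": 0.20, "sue": 0.05}

def sample_next_token(probs):
    """Draw one token at random, weighted by the model's probabilities."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Two generations from the identical distribution can differ, which is
# the sense in which even a model's developer cannot fully predict
# what a given user will see.
print([sample_next_token(next_token_probs) for _ in range(5)])
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Decoding can be made deterministic by always picking the most likely token, but deployed chatbots typically sample with some randomness to avoid repetitive output.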
Some countries are moving toward stringent regulations that would force companies like Google to make their tools more transparent. For example, the EU’s AI Act would require AI firms to publish an overview of training data and the possible risks associated with their products. Google believes this would force the disclosure of trade secrets, allowing foreign adversaries to more easily duplicate its work, mirroring concerns that OpenAI expressed in its own policy proposal.
Google wants the government to push back on these efforts at the diplomatic level. The company would like to be able to release AI products around the world, and the best way to ensure it has that option is to promote light-touch regulation that “reflects US values and approaches.” That is, Google’s values and approaches.