Survey: 84% of tech execs back copyright law overhaul for AI era


A new survey reveals that U.S. business leaders are increasingly calling for robust AI regulation and governance, highlighting growing concerns about data privacy, security risks, and the ethical use of artificial intelligence technologies.

The study, conducted by The Harris Poll on behalf of data intelligence company Collibra, provides a comprehensive look at how companies are navigating the complex landscape of AI adoption and regulation.

The survey, which polled 307 U.S. adults in director-level positions or higher, found that an overwhelming 84% of data, privacy, and AI decision-makers support updating U.S. copyright laws to protect creators’ work from AI. This sentiment reflects the growing tension between rapid technological advancement and outdated legal frameworks.

“AI has disrupted and changed the technology vendor/creator relationship forever,” said Felix Van de Maele, co-founder and CEO of Collibra, in an interview with VentureBeat. “The speed at which companies — big and small — are rolling out generative AI tools and technology has accelerated and forced the industry to not only redefine what ‘fair use’ means but retroactively apply a centuries-old U.S. copyright law to 21st-century technology and tools.”

Van de Maele emphasized the need for fairness in this new landscape. “Content creators deserve more transparency, protection and compensation for their work,” he explained. “Data is the backbone of AI, and all models need high-quality, trusted data — like copyrighted content — to provide high-quality, trusted responses. It seems only fair that content creators receive the fair compensation and protection that they deserve.”

The call for updated copyright laws comes amid a series of high-profile lawsuits against AI companies for alleged copyright infringement. These cases have brought to the forefront the complex issues surrounding AI’s use of copyrighted material for training purposes.

In addition to copyright concerns, the survey revealed strong support for compensating individuals whose data is used to train AI models. A striking 81% of respondents backed the idea of Big Tech companies providing such compensation, signaling a shift in how personal data is valued in the AI era.

“All content creators — regardless of size — deserve to be compensated and protected for use of their data,” Van de Maele said. “And as we transition from AI talent to data talent — which we’ll see more of in 2025 — the line between a content creator and a data citizen — someone who is given access to data, uses data to do their job and has a sense of responsibility for the data — will blur even more.”

Regulatory patchwork: The push for state-level AI oversight in the absence of federal guidelines

The survey also unveiled a preference for federal and state-level AI regulation over international oversight. This sentiment aligns with the current regulatory landscape in the United States, where individual states like Colorado have begun implementing their own AI regulations in the absence of comprehensive federal guidelines.

“States like Colorado — the first to roll out comprehensive AI regulations — have set a precedent — some would argue prematurely — but it’s a good example of what has to be done to protect companies and citizens in individual states,” Van de Maele said. “With no concrete or clear guardrails in place at the federal level, companies will be looking to their state officials to guide and prepare them.”

Interestingly, the study found a significant divide between large and small companies in their support for government AI regulation. Larger firms (1,000+ employees) were much more likely to back federal and state regulations compared to smaller businesses (1–99 employees).

“I think it boils down to available resources, time and ROI,” Van de Maele said, explaining the disparity. “Smaller companies are more likely to approach ‘new’ technology with skepticism and caution, which is understandable. I also think there is a gap in understanding what real-world applications are possible for small businesses, and that AI is often billed as ‘created by Big Tech for Big Tech’ and requires significant investment and potential disruption to current operating models and internal processes.”

The survey also highlighted a trust gap, with respondents expressing high confidence in their own companies’ AI direction but lower trust in government and Big Tech. This presents a significant challenge for policymakers and technology giants as they work to shape the future of AI regulation.

Privacy concerns and security risks topped respondents’ list of AI-related threats in the U.S., with 64% citing each as a major concern. In response, companies like Collibra are developing AI governance solutions to address these issues.

“Without proper AI governance, businesses are more likely to have privacy concerns and security risks,” Van de Maele said. He went on to explain, “Earlier this year, Collibra launched Collibra AI Governance which empowers teams across domains to collaborate effectively, ensuring AI projects align with legal and privacy mandates, minimize data risks, and enhance model performance and return on investment (ROI).”
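To make the idea of AI governance more concrete, here is a minimal sketch of what an automated pre-training data check might look like. It is purely illustrative and assumes a simple in-house policy: the Dataset fields, the APPROVED_LICENSES set, and the governance_check function are hypothetical and do not describe Collibra’s actual product or API.

```python
# Hypothetical sketch of a pre-training governance gate.
# Nothing here reflects Collibra's product; the fields and
# policy rules are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Dataset:
    name: str
    license: str            # e.g. "CC-BY-4.0", "proprietary", "unknown"
    contains_pii: bool      # holds personally identifiable information?
    consent_obtained: bool  # did data subjects consent to AI training use?


# Assumed organizational allow-list of acceptable data licenses.
APPROVED_LICENSES = {"CC-BY-4.0", "CC0-1.0", "internal-licensed"}


def governance_check(ds: Dataset) -> list[str]:
    """Return policy violations; an empty list means cleared for training."""
    violations = []
    if ds.license not in APPROVED_LICENSES:
        violations.append(f"{ds.name}: license '{ds.license}' not approved")
    if ds.contains_pii and not ds.consent_obtained:
        violations.append(f"{ds.name}: PII present without recorded consent")
    return violations


if __name__ == "__main__":
    corpus = [
        Dataset("web_crawl_2024", "unknown", contains_pii=True, consent_obtained=False),
        Dataset("licensed_news", "internal-licensed", contains_pii=False, consent_obtained=True),
    ]
    for ds in corpus:
        issues = governance_check(ds)
        print(f"{ds.name}: {'BLOCKED: ' + '; '.join(issues) if issues else 'cleared'}")
```

In a real deployment, rules like these would be derived from the legal and privacy mandates the quote refers to, and evaluated against catalog metadata before any dataset reaches a training pipeline.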

The future of work: AI upskilling and the rise of the data citizen

As businesses continue to grapple with the rapid advancement of AI technologies, the survey found that 75% of respondents say their companies prioritize AI training and upskilling. This focus on education and skill development is likely to reshape the job market in the coming years.

Looking ahead, Van de Maele outlined key priorities for AI governance in the United States. “Ultimately, we need to look three to five years into the future. That is how fast AI is moving,” he said. He went on to list four main priorities: turning data into the biggest currency rather than the biggest constraint; creating a trusted and tested framework; preparing for the Year of Data Talent; and prioritizing responsible access before responsible AI.

“Just like governance can’t just be about IT, data governance can’t just be around the quantity of data. It needs to also be focused on the quality of data,” Van de Maele told VentureBeat.

As AI continues to transform industries and challenge existing regulatory frameworks, the need for comprehensive governance strategies becomes increasingly apparent. The findings of this survey suggest that while businesses are embracing AI technologies, they are also keenly aware of the potential risks and are looking to policymakers to provide clear guidelines for responsible development and deployment.

The coming years will likely see intense debate and negotiation as stakeholders from government, industry, and civil society work to create a regulatory environment that fosters innovation while protecting individual rights and promoting ethical AI use. As this landscape evolves, companies of all sizes will need to stay informed and adaptable, prioritizing robust data governance and AI ethics to navigate the challenges and opportunities that lie ahead.


