The UK government is considering introducing targeted binding requirements for select companies developing highly capable artificial intelligence (AI) systems, marking a significant step in its response to the AI whitepaper consultation. Alongside this, the government has committed to investing over £100 million to support its proposed regulatory framework for AI, aiming to ensure the safe and innovative use of AI technology across various sectors.
Government response to whitepaper consultation
Following the publication of the AI whitepaper in March 2023, which outlined pro-innovation proposals for regulating AI, the government received 406 submissions during the public consultation period. Reaffirming its commitment to the whitepaper’s proposals, the government emphasized an approach that empowers existing regulators to tailor rules to the specific contexts in which AI is utilized. The aim is to maintain agility while fostering safe and responsible AI innovation, positioning the UK as a leader in the field.
Consideration of binding requirements
In its response, the government set out preliminary plans to introduce binding requirements for developers of the most advanced AI systems, particularly those deemed highly capable. Acknowledging the rapid pace of AI development and the evolving risks that accompany it, the government highlighted the need for targeted measures to ensure accountability and public safety. While existing regulations may address certain aspects of AI deployment, the government recognized that they fall short of effectively mitigating the risks posed by highly capable general-purpose AI systems.
Investment in AI safety and research
To support its regulatory framework, the government announced substantial funding, including nearly £90 million to establish nine research hubs in key fields such as healthcare and mathematics. A further £19 million will go towards accelerating the deployment of responsible AI projects, alongside £2 million of Arts & Humanities Research Council (AHRC) funding for work on defining responsible AI. In addition, £10 million will be dedicated to preparing and upskilling UK regulators to effectively monitor and address AI use within their sectors.
In a bid to enhance transparency and collaboration, key regulators such as Ofcom and the Competition and Markets Authority (CMA) have been tasked with publishing their approaches to managing AI by April 30, 2024. This initiative aims to foster trust among businesses and citizens while ensuring regulatory readiness in the face of rapid technological advancements.
Challenges in copyright regulation
However, the government faces challenges in formulating a code of practice for the use of copyrighted material in training AI models. Disagreements between AI firms and rights holders over fair compensation for copyrighted works have delayed the finalization of the voluntary code of practice. Despite the government's efforts to engage both sides, consensus remains elusive.
The House of Commons Science, Innovation and Technology Committee has raised concerns about whether existing copyright law is adequate to manage the use of copyrighted material in AI models. The committee urged the government to address these challenges promptly, emphasizing the importance of protecting the rights of content creators while promoting innovation in AI development.
As the UK government navigates the complex landscape of AI regulation, it remains committed to fostering innovation while prioritizing safety and accountability. With targeted binding requirements under consideration and significant investments in AI safety and research, the UK aims to maintain its position as a global leader in AI development. However, challenges such as copyright regulation underscore the need for continued engagement and collaboration between policymakers, industry stakeholders, and rights holders to address emerging issues effectively.
Source: https://www.cryptopolitan.com/uk-government-considers-binding-requirements/