BY PASCHAL OCHANG
The RAISE project recently contributed to the “Consultation on Copyright and AI” issued by the UK Intellectual Property Office (IPO). This consultation seeks input from stakeholders on how copyright law interacts with the growing use of artificial intelligence. In their response, RAISE provided valuable insights, focusing on how copyright frameworks should evolve to address the unique challenges posed by AI technologies.
Key points highlighted by RAISE include the need for clearer guidelines on the ownership of works generated by AI, the balance between fostering innovation and protecting creators’ rights, and the importance of ensuring that copyright laws remain adaptable to the rapid advancements in AI. The RAISE team emphasised the importance of forward-thinking policies that not only safeguard intellectual property but also encourage the responsible use of AI in creative and innovative fields.
This engagement marks an important step in ensuring that the evolving landscape of AI is thoughtfully addressed within the realm of copyright law.
The response by the RAISE project is as follows:
The RAISE (Responsible Generative AI for SMEs in the UK and Africa) project (https://raise-project.uk/), based at the University of Nottingham and comprising researchers from the Responsible Digital Futures group (https://www.responsible-digital-futures.org/), explores the ethical, legal, and technical considerations surrounding generative AI adoption by SMEs. Our research emphasises the importance of providing guidance, particularly for SMEs. Drawing on insights from the RAISE Guidelines, our human-centric design workshop, and related case studies, we provide the following responses to the Intellectual Property Office’s consultation. Our key focus is on technical standards, encouraging research and innovation, and promoting transparency.
Technical Standards
These questions relate to section C.2, Technical standards, of the Consultation on Copyright and Artificial Intelligence.
Is there a need for greater standardisation of rights reservation protocols?
Yes, greater standardisation is essential to ensure a transparent and fair system for both AI developers and content creators, especially those within the SME and startup ecosystem. The current fragmented approach, in which SMEs and AI-focused businesses adopt different rights reservation protocols, creates uncertainty and inconsistency in how guidance is adopted, translated into standard operating procedures, and enforced. The RAISE project guidelines emphasise the importance of clear governance structures in AI adoption, particularly for SMEs that lack the resources to navigate complex, non-standardised protocols.
A standardised rights reservation protocol should:
- Be interoperable across platforms and across businesses adopting or applying AI in the same context or application area (e.g. health, financial investment, or art).
- Support granular control, allowing right holders to specify permissible uses (e.g., search indexing vs. generative AI training).
- Include machine-readable metadata that AI systems must recognise and respect (see the sketch after this list).
- Provide transparent compliance mechanisms, ensuring AI firms disclose their content usage practices.
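To make the machine-readable metadata point concrete, the sketch below shows how an AI developer might check a rights-reservation signal before ingesting a page. The header names follow the W3C community draft TDM Reservation Protocol (TDMRep); they are used here as one illustrative candidate for a standardised protocol, not as a finalised or endorsed standard.

```python
# A minimal sketch, assuming TDMRep-style HTTP headers ("tdm-reservation",
# "tdm-policy") as the standardised rights-reservation signal. Names and
# semantics follow the W3C community draft and are illustrative only.
from urllib.request import urlopen

def may_use_for_training(url: str) -> bool:
    """Return False when the publisher has reserved text-and-data-mining rights."""
    with urlopen(url) as response:
        reservation = response.headers.get("tdm-reservation")
        policy = response.headers.get("tdm-policy")
    if reservation == "1":
        if policy:
            # Rights are reserved but licensing terms are published at `policy`;
            # mining is only permissible after satisfying those terms.
            print(f"Rights reserved; licensing policy at {policy}")
        return False
    return True  # no reservation signalled

if __name__ == "__main__":
    print(may_use_for_training("https://example.com/"))
```

A standardised protocol of this kind would let a compliance check reduce to a few lines of code, which is precisely what lowers the burden for SMEs that lack dedicated legal and technical teams.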
How can compliance with standards be encouraged?
Ensuring compliance requires a combination of technical solutions, regulatory incentives, and industry collaboration. Drawing from the RAISE guidance framework, we propose the following:
- Industry-led frameworks & accreditation:
- AI developers should adopt certification schemes that verify compliance with standardised rights reservation protocols. These certification schemes should also be extended to organisations adopting various AI solutions in their services and products.
- An industry-wide code of conduct could encourage AI-focused businesses and developers to commit to best practices.
- Technical enforcement & transparency mechanisms:
- Blockchain-based registries could be used to record and track permissions across platforms (a minimal sketch follows this list).
- AI firms should be required to publish transparency reports outlining their data sourcing and rights compliance efforts.
- Incentivising adoption among SMEs, developers & right holders:
- Many SMEs, startups, and AI-focused businesses struggle to navigate complex AI policies and frameworks. Clearer, standardised systems will reduce the burden on businesses, especially SMEs, charities, and startups that lack legal and technical expertise.
- Financial or regulatory incentives (e.g., tax benefits, compliance grants) could encourage smaller businesses to adopt best practices leading to greater compliance.
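As a rough illustration of the registry idea above, the sketch below implements a tamper-evident, append-only permissions log using a simple hash chain; a full blockchain deployment would add distribution and consensus on top of the same principle. All field names are our own assumptions for illustration.

```python
# Minimal sketch of a tamper-evident, append-only permissions registry.
# A hash chain gives the auditability the bullet above gestures at;
# all field names are illustrative, not a proposed standard.
import hashlib
import json
import time

class PermissionRegistry:
    def __init__(self):
        self.entries = []

    def record(self, work_id: str, rights_holder: str, permitted_uses: list[str]):
        """Append a permission entry chained to the previous one."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "work_id": work_id,
            "rights_holder": rights_holder,
            "permitted_uses": permitted_uses,  # e.g. ["search-indexing"]
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks a link."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != recomputed:
                return False
            prev = entry["hash"]
        return True

registry = PermissionRegistry()
registry.record("work-001", "Example Rights Holder", ["search-indexing"])
print(registry.verify())  # True unless an entry has been altered
```

The design point worth noting is that tamper evidence, not decentralisation, is what right holders and auditors primarily need: any retroactive edit to a recorded permission breaks the chain and is detectable.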
Should the government have a role in ensuring this and, if so, what should that be?
The government should play a facilitative role, ensuring that standards are adopted fairly, transparently, responsibly, and equitably. However, regulation should be outcomes-focused rather than overly prescriptive, allowing flexibility for technical advancements. As identified in the outcomes of the RAISE workshops, a key call from startups and SMEs is for self-regulation, with the government overseeing the adoption of standards.
The following recommended actions can facilitate the government’s role in promoting technical standards:
- Mandate AI firms to recognise standardised protocols:
- Require AI developers to respect opt-out signals embedded in metadata.
- Introduce a minimum compliance requirement for AI firms operating in the UK, while also ensuring compliance by any subsidiaries operating outside the jurisdiction.
- Promote international alignment:
- Engage with the EU, IAB, and other global bodies to develop harmonised frameworks that prevent regulatory fragmentation. Also key to this alignment are incubator hubs and businesses aiming to develop or adopt AI.
- Support SMEs & right holders in compliance efforts:
- Provide funding so that SMEs and startups that integrate, or are looking to integrate, AI into their services and products can also integrate rights management tools.
- Develop accessible, government-backed guidance on navigating AI rights protections.
- Enforce transparency obligations:
- Require organisations using AI to disclose training data sources and provide accessible means for right holders to verify whether their content has been used (a hypothetical verification sketch follows this list), increasing transparency while also enhancing accountability.
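The sketch below suggests what an "accessible means to verify" could look like in practice, assuming developers publish a manifest of SHA-256 fingerprints of the files they ingested. This manifest format is our assumption for illustration, not an existing standard.

```python
# Hypothetical: check a work against a published training-data manifest,
# assumed here to be a set of SHA-256 hex digests of ingested files.
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def work_was_ingested(path: str, manifest_hashes: set[str]) -> bool:
    """True if the right holder's file appears verbatim in the manifest."""
    return fingerprint(path) in manifest_hashes
```

Exact-hash matching only catches verbatim ingestion; detecting near-duplicates or transformed copies would need fuzzy or perceptual matching, which is one reason a standardised, government-backed verification mechanism matters.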
Encouraging research and innovation
These questions relate to section C.6, Encouraging research and innovation, of the Consultation on Copyright and Artificial Intelligence.
Does the existing data mining exception for non-commercial research remain fit for purpose?
While the UK’s current data mining exception (Section 29A CDPA) provides essential support for non-commercial research, its limitations may hinder AI-driven research and innovation. Many SMEs and research institutions conduct research that blends non-commercial and commercial applications, so the unintentional exclusion of commercial research can become a key challenge. Unlike Article 3 of the EU’s DSM Directive, the UK exception does not allow commercial research, creating barriers for SMEs exploring AI-driven innovation. Furthermore, the UK exception does not extend to databases, whereas the EU exception does. This creates difficulties in areas like healthcare AI, corporate intelligence, and financial analytics, where structured datasets are crucial. Our key recommendations are to:
- Expand the UK exception to include commercial research, similar to the EU’s approach, to support SME innovation.
- Extend the exception to cover databases, ensuring fair access for AI-driven research beyond copyright-protected works.
- Introduce clearer licensing mechanisms to reduce legal ambiguity and allow SMEs and independent researchers to use AI responsibly.
Should copyright rules relating to AI consider factors such as the purpose of an AI model, or the size of an AI firm?
Copyright regulations should be proportionate and consider both the purpose of an AI model and the size of the firm. The RAISE project’s research on SMEs highlights the risks of applying one-size-fits-all regulations that disproportionately impact smaller innovators. There should be purpose-driven differentiation: for example, AI models used for research, healthcare, education, or financial risk assessment have different ethical and economic implications from large-scale generative AI models used in entertainment. Regulatory burdens should also distinguish between AI models that contribute to the public good and those designed for commercial content generation. Regulations should therefore support responsible but feasible compliance. SMEs, startups, and even charities often lack the legal and financial resources to navigate complex copyright compliance requirements, so a tiered compliance framework (like that of the EU AI Act) could reduce administrative burdens on SMEs while ensuring accountability for large AI firms.
Transparency
These questions relate to section C.4, Transparency, of the Consultation on Copyright and Artificial Intelligence.
Question 17: Do you agree that AI developers should disclose the sources of their training material?
AI developers should disclose the sources of their training data to ensure compliance with copyright law, foster trust, and enable accountability. Transparency in training data use helps right holders understand how their works are utilised and allows AI users, such as businesses and end users of products built on such AI models, to assess the reliability of generative outputs. However, the level of disclosure must be balanced to avoid excessive administrative burdens, especially for SMEs and smaller AI developers. While large AI firms should be required to publish structured summaries of their training data sources, smaller developers and SMEs, including charities that carry out in-house development, should have access to simplified reporting mechanisms that allow them to comply without significant resource constraints.
Question 18: What level of granularity is sufficient and necessary for AI firms when providing transparency over the inputs to generative models?
A summary-level disclosure approach is both sufficient and necessary for organisations to promote transparency through responsibility. This would require developers to publish a categorised list of the datasets and repositories used in training, along with an indication of whether the data was sourced from open-access materials, licensed content, or proprietary datasets. Developers need not disclose every individual work used in training, as this would be impractical given the scale of data collection. Instead, transparency requirements should focus on ensuring that AI developers provide meaningful summaries that allow for public scrutiny while maintaining operational feasibility; one possible shape for such a summary is sketched below. The key focus here is to promote responsibility, which in turn promotes transparency and accountability.
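The sketch below illustrates one possible shape for such a summary-level disclosure: datasets grouped by provenance category rather than itemised work by work. The field names, categories, and model identifier are assumptions made for illustration, not a proposed schema.

```python
# Illustrative summary-level disclosure: datasets categorised by
# provenance rather than listed work by work. All names are hypothetical.
import json

training_data_summary = {
    "model": "example-generative-model-v1",  # hypothetical model identifier
    "datasets": [
        {"name": "Public web crawl subset", "provenance": "open-access"},
        {"name": "News archive 2010-2024", "provenance": "licensed"},
        {"name": "Internal product corpus", "provenance": "proprietary"},
    ],
}

print(json.dumps(training_data_summary, indent=2))
```

A categorised summary of this kind is cheap for an SME to produce, yet still gives right holders and regulators a meaningful starting point for scrutiny.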
Question 19: What transparency should be required in relation to web crawlers?
Web crawlers should be required to disclose their ownership, purpose, and the types of content they collect. They should also implement mechanisms that respect the rights of content creators, such as recognising and complying with robots.txt directives and metadata signals that indicate a preference for exclusion from AI training. Regular audits and reporting mechanisms should be introduced to verify that web crawlers operate within ethical and legal boundaries. By requiring AI-based organisations and developers to provide transparency about their web crawling practices, content owners will have better control over the use of their works in AI systems, products, and services.
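As a minimal sketch of the compliance behaviour described above, the snippet below shows a crawler that identifies itself and consults robots.txt before fetching a page, using Python's standard-library parser; the user-agent string is an invented example.

```python
# Minimal sketch: a self-identifying crawler that checks robots.txt
# before fetching. "ExampleAITrainingBot" is a hypothetical user agent.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

CRAWLER_USER_AGENT = "ExampleAITrainingBot/1.0"

def allowed_to_fetch(url: str) -> bool:
    """Respect the site's robots.txt directives for this crawler."""
    parts = urlparse(url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # download and parse the site's robots.txt
    return robots.can_fetch(CRAWLER_USER_AGENT, url)

if __name__ == "__main__":
    print(allowed_to_fetch("https://example.com/some-article"))
```

Equivalent checks for AI-specific opt-out metadata, such as the TDMRep-style signals sketched earlier, would sit alongside this robots.txt check.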
Question 20: What is a proportionate approach to ensuring appropriate transparency?
A proportionate approach to transparency should differentiate between large-scale AI developers, SMEs, and research-focused AI initiatives. Large AI firms should be held to higher transparency standards, including detailed reporting on the datasets used and mechanisms for right holders to verify the use of their works. SMEs and startups, however, should not face excessive compliance burdens that could hinder innovation. A tiered reporting framework that scales requirements based on firm size and AI model complexity would ensure a balanced approach. Furthermore, government-backed initiatives, such as a centralised transparency registry, could provide standard reporting templates to simplify compliance for smaller organisations and AI-based initiatives such as charities that make use of AI.
Question 21: Where possible, please indicate what you anticipate the costs of introducing transparency measures on AI developers would be.
The costs of introducing transparency measures will vary depending on the size and nature of the AI firm. Large AI companies may need to allocate significant resources to compliance teams, dataset documentation, and automated tracking systems, leading to estimated costs in the millions per year. SMEs and startups, including bootstrapped businesses and charities, however, may struggle with the financial burden of detailed data disclosure requirements. For smaller firms, the cost of implementing transparency measures could range from tens to hundreds of thousands of pounds, potentially creating barriers to entry in the AI market. To mitigate these challenges, the government should consider financial support mechanisms such as research and development tax credits or grants for SMEs implementing responsible AI practices. This is a key recommendation by the RAISE project.
Question 22: How can compliance with transparency requirements be encouraged, and does this require regulatory underpinning?
Compliance with transparency requirements can be encouraged through a combination of regulatory enforcement and incentive-based mechanisms. A voluntary certification scheme, such as a “Transparent AI” badge or a “Responsible AI” badge, could encourage firms to adopt best practices in data disclosure while allowing them to demonstrate their commitment to ethical AI development. Additionally, government-backed incentives, including funding for compliance tools and technical support for SMEs, would help ease the burden of transparency obligations. However, regulatory underpinning is necessary to ensure that larger AI firms comply with disclosure requirements and do not exploit loopholes. Any regulatory framework should be designed to support responsible innovation while maintaining accountability in AI development.
Question 23: What are your views on the EU’s approach to transparency?
The EU’s approach to transparency under the AI Act provides a useful model for balancing disclosure and feasibility. By requiring AI developers to provide a sufficiently detailed summary of their training data rather than exhaustive lists, the EU ensures that copyright compliance and accountability are maintained without imposing unmanageable burdens on AI firms. This approach aligns with international best practices and would facilitate interoperability between UK regulations and the global AI standards beginning to emerge in different jurisdictions. However, it is important to ensure that SMEs and startups do not face disproportionate challenges in meeting transparency requirements. The UK should adopt a similar approach but include specific provisions to support smaller AI firms in fulfilling their obligations without stifling responsible innovation.