Who owns parameters that have been trained by one person and fine-tuned by another?
This question is one of the many intellectual property challenges raised by the notion of an “open-source AI,” which the Open Source Initiative (OSI) has grappled with in its work to define the term.
The Cigref also explores it in the third part of its guide to implementing the AI Act, which focuses on the contractual aspects of AI projects seen through the lens of the regulation.
Faults, Corrections… Concepts Potentially in Need of Redefinition
The “black box” effect comes up in the section on compliance assurance. Because it obscures the source of anomalies, it can be difficult for a client to establish that a failure is attributable to the service provider, and just as difficult for the provider to accept corrective measures at no extra cost. The notions of fault and remediation may therefore need to be redefined. The same applies to how compliance criteria are assessed, for example by supplementing traditional Service Level Agreements (SLAs) with performance levels specific to AI outputs, such as an acceptable error rate or a tolerated share of less relevant results.
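As a purely illustrative sketch, an AI-specific SLA indicator of this kind could be expressed as a ceiling on the error rate observed over a sample of reviewed outputs. The 5% threshold and the notion of a reviewed sample below are assumptions made for the example, not figures drawn from the Cigref guide or the AI Act.

```python
# Hypothetical illustration of an AI-specific SLA indicator: an agreed
# ceiling on the error rate measured over a sample of reviewed outputs.
# The 5% threshold is an assumption for the example, not a value taken
# from the AI Act or the Cigref guide.

def within_sla(reviewed_outputs: list[bool], max_error_rate: float = 0.05) -> bool:
    """Return True if the share of outputs flagged as erroneous stays
    within the contractually agreed ceiling."""
    if not reviewed_outputs:
        return True  # nothing reviewed yet, nothing to report
    errors = sum(reviewed_outputs)  # True marks an erroneous output
    return errors / len(reviewed_outputs) <= max_error_rate

# Example: 3 erroneous outputs out of 100 reviewed stays under a 5% ceiling.
sample = [True] * 3 + [False] * 97
print(within_sla(sample))  # True
```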
Who Should Handle Corrective Actions?
The inherently non-deterministic nature of many AI systems makes their behavior hard to predict and to explain. As the Cigref notes, it may be worth shifting the focus from expected results to the means and processes that ensure performance and reliability. A client in charge of testing may ask the provider to take on a duty to advise, while the provider may require the client to document those tests. And since remedying non-compliance can involve costly fine-tuning, it is important to define clearly which corrective measures the provider will undertake and which adjustments would amount to a change in the project’s scope.
Anticipating Changes in Responsibilities and AI Autonomy
On the compliance front, contracts could include clauses allowing both parties to revisit the allocation of risks and responsibilities as the system’s capabilities and scope evolve, including any increase in its degree of autonomy.
In addition, clients may want commitments on the provider’s duty to advise, particularly as they bear responsibility for the quality of the training data. It can be useful to specify, for example in an annexed RACI matrix, each party’s obligations regarding data preparation, selection, and cleansing, and to clarify who is liable in the event of non-conformance caused by poor data quality.
Beware of Third-Party Components
Contractual arrangements should take into account the third-party terms that apply to certain AI components or underlying models, not least because those terms can affect intellectual property rights. Open-source or open-weight models distributed under so-called “contaminating” (copyleft-style) licenses, for instance, may impose restrictions that need to be evaluated carefully.
Licensing for Customized Elements
If an AI system has been tailored, whether through specific software development, model modifications, or fine-tuning, the client may want rights over these adaptations. Transferring rights to particular components can be complicated, however, especially when the provider’s customizations cannot be distinguished from standard system components. In that case, the client might instead negotiate a license to use the customized elements, with a defined duration and territory.
Training Requirements for High-Risk AI Systems
Another key consideration is staff training, especially for high-risk AI systems. The AI Act requires such systems to be supplied to the deploying entity in a way that allows the persons in charge of human oversight to properly understand how the system works and to interpret its outputs. Contracts should clearly define the scope of each party’s training obligations, with the client potentially committing to ongoing training programs.
Planning for Contract Termination
In accordance with the AI Act, certain obligations should be established contractually for high-risk AI systems, such as:
- Implementation of risk management processes
- Provision of technical documentation and user manuals
- Organizational and technical measures to maintain set standards for data accuracy and relevance
- Support in understanding the system’s operation and outputs
Additionally, provisions for contract termination are essential. Key questions include:
- Will the client continue to use the AI system post-termination?
- Is the client able to delegate ongoing operation to a third party?
- Can the provider ensure the complete erasure of trained AI models upon contract termination?
- How will training data and input data be handled?
- What about intellectual property rights if the client has modified the system?