EU is proposing regulations on AI
The European Commission is proposing the first-ever legal framework on the use of artificial intelligence, aimed at promoting the development of AI while addressing the high risks it may pose to safety and fundamental rights. “A highly relevant approach by the EU. Now it's important that the arbitration industry does not fall behind when it comes to leveraging the power of AI”, says SCC Head of Business Development, Lise Alm.
The European Commission has proposed a first-ever legal framework on AI together with a new Coordinated Plan with Member States. The package includes both initiatives to promote AI and a legal framework intended to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.
In the legal industry, AI is increasingly being used for tasks such as research and translation, and AI tools are now also being developed specifically for selecting experts, counsel and arbitrators, and even for predicting case outcomes. As there are so many possible applications for AI technology in arbitration, the SCC closely follows technological progress, current debates, political initiatives and the development of international legislation. Our Business Development Manager Lise Alm participates in several working groups on these issues, for example within UNCITRAL.
– The European Commission proposal seems like a highly relevant approach; it is extremely important that we get these areas right. I wonder, however, how high the barrier will be to create solutions in legal tech, especially for arbitration or the judiciary. With all the regular hoops you have to jump through to get your products out there, this will certainly add complexity and cost. It's important that the arbitration industry does not fall behind when it comes to leveraging the power of AI, says Lise Alm.
She summarizes the proposal as follows:
The framework follows a risk-based approach in which AI systems are rated as posing unacceptable, high, limited or minimal risk. While most applications are expected to be minimal risk, two of the high-risk areas are especially relevant to arbitration:
- Law enforcement that may interfere with people's fundamental rights (e.g. evaluating the reliability of evidence).
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk systems will be subject to strict obligations before they can be put on the market, including:
- Adequate risk assessment and mitigation systems
- High quality datasets
- Logging and traceability
- Detailed documentation
- Information to the user
- Appropriate human oversight
- Robustness, security and accuracy
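For readers building legal-tech tools, the risk tiers and the high-risk obligations above can be sketched as a simple compliance checklist. This is purely illustrative: the tier names and the obligation list are paraphrased from the proposal, while the code structure, names and behaviour are assumptions, not anything defined by the AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers named in the Commission's proposal."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, subject to strict obligations
    LIMITED = "limited"            # lighter (e.g. transparency) obligations
    MINIMAL = "minimal"            # no additional obligations

# Pre-market obligations the proposal lists for high-risk systems (paraphrased).
HIGH_RISK_OBLIGATIONS = [
    "adequate risk assessment and mitigation systems",
    "high quality datasets",
    "logging and traceability",
    "detailed documentation",
    "information to the user",
    "appropriate human oversight",
    "robustness, security and accuracy",
]

def obligations(tier: RiskTier) -> list[str]:
    """Return the pre-market obligations for a given risk tier."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Unacceptable-risk systems may not be placed on the market")
    return HIGH_RISK_OBLIGATIONS if tier is RiskTier.HIGH else []
```

For example, a hypothetical arbitrator-selection tool classed as high risk would need to satisfy all seven obligations before being put on the market, while a minimal-risk tool would face none of them.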
Read more about the proposal for an Artificial Intelligence Act.