What Defines High-Risk AI in the EU AI Act?

The European Union recently passed the Artificial Intelligence Act, the first comprehensive law regulating AI in Europe. Its purpose is to ensure that the risks posed by AI are identified and managed responsibly while leaving room for innovation. The Act takes a flexible, risk-based approach to potential harms and provides safeguards to protect individuals and society from risks linked to the development and use of AI technologies.

The Act requires companies to take preventive measures when developing or deploying AI systems in order to mitigate potential risks, including those related to data privacy, discrimination, and safety. Companies must also document how they assess the risks posed by their AI systems. In addition, businesses must inform users about the risks posed by AI technologies and, where required, obtain their consent before such technologies are used.
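To make the documentation requirement more concrete, the sketch below shows one possible shape for a risk-assessment record kept for an AI system. It is only an illustration: the field names (system_name, risk_areas, mitigations, residual_risk and so on) are assumptions for this example, not terminology defined by the Act.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Illustrative only: the fields and categories below are assumptions,
# not terminology or a format prescribed by the EU AI Act.
@dataclass
class RiskAssessmentRecord:
    system_name: str
    assessment_date: date
    risk_areas: List[str]          # e.g. "data privacy", "discrimination", "safety"
    identified_risks: List[str]    # concrete risks found during the assessment
    mitigations: List[str]         # preventive measures taken for each risk
    residual_risk: str             # e.g. "low", "medium", "high"
    reviewer: str                  # person or team accountable for the assessment

record = RiskAssessmentRecord(
    system_name="loan-scoring-model",
    assessment_date=date(2024, 3, 1),
    risk_areas=["data privacy", "discrimination"],
    identified_risks=["proxy variables correlated with protected attributes"],
    mitigations=["feature audit", "fairness testing before each release"],
    residual_risk="medium",
    reviewer="compliance-team",
)
print(record)
```

A structured record like this makes it straightforward to show, on request, how each risk area was assessed and which preventive measures were applied.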

The Act also includes provisions aimed at ensuring the development of trustworthy AI technologies. Companies must adhere to defined standards, provide evidence of their compliance with the criteria set out in the Act, and offer explanations for decisions made by AI systems so that users can understand how and why those decisions were taken.
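One way to support that kind of explainability in practice is to attach a plain-language summary to each automated decision. The sketch below is a minimal illustration of that idea; the ExplainedDecision structure, the factor weights, and the three-factor summary are assumptions made for this example, not a format the Act prescribes.

```python
from dataclasses import dataclass
from typing import Dict

# Illustrative sketch: attach a human-readable explanation to each decision.
# The structure and wording are assumptions, not requirements from the Act.
@dataclass
class ExplainedDecision:
    decision: str
    top_factors: Dict[str, float]  # factor name -> contribution weight
    summary: str                   # plain-language explanation for the user

def explain_decision(decision: str, factors: Dict[str, float]) -> ExplainedDecision:
    # Rank factors by absolute contribution and describe the most influential ones.
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top = ranked[:3]
    summary = (
        f"The system decided '{decision}' mainly because of: "
        + ", ".join(f"{name} (weight {weight:+.2f})" for name, weight in top)
    )
    return ExplainedDecision(decision=decision, top_factors=dict(top), summary=summary)

result = explain_decision(
    "application rejected",
    {"income_to_debt_ratio": -0.62, "payment_history": -0.25, "account_age": 0.10},
)
print(result.summary)
```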

In addition, companies will be required to build human oversight into their AI systems and to carry out regular reviews to confirm that the systems remain compliant with the standards set out in the Act. They will also need to report information about the performance of their AI systems to the relevant authorities.
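As a rough illustration of combining human oversight with performance reporting, the sketch below routes low-confidence outputs to a human reviewer and appends every outcome to a log that could feed periodic reports. The 0.8 confidence threshold and the log fields are assumptions chosen for this example, not thresholds or formats taken from the Act.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch: defer low-confidence outputs to a human reviewer and
# keep a performance log. The threshold and log format are assumptions.
CONFIDENCE_THRESHOLD = 0.8
performance_log = []

def handle_prediction(input_id: str, prediction: str, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        # Below the threshold, do not act automatically; escalate instead.
        outcome = "escalated_to_human_review"
    else:
        outcome = "automated_decision"
    performance_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_id": input_id,
        "prediction": prediction,
        "confidence": confidence,
        "outcome": outcome,
    })
    return outcome

print(handle_prediction("case-001", "approve", 0.93))
print(handle_prediction("case-002", "reject", 0.55))
print(json.dumps(performance_log, indent=2))
```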

Finally, the Act includes provisions on the liability of AI developers and providers. It states that, in the event of harm caused by an AI system, developers and providers will be held liable for any foreseeable material damage, unless it is established that the damage was caused by a third party or exceeded the expected level of risk.

Overall, the EU's Artificial Intelligence Act is a positive step towards a safer environment for developing and using AI technology. It sets out clear requirements for companies that build or deploy AI systems and gives detailed guidance on handling potential risks, helping to ensure that AI is used responsibly without closing the door on innovation in the field.
