The world's first Artificial Intelligence Law
The first Artificial Intelligence (AI) law is now a reality. AI is here to stay and is changing the way we work and do research (among many other areas) by leaps and bounds, with all the advantages and disadvantages that this entails.
Regulating the uses and advances of Artificial Intelligence is not, and will not be, an easy process free of difficulties. However, the compelling reality of AI, together with analysis of the potential risks to which our rights and freedoms may be exposed, has led policymakers to reach a preliminary agreement on the first Artificial Intelligence law in the world.
Europe regulates Artificial Intelligence
Last Friday, December 8, the European Parliament and the Council reached a provisional agreement on the pioneering new law on Artificial Intelligence (AI). The new law, which needs to be ratified by the Parliament and the Council to become binding, will enter into force 20 days after its publication in the Official Journal.
The AI Act will then apply two years after its entry into force, except for some specific provisions:
– 6 months after entry into force: Member States must phase out prohibited systems;
– 12 months: governance obligations for general-purpose AI apply;
– 24/36 months: all the rules of the AI Act become applicable, including the obligations for high-risk systems (Annexes II and III).
Artificial Intelligence Act Highlights
Risk classification:
Minimal risk:
AI can be developed and used subject to existing legislation. Most AI systems used in Europe are expected to fall into this category.
High risk:
AI systems that may have an adverse impact on the safety of individuals or their fundamental rights.
Unacceptable risk:
A very limited set of particularly harmful uses of Artificial Intelligence that contravene EU values because they violate fundamental rights; these will be prohibited.
Prohibited actions
Given the risks that certain uses of Artificial Intelligence pose to our rights and freedoms, it was agreed to prohibit the following:
- Biometric categorization systems that use sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race);
- Non-targeted extraction of facial images from the Internet or CCTV images to create facial recognition databases;
- Recognition of emotions in the workplace and in educational institutions;
- Social scoring based on social behavior or personal characteristics;
- AI systems that manipulate human behavior to circumvent free will;
- Exploitation of people’s vulnerabilities (due to age, disability, social or economic situation).
Remote biometric identification (RBI) systems
These systems may be deployed in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and for strictly defined lists of crimes.
RBI can be used in two ways:
Post-remote:
Artificial Intelligence would be used strictly for the targeted search of a person convicted of, or suspected of committing, a serious crime.
In real time:
Its use would be subject to strict conditions and limited in time and location, for the purposes of:
– targeted searches for victims (kidnapping, trafficking, sexual exploitation)
– prevention of a specific and present terrorist threat,
– the location or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crimes).
Treatment of high-risk AI
High-risk uses are those with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy and the rule of law. In these cases, a data protection impact assessment (DPIA) will be mandatory.
Transparency must permeate all actions
The Artificial Intelligence Act distinguishes between:
General-purpose Artificial Intelligence systems:
The necessary technical and regulatory documentation must be generated to demonstrate compliance with the provisions of the law.
High-impact systems with systemic risk:
These face greater restrictions on their use, such as carrying out model evaluations, assessing and mitigating systemic risks, performing adversarial testing, reporting serious incidents to the Commission, ensuring cybersecurity, and reporting on their energy efficiency.
Support for SMEs
The Act seeks to enable small and medium-sized enterprises to implement AI without pressure from the technology-industry giants, by promoting so-called regulatory sandboxes and real-world testing environments, established by national authorities to develop and train innovative AI before it is placed on the market.
Sanctions
Failure to comply with the rules can result in fines ranging from EUR 35 million or 7% of global turnover to EUR 7.5 million or 1.5% of turnover, depending on the violation and the size of the company.
Pact on Artificial Intelligence: AI reliability
What does this Pact consist of?
The Pact will encourage companies that choose to participate on a voluntary basis to implement the measures foreseen in the AI Act early, in order to prepare them to comply with the requirements of the new Act. The initiative will target key industry players inside and outside the EU.
How will this Pact be implemented?
Through declarations of commitment by companies, specifying the actions they are taking to comply with the AI Act.
The Commission would publish these best practices in order to give visibility to the issue and foster an environment of trust and awareness of the new law's requirements.
When will this Pact begin?
Following the formal adoption of the AI Act, the AI Pact will be officially launched and “pioneer” organizations will be invited to make public their first commitments.
If your company is interested in being part of this initiative, you can join here.
Contact Business Adapter®
Contact us by email at info@businessadapter.es, call 96 131 88 04, or leave your message in this form:
Contact us, we will be pleased to help you: https://businessadapter.es/contacto