EU passes draft law on AI regulation: What’s prohibited, company obligations and more – Times of India

European Union (EU) lawmakers have voted in favor of new, tougher rules to regulate the use of artificial intelligence (AI) technology. They passed a draft law, known as the AI Act, which aims to place restrictions on how the technology can be used based on acceptable levels of risk.
The landmark draft law spells out what is banned outright and makes it mandatory for companies like ChatGPT maker OpenAI to disclose the data used to train their models. The European Parliament adopted the draft with an overwhelming majority: 499 votes in favor, 28 against and 93 abstentions.
It should be noted that the final version of the AI Act, which seeks to establish a global standard for technology used in everything from automated factories and AI chatbots to self-driving cars, won’t be passed until later this year. On Wednesday, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act ahead of negotiations with EU member states on the final shape of the law.
“The rules will ensure that AI developed and used in Europe is fully consistent with EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination and social and environmental well-being,” the Parliament added.
AI Act: Prohibited Practices
The rules follow a risk-based approach and establish obligations for those deploying AI systems. These obligations will depend on the level of risk posed by the AI.
This means that AI systems that pose an unacceptable level of risk to people’s safety will be prohibited, such as those used for social scoring (classifying people based on their social behavior or personal characteristics).

“Real-time” and “post” remote biometric identification systems in publicly accessible spaces, along with biometric classification systems using sensitive characteristics (such as gender, race, ethnicity, citizenship status, religion or political orientation), will be banned under the upcoming Act.
Predictive policing systems (based on profiling, location or past criminal behavior), as well as emotion recognition systems in law enforcement, border management, workplaces and educational institutions, will also be banned.
Lastly, untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases will also not be allowed, as it violates human rights and the right to privacy.
AI systems that could cause significant harm to people’s health, safety, fundamental rights or the environment will be classified as high-risk applications. European lawmakers also added AI systems used to influence voters and election outcomes, as well as the recommender systems used by social media platforms, to the high-risk list.

What should AI companies do
Companies such as Google, OpenAI, Microsoft and others will have to assess and mitigate potential risks (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register their models in the EU database before they can be released on the EU market.
Generative AI systems based on such models, such as ChatGPT, would have to comply with transparency requirements, including disclosing that content was AI-generated, which also helps distinguish ‘deep fake’ images from genuine ones.
Companies must also ensure safeguards against generating illegal content and make publicly available a summary of the copyrighted data used to train their models.

What is allowed
Some exemptions have been made to promote AI innovation and support small and medium enterprises (SMEs), the lawmakers said.
The new law will boost regulatory sandboxes, or real-life environments, set up by public authorities to test AI before it is deployed. The rules also give citizens powers to lodge complaints about AI systems and seek clarifications on decisions based on high-risk AI systems.