The European Union (EU) is working to define the scope of strict new rules on artificial intelligence (AI) technologies such as ChatGPT, which are not yet covered by existing regulations.
Two years ago, the European Commission prepared the first legislative proposal for a framework of new AI rules and presented it to the member states and the European Parliament, Anadolu Agency (AA) recalled.
The proposal would introduce limitations and transparency rules on the use of artificial intelligence systems. If it becomes law, AI systems such as ChatGPT will also have to comply with these rules.
The new AI laws, which are expected to be applied in the same way in all member states, use a risk-based approach. In the commission’s proposal, AI systems are classified into four main groups: unacceptable risk, high risk, limited risk and minimal risk.
Possible bans
AI systems considered a clear threat to people's safety, livelihoods, and rights fall into the unacceptable risk group. The use of systems in these areas is expected to be prohibited.
AI systems or applications that override individuals' free will, manipulate human behavior, or perform social scoring would also be banned.
The high-risk group includes areas such as critical infrastructure, education, surgery, CV assessment for recruitment, credit scoring, exams, immigration, asylum and border management, travel document verification, biometric identification systems, and legal and democratic processes.
Strict requirements would be imposed on high-risk AI systems before they can be placed on the market. These systems must be non-discriminatory, and their results must be observable and subject to adequate human supervision.
Under the rules, security forces would be able to use biometric identification systems in public spaces in exceptional cases such as terrorism and serious crime. Even then, such uses would be limited and subject to authorization by judicial authorities.
In the proposal, chatbots such as ChatGPT fall into the limited risk group. The goal is to ensure that users conversing with a chatbot know they are interacting with a machine.