The European Artificial Intelligence Act (AI Act), which has now entered into force, aims to foster responsible development and deployment of artificial intelligence in the EU.
The AI Act addresses potential risks to citizens’ health, safety, and fundamental rights. It provides developers and deployers with clear requirements and obligations regarding specific uses of AI while reducing administrative and financial burdens for businesses.
The AI Act introduces a uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach that distinguishes four categories: minimal risk, specific transparency risk, high risk, and unacceptable risk.
As for minimal risk, most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, although companies can voluntarily adopt additional codes of conduct.
Concerning specific transparency risk, systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information, and human oversight.
As for unacceptable risk, AI systems that allow “social scoring” by governments or companies, for example, are considered a clear threat to people’s fundamental rights and are therefore banned.
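To make the four-tier structure easier to see at a glance, the short Python sketch below encodes the examples mentioned above in a simple data structure. It is purely illustrative: the tier names, example systems, and one-line obligation summaries are simplifications drawn from this article, not an official classification under the Act.

from enum import Enum

class RiskTier(Enum):
    # Illustrative (non-official) encoding of the AI Act's four risk tiers
    MINIMAL = "minimal risk"                      # e.g. spam filters, AI-enabled video games
    TRANSPARENCY = "specific transparency risk"   # e.g. chatbots, AI-generated content
    HIGH = "high risk"                            # e.g. AI-based medical software, recruitment tools
    UNACCEPTABLE = "unacceptable risk"            # e.g. "social scoring" by governments or companies

# Hypothetical examples used only to illustrate the tiering described above;
# real classification depends on the Act's annexes and a legal assessment.
EXAMPLE_SYSTEMS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.TRANSPARENCY,
    "recruitment_screening_tool": RiskTier.HIGH,
    "social_scoring_system": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    # Very rough summaries of the obligations sketched in the article.
    return {
        RiskTier.MINIMAL: "No obligations; voluntary codes of conduct possible.",
        RiskTier.TRANSPARENCY: "Inform users they interact with a machine; label AI-generated content.",
        RiskTier.HIGH: "Risk mitigation, high-quality data sets, clear user information, human oversight.",
        RiskTier.UNACCEPTABLE: "Prohibited.",
    }[tier]

if __name__ == "__main__":
    for name, tier in EXAMPLE_SYSTEMS.items():
        print(f"{name}: {tier.value} -> {obligations(tier)}")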
The EU aspires to be the global leader in safe AI. By developing a strong regulatory framework based on human rights and fundamental values, the EU can foster an AI ecosystem that benefits everyone.
For citizens, this means better healthcare, safer and cleaner transport, and improved public services. For businesses, it brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing, while governments can benefit from cheaper and more sustainable services such as transport, energy, and waste management.
The Commission has recently launched a consultation on a Code of Practice for providers of general-purpose artificial intelligence (GPAI) models. This Code, foreseen by the AI Act, will address critical areas such as transparency, copyright-related rules, and risk management. GPAI providers with operations in the EU, businesses, civil society representatives, rights holders, and academic experts are invited to submit their views and findings, which will feed into the Commission’s upcoming draft of the Code of Practice on GPAI models.
The provisions on GPAI will enter into application in 12 months. The Commission expects to finalize the Code of Practice by April 2025. The feedback from the consultation will also inform the work of the AI Office, which will supervise the implementation and enforcement of the AI Act’s rules on GPAI.