Artificial intelligence (AI) could contribute around USD 15.7 trillion to the global economy by 2030, according to PwC’s global AI study.
AI has grown at an unprecedented pace, paving the way for innovations and industries that previously existed only in our imagination.
AI’s rapid advancement, and the consequent rise in potential risks, has posed a challenge, with organizations calling for regulatory frameworks to govern AI’s development and deployment.
As AI’s influence on our lives becomes ever more pervasive, comprehensive AI regulation has become a necessity to ensure the responsible use of its power, promote the safe development of innovation, and safeguard human rights and societal values.
The Need for AI Regulations
With the technological advances AI has made in recent years, many computer scientists and leading technology companies have called for regulations to ensure its safe use.
AI’s social impact has been remarkable, with many businesses embracing its adoption. However, regulations establishing liability and accountability have long been demanded.
The speed of AI’s development, from narrow AI to generative AI, has drawn mixed reactions from developers and users alike. The prospect of artificial superintelligence (ASI) in the near future will only accelerate the need for regulations that safeguard the public and national interests.
The World Economic Forum has emphasized the need for AI regulations and liability systems, while highlighting a core challenge: AI is neither inherently good nor bad, which complicates the implementation of any proposed guidelines.
AI training consumes vast amounts of data, raising concerns about privacy breaches. Data privacy protection should therefore be paramount, underscoring the need for rules on the ethical use of AI. The potential for AI to spread misinformation at scale must also be taken into account to safeguard human rights and safety.
Moreover, AI’s impact on the economy, particularly on job displacement and unemployment, should be addressed and regulated. According to a recent study from the Institute for Public Policy Research (IPPR), up to 8 million jobs in the United Kingdom are at risk of being lost to AI.
Notably, technology giant Amazon scrapped an AI recruiting tool in 2018 after it was found to be biased against women. Trained on resumes submitted in the male-dominated tech industry, the automated recruitment system systematically favored male candidates.
Implementing the EU AI Act serves as a first step toward promoting trustworthy AI and regulating its influence worldwide.
The EU AI Act
Following its entry into force in August 2024, the European Union (EU) AI Act remains the first comprehensive regulatory framework for AI, applying to AI systems and models that are placed on the market, used, or have an impact within the EU.
The Act includes guidelines for developers and deployers to follow for establishing AI-related applications and promoting human-centric and trustworthy AI.
Centering on protecting citizens’ fundamental rights, safety, and health against the harmful effects of AI systems, the AI Act is set to provide harmonized rules for transparency and support for innovation, particularly for small and medium enterprises (SMEs) and start-up companies.
Interestingly, the EU AI Act also requires providers and deployers of AI systems to ensure an adequate level of AI literacy among their staff, equipping them with the knowledge, education, and training needed to use AI systems responsibly.
The new AI rulebook categorizes AI systems by the risk their use cases pose: unacceptable (prohibited), high, limited, and minimal. AI systems posing unacceptable risks and serious threats to human rights will be banned under the AI Act, as will manipulative and deceptive AI that undermines users’ free will.
According to Article 5 of the EU AI Act, deploying AI systems that employ subliminal techniques with the intention to influence or manipulate is prohibited. Such techniques operate beyond a person’s conscious awareness, distorting behavior and prompting decisions and actions the person would not otherwise take.
Cognitive behavioral manipulation, social scoring, biometric identification and categorization, and real-time and remote biometric identification also fall under the ‘AI systems with unacceptable risks’ category.
Furthermore, high-risk AI systems under the Act must comply with strict requirements, including detailed documentation, clear user information, and risk mitigation systems.
High-risk AI systems under this regulation include those operating within the following niches: biometrics, critical infrastructure management and operation, education and vocational training, employment, access to essential private and public services and benefits, law enforcement, migration, asylum and border control management, and administration of justice and democratic processes.
In addition, not all education-related training is considered high-risk, provided the AI system is not used to determine access or admission to an institution.
Exceptions were made by the EU AI Act for military, defense, national security, and scientific research purposes. General-purpose AI models, meanwhile, are governed by a separate set of transparency and documentation obligations under the Act.
Under the Act’s transparency requirements, chatbots must disclose that users are interacting with an AI. Users must also be informed when biometric categorization or emotion recognition systems are used during an interaction.
Moreover, minimal-risk AI systems, such as those using speech recognition or computer vision for image enhancement or modification, face no obligations under the EU AI Act, although AI-generated deepfakes must be clearly labelled as such.
Establishing Safe AI for a Safer Environment
Developing and deploying AI systems or models in line with the guidelines set by the AI Act helps build trust among users while keeping the technology’s power in check.
According to the European Commission, the EU AI Act will apply in full from August 2, 2026, two years after its entry into force. However, provisions related to AI literacy will take effect on February 2, 2025, while rules on general-purpose AI governance and obligations will apply from August 2, 2025.
Meanwhile, the obligations for certain high-risk AI systems will apply from August 2, 2027, 36 months after the Act’s entry into force.
Developers and deployers that fail to comply may face penalties of up to EUR 35 million or 7% of total worldwide annual turnover for prohibited AI practices, and up to EUR 15 million or 3% of worldwide annual turnover of the preceding financial year for violations of other obligations, including those governing high-risk AI systems, according to the European Commission. Supplying incorrect, incomplete, or misleading information about AI systems and models will incur a penalty of up to EUR 7.5 million or 1% of total worldwide annual turnover.
The move will strengthen risk management and accountability for any harm AI may cause and will reinforce human oversight of AI systems.
Swedish telecommunications company Ericsson has endorsed the guidelines set by the EU AI Act, highlighting transparency, privacy and data governance, technical robustness and safety, and societal and environmental well-being. Ericsson emphasized that trust in AI grows with a better understanding of its systems and processes.
Final Thoughts
As the EU AI Act comes into effect and the case for safeguarding common interests against AI’s potential risks gains traction, other countries are expected to follow the EU’s responsible approach to regulation.
The Act’s guidelines and their implementation will serve as a foundation for the wider adoption of AI ethics and accountability, ensuring compliance while preserving AI’s vast potential.
A deep understanding of AI’s impacts will help organizations deploy safer AI, prioritizing accountability and governance, and building trust in the technology that is set to revolutionize our way of life.