Technology

AI Without Borders

By Alexandra Klemer

Artificial Intelligence (AI) is the “capability of computer systems to perform tasks that normally require human intelligence.” Once the province of science fiction, AI now shapes daily routines that people rely on without even realizing it. The rapid diffusion of AI has raised concerns over its practical and ethical consequences, and not all countries agree on how to weigh these tradeoffs. Many countries have enacted new regulations on AI, but in the absence of a rough consensus on first principles, achieving regulatory harmonization across borders will remain an ongoing challenge.

AI in Everyday Life  

One reason AI is popular is its ability to personalize. AI-driven commercial apps and web streaming services direct consumers to products and shows they are likely to enjoy, based on their viewing and purchase history. Because these algorithms rely on private data, malicious actors with access could compromise user privacy, and AI pessimists have issued stark warnings about the danger to civil liberties. Other potential threats lurk as AI applications spread to physical devices like appliances and automobiles. AI optimists, however, argue that these dangers are manageable and that the benefits of AI far outweigh its costs.
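The personalization described above can be illustrated with a toy sketch. Everything here, the titles, the tags, and the `recommend` function, is invented for illustration; real services use far more sophisticated models (e.g., collaborative filtering over large volumes of private user data, which is exactly where the privacy concerns arise). The idea is simply to rank unseen items by how much they resemble what a user has already consumed:

```python
# Toy content-based recommender: ranks unseen shows by how many
# genre tags they share with a user's viewing history.
# All titles, tags, and names are hypothetical, not any real platform's system.

CATALOG = {
    "Space Saga":   {"sci-fi", "drama"},
    "Robot Wars":   {"sci-fi", "action"},
    "Baking Duel":  {"reality", "food"},
    "Galaxy Heist": {"sci-fi", "action", "comedy"},
}

def recommend(history: set[str], top_n: int = 2) -> list[str]:
    # Build a taste profile: the union of tags from everything watched.
    profile = set().union(*(CATALOG[title] for title in history))
    # Only consider titles the user has not seen yet.
    unseen = [t for t in CATALOG if t not in history]
    # Score each unseen title by its tag overlap with the profile.
    ranked = sorted(unseen, key=lambda t: len(CATALOG[t] & profile), reverse=True)
    return ranked[:top_n]

print(recommend({"Space Saga", "Robot Wars"}))  # "Galaxy Heist" ranks first
```

Note that even this trivial version only works because it stores a record of the user's history; the richer the profile, the better the recommendations, and the more sensitive the data becomes.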

Consider the smart car, which optimists believe will reduce casualties by preventing drivers from falling asleep or driving drunk. Smart cars may also ease congestion by automatically selecting optimal routes and by preventing the kinds of crashes that snarl traffic. Pessimists, however, warn that auto-piloted cars will fail when faced with unanticipated hazards, moving obstacles, and fluctuating speed limits; Tesla has already faced lawsuits over “17 fatalities and 736 crashes since 2019 alone.” As AI conveniences become the new norm for making lives easier, experts and policymakers alike are sounding alarms about the consequences, known and unknown.

The View from Washington  

In an effort to pave the way for the harmonization of AI regulations, the U.S. released a National Cybersecurity Strategy (NCS) outlining five critical pillars. Although many U.S. leaders have embraced AI’s potential to revolutionize the future of work, skeptics caution that an adversary like China or Russia could abuse AI for political leverage. With a new Executive Order (E.O.) on AI safety, the U.S. took the initiative to “ensure that America leads the way … managing the risks of artificial intelligence (AI),” while also deterring hostile actors looking to exploit AI for their own gain. Moreover, the U.S. supports international AI regulations because it needs such rules to realign market forces to better protect the country, civil liberties, and democratic values in cyberspace. The NCS stresses rebalancing the “responsibility to defend cyberspace by shifting the burden… away from individuals, small businesses, and local governments, and onto the organizations that are most capable… to reduce risks for all of us.” In the end, the E.O. aims to protect Americans’ privacy, advance civil rights, and “secure the full benefits of a safe and secure digital ecosystem,” demonstrating just how much AI can affect us, even if we choose not to acknowledge it.

The View from Brussels 

Through the recent enactment of the first E.U. AI Act, the E.U. has made significant progress toward the harmonization of AI regulations. Because the E.U. legislates through three branches (a commission, a parliament, and a council), establishing AI consensus among its 27 member states is a task often hindered by conflicting national interests, political gridlock, and concerns over infringement of state sovereignty. Overall, E.U. leaders want to harness the power of AI because they understand that “AI possesses enormous potential.” They fear, however, that if AI is left unchecked and uncontrolled, its miscalculated risks could have devastating impacts that, echoing the U.S. view, threaten state sovereignty and civil liberties in cyberspace. The E.U. believes that AI requires international regulation because regulatory harmonization is the key to “sustaining transnational security and the stability of global partnerships.” (E.U.) E.U. Commission President Ursula von der Leyen reaffirmed this in her remarks at the U.K.’s recent AI Safety Summit, committing the E.U. to being a role model at multilateral forums like the G7 Hiroshima Process, because she recognizes the goal to “understand and mitigate the risks of very complex AI systems” as one shared beyond E.U. borders.

Conclusion 

Global leaders agree that regulatory harmonization on AI is vital; without a rough consensus, the sovereignty, security, and civil liberties of everyone, both in and out of cyberspace, could be compromised. Comparing the U.S. E.O. and the E.U. AI Act, the biggest differences between Washington and Brussels are the choice of a decentralized versus a centralized approach, the assessment and prioritization of risks, and the fact that the E.U. AI Act explicitly exercises centralized power to subject non-compliant businesses and malicious actors to fines “ranging from 35 million euro or 7% of global turnover to 7.5 million or 1.5% of turnover, depending on the infringement and size of the company.” Given that the complexity of E.U. politics pushed its 27 member states toward a centralized strategy, will the E.U.’s centralized and the U.S.’s decentralized approaches prove incompatible as harmonization moves forward? One next step for the U.S. would be to explicitly state numeric penalties and to maintain a running list of bans on certain software and systems, as the E.U. AI Act has done. Global leaders should invest in AI regulatory harmonization now more than ever: it would strengthen pre-existing partnerships while integrating new AI topics of discussion at multilateral forums, helping forge a universal standard on AI ethics and conduct in the long run. Given the rapid pace at which AI is evolving, countries may never mitigate the misuse of AI entirely, but through regulatory harmonization and a rough consensus on AI matters they can begin holding AI abusers responsible.

About the Author

Alexandra Klemer (she/her) grew up in NYC and graduated from the University of Delaware with a bachelor’s in International Relations. Currently, she is a second-year graduate student at American University’s School of International Service (SIS) in Washington, D.C., pursuing a master’s in International Affairs: U.S. Foreign Policy and National Security. Recently, she studied abroad at the Geneva Graduate Institute (IHEID) in Switzerland. Alexandra’s interest in AI stems from global leaders’ growing investment in new critical and emerging technologies as they become more relevant in everyday life.