In 2016 the European Union issued the Payment Services Directive (PSD2) to govern open banking. US legislators are finally getting to it now, with a framework rather similar to PSD2.
Later that same year the EU addressed data privacy with its General Data Protection Regulation (GDPR). The topic remains a notable gap in US federal law, but GDPR served as the obvious template for 2018’s California Consumer Privacy Act (CCPA).
This month the EU began the formal process of regulating artificial intelligence with the issuance of its AI Act.
Does anyone detect a pattern here?
Typically, business leaders’ mantra is “the less regulation, the better.” That’s an oversimplification, however; many also crave regulatory clarity, as well as the consistency that comes with a federal statute as opposed to the need to navigate dozens of unique state laws.
AI is one area where the desire for guardrails and guidance seems to be a consensus view. Unfortunately, the devil’s in the details, as they say, and a US law does not yet appear to be on the horizon. If past is prologue, however, we can likely find some clues about its ultimate direction in the EU’s AI Act.
It’s not as if the EU pulled its rulebook together overnight; early drafts of the legislation date back to 2021, although generative AI’s rapid advancement undoubtedly prompted numerous revisions. The AI Act technically takes effect in May, although companies will have as much as three years to comply.
Notably, the EU has not flagged financial services as a key focus area; the AI Act first tackles what it deems “high risk” categories, namely policing, education, and healthcare. Nonetheless, it’s fairly straightforward to extend much of the underlying logic to financial settings. For instance, the declaration that “AI should be a human-centric technology” suggests that humans should remain part of all decision-making processes. This may be a nod to the predicted arrival of artificial general intelligence, which we discussed with AI thought leader Zach Kass on this podcast.
The AI Act also bans “social scoring systems that could lead to discrimination,” as well as the use of techniques like facial recognition to enable remote biometric identification of individuals in public places. Further afield, it prohibits “subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making.”
Clarifying questions are sure to arise during the Act’s implementation, many of which will likely come from innovators looking to test the boundaries of permitted uses. There’s also the inevitable issue of enforcement, which gives rise to an “if you outlaw guns, only outlaws will have guns” style debate. Perhaps in response, the EU has made the interesting call of largely exempting open-source AI models that provide full transparency into their detailed architecture and parameters. A recurring theme of the Act is the establishment of trust in AI, which transparency will certainly help achieve.
The MIT Technology Review has published some quite readable summaries of the AI Act. We’ll be following developments as the law rolls out this summer; odds are good that US regulators and elected officials will be doing the same, probably playing not-so-fast follower on several of its provisions.
https://www.big-fintech.com/Media?p=ai-optimism-with-a-dose-of-reality
https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/