FP Staff | Dec 28, 2022 16:38:38 IST
Whether or not the place you live in has self-driving cars, if you access the internet in any way, you have encountered something shaped by AI. From intelligent home appliances like refrigerators and vacuum cleaners to more complex applications such as driving a car or selecting ads that are actually relevant to you, AI has become all-pervasive in this day and age.
While its proponents claim AI will revolutionise the human experience, critics point out that the technology carries a massive risk: handing important decisions over to machines.
Lawmakers in Europe and North America, however, are still catching up with the advances AI has made over the past few years, and are only now starting to think about regulating it.
The AI Act, a piece of legislation intended to rein in the age of the algorithm, is expected to be approved by the European Union next year. A draft AI Bill of Rights was recently released in the United States, and legislation is also under consideration in Canada.
China’s use of AI has been totalitarian: its use of biometric data, facial recognition and other technology to create a powerful system of control has loomed large in these discussions.
But before AI can be regulated, it first needs to be codified and defined, which is a daunting task in itself.
Suresh Venkatasubramanian, co-author of the AI Bill of Rights and a professor at Brown University, has said that trying to pin down exactly what AI is will always be “a mug’s game”.
He said that the measure should cover any technology that harms people’s rights.
To that end, the European Union has tried to define the subject as broadly as possible, but has run into problems: under its proposed law, virtually every automated computer system would count as AI. The difficulty stems from shifts in the way the term AI has been used over time.
For decades, the term AI described attempts to create machines that simulated human thinking. But funding for this research, also known as symbolic AI, largely dried up in the early 2000s.
With the rise of the Silicon Valley titans, AI was reborn as a catch-all label for their number-crunching programs and the algorithms they generated. These systems do process data, but not in a way that resembles how humans think.
Nevertheless, this automation allowed them to target users with advertising and content, helping them to make hundreds of billions of dollars.
Both the United States and the EU have tried to keep their definitions of AI as broad as possible. But that is where the similarities end: the approaches they have taken beyond that point are as different as chalk and cheese.
The EU’s draft AI Act runs to more than 100 pages. Among its most eye-catching proposals is the complete prohibition of certain “high-risk” technologies, such as the kind of biometric surveillance tools used in China. It also drastically limits the use of AI tools by migration officials, police and judges.
The US’ AI Bill of Rights, on the other hand, is a brief set of principles framed in aspirational language, with exhortations such as “you should be protected from unsafe or ineffective systems”.
The bill was issued by the White House and relies heavily on existing law. Experts reckon that, with presidential elections looming, no dedicated AI legislation is likely in the United States until 2024 at the earliest.
Whatever the approach, what cannot be denied is that regulation is needed, especially considering how powerful large language models, the AI behind chatbots such as ChatGPT, have become, and how well they can not only converse but also influence people’s decision-making.