
Opinion | Dangerous uses of Artificial Intelligence by public authorities

April 19, 2024

We have been talking about ChatGPT and other creative (or, rather, generative) Artificial Intelligence systems for months, and it seems that, amid the challenges, opportunities and doubts they raise, we have forgotten about other uses of Artificial Intelligence, both by private companies and by public authorities, which are perhaps not so ‘attractive’ but which can significantly affect our lives.

In particular, today I want to address the uses and practices of Artificial Intelligence by public authorities, making it clear in advance that I hold no unfavorable prejudice (pre-judgment) against its use; on the contrary, I maintain that public authorities have a duty to use it to improve the public services they provide to citizens.

However, at the same time, the fundamental rights of citizens must be protected against possible dangerous uses of Artificial Intelligence, as is rightly highlighted in the Proposal for a Regulation on Artificial Intelligence (April 2021), in the Spanish Charter of Digital Rights (July 2021) and in the European Declaration on Digital Rights and Principles (December 2022).

Prohibited Use: Social Rating System

To begin with, the proposal for a European Regulation on Artificial Intelligence prohibits, in its art. 5.1.c): “the placing on the market, putting into service or use of AI systems by public authorities or on their behalf in order to evaluate or classify the trustworthiness of natural persons over a certain period of time.”

It then clarifies that this evaluation or classification of the trustworthiness of natural persons (from the point of view of those public authorities) cannot be based on: 1) their social behaviour, or 2) known or predicted personal or personality characteristics (indeterminate legal concepts that will have to be defined and differentiated).

And it establishes that the resulting social score cannot lead to detrimental or unfavourable treatment of certain individuals or entire groups 1) in social contexts unrelated to the contexts in which the data was generated or collected, or 2) that is unjustified or disproportionate to their social behaviour or its gravity.

High-risk uses by public authorities

Meanwhile, art. 6 regulates high-risk Artificial Intelligence systems, and the remaining articles are devoted to the requirements these systems must meet, the obligations of providers, users and other parties, the transparency obligations of certain Artificial Intelligence systems, codes of conduct, etc.

Annex III of the proposed Regulation contains a list of these high-risk Artificial Intelligence systems, expressly citing their use by 1) public authorities, 2) law enforcement authorities or 3) judicial authorities, and distinguishing up to eight areas in which they can be used, with the appropriate safeguards:

Biometric identification and categorization of natural persons
Management and operation of essential infrastructures
Education and professional training
Employment, worker management and access to self-employment
Access to and enjoyment of essential public services and their benefits
Matters related to law enforcement
Migration, asylum and border control management
Administration of justice and democratic processes

Sanctions

Art. 71 of the Proposal provides that “Member States shall determine the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall adopt all measures necessary to ensure their proper and effective implementation. The penalties provided shall be effective, proportionate and dissuasive.”

It establishes administrative fines of up to 30 million euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher, for prohibited uses of Artificial Intelligence (art. 5), and of up to 20 million euros or up to 4%, for non-compliance with the requirements or obligations applicable to high-risk uses.
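To make the “whichever is higher” rule concrete, here is a minimal sketch in Python of how the fine ceilings cited above would be computed. The function name and parameters are hypothetical illustrations based only on the figures quoted in this article, not an authoritative reading of the Regulation.

```python
# Sketch of the fine ceilings described above (hypothetical helper,
# not an official implementation of the proposed Regulation).

def fine_ceiling(infringement: str, annual_turnover_eur: float | None = None) -> float:
    """Return the maximum administrative fine in euros.

    infringement: "prohibited_use" (art. 5) or "high_risk_non_compliance".
    annual_turnover_eur: previous financial year's worldwide turnover,
    if the offender is a company; None for other offenders.
    """
    if infringement == "prohibited_use":
        fixed_cap, turnover_pct = 30_000_000, 0.06   # 30 M EUR or 6 %
    elif infringement == "high_risk_non_compliance":
        fixed_cap, turnover_pct = 20_000_000, 0.04   # 20 M EUR or 4 %
    else:
        raise ValueError(f"unknown infringement type: {infringement}")

    if annual_turnover_eur is None:
        return fixed_cap
    # For companies, the higher of the two amounts applies.
    return max(fixed_cap, turnover_pct * annual_turnover_eur)


# Example: a company with a 1 billion EUR turnover committing a prohibited
# use (art. 5) faces a ceiling of max(30 M, 6% of 1,000 M) = 60 M EUR.
print(fine_ceiling("prohibited_use", 1_000_000_000))  # 60000000.0
```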

But the Proposal then qualifies: “Each Member State shall lay down rules on whether and to what extent administrative fines may be imposed on public authorities and bodies established in that Member State.” In other words, prohibited and high-risk uses of Artificial Intelligence by public authorities could go unpunished, as already happens in the field of data protection.


Puck Henry
Puck Henry is an editor for ePrimefeed covering all types of news.