October 31, 2024

The algorithm as the drug.

7 min read

Is it socially acceptable for an artificial intelligence to make decisions that benefit the majority of people while harming others? A way to regulate algorithms is possible.

In the 1980s, people considered “intelligent” a calculator able to perform mathematical operations that, until then, had been attributed only to human intelligence. The computer raised the bar further, giving us the ability to automate algorithmic processes and thus take on repetitive tasks that had previously been the exclusive preserve of human intelligence.

Digitizing a process changes its nature, because machines operate at a speed and scale vastly greater than humans can. Automation is now entering an era in which artificial intelligence enables new kinds of applications that automate perception, classification and prediction, activities that until recently were exclusively human.

Arthur Bloch, an American humorist, wrote: “in order for something to become clean, something else must become dirty.” While someone gains an advantage, someone else is harmed. Every action has effects, some positive, others negative. The question, then, is how to regulate new technologies so as to maximize the benefits while minimizing the negative consequences.

Every regulation rests on a system of supervision, control and sanction. Often, though, our institutions cannot keep up with their duties, given the sheer number of cases to handle and the speed of reaction required; as a result, many matters end up delegated to digital-native companies.

This is the case, for instance, with the censorship of dangerous information during the recent pandemic, the fight against trade in prohibited substances, anti-counterfeiting efforts, copyright protection, and so on.

These actions rely on artificial intelligence tools, and the system is not free of consequences: who guarantees the rights of those who are penalized? In the case of an unfair assessment that deprives someone of their rights, how can the right of appeal be exercised? To whom should the appeal be addressed? How long does it take, how much does it cost, and what bureaucratic hurdles stand in the way?

We have to start from the fact that any control or surveillance system based on artificial intelligence, even one functioning perfectly, will certainly also make wrong decisions. The system is not deterministic in the way that, say, a speed camera is: if you exceed the speed limit in your car, the camera detects it and the fine is inevitable.

You can appeal against the sanction, but the driver is effectively guilty until proven otherwise. A perfectly functioning deterministic system, certified and verified, can determine who is at fault. A perfectly functioning artificial intelligence system, by contrast, is nothing more than a statistical agent: it necessarily produces probabilistic results, which might be correct in 98% of cases and wrong in the remaining 2% (it would be inappropriate to classify these as “errors”). In that 2% of cases, a person is identified as the culprit even though they are not.
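To get a sense of what that 2% means at the speed and scale machines operate at, consider a minimal sketch; the caseload below is hypothetical, not a figure from this article:

```python
# Hypothetical illustration: how a small error rate scales with automation.
cases_per_year = 1_000_000   # assumed volume of automated decisions
accuracy = 0.98              # the illustrative 98% figure used above

wrong_decisions = cases_per_year * (1 - accuracy)
print(f"Wrong decisions per year: {wrong_decisions:,.0f}")
# -> 20,000 people per year on the wrong side of a "perfectly functioning" system
```

The point is scale: an error rate that sounds negligible becomes tens of thousands of individual injustices once the decision volume is industrial.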

For that person, the wrong decision can have effects far beyond the decision itself: lost opportunities, social stigma, negative feedback and other consequences that can easily spread across the web and become impossible to erase.

In some cases a review procedure may not exist at all; where it does, it may be insufficient, cost so much as to be out of reach for many, or take so long that it cannot undo the knock-on effects of the mistake. Imagine finding yourself behind bars because an artificial intelligence system identified you as the culprit of a crime, having attributed a 75% probability to your resemblance to the identikit provided by the victim. Fortunately you are not in prison right now, otherwise you would not be reading this article. Elsewhere, though, this is already an established reality.
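A 75% “similarity” is far weaker evidence than it sounds, because it ignores the base rate: in a large population, many innocent people will also resemble the identikit. A minimal Bayes’ rule sketch makes the point; every number below is hypothetical:

```python
# Hypothetical illustration of the base-rate problem behind the identikit example.
population = 100_000        # assumed pool of possible suspects
p_match_if_guilty = 0.75    # reported similarity for the true culprit
p_match_if_innocent = 0.01  # assume 1% of innocents also resemble the identikit

guilty = 1
innocents = population - guilty
expected_innocent_matches = innocents * p_match_if_innocent

# Bayes' rule: probability of guilt given that the system reports a match.
p_guilty_given_match = (guilty * p_match_if_guilty) / (
    guilty * p_match_if_guilty + expected_innocent_matches
)
print(f"Innocent people who also match: ~{expected_innocent_matches:,.0f}")
print(f"P(guilty | match) = {p_guilty_given_match:.3%}")
# Under these assumptions, a "75% match" implies well under a 0.1% chance of guilt.
```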

We can ask ourselves how best to seize the opportunities offered by artificial intelligence and steer the technology towards the growth of collective well-being. No assessment can be divorced from the shared values on which our society rests. It is exactly on this point that we want to pause: collective well-being.

Is it socially acceptable for an artificial intelligence to make decisions that benefit the majority while harming others? In some Eastern cultures the answer is “yes”: the unlucky are casualties of the higher interest of society, which always comes before the individual. In Western culture, by contrast, the judgement is far more nuanced, and protecting the individual is generally considered (almost) indispensable.

Consider, for instance, an artificial intelligence that decides who is to be imprisoned and who is not. Some cultures, particularly Eastern ones (for dictatorships it is a basic rule), accept the idea that some evaluation mistakes, and thus some innocents in prison, are justified by the overall positive effect. The reasoning runs: “better an innocent in prison than a guilty person at large.” In our system of values, it is the opposite.

In a democratic system, the balance between collective benefit and the risk of harm to the individual is weighed by dedicated bodies following rigorous procedures. Let’s try a thought experiment involving future self-driving cars: the CEO of a car company sells his (non-defective) products to users, knowing that ten thousand of them will certainly die while driving. The cause of death does not lie in the products; the problem is human driving. The CEO is not responsible for those deaths; it is the driver who is at fault.

Now imagine that this car company introduces an autonomous-driving technology that lowers the number of victims from ten thousand to 50. It would be a tremendous gain, but almost certainly the families of the victims would sue the company and press charges against the CEO. Suppose the company protects itself on the civil-liability side with an insurance policy, avoiding bankruptcy under claims for damages; the CEO could nevertheless risk prison for putting on the market a product that caused deaths.

Here we have a non-defective artificial intelligence system producing, unfortunately and luckily, wrong predictions. “Unfortunately” because it is very hard to explain why the algorithm took a specific decision. “Luckily” because the system works: it lowers the death toll from ten thousand to 50. Being a statistical approach, though, it is allowed to “make mistakes”, since mistakes are “necessary” for it to work better.

At this point, to defend himself, the CEO would have to demonstrate that his driving system is not defective and that the prediction mistakes causing fatal accidents fall within the tolerance of the nameplate data, the declared performance specifications accepted by the consumer (and recognized by the courts).
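What might checking conformity with such nameplate data look like? A minimal sketch, assuming the tolerance is declared as a fatal-error rate per trip; every figure below is hypothetical:

```python
import math

# Hypothetical nameplate declaration and post-market surveillance data.
declared_rate = 50 / 10_000_000   # declared tolerance: 50 fatal errors per 10M trips
observed_failures = 58            # failures observed in the field
observed_trips = 10_000_000

expected = declared_rate * observed_trips  # failures expected under the declaration
# One-sided Poisson tail: P(at least `observed_failures` | declared rate).
p_value = 1 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                  for k in range(observed_failures))
print(f"P(>= {observed_failures} failures under the declared rate) = {p_value:.3f}")
# A small p-value would suggest the system exceeds its declared tolerance;
# here the observed failures are still statistically compatible with it.
```

This is only a statistical compatibility check, not a legal standard; the hard part is deciding what the declared numbers should be in the first place.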

What should these nameplate data be? What values are socially acceptable and represent a benefit for the community, and thus mark a technology worth promoting? And how should they be evaluated?

We are facing a case in which, on the one hand, a wrong prediction can have a serious impact on an individual’s life and, on the other, the technology benefits the entire community. What direction should we take? A compromise must be found!

A regulatory framework that can guide us in similar situations already exists: the one for drugs. We know that in some cases, even when used properly, drugs can have side effects, sometimes fatal ones. But normally they work.

The development of the pharmaceutical industry rests on a solid regulatory system, decided through democratic processes and socially shared, which obliges drug companies to:

  • Declare what the medicines are for.
  • Declare how they should be used.
  • Declare any adverse side effects.
  • Perform tests on animals and, if successful, seek permission to test on humans.
  • Monitor and track every subsequent phase of use.

We already have laws regulating all of this. Now imagine that a new deadly disease is discovered: if nothing is done, there will be ten thousand victims; if a drug is administered, there will be 16. No one doubts that this drug would be a wonderful benefit for the community, and above all that the drug company and its CEO would not be held civilly and criminally liable, respectively, for the adverse cases.

A similar regulatory infrastructure would probably be useful for artificial intelligence too, one that assesses a company’s liability not against the single incident but against the overall effect: by requiring proper testing, declaring what has been optimized, following appropriate authorization procedures, tying liability to conformity with these protocols, and establishing supervisory and control bodies.

Only in this way, by fixing the precise point of balance between individual guarantees and collective benefit for the applications with the greatest impact on people’s lives, can companies and their CEOs be relieved of a liability that would otherwise inhibit them, depriving the entire community of the benefits artificial intelligence can bring.

Naturally, not every application of artificial intelligence should be subject to such a stringent and onerous regime of testing, certification and control, but only those judged to have a significant impact on people’s lives.

On closer inspection, even pharmaceutical regulation is not black and white but a continuum running from pharmaceuticals to food: from products that can be used only under a doctor’s supervision, to those a doctor can prescribe, to over-the-counter drugs, to nutraceuticals and ordinary food products. For each category there are rules and oversight bodies appropriate to the need. Artificial intelligence systems, likewise, could be graduated within such a regulatory framework.
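A minimal sketch of what such a graduated scheme might look like; the tiers and obligations below are hypothetical, loosely echoing risk-based proposals such as the EU AI Act rather than anything prescribed in this article:

```python
# Hypothetical risk-tiered regulatory mapping for AI systems, analogous to the
# prescription-to-food continuum described above for pharmaceuticals.
from enum import Enum

class RiskTier(Enum):
    CRITICAL = "critical"   # e.g. criminal justice, autonomous driving
    HIGH = "high"           # e.g. credit scoring, hiring
    LIMITED = "limited"     # e.g. chatbots, recommender systems
    MINIMAL = "minimal"     # e.g. spam filters, game AI

# Assumed obligations per tier; heavier tiers carry stricter oversight.
OBLIGATIONS = {
    RiskTier.CRITICAL: ["pre-market authorization", "declared error tolerances",
                        "post-market surveillance", "independent audits"],
    RiskTier.HIGH:     ["conformity assessment", "declared error tolerances",
                        "incident reporting"],
    RiskTier.LIMITED:  ["transparency notice to users"],
    RiskTier.MINIMAL:  [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the regulatory obligations assumed for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.CRITICAL))
```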

Seen this way, we need not treat artificial intelligence as some new form of esoteric life that somehow escapes moral questions. We can treat it for what it is: an instrument which, in each sector, should be assessed calmly, by considering how much it improves the way we operate compared with traditional methods.

USEFUL LINKS:

Artificial Intelligence in Pharmaceutical, Biologics and Medical Devices

How should we regulate Artificial Intelligence?

The Artificial Thought.

ORIGINAL ARTICLE:

L’algoritmo come il farmaco.
