Published: 19 July 2023
The authorities in a Japanese prefecture chose to follow the machine's recommendation and not place the child in temporary alternative care, despite the visible bruises on her body
Edited by: Christian Megan
Technology section - CJ journalist
Tokyo – 19 July 2023
Using artificial intelligence without examination and scrutiny, and without regulations and laws that vet the results it proposes, raises serious problems that can destroy those who follow its advice.
In 2019, child protection counseling centers in Japan's Mie Prefecture began trialing an artificial-intelligence-based system designed to help decide whether a given child should be placed in temporary protective care away from parents or legal guardians, in cases where those guardians neglect or abuse them.
The system was developed after several children died because they remained with parents or guardians despite signs that they were being abused at home. During the trial, staff at these centers entered basic information into computers during consultations, such as the ages of the children and their family members, along with other data intended for risk assessment, such as whether there were "bruises on the head, face or stomach."
Once all the required information is entered, the AI system estimates the likely frequency of abuse, recommends how many days of consultation are needed, and indicates whether the case warrants temporary protective custody. After weighing the AI's assessments, officials at the center make the final decision on whether the child should be placed in temporary protective custody.
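The workflow the article describes, a model producing a risk score that officials compare against a threshold, can be sketched in miniature. This is purely hypothetical: the real Mie system's model, features, weights, and threshold have not been made public, so everything below is an assumed illustration of a score-plus-threshold decision, not the actual system.

```python
# Hypothetical sketch only -- the actual Mie Prefecture system is not public.
# It illustrates the general shape of the workflow the article describes:
# a model emits a risk score, and a fixed threshold turns that score into
# a custody recommendation that human officials then review.

def custody_recommendation(risk_score: float, threshold: float = 0.5) -> str:
    """Map a model's risk score (0.0 to 1.0) to a recommendation string.

    The threshold of 0.5 is an assumption for illustration; the article
    reports only that the system's score in the 2022 case was 39 percent,
    which fell below whatever cutoff was in use.
    """
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be between 0.0 and 1.0")
    if risk_score >= threshold:
        return "recommend temporary protective custody"
    return "continue monitoring"

print(custody_recommendation(0.39))  # below threshold: continue monitoring
print(custody_recommendation(0.80))  # above threshold: recommend custody
```

The danger the article goes on to describe lives entirely in that one comparison: a case scoring just under the cutoff receives no custody recommendation, however severe the underlying signs may be.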
The trial lasted about eight months, and officials in the prefecture apparently concluded that the AI's assistance eased the burden on the centers' staff and provided appropriate assessments of the abuse some children suffer, even at the hands of those closest to them.
Eventually, however, it became clear that this AI designed to protect children can lead them toward an unknown fate, especially if it is treated as the first and last decision-maker while humans neglect their own role. In an incident covered by several newspapers and websites, including The Japan Times, the life of a four-year-old girl came to an ugly end only because officials deferred to this technology's judgment and disregarded the clear signs before their eyes, signs pointing to a reality different from the verdict the "artificial intelligence" had reached.
Many details of the incident have not yet been made public, but according to the newspaper, a 2022 decision shaped in part by the AI system not to place the girl in temporary alternative care led to her subsequent death at the hands of her mother, as officials in Mie Prefecture indicated.
Last June, prefectural police arrested the 42-year-old mother on suspicion of injuring her child and causing her death.
The system did not favor temporary custody in this case; its score recommending such custody did not exceed 39 percent. Officials therefore decided to keep monitoring the situation without taking the girl into temporary care, while the mother indicated she would follow the instructions of the child protection counseling center's staff.
At a press conference a week ago, Katsuyuki Ichimi, the governor of Mie Prefecture, stressed the importance of the decisions made by the officials handling such cases. "The numbers shown by the AI system are nothing more than a measuring tool," he said.
"We cannot decide now whether the way the AI system was used to make the decision was one hundred percent appropriate," the governor added, noting that a committee of external experts will review the case.
The Mie Prefectural Government announced a plan requiring children's counseling centers to verify directly, through visits, the safety of all children under home supervision.
Notably, in the deceased child's case, the staff of the children's counseling center had not independently verified her safety for almost a year.
In fact, this incident is only the tip of the iceberg of the mistakes artificial intelligence has made and will make, to say nothing of the many dangers this technology poses to humanity as a whole: job losses due to automation, surveillance of societies, ethnic and gender bias, widening social and economic inequality, moral decline and the spread of biased opinions, growing investment in AI-powered autonomous weapons for war, and financial crises driven by these algorithms. The list of risks goes on, and they are undoubtedly real.

Yet artificial intelligence itself will be the most important tool in our toolbox for solving the biggest challenges we face, especially since all signs indicate that the capabilities it can develop in the future are still at a very early stage. Ultimately, it is in our hands to decide when this technology plays the role of referee and where we want it kept away, because we all make mistakes, even follies, and so does the artificial intelligence that comes from us.