A recent investigation by Human Rights Watch has uncovered a disturbing trend in AI development: images of children are being used to train artificial intelligence models without consent, potentially exposing them to significant privacy and safety risks. Human Rights Watch researcher Hye Jung Han discovered that popular AI training datasets, such as LAION-5B, contain links to hundreds of photos of Australian children.
Perhaps even more alarming is the fact that some of the URLs in the dataset reveal identifying information about the children, including their names and locations. In one instance, Han was able to trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia” from a single photo link. This level of detail puts children at risk of privacy violations and potential safety threats.
The use of these images in AI training sets poses unique risks to Australian children, particularly Indigenous children, who may be more vulnerable to harm. Han's report highlights that for First Nations peoples, who "restrict the reproduction of photos of deceased people during periods of mourning," the inclusion of these images in AI datasets could perpetuate cultural harms.