Recall that the video hosting platform has introduced new rules requiring creators to disclose when neural networks were used in making their videos. When uploading a video, a creator must indicate whether it shows people doing things they never actually did, or places and events that never existed. Special disclosure labels will appear on AI-generated videos in the coming weeks.
However, creators are not required to make a disclosure if the AI, for example, produced clearly unrealistic content or only helped write the script. The rules also cover children's content, a category that includes not only useful material but also low-quality videos with generic characters, incoherent plots, and no educational value, often published without any review.
“We require creators of children’s content to disclose substantially altered or artificially created content if it appears realistic. We do not require the disclosure of content that is clearly not real and that does not mislead the viewer into believing it is real,” clarified Elena Hernández, a representative of the platform.
However, as Wired writes, such videos can slip past parental controls and the platform’s recommendation algorithm, which risks spreading dubious content among young viewers. What’s more, YouTube is a “kids’ entertainment giant” that eclipses even Disney.
Compounding the problem, according to a Pew Research Center study, 53% of parents of children ages 11 and younger say their children watch videos on the service every day, and 35% say they do so several times a day. Although there is a dedicated children’s app, YouTube Kids, built on automatic filters and user feedback, many parents still let their children use the main version, where viewing is driven by flashy intros and clickbait titles.
“The logic of YouTube’s new rules is to accurately inform a person that this is not a real person but a deepfake. For children, though, this is a minor issue: what difference does it make whether a dinosaur in a cartoon was drawn by a person or by artificial intelligence?” said Denis Boytsov, director of public relations operations at the iTrend agency.
In his opinion, children on the service need to be protected first of all not from generative models but from human bloggers who exploit the workings of children’s minds and offer low-grade entertainment content, guided by the principle of attracting as large an audience as possible.
“Unfortunately, YouTube has never introduced such rules, even though they have been requested for a long time,” he emphasized.