Google took down over 7,500 YouTube channels in the first quarter of 2023 as part of its investigation into coordinated influence operations, in addition to terminating 6,285 accounts and 52 Blogger blogs linked to China.
These channels and blogs were reportedly used to upload spam content in Chinese about music, entertainment and lifestyle.
“A very small subset uploaded content in Chinese and English about China and US foreign affairs,” said Google.
Google’s Threat Analysis Group (TAG) has also terminated 40 YouTube channels sharing content in Persian, English, Hindi and Urdu that was supportive of the Iranian government and critical of protesters in the Islamic country.
YouTube also took down 1,088 accounts that were sharing pro-Azerbaijan content critical of Armenia. As part of its investigation into coordinated influence operations linked to individuals from Poland, the online video platform also blocked two domains from eligibility to appear on Google News surfaces and Discover.
“The campaign was sharing content in Polish that was supportive of Russia and critical of the United States and Ukraine. We received leads from Mandiant, which is now part of Google Cloud,” the company stated.
The termination spree didn’t end there, as Google acted against 87 YouTube channels linked to the Russian Internet Research Agency (IRA).
“We terminated 4 YouTube channels as part of our investigation into coordinated influence operations. The campaign was sharing content in German that was critical of Ukrainian refugees,” the company added.
Meanwhile, YouTube has decided to relax its stance on profanity in videos after an uproar in the community regarding the new rules.
The rules, which came into effect in November 2022, treated all profanity as equal, meaning there were no exceptions and no context under which flagged videos could earn ad revenue.
The new rules also seemed to ‘disproportionately affect’ gaming-focused YouTube channels, which often produce videos featuring M-rated games. The platform had begun applying the rules retroactively, meaning older videos that had been earning revenue were no longer eligible.
The content-sharing platform also did not allow creators to edit those older videos to make them eligible for revenue again.
Under the old rules, which earned the wrath of the content creators’ community, videos would get demonetized if there was any swearing within the first seven seconds.
Under the revised policies, videos with ‘inappropriate language’ will still be eligible for monetization, even if there are curse words within the first seven seconds.
If the offensive language falls under the “moderate” category, the video will receive limited advertisements instead of being completely demonetized. However, if the curse words are ‘excessive’, the video will be flagged by YouTube. Background music containing swear words will no longer cause a video to be demonetized or receive limited ads.
YouTube will now review older videos that were flagged, and will clear them if they meet the criteria cited under the revised rules.