AI Can Cause Banning Of YouTube Channels

Posted by Kirhat | Tuesday, November 18, 2025

Whether it's a woodworking YouTube channel or one focused on car repairs, one constant is the community of like-minded individuals that develops in comments sections and Twitch chats.

Take Enderman, a YouTube channel dedicated to exploring Windows. It has a 390,000-strong subscriber base, which Enderman has carefully cultivated since starting in November 2014. In November 2025, though, Enderman was on the receiving end of a channel ban that was allegedly unjust and administered by YouTube's AI tools.

As a result, fans of the channel have been at the center of a wave of discourse surrounding so-called "clankers" and their influence on content moderation — to wit, the dystopian idea of AI making such decisions without sufficient oversight from a human.

The Reddit thread "Enderman's channel has sadly been deleted..." gets immediately to the heart of the issue, in my eyes, with u/CatOfBacon lamenting, "This is why we should never let clankers moderate ANYTHING. Just letting them immediately pull the trigger with zero human review is just going to cause more ... like this to happen."

Of course, moderation errors can be made, whether by human or AI, and in such cases, many feel that everything possible should be done to ensure creators can rectify the situation when they are penalized or even banned unfairly. That said, u/Bekfast59 added that the appeals process in such a case can be "fully AI as well," muddying the waters.

Watching fans hurry to preserve the YouTuber's content on services like PreserveTube, it struck me that YouTube's processes can leave creators extremely vulnerable. A banned channel means that channels connected to it are also banned, and it isn't clear precisely how YouTube determines those connections. These things need to be made more transparent to users.

A November 3, 2025 upload from Enderman, simply titled "My channel is getting terminated," leaves no room at all for ambiguity. He immediately launches into the story of his second channel, Andrew, which had been banned for something seemingly random: being linked to another channel that had been hit by three copyright strikes, according to the YouTube Studio message the content creator received.

With no apparent connection to the other channel in question, a bemused Enderman associated this banning with a mistaken automatic AI flagging. "I had no idea such drastic measures like channel termination were allowed to be processed by AI, and AI only," he said.

From the video and the YouTube Studio appeals process that the creator went through on camera, it isn't clear whether this was entirely the case or whether a human evaluated the channel after it was flagged. Enderman's claim, though, is far from a unique one among tech YouTubers.

Other creators were also targeted, such as Scrachit Gaming (who has accrued 402,000 subscribers over almost 3,000 uploads), who shared in a post on X that they had been banned for an alleged link to the same channel that Enderman was flagged for.

The very same day, a follow-up post from TeamYouTube declared that it had restored the Scrachit Gaming channel after looking into the ban, and had also followed up with other affected creators. As of the time of writing, Enderman's secondary channel Andrew has also been reinstated. The quick turnaround went a long way toward convincing me that this may have been a simple automatic error by YouTube's systems, quickly corrected once a human assessed the situation.

With a huge network of channels of all shapes and sizes, it's natural that there would be some bad actors among them, and that YouTube would require ways of responding to and combating that. Unfortunately, though, it seems that the AI systems that play a role in this lack oversight, a problem for the platform to resolve going forward. What is undeniable is that machine learning has a significant role in the way that YouTube monitors and moderates its content.
