AI Can Cause Banning Of YouTube Channels

Posted by Kirhat | Tuesday, November 18, 2025 | | 0 comments »

Tech YouTuber
Whether it's a woodworking YouTube channel or one focused on car repairs, one constant is the community of like-minded individuals that develops in comments sections and Twitch chats.

Take Enderman, a YouTube channel dedicated to exploring Windows. It has a 390,000-strong subscriber base, which Enderman carefully cultivated since starting in November 2014. In November 2025, though, Enderman was on the receiving end of a channel ban that was allegedly unjust and administered by YouTube's AI tools.

As a result, fans of the channel have been at the center of a wave of discourse surrounding so-called "clankers" and their influence on content moderation — to wit, the dystopian idea of AI making such decisions without sufficient oversight from a human.

The Reddit thread "Enderman's channel has sadly been deleted..." gets immediately to the heart of the issue, in my eyes, with u/CatOfBacon lamenting, "This is why we should never let clankers moderate ANYTHING. Just letting them immediately pull the trigger with zero human review is just going to cause more ... like this to happen."

Of course, moderation errors can be made, whether by human or AI, and in such cases, many feel everything possible should be done to ensure creators can rectify the situation when they are penalized or even banned unfairly. That said, u/Bekfast59 added that the appeals process in such a case can be "fully AI as well," muddying the waters.

Watching fans hurry to preserve the YouTuber's content on services like PreserveTube, it really struck me that YouTube's processes can leave creators extremely vulnerable. A banned channel means that those connected to it are also banned, and it isn't clear precisely how YouTube determines that. These things need to be made more transparent to users.

A 3 November 2025 upload from Enderman, simply titled "My channel is getting terminated," leaves no room at all for ambiguity. He immediately launches into the story of his second channel, Andrew, which had been banned for something seemingly random: being linked to another channel that had been hit by three copyright strikes, according to the YouTube Studio message the content creator received.

With no apparent connection to the other channel in question, a bemused Enderman associated this banning with a mistaken automatic AI flagging. "I had no idea such drastic measures like channel termination were allowed to be processed by AI, and AI only," he said.

From the video and the YouTube Studio appeals process that the creator went through on camera, it isn't clear whether this was entirely the case or whether a human evaluated the channel after it was flagged. Enderman's claim, though, is far from a unique one among tech YouTubers.

Other channels, such as creator Scrachit Gaming (who has accrued 402,000 subscribers over almost 3,000 uploads), were also targeted, with the creator sharing in a post on X that they had also been banned for an alleged link to the same channel that Enderman was flagged for.

The very same day, a follow-up post from TeamYouTube declared that it had restored the Scrachit Gaming channel after looking into the ban, and had also followed up with other affected creators. As of the time of writing, Enderman's secondary channel Andrew has also been reinstated. The quick turnaround went a very long way to convincing me that this may have been a simple automatic error by YouTube's systems, quickly corrected when a human assessed the situation.

With a huge network of channels of all shapes and sizes, it's natural that there would be some bad actors among them, and that YouTube would require ways of responding to and combating that. Unfortunately, though, it seems that the AI systems that play a role in this lack oversight, a problem for the platform to resolve going forward. What is undeniable is that machine learning has a significant role in the way that YouTube monitors and moderates its content.

Read More ...

Android Plans To Get Namedrop Feature Like iPhone

Posted by Kirhat | Monday, November 17, 2025 | | 0 comments »

Android Feature
When Apple introduced NameDrop to the iPhone and Apple Watch with the release of iOS 17, many weren't sure how to feel about the new contact sharing option. In fact, many flocked to find guides on how to disable NameDrop as soon as it was widely available, with some law enforcement agencies even issuing warnings about the feature, urging users to turn it off.

While those warnings allude to the feature being more of a security risk than it really is (NameDrop still requires user verification before sending information to another device), that hasn't stopped the feature from becoming one of the more underrated contact sharing solutions built directly into the operating system. And now, it looks like Google might follow suit with something similar.

According to a new APK teardown from Android Authority, Google first included code for a NameDrop-like feature in v25.44.32 beta of Google Play Services. The blog claims to have discovered strings of code tied to two features called Gesture Exchange and Contact Exchange. Now, with the release of Google Play Services v25.46.31, the folks behind that report were able to actually enable one of the features attached to the new system.

Just like its iOS counterpart, Google's Contact Exchange feature appears to offer both a Share and a Receive Only option for those who plan to use it. It's still unclear exactly how it will work (NameDrop can be activated by placing two iOS devices with it enabled right next to each other), though it's believed it will use NFC much in the same way that Apple's feature does. That said, it's impossible to say for sure until Google officially announces the feature.

It's also a bit early to nail down what Google plans to call this feature, as Contact Exchange and Gesture Exchange both sound like work-in-progress names rather than final branding for a major operating system feature. It's also unclear whether this new feature will launch as part of Android 16 or if Google will push it over to Android 17 sometime in the future.

Read More ...

Test Shows AI Understands Human Feelings

Posted by Kirhat | Sunday, November 16, 2025 | | 0 comments »

AI Empathy
Although artificial intelligence is frequently lauded for its coding ability or its math skills, how does it really perform when it is examined on something inherently human, such as emotions?

A recent study from the University of Geneva and the University of Bern reports that a handful of popular AI systems (e.g. ChatGPT) may actually outperform human participants on an emotional intelligence test made for humans.

Researchers wanted to explore whether machines could recognize and reason about emotions similarly to how humans do, and surprisingly, the answer was yes – and more. Across five different tests of emotional understanding and regulation, the six AI models used correctly answered an average of 81 percent of emotional understanding questions, whereas human participants averaged a correct response rate of only 56 percent.

These findings challenge the deep-rooted assumption that empathy, judgment, or emotional awareness exists only among humans.

The researchers used well-established assessments that psychologists employ to measure "ability emotional intelligence," which have right and wrong answers, much like a math test. Subjects had to choose the emotion a person was likely to feel in a specific situation, or the best option for helping someone relax.

The AI models (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3) underwent testing between December 2024 and January 2025. Each system completed the tests on ten occasions so researchers could find average scores from the models, and compare with the scores of human participants from previous validation studies.

In the end, each AI exceeded humans in every test. The systems displayed a high degree of agreement among themselves, which indicates that they produced similar emotional judgments, even in the absence of direct training on the evaluation of emotions.

"LLMs can not only identify the best option among many available ones, but also create new scenarios that suit the desired context," says Katja Schlegel, lecturer at the University of Bern’s Institute of Psychology and lead author of the study.

Two tests, the Situational Test of Emotion Understanding (STEU) and the Geneva Emotion Knowledge Test – Blends (GEMOK-Blends), assessed the participants’ ability to recognize emotional states in different situations. Other tests, the Situational Test of Emotion Management (STEM) and subtests from the Geneva Emotional Competence Test (GECo), evaluated emotional regulation and emotional management.

Each question presented a realistic situation and asked for the best answer that demonstrated emotional intelligence. For example, if Employee A stole an idea from Employee B and then presented it to their supervisor and received praise, the appropriate answer is not to confront Employee A or seek revenge, but to subtly approach a supervisor with a calm discussion. This is an act of emotional control.

"The results showed significantly higher scores for the LLMs – 82 percent, compared to 56 percent by human participants," explained Marcello Mortillaro, a senior scientist at the Swiss Centre for Affective Sciences. "This indicates that these AIs not only comprehend emotions, but also understand what it means to act with emotional intelligence."

Read More ...

Apple And Issey Miyake Unite For The iPhone Pocket

Posted by Kirhat | Saturday, November 15, 2025 | | 0 comments »

iPhone Pocket
Apple and Japanese fashion house ISSEY MIYAKE have teamed up to launch iPhone Pocket, a limited-edition accessory designed to carry an iPhone like a piece of clothing, Apple announced in a news release. The collaboration introduces a 3D-knitted pocket that can stretch to hold an iPhone and other essentials.

The iPhone Pocket, inspired by ISSEY MIYAKE’s pleats and the concept of "a piece of cloth," features a ribbed open structure that subtly reveals its contents when stretched.

It can be worn in multiple ways: handheld, tied to a bag, or worn on the body. It comes in a palette of eight colors for the short strap design and three for the long strap version.

In a statement, Molly Anderson, Apple’s vice president of industrial design, said: “Apple and Issey Miyake share a design approach that celebrates craftsmanship, simplicity and delight. This clever extra pocket exemplifies those ideas and is a natural accompaniment to our products.

"The colour palette of iPhone Pocket was intentionally designed to mix and match with all our iPhone models and colours – allowing users to create their own personalised combination. Its recognizable silhouette offers a beautiful new way to carry your iPhone, AirPods, and favourite everyday items."

The accessory is made in Japan, and prices start at US$ 149.95 for the short strap and US$ 229.95 for the long strap, with availability beginning soon at select Apple Stores and online in regions including the U.S., UK, Japan and China, the release says.

Online reception towards the iPhone Pocket has been mostly negative, with many users criticising its high price and others unfavourably comparing it to the "mankini" worn by Sacha Baron Cohen's Borat in the 2006 film.

Read More ...

Launch Of Russia's AI Robot Was A Disaster

Posted by Kirhat | Friday, November 14, 2025 | | 0 comments »

AIdol
Russia’s first AI humanoid robot collapsed on stage seconds after making its debut at a technology event in Moscow. The video showed the robot, AIdol, staggering onto the stage to the soundtrack of Gonna Fly Now from the film "Rocky" during a showcase of Russia’s emerging robotics sector on Tuesday (11 November).

But as the humanoid lifted its hand to wave at the crowd, it lost balance and fell to the ground, shattering into pieces. Developers were seen hastily trying to pick the robot back up before giving up and trying to cover it with a black cloth. The cloth, however, ended up tangled with the robot, which was moving erratically on the ground.

The robot, presented by the Russian robotics firm Idol, was being shown at a forum of the New Technology Coalition in Moscow, an association of companies for the development of humanoid robots, including Promobot, Double U Expo, Idol and Robot Corporation.

The exhibit aimed to demonstrate Russia’s progress in artificial intelligence and anthropomorphic robotics as the country positions itself in the global race for next-generation humanoid machines.

Developers had hailed the robot’s ability to fulfil three human functions: moving on its legs, manipulating objects, and communicating with people. But instead, it showcased Russia’s failings in the robotics sector.

Russia’s domestic robotics development has lagged behind since Vladimir Putin launched his full-scale invasion of Ukraine. The sector had previously relied on foreign manufacturers, but they all withdrew from the country when the war began, triggering discussions among authorities about how to boost progress in an increasingly significant global sector.

In 2023, just 2,100 robotic complexes were installed in Russia compared to 25,000 in Germany and 300,000 in China, according to a report in IntelliNews.

Read More ...

Tech Companies Need To Replace Labor To Gain Profit

Posted by Kirhat | Wednesday, November 12, 2025 | | 0 comments »

Geoffrey Hinton
Computer scientist and Nobel laureate Geoffrey Hinton has reiterated his warnings about how artificial intelligence will affect the labor market and the role of companies leading the charge.

In an interview with Bloomberg TV’s Wall Street Week last 31 October, he said the obvious way to make money off AI investments, aside from charging fees to use chatbots, is to replace workers with something cheaper.

Hinton, whose work has earned him a Nobel Prize and the moniker "godfather of AI," added that while some economists point out previous disruptive technologies created as well as destroyed jobs, it’s not clear to him that AI will do the same.

"I think the big companies are betting on it causing massive job replacement by AI, because that’s where the big money is going to be," he warned.

Just four so-called AI hyperscalers — Microsoft, Meta, Alphabet and Amazon — are expected to boost capital expenditures to US$ 420 billion next fiscal year from US$ 360 billion this year, according to Bloomberg.

Meanwhile, OpenAI alone has announced a total of US$ 1 trillion in infrastructure deals in recent weeks with AI-ecosystem companies like Nvidia, Broadcom and Oracle.

When asked if such investments can pay off without destroying jobs, Hinton replied, "I believe that it can’t. I believe that to make money you’re going to have to replace human labor."

The remarks echo what he said in September, when he told the Financial Times that AI will "create massive unemployment and a huge rise in profits," attributing it to the capitalist system.

In fact, evidence is mounting that AI is shrinking opportunities, especially at the entry level, and an analysis of job openings since OpenAI launched ChatGPT shows they plummeted roughly 30 percent.

And this past week, Amazon announced 14,000 layoffs, largely in middle management. While CEO Andy Jassy said the decision was due to "culture" and not AI, a memo he sent in June predicted a smaller corporate workforce "as we get efficiency gains from using AI extensively across the company."

Read More ...