BrainBody-LLM Algorithm Offers A Lot Of Potential

Posted by Kirhat | Monday, December 01, 2025

Virtual Figures
Can you imagine a robot that doesn’t just follow commands but actually plans its actions, adjusts its movements on the go, and learns from feedback—much like a human would? This question may sound like a far-fetched idea, but researchers at NYU Tandon School of Engineering have achieved this with their new algorithm, BrainBody-LLM.

According to Rupendra Brahambhatt of Interesting Engineering, one of the main challenges in robotics has been creating systems that can flexibly perform complex tasks in unpredictable environments.

Traditional robot programming or existing LLM-based planners often struggle because they may produce plans that aren’t fully grounded in what the robot can actually do.

BrainBody-LLM addresses this challenge by using large language models (LLMs), the same kind of AI behind ChatGPT, to plan and refine robot actions. This could make future machines smarter and more adaptable.

The BrainBody-LLM algorithm mimics how the human brain and body communicate during movement. It has two main components. The first, the Brain LLM, handles high-level planning, breaking complex tasks into smaller, manageable steps.

The Body LLM then translates these steps into specific commands for the robot’s actuators, enabling precise movement.

A key feature of BrainBody-LLM is its closed-loop feedback system. The robot continuously monitors its actions and the environment, sending error signals back to the LLMs so the system can adjust and correct mistakes in real time.
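The paper's actual prompts and robot interface aren't reproduced in this article, so the following is only a minimal Python sketch of the closed-loop pattern described above. Stub functions stand in for the two LLMs and the robot, and every name in it (brain_plan, body_translate, execute) is hypothetical.

```python
def brain_plan(task):
    """High-level planner (stand-in for the "Brain LLM"):
    decompose a task into smaller, manageable steps."""
    plans = {
        "put cup in sink": ["locate cup", "grasp cup", "move to sink", "release cup"],
    }
    return list(plans[task])

def body_translate(step):
    """Low-level controller (stand-in for the "Body LLM"):
    map each step to an actuator command."""
    commands = {
        "locate cup": "CAMERA_SCAN",
        "grasp cup": "GRIPPER_CLOSE",
        "move to sink": "ARM_MOVE(sink)",
        "release cup": "GRIPPER_OPEN",
    }
    return commands[step]

def execute(command, env):
    """Simulated robot: the first grasp attempt fails,
    producing the error signal the feedback loop reacts to."""
    if command == "GRIPPER_CLOSE" and not env.get("grasp_retried"):
        env["grasp_retried"] = True
        return "error: gripper slipped"
    return "ok"

def run_closed_loop(task, max_retries=3):
    """Closed loop: execute each planned step, monitor feedback,
    and re-attempt a step when an error signal comes back."""
    env, log = {}, []
    for step in brain_plan(task):
        for _ in range(max_retries):
            command = body_translate(step)
            feedback = execute(command, env)
            log.append((step, command, feedback))
            if feedback == "ok":
                break  # step succeeded, move on to the next step
        else:
            raise RuntimeError(f"step failed after retries: {step}")
    return log
```

In the real system the error signal would be fed back into the LLM prompts so the plan itself can be revised; the sketch simply retries the failed step, which is enough to show the shape of the feedback loop.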

"The primary advantage of BrainBody-LLM lies in its closed-loop architecture, which facilitates dynamic interaction between the LLM components, enabling robust handling of complex and challenging tasks," Vineet Bhat, first study author and a PhD candidate at NYU Tandon, said.

To test their approach, the researchers first ran simulations on VirtualHome, where a virtual robot performed household chores.

They then tested it on a real robotic arm, the Franka Research 3. BrainBody-LLM showed clear improvements over previous methods, increasing task completion rates by up to 17 percent in simulations.

On the physical robot, the system completed most of the tasks it was tested on, demonstrating the algorithm’s ability to handle real-world complexities.

BrainBody-LLM could transform how robots are used in homes, hospitals, factories, and in various other settings where machines are required to perform complex tasks with human-like adaptability.

The method could also inspire future AI systems that combine more abilities, such as 3D vision, depth sensing, and joint control, helping robots move in ways that feel even more natural and precise.

However, it’s still not ready for full-scale deployment. So far, the system has only been tested with a small set of commands and in controlled environments, which means it may struggle in open-ended or fast-changing real-world situations.


Will Apple Launch 3 iPhones Instead Of iPhone 18 In 2026?

Posted by Kirhat | Saturday, November 29, 2025

iPhone Colors
After releasing new iPhone models in the fall of every year for as long as anyone can remember, tech giant Apple is now rumored to be planning a change to its launch schedule, one aimed at opening new revenue growth opportunities for the company.

The latest report to that effect comes from Bloomberg's Mark Gurman. In his Power On newsletter, the journalist says Apple's next fall product release will consist only of the iPhone 18 Pro, iPhone 18 Pro Max, and iPhone Fold.

Apparently, the company is set to shift the standard iPhone 18's release to early 2027, alongside the iPhone 18e and possibly the second generation of the iPhone Air.

According to Gurman, this move to release new iPhone models every six months could help Apple get steadier revenue throughout the year, reduce strain on employees and manufacturing partners, and prevent the different models from cannibalizing each other's sales. It's also going to be easier for customers to choose their preferred model when Apple releases future iPhones.

Over the past few weeks, there have been rumors suggesting that Apple is uncertain about launching a second generation of the iPhone Air. Some reports suggested the company will scrap the second-gen model, but The Information says Apple will launch the iPhone Air 2 in spring 2027 with a dual-lens camera system. On the other hand, Bloomberg's Mark Gurman says Apple isn't preparing a major design change to the Air, focusing instead on the new A20 Pro chip, built on a 2nm manufacturing process, to extend battery life.

Not only that, but Gurman notes that Apple didn't promote the iPhone Air as the "iPhone 17 Air" because it doesn't want to tie its release schedule to the main line, which means the company could release it every 15 or 18 months, or even less frequently.

Still, the journalist says the iPhone Air was more of an experiment in preparation for the iPhone Fold, which is expected to be like two iPhone Airs stacked together. The company is also expected to unveil the all-new iPhone 20 in 2027 to celebrate the iPhone's 20th anniversary.


China Has Started Deploying Humanoid Robots To Its Borders

Posted by Kirhat | Thursday, November 27, 2025

UBTech Walkers
China’s tech company UBTech Robotics has just secured a 264 million yuan (US$ 37 million) contract to deploy industrial-grade humanoid robots across border crossings in Guangxi, expanding the country’s push to apply robotics in public-facing and industrial environments. Deliveries are scheduled to begin in December.

The agreement was signed with a humanoid robot centre in Fangchenggang, a coastal city bordering Vietnam. The deployment will involve UBTech’s Walker S2, a model launched in July and described as the world’s first humanoid robot capable of autonomously replacing its own battery.

The initiative marks one of China’s largest real-world rollouts of humanoid systems in government operations. The details were first reported by the South China Morning Post (SCMP).

Simultaneously, the company issued a brief public announcement on social media alongside news of its inclusion in the MSCI China Index.

The pilot programme will deploy Walker S2 robots at border checkpoints to guide travellers, manage personnel flow, assist with patrol duties, handle logistics tasks, and support commercial services, the SCMP report said. In addition to immigration-related operations, the robots will also be used at manufacturing sites for steel, copper, and aluminium to conduct inspections.

The deal reflects an acceleration in China’s broader effort to commercialise embodied AI. The robotics sector has received strong policy backing, and agencies across multiple provinces have begun incorporating robots into routine work.

Similar deployments have also appeared in airports, government offices, and at major events. A China Central Television segment referenced by the SCMP reported that a related robot had been deployed at Hangzhou Xiaoshan International Airport to answer passenger questions.

During this year’s Shanghai Cooperation Organisation Summit in Tianjin, immigration authorities used a multilingual robot developed by Beijing-based iBen Intelligence. Police patrol robots have also been seen in cities such as Shenzhen, Shanghai, and Chengdu.


AI Can Cause Banning Of YouTube Channels

Posted by Kirhat | Tuesday, November 18, 2025

Tech YouTuber
Whether it's a woodworking YouTube channel or one focused on car repairs, one constant is the community of like-minded individuals that develops in comments sections and Twitch chats.

Take Enderman, a YouTube channel dedicated to exploring Windows. It has a 390,000-strong subscriber base that Enderman has carefully cultivated since starting the channel in November 2014. In November 2025, though, Enderman was on the receiving end of a channel ban that was allegedly unjust and administered by YouTube's AI tools.

As a result, fans of the channel have been at the center of a wave of discourse surrounding so-called "clankers" and their influence on content moderation — to wit, the dystopian idea of AI making such decisions without sufficient oversight from a human.

The Reddit thread "Enderman's channel has sadly been deleted..." gets immediately to the heart of the issue, in my eyes, with u/CatOfBacon lamenting, "This is why we should never let clankers moderate ANYTHING. Just letting them immediately pull the trigger with zero human review is just going to cause more ... like this to happen."

Of course, moderation errors can be made, whether by human or AI, and in such cases, many feel the utmost needs to be done to ensure creators can rectify the situation when they are penalized or even banned unfairly. That said, u/Bekfast59 added that the appeals process in such a case can be "fully AI as well," muddying the waters.

Watching fans hurry to preserve the YouTuber's content on services like PreserveTube, it really struck me that YouTube's processes can leave creators extremely vulnerable. A banned channel means that those connected to it are also banned, and it isn't clear precisely how YouTube determines that. These things need to be made more transparent to users.

A 3 November 2025 upload from Enderman, simply titled "My channel is getting terminated," leaves no room at all for ambiguity. He immediately launches into the story of his second channel, Andrew, which had been banned for something seemingly random: Being linked to another channel that had been hit by three copyright strikes, according to the YouTube Studio message the content creator received.

With no apparent connection to the other channel in question, a bemused Enderman associated this banning with a mistaken automatic AI flagging. "I had no idea such drastic measures like channel termination were allowed to be processed by AI, and AI only," he said.

From the video and the YouTube Studio appeals process that the creator went through on camera, it isn't clear whether this was entirely the case or whether a human evaluated the channel after it was flagged. Enderman's claim, though, is far from a unique one among tech YouTubers.

Other channels, such as creator Scrachit Gaming (who has accrued 402,000 subscribers over almost 3,000 uploads), were also targeted, with the creator sharing in a post on X that they had also been banned for an alleged link to the same channel that Enderman was flagged for.

The very same day, a follow-up post from TeamYouTube declared that it had restored the Scrachit Gaming channel after looking into the ban, and had also followed up with other affected creators. As of the time of writing, Enderman's secondary channel Andrew has also been reinstated. The quick turnaround went a very long way to convincing me that this may have been a simple automatic error by YouTube's systems, quickly corrected when a human assessed the situation.

With a huge network of channels of all shapes and sizes, it's natural that there would be some bad actors among them, and that YouTube would require ways of responding to and combating that. Unfortunately, though, it seems that the AI systems that play a role in this lack oversight, a problem for the platform to resolve going forward. What is undeniable is that machine learning has a significant role in the way that YouTube monitors and moderates its content.


Android Plans To Get Namedrop Feature Like iPhone

Posted by Kirhat | Monday, November 17, 2025

Android Feature
When Apple introduced NameDrop to the iPhone and Apple Watch with the release of iOS 17, many weren't sure how to feel about the new contact-sharing option. In fact, many flocked to guides on how to disable NameDrop as soon as it was widely available, with some law enforcement agencies even issuing warnings about the feature and urging users to turn it off.

Those warnings made NameDrop sound like more of a security flaw than it really is; the feature still requires user verification before it will send information to another device. That hasn't stopped NameDrop from becoming one of the more underrated contact-sharing solutions built directly into the operating system. And now, it looks like Google might follow suit with something similar.

According to a new APK teardown from Android Authority, Google first included code for a NameDrop-like feature in v25.44.32 beta of Google Play Services. The blog claims to have discovered strings of code tied to two features called Gesture Exchange and Contact Exchange. Now, with the release of Google Play Services v25.46.31, the folks behind that report were able to enable one of the features attached to the new system.

Just like its iOS counterpart, Google's Contact Exchange feature appears to offer both a Share and a Receive Only option for those who plan to use it. It's still unclear exactly how it will work (NameDrop is activated by placing two iOS devices with the feature enabled right next to each other), though it's believed Contact Exchange will use NFC in much the same way Apple's feature does. That said, it's impossible to say for sure until Google officially announces the feature.

It's also a bit early to nail down what Google plans to call this feature, as Contact Exchange and Gesture Exchange both sound like work-in-progress names rather than final branding for a major operating system feature. It's also unclear whether the feature will launch as part of Android 16 or whether Google will push it to Android 17 sometime in the future.


Test Shows AI Understands Human Feelings

Posted by Kirhat | Sunday, November 16, 2025

AI Empathy
Although artificial intelligence is frequently lauded for its coding ability or its math skills, how does it really perform when it is examined on something inherently human, such as emotions?

A recent study from the University of Geneva and the University of Bern reports that a handful of popular AI systems (e.g., ChatGPT) may actually outperform human participants on an emotional intelligence test designed for humans.

Researchers wanted to explore whether machines could recognize and reason about emotions the way humans do, and surprisingly, the answer was yes – and then some. Across five different tests of emotional understanding and regulation, the six AI models correctly answered an average of 81 percent of the emotional understanding questions, whereas human participants averaged a correct response rate of only 56 percent.

These findings challenge the deep-rooted assumption that empathy, judgment, or emotional awareness exists only among humans.

The researchers used well-established assessments that psychologists employ to measure "ability emotional intelligence" – tests with right and wrong answers, much like a math test. Subjects had to choose the emotion a person would likely feel in a specific situation, or the best option for helping someone relax.

The AI models (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3) underwent testing between December 2024 and January 2025. Each system completed the tests on ten occasions so researchers could compute average scores for each model and compare them with the scores of human participants from previous validation studies.
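As a back-of-the-envelope illustration of that protocol, the snippet below averages ten per-run accuracy scores for two made-up models and checks them against the human baseline the article cites. The run scores and model names are invented for illustration, not the study's actual data.

```python
from statistics import mean

HUMAN_BASELINE = 0.56  # average human accuracy reported in the article

def average_score(run_scores):
    """Mean accuracy across repeated test administrations for one model."""
    return mean(run_scores)

# Ten hypothetical per-run accuracies for two hypothetical models.
model_runs = {
    "model_a": [0.80, 0.82, 0.81, 0.79, 0.83, 0.81, 0.80, 0.82, 0.81, 0.81],
    "model_b": [0.78, 0.80, 0.79, 0.81, 0.77, 0.80, 0.79, 0.78, 0.80, 0.78],
}

# Average each model's runs, then compare against the human baseline.
results = {name: average_score(scores) for name, scores in model_runs.items()}
outperformed = all(score > HUMAN_BASELINE for score in results.values())
```

Averaging over repeated administrations smooths out run-to-run variation in model outputs before any comparison with the human scores is made.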

Overall, each AI exceeded humans in every test. The systems also displayed a high degree of agreement among themselves, indicating that they produced similar emotional judgments even without direct training on the evaluation of emotions.

"LLMs can not only identify the best option among many available ones, but also create new scenarios that suit the desired context," says Katja Schlegel, lecturer at the University of Bern’s Institute of Psychology and lead author of the study.

Two tests, the Situational Test of Emotion Understanding (STEU) and the Geneva Emotion Knowledge Test – Blends (GEMOK-Blends), assessed the participants’ ability to recognize emotional states in different situations. Other tests, the Situational Test of Emotion Management (STEM) and subtests from the Geneva Emotional Competence Test (GECo), evaluated emotional regulation and emotional management.

Each question presented a realistic situation and asked for the answer that best demonstrated emotional intelligence. For example, if Employee A stole an idea from Employee B, presented it to their supervisor, and received praise, the appropriate response is not to confront Employee A or seek revenge, but to calmly raise the issue with a supervisor. That is an act of emotional control.

"The results showed significantly higher scores for the LLMs – 82 percent, compared to 56 percent by human participants," explained Marcello Mortillaro, a senior scientist at the Swiss Centre for Affective Sciences. "This indicates that these AIs not only comprehend emotions, but also possess an understanding of functioning with emotional intelligence."
