New Model Helps Humanoid Robots Adapt More

Posted by Kirhat | Wednesday, December 17, 2025 | | 0 comments »

Humanoid Robot
Christopher McFadden of Interesting Engineering reported that researchers from Wuhan University have recently developed a new framework that could help robots manipulate objects more easily. Introduced in a new paper on arXiv, this approach should enable humanoid robots to grasp and handle a greater variety of objects than is currently possible.

At present, humanoid robots are good at tasks like using tools, grasping, and walking, but they suffer from inherent limitations. They often fail when an object changes shape or the lighting changes.

They can also struggle to complete tasks they have not been specifically trained to do. It is this lack of generalization that is widely seen as one of the technology's major limitations.

To help overcome this, the Wuhan team set out to develop what it calls the recurrent geometric-prior multimodal policy, or RGMP for short. This framework is designed to give humanoid robots a kind of built-in common sense about things like shapes and space.

It also provides robots with a means to better select required skills for a task, and a more data-efficient way to learn movement patterns.

Ultimately, the goal is to help robots pick the right action and adapt to new environments with far less training data than before. According to the team, RGMP consists of two main parts.

The first is called the Geometric-Prior Skill Selector (GSS), which helps the robot decide which of its "tools" and skills is best suited to a task. Using things like its cameras, the robot can use GSS to work out an object’s shape, size, and orientation.

With this information in hand (so to speak), the robot can then work out what needs to be done to complete a given task (e.g., pick up, push, grip, or hold with two hands).

The second is called Adaptive Recursive Gaussian Network (ARGN). Once the robot picks a skill, the ARGN helps the robot actually perform the task. It achieves this by modelling spatial relationships between the robot and the object.

It can also help predict movements step-by-step, and is extremely data-efficient (needs far fewer training examples than typical deep learning methods).
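The division of labor between the two components can be sketched in a short toy example. Everything here is illustrative: the skill names, the size thresholds, and the fixed-fraction update rule are invented stand-ins for the paper's actual GSS heuristics and ARGN network, which the article does not detail.

```python
def select_skill(size_cm):
    # Geometric prior (illustrative): coarse size heuristics choose a
    # manipulation skill, loosely in the spirit of RGMP's
    # Geometric-Prior Skill Selector (GSS).
    longest = max(size_cm)
    if longest > 30:
        return "two_hand_hold"
    if longest < 3:
        return "pinch_grip"
    return "single_hand_grasp"

def rollout(start, target, steps=5):
    # Step-by-step motion prediction (illustrative): each step closes a
    # fixed fraction of the remaining gap to the target, echoing ARGN's
    # recursive, stepwise prediction of movement. A toy stand-in, not
    # the paper's actual Gaussian network.
    traj = [list(start)]
    pos = list(start)
    for _ in range(steps):
        pos = [p + 0.5 * (t - p) for p, t in zip(pos, target)]
        traj.append(pos)
    return traj

skill = select_skill([2.0, 2.0, 10.0])            # a bottle-sized object
path = rollout([0.0, 0.0, 0.0], [0.4, 0.2, 0.3])  # reach toward the object
print(skill, len(path))                            # single_hand_grasp 6
```

The point of the split is that the selector only needs coarse geometry to commit to a skill, while the rollout refines the motion one step at a time.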

This combination of GSS and ARGN helps robots complete tasks without needing thousands of training demonstrations. In testing, robots using the framework achieved an impressive 87 percent success rate on novel tasks they had no prior experience completing.

The team also found that the framework is around five times more data-efficient than diffusion-policy-based models, the current state of the art. That gain could prove very important in the future.

If robots can reliably manipulate objects without being retrained for each new situation, they can actually be used in tasks like helping around the home to clean, tidy, and perhaps even cook.

Read More ...

Apple Is Facing Key Leadership Shakeup

Posted by Kirhat | Tuesday, December 16, 2025 | | 0 comments »

Apple Management
It was reported by Fortune that tech giant Apple is currently undergoing the most extensive executive overhaul in recent history, with a wave of senior leadership departures that marks the company’s most significant management realignment since its visionary co-founder and CEO Steve Jobs died in 2011.

The leadership exodus spans critical divisions from artificial intelligence to design, legal affairs, environmental policy, and operations, which will have major repercussions for Apple’s direction for the foreseeable future.

Last 4 December, Apple announced that Lisa Jackson, its VP of environment, policy, and social initiatives, and Kate Adams, the company's general counsel, will both retire in 2026. Adams has been Apple's chief legal officer since 2017, and Jackson joined Apple in 2013. Adams will step down late next year, while Jackson will leave next month.

Jackson and Adams join a growing list of top executives who have either left or announced their exits this year. AI chief John Giannandrea announced his retirement earlier this month, and its design lead Alan Dye, who took charge of Apple’s all-important user interface design after Jony Ive left the company in 2019, was just poached by Mark Zuckerberg’s Meta this week.

The scope of the turnover is unprecedented in the Tim Cook era. In July, Jeff Williams, Apple’s COO who was long thought to succeed Cook as CEO, decided to retire after 27 years with the company. One month later, Apple’s CFO Luca Maestri also decided to step back from his role. And the design division, which just lost Dye, also lost Billy Sorrentino, a senior design director, who left for Meta with Dye.

Things have been particularly turbulent for Apple’s AI team, though: Ruoming Pang, who headed its AI Foundation Models Team, left for Meta in July and took about 100 engineers with him. Ke Yang, who led AI-driven web search for Siri, and Jian Zhang, Apple’s AI robotics lead, also both left for Meta.

While all of these departures are a big deal for Apple, the timing may not be a coincidence. Both Bloomberg and the Financial Times have reported that Apple is ramping up its succession planning in preparation for Cook, who has led the company since 2011, to retire in 2026.

Cook turned 65 in November and has grown Apple’s market cap from about US$ 350 billion to a whopping US$ 4 trillion under his tenure. Bloomberg reports John Ternus has emerged as the leading internal candidate to replace him.

Read More ...

McDo AI Ad Labeled As "Cold" And "Emotionless"

Posted by Kirhat | Thursday, December 11, 2025 | | 0 comments »

McDo AI Ad
A recent McDonald's Christmas advertisement, generated entirely by AI, has faced public backlash, leading to the video being delisted from YouTube. The general consensus was that it did not appeal, and the public made sure it got pulled.

Reportedly, the ad was created for the fast-food giant’s Netherlands division by the ad agency TBWA\Neboko and the production house The Sweetshop.

The 45-second spot revolved around the theme that the holiday season is the "most terrible time of the year."

Viewers labeled it "cold" and "emotionless," decrying its low quality and the use of AI rather than human artists.

The premise framed McDonald's as a peaceful sanctuary free from the seasonal chaos.

It depicts AI-generated individuals suffering through various common winter activities that go wrong, such as stressful family dinners, chaotic shopping, caroling, botched cookie baking, and disastrous Christmas tree decorating.

The commercial ends with the tagline: "Hide out in McDonald’s until January’s here."

Viewers criticized both the quality and the message of the advertisement.

The AI-generated McDonald’s ad was visually jarring with rapidly changing scenes that complicated the viewing experience.

Futurism reported that this technique is often used in AI video because the technology tends to lose visual continuity after only a few seconds.

The advertisement’s characteristic AI flaws created an unsettling "uncanny valley" effect, making the clip an immediate source of viewer dissatisfaction.

The ad, posted earlier on YouTube, generated a modest 20,000 views.

It prompted a flood of negative comments, leading McDonald’s to first disable the comment section for the weekend and then completely remove the video.

Read More ...

Insect-Style Robot Pulled Off Difficult Maneuvers

Posted by Kirhat | Saturday, December 06, 2025 | | 0 comments »

Insect Robots
According to a report by Aamir Khollam of Interesting Engineering, tiny robotic insects may soon become lifesaving tools in disaster zones. The report stated that MIT researchers have unveiled an aerial microrobot that flies with unprecedented speed and agility, mirroring the acrobatic motion of real insects.

In the future, these miniature flying machines could navigate collapsed buildings after earthquakes and help locate survivors in places larger robots cannot reach.

The breakthrough marks a significant shift in micro-robotics, where flight stability and speed have historically lagged far behind nature’s engineering.

Earlier versions of insect-scale robots could only fly slowly and along predictable paths. The new robot changes that dynamic entirely.

Roughly the size of a microcassette and lighter than a paperclip, the machine uses soft artificial muscles that power its large flapping wings at high frequency.

The updated hardware enables tight turns, rapid acceleration, and aerial tricks that resemble insect maneuverability.

But hardware alone wasn’t enough. The robot needed a smarter and faster "brain."

That came in the form of a new AI-based controller that interprets the robot’s position and environment, then decides how it should move in real time.

Previous control systems required manual tuning by engineers, which limited performance and didn’t scale for complex movement.

Kevin Chen, associate professor in MIT’s Department of Electrical Engineering and Computer Science, explains the goal clearly: "We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate."

He adds, "Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal."

Read More ...

The First SMS Was Sent 33 Years Ago

Posted by Kirhat | Friday, December 05, 2025 | | 0 comments »

First SMS
It was on 3 December 1992 that a simple message reading "Merry Christmas" was sent, marking a historic shift in global communication.

Neil Papworth, then a 22-year-old engineer, sent the message from his computer to the Orbitel 901 phone of Vodafone director Richard Jarvis. He was working on Vodafone UK’s Short Message Service Centre as part of the now-defunct Sema Group Telecoms. At the time, he saw it as routine work rather than a milestone.

"It didn’t feel momentous at all," he later said in an interview with CBC in 2017. "For me it was just getting my job done on the day and ensuring that our software that we’d been developing for a good year was working OK."

The idea for SMS started years before the first message. In 1984, Finnish engineer Matti Makkonen proposed the concept at a conference in Copenhagen.

A year later, Friedhelm Hillebrand at Deutsche Telekom suggested a 160-character limit after studying everyday written messages.

The European Telecommunications Standards Institute began developing formal standards by 1991, and the first message followed a year later in the United Kingdom.

At the time, mobile technology was shifting from analog to digital with GSM networks. Phones did not have keyboards, so Papworth had to send the message through his computer. Jarvis received it while attending a Christmas party.

Shortly after, Papworth got a confirmation call from the party, proving the test had worked.

Early SMS relied on 7-bit encoding and routing through SMS centres that stored and forwarded messages when phones were out of range.
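The arithmetic behind that encoding is what made the 160-character limit possible: 160 seven-bit characters fit exactly into the 140 bytes available in an SMS payload. The sketch below packs plain ASCII code points into septets to show the math; note that real GSM messages map characters through the GSM 03.38 alphabet table rather than raw ASCII.

```python
def pack_7bit(text: str) -> bytes:
    """Pack characters into consecutive 7-bit septets, GSM-style.

    Eight 7-bit characters fill exactly seven octets, which is why
    160 characters fit a 140-byte payload (160 * 7 = 1120 bits = 140 bytes).
    """
    bits, nbits = 0, 0
    out = bytearray()
    for ch in text:
        bits |= (ord(ch) & 0x7F) << nbits  # append the next 7 bits
        nbits += 7
        while nbits >= 8:                  # flush whole octets
            out.append(bits & 0xFF)
            bits >>= 8
            nbits -= 8
    if nbits:                              # flush any remainder
        out.append(bits & 0xFF)
    return bytes(out)

print(len(pack_7bit("A" * 160)))  # a full 160-character message -> 140 bytes
```

Squeezing each character into seven bits instead of eight bought twenty extra characters per message at no cost in bandwidth, which is exactly the kind of economy that mattered on early GSM networks.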

Nokia helped push SMS from experiment to mainstream. In 1994, the company released a handset that allowed users to both send and receive messages.

Adoption grew slowly at first because phones were costly and carriers focused on voice services.

By the late 1990s, texting surged, especially among younger users.

T9 predictive typing made input easier, and prepaid plans made texting cheaper. Network providers later enabled cross-network messaging, accelerating mass usage.

By February 2001, users in the United Kingdom sent around one billion texts every month. Charges reached 10 pence per message, generating major revenue.

By 2010, the International Telecommunication Union reported trillions of messages sent yearly, turning SMS into a global cultural habit that shaped abbreviations like LOL and BRB.

Read More ...

Xpeng's Iron Robot Impresses Crowd

Posted by Kirhat | Thursday, December 04, 2025 | | 0 comments »

Xpeng Robot
When tech company Xpeng unveiled its Next Gen Iron humanoid recently, the robot glided across the stage with movement so fluid that the crowd froze. Many of those in the crowd thought they saw an actor in a suit. Clips spread online within hours, and people everywhere claimed the same thing: it looked too human to be a machine.

The reaction spread fast, so Xpeng's CEO He Xiaopeng returned to the stage one day later with a plan to settle the argument. He cut into Iron's leg to show its internal machinery. It felt theatrical but also necessary to end the rumor that a human controlled the robot from inside.

The demonstration showed Iron was a real machine with complex systems beneath its flexible skin.

He shared how his robotics team stayed awake through the night, seeing viewers accuse them of staging a stunt. After the reveal, Iron walked again in front of the crowd without a human inside. The moment closed the debate and highlighted how far the company has come since its first model in 2024.

The latest Iron uses a humanoid spine with bionic muscles and flexible skin. It moves with 82 degrees of freedom, and its human-sized hands include 22 degrees of freedom supported by a tiny harmonic joint engineered by the company. The robot runs on all-solid-state batteries that keep the body light and strong.

Iron also uses Xpeng's second-generation VLA model. Three Turing chips with 2,250 TOPS of power support tasks like conversations, walking and natural interactions. It responds in ways that feel closer to a person than a robot.

Xpeng says future versions will offer different body shapes. That claim hints at customizable designs when these units reach consumers.

Xpeng's long-term vision goes far beyond a single showcase moment. The company plans to place the Next Gen Iron model in real-world environments. Early units will focus on commercial roles such as tour guides, shopping guides and customer service helpers. These placements allow the robots to interact with large crowds, gather feedback and refine their behavior in dynamic public spaces.

This rollout forms part of what Xpeng describes as a gradual path toward mass production. The team aims to reach large-scale manufacturing by the end of 2026. That milestone could introduce hundreds or even thousands of humanoid units into select venues. Businesses may adopt them to manage foot traffic, assist guests or support basic retail tasks.

While the company talks openly about commercial integration, the timeline for home use remains unclear. They have not shared when consumers will be able to buy a version suited for daily household tasks. Engineers still need to address safety, privacy and reliability standards before a humanoid can operate inside private homes.

Even so, this moment signals a clear shift: robots that move and react in a lifelike way are no longer far-off ideas. They are stepping into public spaces where people will see them operate up close. This shift could reshape how we all view service work and personal assistance in the years ahead.

Read More ...