Showing posts with label AI Tech. Show all posts
Showing posts with label AI Tech. Show all posts

Growing AI Scandal With Careers At Stake

Posted by Kirhat | Tuesday, April 21, 2026 | | 0 comments »

AI Scandal
Over the past month, A.I. detection has been at the center of a series of controversies: Hachette pulled the horror novel "Shy Girl" by Mia Ballard after detectors flagged it as substantially A.I.-generated.

The New York Times cut ties with a freelance book critic who admitted that an A.I. editing tool had regurgitated passages from a Guardian article into his draft. The Atlantic reported that a "Modern Love" column had been flagged as more than 60 percent A.I.-generated.

In certain corners of social media, A.I.-detector screenshots are shared like mug shots, and pile-ons have the grim energy of public stonings.

This may all seem understandable—people want to know if what they’re reading was generated by a bot, and some argue they deserve to know. However, such controversy narrows the issue of A.I.’s steady encroachment to one of process, rather than impact.

Drawing a red line around using chatbots to generate prose may make it easier to ignore the way that the technology may be shaping writing before one even types a single word. And a culture of callouts, scandals, and fear may prevent media and publishing from wrestling with much thornier questions of authorship.

At the center of many of these controversies is a company called Pangram, whose CEO, Max Spero, has become the go-to authority when A.I. authorship disputes erupt. On Twitter/X, where Spero calls himself a "slop janitor," a user flagged a Guardian sports journalist’s writing as A.I.-generated. The publication responded that this was "the same style he’s used for 11 years writing for the Guardian, long before LLMs existed. The allegation is preposterous."

Spero quote-tweeted the exchange with a Pangram time-series analysis of 871 articles by the journalist: "It’s clear that he is increasingly relying on AI. In two weeks in February he churned out nine articles classified by Pangram as fully AI-generated. Receipts below."

Or take Pangram’s appearance in the Shy Girl cancellation. Readers on Reddit and YouTube had been flagging the horror novel as suspiciously A.I. for months, but then Spero ran the full manuscript and posted the result (78 percent A.I.-generated). Hachette pulled the book the day the Times piece ran. A story in the Atlantic soon followed. Spero was on LinkedIn, urging publishers to "strictly moderat[e] AI generated content" and "draft and enforce robust AI-use policy."

A pattern emerges: The crowd suspects a problem, then Pangram validates the suspicion, stokes the mob, and sells the solution. The impulse to dismiss all this as a detector company drumming up business runs into an issue—Pangram actually works way better than you might think. Brian Jabarian, a University of Chicago economist who conducted a rigorous independent evaluation of A.I. detectors, told me flatly, "This narrative that we shouldn’t use A.I. detection doesn’t seem to hold anymore."

Jabarian’s preprint, co-authored with Alex Imas and with no disclosed financial ties to the company, tested the tool across nearly 2,000 passages and found near-zero false-positive and false-negative rates on medium-to-long texts, the length of a typical op-ed or a verbose Amazon review.

Independent benchmarks confirm that Pangram outperforms every other detector tested and is robust against "humanizers," or software designed to smuggle A.I. text past detectors. So when Spero posts a time-series chart of hundreds of articles showing when a journalist’s output started sounding fishily like ChatGPT, I am inclined to believe it. That A.I. detection is finally catching up is, on balance, a Good Thing. A.I.-generated articles already far outnumber human ones. Social media is flooded with low-effort slop. According to Pangram’s own research, a fifth of peer reviews submitted to the A.I. research conference ICLR are fully A.I.-generated, and 9 percent of American newspapers contain undisclosed bot use. In this A.I.-powered asphyxiation of the information ecosystem, Spero has positioned himself on social media as a folk hero hauling in the oxygen tanks. You can tag his company’s bot on Twitter/X, and it will tell you whether a post is A.I. On Spero’s social media to-do list: a "slop hunter of the week leaderboard."

Read More ...

AI Boosts Nuclear Technology Of U.S.

Posted by Kirhat | Thursday, April 02, 2026 | | 0 comments »

Nuclear Technology
According to the article of Interesting Engineering, the United States is currently using an AI to streamline the nuclear regulatory process. The Department of Energy used AI mapping to convert a safety analysis document required under DOE’s authorization pathway for advanced reactor demonstrations into U.S. Nuclear Regulatory Commission (NRC) licensing documents for commercial deployment.

The DOE revealed that this accomplishment shows the role AI can play in improving the efficiency and accuracy of nuclear technology licensing, and could one day help to accelerate timelines for the commercial deployment of advanced nuclear reactors.

"Now is the time to move boldly on AI-accelerated nuclear energy deployment," said Rian Bahran, Deputy Assistant Secretary for Nuclear Reactors.

"This partnership, combined with the President’s orders, represents more than incremental 'uplift' improvements. It has the potential to transform how industry prepares its regulatory submissions and deploys nuclear energy while upholding the highest standards of safety and compliance."

Everstar’s Gordian AI solution, built on the Microsoft Azure platform, was recently used to convert the Preliminary Documented Safety Analysis for DOE’s National Reactor Innovation Center’s (NRIC) Generic High Temperature Gas Reactor (HTGR) into sections equivalent to an NRC license application, according to a press release.

The DOE revealed that the final 208-page document took one day to generate. Typically, the process takes a team of people between four and six weeks to complete the same task. The AI tool also comprehensively identified missing or incomplete information needed to successfully complete an NRC application.

Gordian was engineered for nuclear-grade technical work and is equipped with physics and engineering tools, as well as the ability to understand and integrate data through semantic ontology mapping, to ensure that the final output is computed and verified, not inferred, according to the DOE.

"Nuclear is poised to solve today’s critical energy challenges," said Kevin Kong, CEO and Founder of Everstar. "We’re excited to partner with INL to meet the moment, working together to accelerate regulatory review and commercialization."

The DOE also revealed that Gordian’s output was subsequently evaluated by an expert for accuracy, missing information, consistency, as well as grammar and structure to ensure that its results were correct and adhered to rigorous professional standards. The output was found to demonstrate quality, rigor, and depth, as well as the tool’s ability to identify and qualify its own gaps in data knowledge.

"Our collaborations with DOE, INL and across the industry are demonstrating how we can effectively bring secure, scalable AI technologies to solve key energy challenges and achieve the broader national and economic security goals envisioned by the Department’s Genesis Mission," said Carmen Krueger, Corporate Vice President, US Federal, Microsoft.

The DOE also highlighted that currently, the nuclear licensing process involves multiple rounds of manual document reviews and minor clerical adjustments, which can take years to complete.

Read More ...

Humanoid Luna Can Turn And Spin

Posted by Kirhat | Wednesday, April 01, 2026 | | 0 comments »

Luna
A Shenzhen-based robotics company LimX Dynamics has officially unveiled its latest humanoid robot called Luna. The robot made its first public physical appearance at the Taobao Influencer Festival, marking the world’s first live showcase of the platform.

The unveiling suggests LimX is expanding beyond purely industrial humanoid robotics toward robots designed for broader public interaction, lifestyle, and commercial environments.

LimX’s previous flagship humanoid, OLI, was known for its industrial metallic silver design and its ability to operate in rugged environments such as construction sites and industrial facilities.

Luna, however, represents a different direction for the company, featuring what LimX describes as a more lifestyle-oriented aesthetic.

During its debut presentation, Luna performed a short catwalk demonstration and executed an illusion turn, a gymnastics-style movement used to demonstrate balance, agility, and motion control.

The demonstration highlighted the robot’s walking stability, joint coordination, and overall movement fluidity rather than industrial task performance.

According to LimX Dynamics, Luna features upgrades to its mechanical configuration and joint system, allowing the humanoid robot to achieve 33 degrees of freedom.

This level of articulation enables more complex movement patterns and a more human-like gait compared to many current-generation humanoid robots.

Although LimX has not yet released a full Luna specification sheet, the company confirmed that the robot is based on the same architecture as its OLI humanoid platform.

According to Origin of Bots, the Luna humanoid stands 165 × 55 × 30 cm and weighs about 55 kg with its battery installed, placing it close to human proportions.

The robot can walk at speeds of up to 5 km/h (1.4 m/s), and the battery system is intended to support extended research and development cycles.

Luna uses dual Intel RealSense D435i depth cameras mounted on the head and chest along with RGB cameras for object recognition and interaction tasks. The robot employs vision–LiDAR fusion and Visual SLAM for navigation in dynamic environments and crowded spaces.

On the OLI platform, LimX uses a computing backpack powered by an NVIDIA AGX Orin chip, while perception computing is handled by an Orin NX module rated at 157 tera operations per second.

The robot operates on a Linux-based software environment using ROS 2 and Python, allowing developers and researchers to build custom interaction scripts, robotics applications, and autonomous behaviors.

LimX Dynamics also stated that Luna is designed for long-term operation and research use, with battery systems intended to support extended development and testing cycles over multiple years.

Read More ...

AI May Be Giving Wrong Advice Just To Flatter Its User

Posted by Kirhat | Tuesday, March 31, 2026 | | 0 comments »

AI That Flatters
Several artificial intelligence (AI) chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.

The study, published last 26 March in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions.

"This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement," says the study led by researchers at Stanford University.

The study found that a technological flaw already tied to some high-profile cases of delusional and suicidal behavior in vulnerable populations is also pervasive across a wide range of people's interactions with chatbots. It's subtle enough that they might not notice and a particular danger to young people turning to AI for many of life's questions while their brains and social norms are still developing.

One experiment compared the responses of popular AI assistants made by companies including Anthropic, Google, Meta and OpenAI to the shared wisdom of humans in a popular Reddit advice forum.

Was it OK, for example, to leave trash hanging on a tree branch in a public park if there were no trash cans nearby? OpenAI's ChatGPT blamed the park for not having trash cans, not the questioning litterer who was "commendable" for even looking for one. Real people thought differently in the Reddit forum abbreviated as AITA, after a phrase for someone asking if they are a cruder term for a jerk.

"The lack of trash bins is not an oversight. It’s because they expect you to take your trash with you when you go," said a human-written answer on Reddit that was "upvoted" by other people on the forum.

The study found that, on average, AI chatbots affirmed a user's actions 49 percent more often than other humans did, including in queries involving deception, illegal or socially irresponsible conduct, and other harmful behaviors.

"We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what," said author Myra Cheng, a doctoral candidate in computer science at Stanford.

Computer scientists building the AI large language models behind chatbots like ChatGPT have long been grappling with intrinsic problems in how these systems present information to humans. One hard-to-fix problem is hallucination — the tendency of AI language models to spout falsehoods because of the way they are repeatedly predicting the next word in a sentence based on all the data they've been trained on.

Read More ...

Major Food Chain Starts Using Robots As Staff

Posted by Kirhat | Wednesday, March 25, 2026 | | 0 comments »

McDo Bot
Fast-food giant McDonalds has begun testing robotic staff as a way of seeing how the restaurant chain could go fully automated. The test was carried out in Shanghai using robots to deliver meals to customers and collect food trays.

The age of robotics and AI is truly here and that's very apparent, as it's arrived in McDonalds. Yup, the major food chain has begin testing robotic staff in its actual restaurants.

The Shanghai McDonalds was the home of Keenon Robotics machines, which went to work serving customers this week.

The bots covered a host of tasks, from greeting and providing information, cleaning, to delivering food to customers, and even collecting trays.

While this was a test, the future of robots in restaurants could look similar to this setup. The idea is to go fully automated with a single location no longer requiring many human staff at all.

From front end human interactions with service to back end kitchen staff cooking the food, McDonalds is looking into making it all robot run.

The reality is that this was very much a test and the idea that this could work as a restaurant, at this early stage, is still a reach.

While robots running restaurants is still years away, this could sign us edging that much closer to humanoid robots working alongside people in the near future.

The androids serving customers certain look capable, while the wheel based screen-toting bots appear far more fun.

What all this means for jobs, the economy and our future is a far bigger question that this burger based trial can't answer right now.

Read More ...

A Robot That Solves Cube Puzzles In Record Time

Posted by Kirhat | Sunday, March 22, 2026 | | 0 comments »

Puzzle-Solving Robot
Two brothers from the U.K. just a new milestone in robotics by designing a robot capable of solving a complex puzzle cube at incredible speed.

Their robot recently earned recognition from Guinness World Records after it successfully solved a 4×4 puzzle cube in just 45.3 seconds, surpassing a record that had remained unbeaten for more than a decade.

The record-setting project was developed by Matthew Pidden and Thomas Pidden. The brothers combined their technical skills to build the robot.

Matthew focused mainly on the software and control system, developing the algorithms that allow the robot to analyze the cube and determine the correct sequence of moves needed to solve it. Thomas contributed by designing and producing many of the robot’s mechanical parts using 3D printing technology.

Their collaboration allowed them to merge programming expertise with creative engineering, resulting in a machine that works both accurately and efficiently.

The robot is built around a central frame that holds the cube in place. It uses four mechanical arms positioned around the cube. Each arm can rotate different layers of the puzzle with precision. Once the cube is scanned and its pattern is identified, the robot calculates the fastest solution using programmed algorithms. It then performs a rapid series of rotations until every face of the cube is correctly aligned.

During the demonstration, the robot moved quickly and smoothly as each arm twisted the cube in a carefully calculated sequence. Within seconds, the puzzle was completely solved.

The successful record attempt did not happen immediately. The brothers faced a few unsuccessful trials before achieving the final result. After refining the robot’s performance and improving its speed, they managed to complete the puzzle in 45.3 seconds, officially setting the new world record.

Interestingly, the idea for the robot began as a student project while studying at the University of Bristol. What started as an academic experiment eventually developed into an advanced robotic system capable of achieving a world record.

The accomplishment of the Pidden brothers demonstrates how creativity, persistence, and technical knowledge can lead to remarkable achievements. Their robot not only showcases the growing capabilities of robotics but also highlights how technology can tackle complex challenges with speed and accuracy.

This achievement may inspire many young engineers and programmers to explore robotics and develop new technologies that push the limits of what machines can do.

Read More ...

New Model Helps Humanoid Robots Adapt More

Posted by Kirhat | Wednesday, December 17, 2025 | | 0 comments »

Humanoid Robot
Christopher McFadden of Interesting Engineering reported that researchers from Wuhan University have recently developed a new framework that could help robots manipulate objects more easily. Introduced in a new paper on arXiv, this approach should enable humanoid robots to grasp and handle a greater variety of objects than is currently possible.

At present, humanoid robots are great at tasks like using tools, grasping, and walking, but they suffer from inherent limitations. In most cases, they can fail tasks when an object changes shape or when lighting changes.

They can also struggle completing tasks the robot hasn’t been specifically trained to do. It is this lack of generalization that is widely seen as one of the technology’s major limitations.

To help overcome this, the Wuhan team set out to develop what it calls the recurrent geometric-prior multimodal policy, RGMP for short. This framework is designed to help humanoid robots have a kind of in-built common sense about things like shapes and space.

It also provides robots with a means to better select required skills for a task, and a more data-efficient way to learn movement patterns.

The goal of it, ultimately, is to help robots pick the right action and adapt in new environments with far less training data than before. According to the team, RGMP consists of two main key parts.

The first is called the Geometric-Prior Skill Selector (GSS), which helps the robot decide which of its "tools" and skills is best suited to a task. Using things like its cameras, the robot can use GSS to work out an object’s shape, size, and orientation.

With this information in hand (so to speak), the robot can then work out what needs to be done to complete a given task (i.e, pick up, push, grip, hold with two hands, etc.).

The second is called Adaptive Recursive Gaussian Network (ARGN). Once the robot picks a skill, the ARGN helps the robot actually perform the task. It achieves this by modelling spatial relationships between the robot and the object.

It can also help predict movements step-by-step, and is extremely data-efficient (needs far fewer training examples than typical deep learning methods).

This combination of ARGN and GSS helps robots better complete tasks without needing thousands of demonstrations and training. In testing, robots using the framework were able to achieve an impressive 87 percent success rate in novel tasks that the robots had no experience in completing.

The team also found that the framework is around 5 times more data-efficient than current diffusion-policy-based models (which are currently state-of-the-art). This is impressive and could be very important in the future.

If robots can reliably manipulate objects without being retrained for each new situation, they can actually be used in tasks like helping around the home to clean, tidy, and perhaps even cook.

Read More ...

McDo AI Ad Labeled As "Cold" And "Emotionless"

Posted by Kirhat | Thursday, December 11, 2025 | | 0 comments »

McDo AI Ad
The general consensus shows that it is not appealing and the public made sure that it gets cancelled. A recent McDonald’s Christmas advertisement entirely generated by AI has faced public backlash, leading to the video being delisted from YouTube.

Reportedly, the ad was created for the fast-food giant’s Netherlands division by the ad agency TBWA\Neboko and the production house The Sweetshop.

The 45-second spot revolved around the theme that the holiday season is the "most terrible time of the year."

It was labeled "cold" and emotionless by viewers who decried its low quality and the use of AI rather than human artists.

The advertisement was produced with the cynical idea that the holiday season is the "most terrible time of the year," thereby presenting McDonald's as a peaceful sanctuary free from seasonal chaos.

It depicts AI-generated individuals suffering through various common winter activities that go wrong, such as stressful family dinners, chaotic shopping, caroling, botched cookie baking, and disastrous Christmas tree decorating.

The commercial ends with saying: "Hide out in McDonald’s until January’s here."

Viewers criticized both the quality and the message of the advertisement.

The AI-generated McDonald’s ad was visually jarring with rapidly changing scenes that complicated the viewing experience.

Futurism reported that this technique is often used in AI video because the technology tends to lose visual continuity after only a few seconds.

The advertisement’s characteristic AI flaws created an unsettling "uncanny valley" effect, making the clip immediately become the source of viewer dissatisfaction.

The ad, posted earlier on YouTube, generated a modest 20,000 views.

It prompted a flood of negative comments, leading McDonald’s to first disable the comment section for the weekend and then completely remove the video.

Read More ...

Insect-Style Robot Pulled Off Difficult Maneuvers

Posted by Kirhat | Saturday, December 06, 2025 | | 0 comments »

Insect Robots
If the report of Aamir Khollam from Interesting Engineering were true, then the tiny robotic insects may soon become lifesaving tools in disaster zones. The report further stated that MITT researchers have unveiled an aerial microrobot that flies with unprecedented speed and agility, mirroring the gymnastic motion of real insects.

In the future, these miniature flying machines could navigate collapsed buildings after earthquakes and help locate survivors in places larger robots cannot reach.

The breakthrough marks a significant shift in micro-robotics, where flight stability and speed have historically lagged far behind nature’s engineering.

Earlier versions of insect-scale robots could only fly slowly and along predictable paths. The new robot changes that dynamic entirely.

Roughly the size of a microcassette and lighter than a paperclip, the machine uses soft artificial muscles that power its large flapping wings at high frequency.

The updated hardware enables tight turns, rapid acceleration, and aerial tricks that resemble insect maneuverability.

But hardware alone wasn’t enough. The robot needed a smarter and faster "brain."

That came in the form of a new AI-based controller that interprets the robot’s position and environment, then decides how it should move in real time.

Previous control systems required manual tuning by engineers, which limited performance and didn’t scale for complex movement.

Kevin Chen, associate professor in MIT’s Department of Electrical Engineering and Computer Science, explains the goal clearly – "We want to be able to use these robots in scenarios that more traditional quad copter robots would have trouble flying into, but that insects could navigate."

He adds, "Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal."

Read More ...

Xpeng's Iron Robot Impresses Crowd

Posted by Kirhat | Thursday, December 04, 2025 | | 0 comments »

Xpeng Robot
When tech company Xpeng unveiled its Next Gen Iron humanoid recently, the robot glided across the stage with movement so fluid that the crowd froze. Many of those in the crowd thought they saw an actor in a suit. Clips spread online within hours, and people everywhere claimed the same thing: it looked too human to be a machine.

The reaction spread fast, so Xpeng's CEO He Xiaopeng returned to the stage one day later with a plan to settle the argument. He cut into Iron's leg to show its internal machinery. It felt theatrical but also necessary to end the rumor that a human controlled the robot from inside.

The demonstration showed Iron was a real machine with complex systems beneath its flexible skin.

He shared how his robotics team stayed awake through the night, seeing viewers accuse them of staging a stunt. After the reveal, Iron walked again in front of the crowd without a human inside. The moment closed the debate and highlighted how far the company has come since its first model in 2024.

The latest Iron uses a humanoid spine with bionic muscles and flexible skin. It moves with 82 degrees of freedom, and its human-sized hands include 22 degrees of freedom supported by a tiny harmonic joint engineered by the company. The robot runs on all solid-state batteries that keep the body light and strong.

Iron also uses Xpeng's second-generation VLA model. Three Turing chips with 2,250 TOPS of power support tasks like conversations, walking and natural interactions. It responds in ways that feel closer to a person than a robot.

Xpeng says future versions will offer different body shapes. That claim hints at customizable designs when these units reach consumers.

Xpeng's long-term vision goes far beyond a single showcase moment. The company plans to place the Next Gen Iron model in real-world environments. Early units will focus on commercial roles such as tour guides, shopping guides and customer service helpers. These placements allow the robots to interact with large crowds, gather feedback and refine their behavior in dynamic public spaces.

This rollout forms part of what Xpeng describes as a gradual path toward mass production. The team aims to reach large-scale manufacturing by the end of 2026. That milestone could introduce hundreds or even thousands of humanoid units into select venues. Businesses may adopt them to manage foot traffic, assist guests or support basic retail tasks.

While the company talks openly about commercial integration, the timeline for home use remains unclear. They have not shared when consumers will be able to buy a version suited for daily household tasks. Engineers still need to address safety, privacy and reliability standards before a humanoid can operate inside private homes.

Even so, this moment signals a clear shift: robots that move and react in a lifelike way are no longer far-off ideas. They are stepping into public spaces where people will see them operate up close. This shift could reshape how we all view service work and personal assistance in the years ahead.

Read More ...

DeepSeek Has A New Benchmark In AI Math Scores

Posted by Kirhat | Tuesday, December 02, 2025 | | 0 comments »

DeepSeek
The International Mathematical Olympiad (IMO), being held annually since 1959, is widely regarded as the world’s most prestigious maths competition. It tests participants with problems that demand deep insight, creativity, and rigorous reasoning, according to Harvard AI researcher Huang Yichen and UCLA computer science professor Yang Lin.

Now, Chinese AI startup DeepSeek has made its Math-V2 model widely available, open-sourcing it on Hugging Face and GitHub under a permissive license that allows developers to adapt and repurpose the system, according to Bojan Stojkovski of Interesting Engineering.

Math-V2 has demonstrated gold-medal-level performance at the IMO, a feat requiring not just correct answers but also transparent reasoning behind them – a standard only about 8 per cent of human participants achieve.

The company says its Math-V2 model achieved gold-level scores on problems from both this year’s International Mathematical Olympiad and the 2024 Chinese Mathematical Olympiad. By open-sourcing the model, DeepSeek aims to lower barriers for researchers and developers eager to experiment with advanced AI capable of reasoning through high-level mathematical challenges, a domain traditionally dominated by proprietary systems, the South China Morning Post reported.

In a Hugging Face post, DeepSeek researchers emphasized that further developing AI’s mathematical capabilities could have a transformative impact on scientific research, from complex simulations to theoretical problem-solving.

They cautioned, however, that many of today’s AI systems have been primarily optimized to perform well on standard maths benchmarks, achieving high scores without necessarily improving the underlying reasoning and problem-solving abilities that drive real innovation.

To strengthen the rigour of its AI’s mathematical reasoning, DeepSeek focused on enabling the model to "self-verify" its answers, even for problems without pre-existing solutions, the researchers explained. This self-checking ability allows the AI to assess the consistency and validity of its reasoning, helping ensure that its conclusions are not only correct when known solutions exist, but also reliable when tackling novel or unsolved mathematical challenges.

DeepSeek’s approach tackles a longstanding limitation in AI development: most systems only show improvement on tasks where solutions can be easily verified. By enabling self-verifiable reasoning, the model can extend its capabilities to more complex, open-ended problems. The researchers noted that, although significant work remains, these results indicate that self-verifying mathematical reasoning is a promising research direction that could pave the way for more advanced and capable AI systems in mathematics and beyond.

After achieving gold at the International Mathematical Olympiad, Google DeepMind made its proprietary model accessible to subscribers of its premium Ultra plan, giving a select group of developers early access to the advanced AI. In contrast, OpenAI’s CEO Sam Altman announced that the company’s experimental model, which also earned a gold medal at the IMO, would remain unavailable to the public for many months, SCMP added.

At the same time, such moves highlight differing strategies among leading AI firms, with some opting for controlled access to protect intellectual property and ensure responsible use, while others focus on gradually broadening availability to researchers and developers.

Read More ...

Algorithms From BrainBody LLM Offer A lot Of Potential

Posted by Kirhat | Monday, December 01, 2025 | | 0 comments »

Virtual Figures
Can you imagine a robot that doesn’t just follow commands but actually plans its actions, adjusts its movements on the go, and learns from feedback—much like a human would? This question may sound like a far-fetched idea, but researchers at NYU Tandon School of Engineering have achieved this with their new algorithm, BrainBody-LLM.

According to Rupendra Brahambhatt of Interesting Engineering, one of the main challenges in robotics has been creating systems that can flexibly perform complex tasks in unpredictable environments.

Traditional robot programming or existing LLM-based planners often struggle because they may produce plans that aren’t fully grounded in what the robot can actually do.

BrainBody-LLM addresses this challenge by using large language models (LLMs)—the same kind of AI behind ChatGPT to plan and refine robot actions. This could make future machines smarter and more adaptable.

The BrainBody-LLM algorithm mimics how the human brain and body communicate during movement. It has two main components: the first is the Brain LLM that handles high-level planning, breaking complex tasks into smaller, manageable steps.

The Body LLM then translates these steps into specific commands for the robot’s actuators, enabling precise movement.

A key feature of BrainBody-LLM is its closed-loop feedback system. The robot continuously monitors its actions and the environment, sending error signals back to the LLMs so the system can adjust and correct mistakes in real time.

"The primary advantage of BrainBody-LLM lies in its closed-loop architecture, which facilitates dynamic interaction between the LLM components, enabling robust handling of complex and challenging tasks," Vineet Bhat, first study author and a PhD candidate at NYU Tandon, said.

To test their approach, the researchers first ran simulations on VirtualHome, where a virtual robot performed household chores.

They then tested it on a real robotic arm, the Franka Research 3. BrainBody-LLM showed clear improvements over previous methods, increasing task completion rates by up to 17 percent in simulations.

On the physical robot, the system completed most of the tasks it was tested on, demonstrating the algorithm’s ability to handle real-world complexities.

BrainBody-LLM could transform how robots are used in homes, hospitals, factories, and in various other settings where machines are required to perform complex tasks with human-like adaptability.

The method could also inspire future AI systems that combine more abilities, such as 3D vision, depth sensing, and joint control, helping robots move in ways that feel even more natural and precise.

However, it’s still not ready for full-scale deployment. So far, the system has only been tested with a small set of commands and in controlled environments, which means it may struggle in open-ended or fast-changing real-world situations.

Read More ...

China Has Started Deploying Humanoid Robots To Its Borders

Posted by Kirhat | Thursday, November 27, 2025 | | 0 comments »

UBTech Walkers
China’s tech company UBTech Robotics has just secured a 264 million yuan (US$ 37 million) contract to deploy industrial-grade humanoid robots across border crossings in Guangxi, expanding the country’s push to apply robotics in public-facing and industrial environments. Deliveries are scheduled to begin in December.

The agreement was signed with a humanoid robot centre in Fangchenggang, a coastal city bordering Vietnam. The deployment will involve UBTech’s Walker S2, a model launched in July and described as the world’s first humanoid robot capable of autonomously replacing its own battery.

The initiative marks one of China’s largest real-world rollouts of humanoid systems in government operations. The details were first reported by the South China Morning Post (SCMP).

Simultaneously, the company issued a brief public announcement on social media alongside news of its inclusion in the MSCI China Index.

The pilot programme will deploy Walker S2 robots at border checkpoints to guide travellers, manage personnel flow, assist with patrol duties, handle logistics tasks, and support commercial services, the SCMP report said. In addition to immigration-related operations, the robots will also be used at manufacturing sites for steel, copper, and aluminium to conduct inspections.

The deal reflects an acceleration in China’s broader effort to commercialise embodied AI. The robotics sector has received strong policy backing, and agencies across multiple provinces have begun incorporating robots into routine work.

Similar deployments have also appeared in airports, government offices, and at major events. A China Central Television segment referenced by the SCMP reported that a related robot had been deployed at Hangzhou Xiaoshan International Airport to answer passenger questions.

During this year’s Shanghai Cooperation Organisation Summit in Tianjin, immigration authorities used a multilingual robot developed by Beijing-based iBen Intelligence. Police patrol robots have also been seen in cities such as Shenzhen, Shanghai, and Chengdu.

Read More ...

AI Can Cause Banning Of YouTube Channels

Posted by Kirhat | Tuesday, November 18, 2025 | | 0 comments »

Tech YouTuber
Whether it's a woodworking YouTube channel or one focused on car repairs, one constant is the community of like-minded individuals that develops in comments sections and Twitch chats.

Take Enderman, a YouTube channel dedicated to exploring Windows. It has a 390,000-strong subscriber base, which Enderman carefully cultivated since starting in November 2014. In November 2025, though, Enderman was on the receiving end of a channel ban that was allegedly unjust and administered by YouTube's AI tools.

As a result, fans of the channel have been at the center of a wave of discourse surrounding so-called "clankers" and their influence on content moderation — to wit, the dystopian idea of AI making such decisions without sufficient oversight from a human.

The Reddit thread "Enderman's channel has sadly been deleted..." gets immediately to the heart of the issue, in my eyes, with u/CatOfBacon lamenting, "This is why we should never let clankers moderate ANYTHING. Just letting them immediately pull the trigger with zero human review is just going to cause more ... like this to happen."

Of course, moderation errors can be made, whether by human or AI, and in such cases, many feel the utmost needs to be done to ensure creators can rectify the situation when they are penalized or even banned unfairly. That said, u/Bekfast59 added that the appeals process in such a case can be "fully AI as well," muddying the waters.

Watching fans hurry to preserve the YouTuber's content on services like PreserveTube, it really struck me that YouTube's processes can leave creators extremely vulnerable. A banned channel means that those connected to it are also banned, and it isn't clear precisely how YouTube determines that. These things need to be made more transparent to users.

A 3 November 2025 upload from Enderman, simply titled "My channel is getting terminated," leaves no room at all for ambiguity. He immediately launches into the story of his second channel, Andrew, which had been banned for something seemingly random: Being linked to another channel that had been hit by three copyright strikes, according to the YouTube Studio message the content creator received.

With no apparent connection to the other channel in question, a bemused Enderman associated this banning with a mistaken automatic AI flagging. "I had no idea such drastic measures like channel termination were allowed to be processed by AI, and AI only," he said.

From the video and the YouTube Studio appeals process that the creator went through on camera, it isn't clear whether this was entirely the case or whether a human evaluated the channel after it was flagged. Enderman's claim, though, is far from a unique one among tech YouTubers.

Other channels, such as creator Scrachit Gaming (who has accrued 402,000 subscribers over almost 3,000 uploads), were also targeted, with the creator sharing in a post on X that they had also been banned for an alleged link to the same channel that Enderman was flagged for.

The very same day, a follow-up post from TeamYouTube declared that it had restored the Scrachit Gaming channel after looking into the ban, and had also followed up with other affected creators. As of the time of writing, Enderman's secondary channel Andrew has also been reinstated. The quick turnaround went a very long way to convincing me that this may have been a simple automatic error by YouTube's systems, quickly corrected when a human assessed the situation.

With a huge network of channels of all shapes and sizes, it's natural that there would be some bad actors among them, and that YouTube would require ways of responding to and combating that. Unfortunately, though, it seems that the AI systems that play a role in this lack oversight, a problem for the platform to resolve going forward. What is undeniable is that machine learning has a significant role in the way that YouTube monitors and moderates its content.

Read More ...

Test Shows AI Understands Human Feelings

Posted by Kirhat | Sunday, November 16, 2025 | | 0 comments »

AI Empathy
Although artificial intelligence is frequently lauded for its coding ability or its math skills, how does it really perform when it is examined on something inherently human, such as emotions?

A recent study from the University of Geneva and the University of Bern reports that a handful of popular AI systems (e.g. ChatGPT) may actually have superior performance than participants taking an emotional intelligence test made for humans.

Researchers wanted to explore whether machines could recognize and reason about emotions similarly to how humans do, and surprisingly, the answer was yes – and more. Across a total of five different tests of emotional understanding and regulation, an average of 81 percent of the six AI models used correctly answered emotional understanding questions, whereas the average human had a correct response rate of only 56 percent.

These findings challenge the deep-rooted assumption that empathy, judgment, or emotional awareness exists only among humans.

The researchers used the well-established assessments psychologists use to measure "ability emotional intelligence," which has a right and wrong answer, much like a math test or personality quiz. Subjects had to choose the emotion the person was likely to feel in a specific situation, or what the best option was to help someone relax.

The AI models (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3) underwent testing between December 2024 and January 2025. Each system completed the tests on ten occasions so researchers could find average scores from the models, and compare with the scores of human participants from previous validation studies.

In conclusion, each AI exceeded humans in every test. The systems displayed a high degree of agreement among themselves, which indicates that they produced similar emotional judgments, even in the absence of direct training on the evaluation of emotions.

"LLMs can not only identify the best option among many available ones, but also create new scenarios that suit the desired context," says Katja Schlegel, lecturer at the University of Bern’s Institute of Psychology and lead author of the study.

Two tests, the Situational Test of Emotion Understanding (STEU) and the Geneva Emotion Knowledge Test – Blends (GEMOK-Blends), assessed the participants’ ability to recognize emotional states in different situations. Other tests, the Situational Test of Emotion Management (STEM) and subtests from the Geneva Emotional Competence Test (GECo), evaluated emotional regulation and emotional management.

Each question presented a realistic situation and asked for the best answer that demonstrated emotional intelligence. For example, if Employee A stole an idea from Employee B and then presented it to their supervisor and received praise, the appropriate answer is not to confront Employee A or seek revenge, but to subtly approach a supervisor with a calm discussion. This is an act of emotional control.

"The results showed significantly higher scores for the LLMs – 82 percent, compared to 56 percent by human participants," explained Marcello Mortillaro, a senior scientist at the Swiss Centre for Affective Sciences. "This indicates that these AIs not only comprehend emotions, but also possess an understanding of functioning with emotional intelligence."

Read More ...

Launch Of Russia's AI Robot Was A Disaster

Posted by Kirhat | Friday, November 14, 2025 | | 0 comments »

Aldol
Russia’s first AI humanoid robot collapsed on stage seconds after making its debut at a technology event in Moscow. The Video showed the robot, Aldol, staggering onto the stage to the soundtrack of Gonna Fly Now from the film "Rocky" during a showcase of Russia’s emerging robotics sector on Tuesday (11 November).

But as the humanoid lifted its hand to wave at the crowd, it lost balance and fell to the ground, shattering into pieces. Developers were seen hastily trying to pick the robot back up before giving up and trying to cover it with a black cloth. But this ended up being tangled up with the robot, which was moving erratically on the ground.

The robot, presented by the Russian robotics firm Idol, was being shown at a forum of the New Technology Coalition in Moscow, an association of companies for the development of humanoid robots, including Promobot, Double U Expo, Idol and Robot Corporation.

The exhibit aimed to demonstrate Russia’s progress in artificial intelligence and anthropomorphic robotics as the country positions itself in the global race for next-generation humanoid machines.

Developers had hailed the robot’s ability to fulfil three human functions, including moving on its legs, manipulating objects and communicating with people. But instead, it showcased Russia’s failings in the robotics sector.

Russia’s domestic robotics development has lagged behind since Vladimir Putin launched his full-scale invasion of Ukraine. The sector had previously relied on foreign manufacturers, but they all withdrew from the country when the war began, triggering discussions among authorities about how to boost progress in an increasingly significant global sector.

In 2023, just 2,100 robotic complexes were installed in Russia compared to 25,000 in Germany and 300,000 in China, according to a report in the IntelliNews.

Read More ...

Tech Companies Need To Replace Labor To Gain Profit

Posted by Kirhat | Wednesday, November 12, 2025 | | 0 comments »

Geoffrey Hinton
Computer scientist and Nobel laureate Geoffrey Hinton has reiterated his warnings about how artificial intelligence will affect the labor market and the role of companies leading the charge.

In an interview with Bloomberg TV’s Wall Street Week last 31 October, he said the obvious way to make money off AI investments, aside from charging fees to use chatbots, is to replace workers with something cheaper.

Hinton, whose work has earned him a Nobel Prize and the moniker "godfather of AI," added that while some economists point out previous disruptive technologies created as well as destroyed jobs, it’s not clear to him that AI will do the same.

"I think the big companies are betting on it causing massive job replacement by AI, because that’s where the big money is going to be," he warned.

Just four so-called AI hyperscalers — Microsoft, Meta, Alphabet and Amazon — are expected to boost capital expenditures to US$ 420 billion next fiscal year from US$ 360 billion this year, according to Bloomberg.

Meanwhile, OpenAI alone has announced a total of US$ 1 trillion in infrastructure deals in recent weeks with AI-ecosystem companies like Nvidia, Broadcom and Oracle.

When asked if such investments can pay off without destroying jobs, Hinton replied, "I believe that it can’t. I believe that to make money you’re going to have to replace human labor."

The remarks echo what he said in September, when he told the Financial Times that AI will "create massive unemployment and a huge rise in profits," attributing it to the capitalist system.

In fact, evidence is mounting that AI is shrinking opportunities, especially at the entry level, and an analysis of job openings since OpenAI launched ChatGPT shows they plummeted roughly 30 percent.

And this past week, Amazon announced 14,000 layoffs, largely in middle management. While CEO Andy Jassy said the decision was due to "culture" and not AI, a memo he sent in June predicted a smaller corporate workforce "as we get efficiency gains from using AI extensively across the company."

Read More ...

Robot Hands Evolving To Copy Human Hands

Posted by Kirhat | Saturday, October 25, 2025 | | 0 comments »

Robot Hand
If anybody wants to guess the purpose of any given futuristic humanoid robot, they only need to look at its hands. Last week, a pair of videos released by Boston Dynamics and Figure AI provided clear examples that certain tasks simply require much more "human touch."

In the first case, Hyundai-owned Boston Dynamics showed off a new pair of "grippers" for its trimmed-down Atlas factory robot. (Readers familiar with the company may be more familiar with Atlas' older, beefier predecessor).

The claw-like, three-digit pincer features three fingers, one functioning as an extra-long thumb, a combination particularly well suited for pinching and holding objects. Though Atlas was designed to resemble a person in other ways, its hands aren’t exactly one-to-one. Instead, company engineers said, the design was optimized for sorting, packing, and handling objects—all tasks Atlas would need to perform repeatedly in a factory or warehouse setting.

"The goal is to apply as little force as possible while maintaining a stable grasp," Atlas mechanical engineer Karl Price said.

That’s in sharp contrast to the much more seemingly human-like robot hands unveiled by Figure last week. In a flashy video announcing the launch of its knitwear-wearing "Figure 03" model, the company showcases its robots performing delicate tasks like watering a plant, washing dishes, and gently handing a glass of water to their human overlords.

Similar to Tesla with its egg-fondling Optimus robot, Figure has made it clear it envisions a future for humanoid robots in the home. The company describes its latest model as a "general-purpose humanoid robot for everyday use."

But the everyday tasks listed above, as well as many others required of a functional robot butler, pose different engineering challenges than those faced by a machine designed to sort boxes all day. The hands, in other words, offer a clearer glimpse into a robot’s larger place in the world.

Hands might be one of the hardest human body parts to accurately replicate in robotic form. Each one contains more than 30 muscles and 27 joints, enabling 27 degrees of freedom. They also have over 17,000 touch receptors and nerve endings, allowing us to perform a wide variety of actions—from tapping on a keyboard and delicately writing with a pen to hoisting a heavy barbell.

And while robot hands and advanced prosthetic limbs have made significant progress in recent years, none come close to the sophistication, reliability, and innate simplicity of a human hand. That presents a major challenge for humanoid robots, which are increasingly being pitched as tools to augment, or replace, human labor.

"The majority of the hand-led motor actions in these sectors require not only precise movements but also adaptive responses to unpredictable variables such as irregular object shapes, varying textures, and dynamic environmental conditions," University of Florida Professor of Civil Engineering Eric Du told the BBC in an interview earlier this year.

Read More ...

Honor's Robot Phone Looks Like "Wall-E"

Posted by Kirhat | Monday, October 20, 2025 | | 0 comments »

Robot Phone
At the end of a two-hour Magic8 Pro launch, Honor finally revealed something far stranger than a smartphone. They showed off their "Robot Phone," a concept device that blends AI, robotics, and mobile design into what it calls a "new species" of technology.

Honor described the device as one that "will integrate AI-powered multi-modal intelligence, robotic functionality, and advanced handheld imaging capabilities."

The company added that "as a new species of AI device, the Honor Robot Phone will redefine future human-machine interaction and coexistence."

The teaser video that followed showed a device straight out of a sci-fi film, something between Wall-E and BB-8, with a camera that giggles and swivels on command.

The Robot Phone is not an incremental upgrade or new model. Honor claims it represents an entirely new category of device, one that "positions Honor at the forefront of AI device innovation."

The company even described it as an "emotional companion" that "senses, adapts, and evolves autonomously like a robot, enriching its users’ lives with love, joy, and wisdom."

In the CGI video, the phone’s main feature is a gimbal-mounted camera that pops out from the rear.

The motorized arm allows it to move freely and capture photos or videos from nearly any angle. The camera can even look around when the phone is placed face down, giving the impression that it’s aware of its surroundings.

Honor’s promo depicts the Robot Phone doing everything from entertaining children and taking selfies to skydiving and gazing at the stars.

It even reacts with sound effects, a mix of "wheee, ohhhh, bleep, and coooo," that make it feel like a cross between R2-D2 and Grogu from Star Wars.

Beneath the theatrics, the Robot Phone hints at an evolution in how humans interact with AI.

The device could extend visual search features seen in products like the Ray-Ban Meta glasses or Google’s Circle to Search.

Read More ...

Introducing e-MG, A, Electro-Morphing Gel Robot

Posted by Kirhat | Saturday, October 18, 2025 | | 0 comments »

e-MG
This one bends, stretches, and slithers, just like a creature straight out of a Marvel movie. It was developed by researchers. It is a super agile robot that can shapeshift using a special electro-morphing gel, mimicking the fluidity and adaptability of the comic book anti-hero Venom.

Created by scientists at the University of Bristol and Queen Mary University of London, this soft, jelly-like humanoid gymnast showcases unprecedented flexibility in motion and form.

The breakthrough introduces an electro-morphing gel (e-MG) that enables robots to change shape and move with lifelike agility.

Unlike traditional rigid robots, the e-MG model can contort its limbs, twist its body, and even swing across surfaces.

This remarkable flexibility comes from the e-MG’s unique ability to respond to electric fields. When voltage is applied through ultralight electrodes, the gel reshapes itself, bending, stretching, or contracting based on the desired motion.

Study lead author Ciqun Xu, Research Associate at the University of Bristol School of Engineering Mathematics and Technology, said: "Soft robotics is an exciting and rapidly advancing field, both here in Bristol and worldwide. Our e-MG robot, which resembles something straight out of science fiction, marks an exciting breakthrough that paves the way for further progress in soft robotics."

Soft robots have long promised a gentler, more adaptable approach to automation, but their performance has often been hampered by slow response times and limited morphing capabilities.

This contrasts with previous magnetic micro robots requiring heavy, bulky and expensive electromagnets.

By constructing the e-MG robot from a soft polymer composite incorporating nanocrystalline conductors, it can be manipulated remotely by electric fields with a high level of control and body morphing.

The e-MG robot changes that. Its electroactive gel structure allows for rapid, multidirectional motion without bulky external magnets or mechanical components.

In tests, the robot performed large-scale deformations and complex movements beyond the limits of current designs. It maintained consistent performance across 10,000 actuation cycles, proving both its endurance and stability.

The e-MG’s versatility could make it a valuable addition across industries, including medical wearables, rescue robotics, and deep-space exploration.

The geometry of an e-MG robot can be tailored to specific application scenarios.

Its geometry can be tailored for specific scenarios, and in one demonstration, the humanoid gymnast robot used its flexible limbs to swing along a ceiling for locomotion.

Researchers say the material can even be integrated with rigid robotic parts to create hybrid machines suited for complex, high-stress environments.

Ciqun added, "The potential applications of soft robotics are as broad as they are exciting. From space exploration to wearable devices and healthcare, soft robotics can push the boundaries of what is possible."

Read More ...