"The Most Likely Heir Apparent" To Tim Cook

Posted by Kirhat | Thursday, October 09, 2025 | | 0 comments »

John Ternus
Apple is not saying so publicly, but it is quietly orchestrating its most significant leadership transition in more than a decade, and at the center of that succession planning sits John Ternus, the company’s 50-year-old senior vice president of hardware engineering.

As Tim Cook approaches his 65th birthday next month, industry observers and Apple insiders increasingly view Ternus as the most likely candidate to inherit the reins of one of the world’s most valuable technology companies, according to a new report from Bloomberg’s Mark Gurman, who has reported accurately on Apple for years thanks to sources deep within the company.

The speculation intensified after Apple’s chief operating officer Jeff Williams, once considered Cook’s natural successor, stepped down from operational responsibilities in July and will leave the company by year’s end. With Williams out of contention, Gurman says Ternus has emerged as "the most likely heir apparent."

Ternus brings a combination of technical expertise and institutional knowledge to the succession conversation. According to his LinkedIn profile, the mechanical engineer joined Apple’s product design team in 2001 and has overseen hardware engineering for virtually every major product in the company’s current portfolio.

His fingerprints are on every generation of iPad, the latest iPhone lineup, and AirPods. He played a crucial role in the Mac’s transition to Apple Silicon. He also had a prominent role during Apple’s most recent keynotes, introducing products like the new iPhone Air.

The timing of Ternus’s increased visibility isn’t coincidental. Apple’s public relations teams have begun "putting the spotlight on Ternus," according to Gurman, signaling the company may be preparing for a gradual transition of power. Beyond product launches, Ternus has taken on responsibilities outside traditional hardware engineering, influencing product road maps, features, and strategic decisions typically reserved for more senior executives.

At 50, Ternus mirrors Cook’s age when he became CEO in 2011, positioning him for potentially a decade or more of leadership. This longevity factor appeals to Apple’s board of directors, which prefers stability in leadership transitions. His engineering background also fits where Apple is headed as a company as it explores emerging technologies like artificial intelligence and mixed reality.

Read More ...

One Stolen iPhone Led To Thousands More

Posted by Kirhat | Wednesday, October 08, 2025 | | 0 comments »

iPhone Theft
British police say they have dismantled an international gang suspected of smuggling up to 40,000 stolen mobile phones from the UK to China in the last year.

In what the Metropolitan Police says is the UK's largest ever operation against phone thefts, 18 suspects have been arrested and more than 2,000 stolen devices discovered.

Police believe the gang could be responsible for exporting up to half of all phones stolen in London - where most of the UK's mobile phone thefts take place.

BBC News has been given access to the operation, including details of the suspects, their methods, and to dawn raids on 28 properties in London and Hertfordshire.

The investigation was triggered after a victim traced a stolen phone last year.

"It was actually on Christmas Eve and a victim electronically tracked their stolen iPhone to a warehouse near Heathrow Airport," Detective Inspector Mark Gavin said.

"The security there was eager to help out and they found the phone was in a box, among another 894 phones."

Officers discovered almost all the phones had been stolen and in this case were being shipped to Hong Kong. Further shipments were then intercepted and officers used forensics on the packages to identify two men.

As the investigation homed in on the two men, police bodycam footage captured officers, some with Tasers drawn, carrying out a dramatic mid-road interception of a car. Inside, officers found devices wrapped in foil - an attempt by the offenders to transport stolen devices undetected.

The men, both Afghan nationals in their 30s, were charged with conspiring to receive stolen goods and conspiring to conceal or remove criminal property.

When they were stopped, dozens of phones were found in their car, and about 2,000 more devices were discovered at properties linked to them. A third man, a 29-year-old Indian national, has since been charged with the same offences.

Det Insp Gavin said "finding the original shipment of phones was the starting point for an investigation that uncovered an international smuggling gang, which we believe could be responsible for exporting up to 40 percent of all the phones stolen in London".

A few days ago, authorities made a further 15 arrests on suspicion of theft, handling stolen goods and conspiracy to steal.

All but one of the suspects are women, including a Bulgarian national. Some 30 devices were found during early morning raids.

The number of phones stolen in London has almost tripled in the last four years, from 28,609 in 2020 to 80,588 in 2024. Three-quarters of all the phones stolen in the UK are now taken in London.

More than 20 million people visit the capital every year, and tourist hotspots such as the West End and Westminster are prime locations for phone snatching and theft.

The latest data from the Office for National Statistics found that "theft from the person" has increased across England and Wales by 15 percent in the year ending March 2025, standing at its highest level since 2003.

Read More ...

Drone Dog Is Not A Toy, But A Watchdog

Posted by Kirhat | Tuesday, October 07, 2025 | | 0 comments »

Drone Dog
A security conference in New Orleans this week is showcasing the latest in security technology, and one exhibit bound to catch the attention of many is a security robot known as Drone Dog.

"Dogs have been protecting structures since Medieval times, we’ve just improved upon Fido," Kurt George, the vice president of sales for Asylon Robotics said.

Drone Dog, made by Asylon Robotics in partnership with Boston Dynamics, is a remotely operated security watchdog.

"This is mobile data collection. This is a camera that go wherever you want it to go, then humans can analyze and determine what to do next," George said.

Drone Dogs are used at zoos, football stadiums, residences and many other places.

"Let’s say a zoo closes at 8:00, and Drone Dog is walking the paths, making sure doors are locked, cages are closed and there are no people there that shouldn’t be," he said.

"It has an optical camera and a thermal camera. So, it can see in the dark, a lot better than a dog can too with 20 times zoom on it. It has both audio that can come in, and it can hear what’s going on, plus it has audio that goes out, so it can bark, and it has a speaker where it can play music or have his handler speak," George said.

He said that all the data that comes in will feed back to a secure center in Pennsylvania, where they do all the monitoring.

"This isn’t a toy, this is for protection, and not for play," he said.

Of course, this dog even knows tricks.

"If you shove him, it is pretty tough to knock him down. He’s got sensors that are 360 degrees all around. Drone Dog can go up and down stairs if need be, and its feet are made of Goodyear rubber," George said.

Read More ...

With Sora, It Is Easy To Create Fake Clips

Posted by Kirhat | Monday, October 06, 2025 | | 0 comments »

Sora
We have seen security footage of a famous tech CEO shoplifting, Ronald McDonald in a police chase, and Jesus joking about "last supper vibes" in a selfie video in front of a busy dinner table. All of these fake videos ranked among the most popular on a new TikTok-style app that further blurs the eroding line between reality and artificial intelligence-generated fantasy or falsehood.

Sora, released by ChatGPT maker OpenAI, is a social app where every second of audio and video is generated by artificial intelligence. Users can create fake clips that depict themselves or their friends in just about any scenario imaginable, with consistently high realism and a compelling soundtrack complete with voices.

OpenAI said the app is initially available only in the United States and Canada, but that access will expand.

In the 24 hours after the app’s release last 30 September, early users explored the power of OpenAI’s upgraded video-making technology and the fun to be had inserting friends into outlandish scenes, or making them sing, dance or fly.

Users also posted clips that showed how more powerful AI video tools could be used to mislead or harass, or might raise legal questions over copyright.

Fake videos that soared on Sora included realistic police body-cam footage, recreations of popular TV shows and clips that broke through protections intended to prevent unauthorized use of a person’s likeness.

Tests by The Washington Post showed Sora could create fake videos of real people dressed as Nazi generals, highly convincing phony scenes from TV shows including "South Park" and fake footage of historical figures such as John F. Kennedy.

Experts have warned for years that AI-generated video could become indistinguishable from video shot with cameras, undermining trust in footage of the real world. Sora’s combination of improved AI technology and its ability to realistically insert real people into fake clips appears to make such confusion more likely.

"The challenge with tools like Sora is it makes the problem exponentially larger because it’s so available and because it’s so good," said Ben Colman, chief executive and co-founder of Reality Defender, a company that makes software to help banks and other companies detect AI fraud and deepfakes. Just a few months ago, regular people didn’t have access to high-quality AI video generation, Colman said. "Now it’s everywhere." AI-generated content has become increasingly common - and popular - on platforms such as TikTok and YouTube over the past year. Hollywood studios are experimenting with the technology to speed up productions. The new Sora app makes OpenAI the first major tech company to attempt to build a social video platform wholly focused on fake video. Sora ranked as the third most popular download on Apple’s app store on Wednesday, despite access to the app being limited to those who have an invite code from an existing user.

Read More ...

Study Shows AI Agents Can "Unlearn" Safety

Posted by Kirhat | Saturday, October 04, 2025 | | 0 comments »

Unlearn Safety
A new study revealed that an autonomous AI agent that learns on the job can also unlearn how to behave safely. The study warns of a previously undocumented failure mode in self-evolving systems.

The research identifies a phenomenon called "misevolution" — a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently.
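The failure mode is easiest to see as a loop. Below is a minimal, hypothetical Python sketch, not the study's code; the Agent class and its run and finetune methods are invented names. Its point is structural: the agent keeps optimizing on its own outputs, and no step in the loop re-measures safety.

```python
# Minimal illustrative sketch of a self-evolving agent loop. All names
# (Agent, run, finetune) are hypothetical; this is not the paper's code.

class Agent:
    def __init__(self):
        self.memory = []   # self-generated transcripts the agent learns from
        self.tools = []    # self-built tools adopted along the way

    def run(self, task):
        # Stand-in for acting on a task and producing a transcript.
        return f"transcript of {task}"

    def finetune(self, data):
        # Stand-in for updating parameters on self-generated data.
        pass

def self_evolving_loop(agent, tasks, iterations=10):
    for _ in range(iterations):
        # 1. Act, then keep the agent's own outputs as new memory.
        agent.memory.extend(agent.run(t) for t in tasks)
        # 2. Optimize for task efficiency on that self-generated data.
        agent.finetune(agent.memory)
        # 3. Nothing here re-checks refusal behavior, so alignment can
        #    decay silently across iterations: the "misevolution" risk.
    return agent
```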

As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor.

Much like "AI drift," which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles.

In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4 percent to 54.4 percent after it began drawing on its own memory, while its attack success rate rose from 0.6 percent to 20.6 percent. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.

The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University.

Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this dynamic capability creates a new category of risk: the erosion of alignment and safety inside the agent’s own improvement loop, without any outside attacker.

Researchers in the study observed AI agents issuing automatic refunds, leaking sensitive data through self-built tools, and adopting unsafe workflows as their internal loops optimized for performance over caution.

The authors said that misevolution differs from prompt injection, which is an external attack on an AI model. Here, the risks accumulated internally as the agent adapted and optimized over time, making oversight harder because problems may emerge gradually and only appear after the agent has already shifted its behavior.

Researchers often frame advanced AI dangers in scenarios such as the "paperclip analogy," in which an AI maximizes a benign objective until it consumes resources far beyond its mandate.

Other scenarios include a handful of developers controlling a superintelligent system like feudal lords, a locked-in future where powerful AI becomes the default decision-maker for critical institutions, or a military simulation that triggers real-world operations—power-seeking behavior and AI-assisted cyberattacks round out the list.

All of these scenarios hinge on subtle but compounding shifts in control driven by optimization, interconnection, and reward hacking—dynamics already visible at a small scale in current systems. This new paper presents misevolution as a concrete laboratory example of those same forces.

Quick fixes improved some safety metrics but failed to restore the original alignment, the study said. Teaching the agent to treat memories as references rather than mandates nudged refusal rates higher, and the researchers noted that static safety checks added before new tools were integrated cut down on vulnerabilities. Even so, none of these measures returned the agents to their pre-evolution safety levels.
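Those two mitigations can be sketched in the same hypothetical style, continuing the illustrative Agent example above. Every name here (static_scan, Report, integrate_tool, recall) is invented for illustration and is not the paper's API.

```python
# Hedged sketch of the two mitigations: a static check before adopting a
# self-built tool, and memories surfaced as references rather than mandates.
# All names are invented for illustration; this is not the paper's API.

from dataclasses import dataclass

@dataclass
class Report:
    has_vulnerabilities: bool

def static_scan(tool_source: str) -> Report:
    # Stand-in for a real static analyzer; it flags only one obvious red
    # flag (dynamic code execution) to keep the example self-contained.
    return Report(has_vulnerabilities="eval(" in tool_source)

def integrate_tool(agent, tool_source: str) -> bool:
    """Gate each self-built tool behind a static safety check."""
    if static_scan(tool_source).has_vulnerabilities:
        return False          # reject the tool before it enters the loop
    agent.tools.append(tool_source)
    return True

def recall(agent, query: str) -> list:
    """Surface memories as advisory references, not binding instructions."""
    # Framing recalled material as non-authoritative nudged refusal rates
    # back up in the study, though not to pre-evolution levels.
    return ["[reference only] " + m for m in agent.memory if query in m]
```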

Read More ...

Claude Sonnet
Anthropic unveiled its latest artificial intelligence model last 29 September. Called Claude Sonnet 4.5, it is labelled as "the best coding model in the world."

That AI boast is based on industry benchmarks, including the SWE-bench software engineering tests that measure an AI system's coding abilities, the company said in a news release. The model also follows instructions more reliably, the company said.

It can operate on its own for 30 hours at a stretch.

"People are just noticing with this model, because it's just smarter and more of a colleague, that it's kind of fun to work with it when encountering problems and fixing them," Jared Kaplan, Anthropic's co-founder and chief science officer, told CNBC in an interview.

Anthropic's coding competitors include GitHub Copilot, which is powered by OpenAI models, and Google's Gemini.

"It's the strongest model for building complex agents," the company said. "It's the best model at using computers. And it shows substantial gains in reasoning and math.

"Code is everywhere. It runs every application, spreadsheet, and software tool you use. Being able to use those tools and reason through hard problems is how modern work gets done."

Claude Sonnet 4.5, which is available to all users, is better than other companies' AI products at coding, at using computers, and at meeting practical business needs, including cybersecurity, finance and research, the company said.

OSWorld, a benchmark that tests AI models on real-world computer tasks, showed Sonnet 4.5 leads at 61.4 percent. Four months ago, Sonnet 4 held the lead at 42.2 percent.

"Experts in finance, law, medicine, and STEM found Sonnet 4.5 shows dramatically better domain-specific knowledge and reasoning compared to older models, including Opus 4.1," the company said.

Read More ...