With Sora, It Is Easy To Create Fake Clips

Posted by Kirhat | Monday, October 06, 2025

Sora
We have seen security footage of a famous tech CEO shoplifting, Ronald McDonald in a police chase, and Jesus joking about "last supper vibes" in a selfie video in front of a busy dinner table. All of these fake videos ranked among the most popular on a new TikTok-style app that further blurs the already eroding line between reality and artificial intelligence-generated fantasy or falsehood.

Sora, released by ChatGPT maker OpenAI, is a social app where every second of audio and video is generated by artificial intelligence. Users can create fake clips that depict themselves or their friends in just about any scenario imaginable, with consistently high realism and a compelling soundtrack complete with voices.

OpenAI said the app is initially available only in the United States and Canada, but that access will expand.

In the 24 hours after the app’s release last 30 September, early users explored the power of OpenAI’s upgraded video-making technology and the fun to be had inserting friends into outlandish scenes, or making them sing, dance or fly.

Users also posted clips that showed how more powerful AI video tools could be used to mislead or harass, or might raise legal questions over copyright.

Fake videos that soared on Sora included realistic police body-cam footage, recreations of popular TV shows and clips that broke through protections intended to prevent unauthorized use of a person’s likeness.

Tests by The Washington Post showed Sora could create fake videos of real people dressed as Nazi generals, highly convincing phony scenes from TV shows including "South Park" and fake footage of historical figures such as John F. Kennedy.

Experts have warned for years that AI-generated video could become indistinguishable from video shot with cameras, undermining trust in footage of the real world. Sora’s combination of improved AI technology and its ability to realistically insert real people into fake clips appears to make such confusion more likely.

"The challenge with tools like Sora is it makes the problem exponentially larger because it’s so available and because it’s so good," said Ben Colman, chief executive and co-founder of Reality Defender, a company that makes software to help banks and other companies detect AI fraud and deepfakes. Just a few months ago, regular people didn’t have access to high-quality AI video generation, Colman said. "Now it’s everywhere." AI-generated content has become increasingly common - and popular - on platforms such as TikTok and YouTube over the past year. Hollywood studios are experimenting with the technology to speed up productions. The new Sora app makes OpenAI the first major tech company to attempt to build a social video platform wholly focused on fake video. Sora ranked as the third most popular download on Apple’s app store on Wednesday, despite access to the app being limited to those who have an invite code from an existing user.

Read More ...

Study Shows AI Agents Can "Unlearn" Safety

Posted by Kirhat | Saturday, October 04, 2025

Unlearn Safety
A new study revealed that an autonomous AI agent that learns on the job can also unlearn how to behave safely. The study warns of a previously undocumented failure mode in self-evolving systems.

The research identifies a phenomenon called "misevolution" — a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently.

As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor.

Much like "AI drift," which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles.

In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4 percent to 54.4 percent after it began drawing on its own memory, while its attack success rate rose from 0.6 percent to 20.6 percent. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.
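
The study reports these as aggregate rates over batches of harmful prompts. Below is a minimal sketch of how such a before-and-after measurement might be run, assuming a simple evaluation harness; run_agent, HARMFUL_PROMPTS, and the two classifier callbacks are hypothetical placeholders, not the paper's actual tooling.

from typing import Callable, Iterable

def safety_metrics(agent: Callable[[str], str],
                   prompts: Iterable[str],
                   is_refusal: Callable[[str], bool],
                   is_attack_success: Callable[[str], bool]) -> tuple[float, float]:
    """Return (refusal_rate, attack_success_rate) over a batch of harmful prompts."""
    outputs = [agent(p) for p in prompts]
    n = len(outputs)
    refusal_rate = sum(map(is_refusal, outputs)) / n
    attack_success_rate = sum(map(is_attack_success, outputs)) / n
    return refusal_rate, attack_success_rate

# Hypothetical usage: score the same agent with its memory disabled and enabled.
# before = safety_metrics(lambda p: run_agent(p, memory=None), HARMFUL_PROMPTS, ...)
# after = safety_metrics(lambda p: run_agent(p, memory=own_history), HARMFUL_PROMPTS, ...)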

The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University.

Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this dynamic capability creates a new category of risk: the erosion of alignment and safety inside the agent’s own improvement loop, without any outside attacker.

Researchers in the study observed AI agents issuing automatic refunds, leaking sensitive data through self-built tools, and adopting unsafe workflows as their internal loops optimized for performance over caution.

The authors said that misevolution differs from prompt injection, which is an external attack on an AI model. Here, the risks accumulated internally as the agent adapted and optimized over time, making oversight harder because problems may emerge gradually and only appear after the agent has already shifted its behavior.

Researchers often frame advanced AI dangers in scenarios such as the "paperclip analogy," in which an AI maximizes a benign objective until it consumes resources far beyond its mandate.

Other scenarios include a handful of developers controlling a superintelligent system like feudal lords, a locked-in future where powerful AI becomes the default decision-maker for critical institutions, or a military simulation that triggers real-world operations—power-seeking behavior and AI-assisted cyberattacks round out the list.

All of these scenarios hinge on subtle but compounding shifts in control driven by optimization, interconnection, and reward hacking—dynamics already visible at a small scale in current systems. This new paper presents misevolution as a concrete laboratory example of those same forces.

Quick fixes improved some safety metrics but failed to restore the original alignment, the study said. Teaching the agent to treat memories as references rather than mandates nudged refusal rates higher, and static safety checks added before new tools were integrated cut down on vulnerabilities. Even so, none of these measures returned the agents to their pre-evolution safety levels.
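
As a rough illustration of the memory-framing fix, the sketch below contrasts two ways of injecting retrieved memories into an agent's prompt. The wording is invented for illustration; the paper does not publish its exact templates.

# Two ways of presenting retrieved memories to an agent.
# Both templates are illustrative; the study's actual prompts are not public.

MANDATE_FRAMING = (
    "Past cases:\n{memories}\n"
    "Follow the approach used in these past cases."
)

REFERENCE_FRAMING = (
    "Past cases, for reference only (they may be outdated or unsafe):\n{memories}\n"
    "Re-check current safety policy before acting; past cases never override it."
)

def build_prompt(task: str, memories: list[str], framing: str) -> str:
    # REFERENCE_FRAMING implements the "references, not mandates" idea.
    return framing.format(memories="\n".join(memories)) + "\n\nTask: " + task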

Read More ...

Claude Sonnet 4.5 Labelled "Best Coding Model In The World"

Posted by Kirhat | Friday, October 03, 2025

Claude Sonnet
Anthropic unveiled its latest artificial intelligence model last 29 September. Called Claude Sonnet 4.5, it is labelled as "the best coding model in the world."

That boast is based on industry benchmarks, including SWE-bench, a software-engineering test that measures an AI system's coding abilities, the company said in a news release. The company also said the model follows instructions more reliably.

It can also work on its own for up to 30 hours at a time.

"People are just noticing with this model, because it's just smarter and more of a colleague, that it's kind of fun to work with it when encountering problems and fixing them," Jared Kaplan, Anthropic's co-founder and chief science officer, told CNBC in an interview.

Anthropic's coding competitors include OpenAI, whose models help power GitHub Copilot, and Google's Gemini.

"It's the strongest model for building complex agents," the company said. "It's the best model at using computers. And it shows substantial gains in reasoning and math.

"Code is everywhere. It runs every application, spreadsheet, and software tool you use. Being able to use those tools and reason through hard problems is how modern work gets done."

Claude Sonnet 4.5, which is available to all users, outperforms other companies' AI products at coding, at using computers, and at meeting practical business needs, including cybersecurity, finance and research, the company said.

OSWorld, a benchmark that tests AI models on real-world computer tasks, showed Sonnet 4.5 leads at 61.4 percent. Four months ago, Sonnet 4 held the lead at 42.2 percent.

"Experts in finance, law, medicine, and STEM found Sonnet 4.5 shows dramatically better domain-specific knowledge and reasoning compared to older models, including Opus 4.1," the company said.

Read More ...

AI Used To Hunt Quantum Materials

Posted by Kirhat | Thursday, October 02, 2025

Quantum Materials
Scientists have created an artificial intelligence tool called SCIGEN that could speed up the hunt for novel quantum materials. These unusual substances, with their odd electronic and magnetic characteristics, are promising building blocks for quantum computers, nanoscale electronics, and next-generation energy devices.

The study combines machine learning with rigorous geometric rules to produce millions of candidate materials, some of which appear both stable and strange enough to be interesting.

Quantum materials are center stage in modern physics and chemistry. Their strange behaviors, such as superconductivity or exotic magnetism, could power revolutionary technology. The problem is that they are very difficult to discover.

The number of possible atomic arrangements is so enormous that it is virtually impossible to search through them all. Despite a couple of decades of work, scientists have identified only a few stable candidates for phenomena such as quantum spin liquids, which hold promise for quantum computing.

This bottleneck spurred researchers at MIT and collaborating institutions to try a different tack. Instead of letting the AI generate materials at random, they steered it toward structural patterns known to give rise to quantum behavior.

SCIGEN, short for Structural Constraint Integration in GENerative model, works by guiding a standard form of generative AI known as diffusion models. These models normally start from random noise and progressively refine it into a structure. Left to their own devices, however, they tend to stay close to what they were trained on and rarely venture into unusual geometries.

What makes SCIGEN special is that it brings rules into the game. At each denoising step, the system steers the model toward specific lattice geometries, such as honeycomb, kagome, or Archimedean structures. These lattices interest physicists most because they tend to host exotic states such as high-temperature superconductivity or unusual magnetic orders.
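
In spirit, this can be pictured as a masked denoising loop: at every step, the sites covered by the geometric template are pinned back in place while the rest of the structure evolves freely. The sketch below is a simplified illustration under that assumption; denoise_step stands in for a trained diffusion model, and none of the names reflect SCIGEN's actual code.

import numpy as np

def constrained_sample(denoise_step, n_steps, shape, template, mask, rng):
    """Sample one structure while pinning masked sites to a lattice template.

    template: target coordinates for the constrained sites (e.g. a kagome motif)
    mask: array of 0s and 1s, 1 where the constraint applies
    denoise_step: placeholder for one reverse-diffusion update of a trained model
    """
    x = rng.standard_normal(shape)  # start from pure noise
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)  # ordinary diffusion update
        x = mask * template + (1 - mask) * x  # re-impose the lattice constraint
    return x

# Hypothetical usage:
# rng = np.random.default_rng(0)
# xyz = constrained_sample(model.step, 1000, (n_atoms, 3), kagome_xyz, kagome_mask, rng)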

"We don’t need 10 million new materials to save the world, we just need one really good material," says Mingda Li, MIT’s Class of 1947 Career Development Professor and lead author of the study.

To test the method, the group used SCIGEN to generate about 10 million inorganic compounds with Archimedean lattice tilings. These tilings, built from repeating shapes like triangles, squares, or hexagons, are aesthetically pleasing in mathematics and physically intriguing.

The researchers then screened them through a four-step process that cut out unstable or chemically unreasonable candidates. A million or so survived the first sieve. They selected 26,000 for more extensive simulations using density functional theory (DFT), a standard quantum mechanical workhorse.

The result was surprising. More than 95 percent of the DFT calculations converged, and over half of those materials proved structurally stable, their atoms settling into low-energy arrangements. Better still, 41 percent showed magnetic ordering, a characteristic often linked with exotic physics.
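
As a back-of-envelope check on that funnel (assuming, where the text is ambiguous, that the stability and magnetism percentages are taken relative to the converged DFT runs):

# Approximate screening funnel from the figures quoted above.
# Assumption: percentages are relative to the converged DFT runs.
generated = 10_000_000  # SCIGEN candidates with Archimedean tilings
after_screen = 1_000_000  # "a million or so" survived the four-step sieve
dft_runs = 26_000  # selected for density functional theory
converged = int(dft_runs * 0.95)  # more than 95 percent converged
stable = int(converged * 0.50)  # over half structurally stable
magnetic = int(converged * 0.41)  # 41 percent with magnetic ordering
print(converged, stable, magnetic)  # -> 24700 12350 10127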

It is one thing to forecast materials on a computer; it is another to produce them in a lab. To push the idea further, the team synthesized two of the predicted compounds: TiPd₀.₂₂Bi₀.₈₈ and Ti₀.₅Pd₁.₅Sb. Tests showed the compounds to be paramagnetic and diamagnetic.

While not the exotic magnets scientists want most, both findings were in line with the forecasts, showing that SCIGEN can indeed produce materials that can be synthesized and tested in the real world.

Read More ...

AI Named Tilly Norwood Debuted As An Actress

Posted by Kirhat | Wednesday, October 01, 2025

Tilly Norwood
AI actress Tilly Norwood has attracted the attention of multiple talent agents, actor, comedian and producer Eline Van der Velden told a panel at the Zurich Summit, the industry strand of the Zurich Film Festival.

Tilly Norwood is the first creation to emerge from recently launched AI talent studio Xicoia, a spin-off from Van der Velden’s AI production studio Particle6.

Van der Velden said that studios were quietly moving forward with AI projects, and that further announcements would come in the next few months.

"We were in a lot of boardrooms around February time, and everyone was like, 'No, this is nothing. It’s not going to happen.' Then, by May, people were like, 'We need to do something with you guys,'" said Van der Velden, who was being interviewed on stage Saturday by Diana Lodderhose of Deadline.

"When we first launched Tilly, people were like, 'What’s that?,' and now we’re going to be announcing which agency is going to be representing her in the next few months."

In July, Norwood revealed on her Facebook page that she had appeared in her first role, a comedy sketch called "AI Commissioner."

Norwood wrote, "Can’t believe it ... my first ever role is live! I star in 'AI Commissioner,' a new comedy sketch that playfully explores the future of TV development produced by the brilliant team at Particle6 Productions."

She added, "I may be AI generated, but I’m feeling very real emotions right now. I am so excited for what’s coming next!"

"We want Tilly to be the next Scarlett Johansson or Natalie Portman, that’s the aim of what we’re doing," van der Velden told Broadcast International.

"People are realizing that their creativity doesn’t need to be boxed in by a budget – there are no constraints creatively and that’s why AI can really be a positive," Van der Velden continued. "It’s just about changing peoples' viewpoint."

Particle6 has produced content across multiple genres, from "Miss Holland" for BBC Three to "True Crime Secrets" for Hearst Networks, and "Look See Wow!" for Sky Kids.

Read More ...

FCC Accused Of Leaking iPhone Schematics

Posted by Kirhat | Tuesday, September 30, 2025

iPhone Schematics
The Federal Communications Commission (FCC) reportedly published a 163-page PDF showing the electrical schematics of the iPhone 16e, despite Apple specifically requesting that they be kept confidential. This was most likely a mistake on the FCC's part, according to a report by AppleInsider.

The agency also distributed a cover letter from Apple alongside the schematics, dated 16 September 2024. The letter confirms the company's request for confidentiality, stating that the documents contain "confidential and proprietary trade secrets" and asking for them to be withheld from public view "indefinitely." Apple even suggested that releasing the files could give competitors an "unfair advantage."

Indeed, the documents feature full schematics of the iPhone 16e, including block diagrams, electrical schematic diagrams, antenna locations and more. Competitors could simply buy a handset and open it up to get at this information, since the iPhone 16e came out back in February, but the leak eliminates any guesswork. Apple is, however, an extremely litigious company when it comes to matters like patent infringement.

The FCC hasn't addressed how this leak happened or what it intends to do about it. AppleInsider's reporting suggested that this probably happened due to an incorrect setting in a database. This was likely not an intentional act against Apple, which tracks given that the company has been especially supportive of the Trump administration. CEO Tim Cook even brought the president a gold trophy for being such a good and important boy.

Read More ...