AI Companies Aggressively Recruiting PhD Prospects

Posted by Kirhat | Friday, June 27, 2025 | | 0 comments »

PhD Students
Larry Birnbaum, a professor of computer science at Northwestern University, was recruiting a promising PhD prospect to become a graduate researcher. Simultaneously, Google was wooing the student.

And when the prospect visited the tech giant’s campus in Mountain View, Calif., the company slated him to chat with its cofounder Sergey Brin and CEO Sundar Pichai, who are collectively worth about US$ 140 billion and command over 183,000 employees.

"How are we going to compete with that?" Birnbaum asks, noting that PhDs in corporate research roles can make as much as five times professorial salaries, which average US$ 155,000 annually. "That’s the environment that every chair of computer science has to cope with right now."

Though Birnbaum says these recruitment scenarios have been "happening for a while," the phenomenon has reportedly worsened as salaries across the industry have been skyrocketing. The trend recently became headline news after reports surfaced of Meta offering some highly experienced AI researchers seven- and eight-figure pay packages.

Those offers—coupled with the strong demand for leaders to propel AI applications—may be helping to pull up the salary levels of even newly minted PhDs. Even though some of these graduates have no professional experience, they are being offered the kinds of comma-filled pay packages traditionally reserved for director- and executive-level talent.

Engineering professors and department chairs at Johns Hopkins, University of Chicago, Northwestern, and New York University interviewed by Fortune are divided on whether these lucrative offers lead to a "brain drain" from academic labs.

The brain drain camp believes this phenomenon depletes the ranks of academic AI departments, which still do important research and also are responsible for training the next generation of PhD students.

At the private labs, the AI researchers help juice Big Tech’s bottom line while providing, in these critics’ view, no public benefit. The unconcerned argue that academia is a thriving component of this booming labor market.

In the days before ChatGPT, top AI researchers were in high demand, just as today. But many of the top corporate AI labs, such as OpenAI, Google DeepMind, and Meta’s FAIR (Fundamental AI Research), would allow established academics to keep their university appointments, at least part-time. This would allow them to continue to teach and train graduate students, while also conducting research for the tech companies.

While some professors say that there’s been no change in how frequently corporate labs and universities are able to reach these dual corporate-academic appointments, others disagree. NYU’s Bari says this model has declined owing to "intense talent competition, with companies offering millions of dollars for full-time commitment which outpaces university resources and shifts focus to proprietary innovation."

Read More ...

AI May Not Be The Answer To Mysteries In Astronomy

Posted by Kirhat | Thursday, June 26, 2025 | | 0 comments »

AI Astronomy
A team of astronomers claims to have gleaned the mysterious traits of our galaxy's black hole by probing it with an AI model. But a pretty big name in the field is throwing a little bit of cold water on their work. Just a little bit.

Reinhard Genzel, a Nobel laureate and an astrophysicist at the Max Planck Institute, expressed some skepticism regarding the team's use of AI, and the quality of the data they fed into the model.

"I'm very sympathetic and interested in what they're doing," Genzel told Live Science. "But artificial intelligence is not a miracle cure."

Raging at the center of the Milky Way some 26,000 light years away is Sagittarius A*, a supermassive black hole with over 4.3 million times the mass of the Sun, and an event horizon nearly 16 million miles in diameter.
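
The quoted horizon size follows directly from the quoted mass. As a quick sanity check, the Schwarzschild diameter 2r_s = 4GM/c² for 4.3 million solar masses works out to roughly 16 million miles (a minimal back-of-the-envelope sketch; it ignores spin, which shrinks the horizon somewhat):

```python
# Sanity check of the quoted event-horizon size from the quoted mass,
# via the Schwarzschild radius r_s = 2GM/c^2 (constants are approximate;
# a spinning black hole's horizon is somewhat smaller).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

M = 4.3e6 * M_SUN                    # quoted mass of Sagittarius A*
r_s = 2 * G * M / c**2               # Schwarzschild radius, meters
diameter_miles = 2 * r_s / 1609.34   # meters -> miles

print(f"event horizon diameter ~ {diameter_miles / 1e6:.1f} million miles")
# prints about 15.8, consistent with the "nearly 16 million miles" figure
```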

Back when it wasn't clear what Sagittarius A* was other than a weird bright object in the galactic center, Genzel and fellow astrophysicist Andrea Ghez illuminated its colossal scale and eventually proved that it was a supermassive black hole, a feat that earned them both a Nobel Prize in physics in 2020.

But much of our galaxy's dark, beating heart remains a mystery, as do supermassive black holes in general. How and when do these cosmic behemoths form, and how do they gain such incredible mass? Astronomers agree that they would have to have been formed in the early universe, but the rest remains contentious.

One reason is that no star is heavy enough to directly collapse into an object of a supermassive black hole's size. True, these black holes can grow by swallowing nearby matter, like an unfortunate star that wanders too close, or even by merging with another black hole, but that doesn't explain all cases. Some are so massive that the time it'd take for them to accrete enough matter to reach their observed size would exceed the age of the universe itself.

A breakthrough came in 2022, when astronomers revealed the first image of Sagittarius A* taken with the Event Horizon Telescope, three years after the same observatory — which is actually made up of several radio telescopes scattered across the globe — was used to stitch together humankind's first-ever image of a black hole.

But the image — and the data that comprised it — was fuzzy. There wasn't enough detail present to tease out the black hole's structure or behavior.

That's where this latest work, detailed in three studies published in the journal Astronomy & Astrophysics, comes in. In a nutshell, the astronomers trained a neural network on millions of synthetic simulations using discarded EHT data that was deemed too grainy to decode, largely due to the interference introduced by the Earth's atmosphere. Once the AI model cut its teeth on the synthetic data, it looked at the real observations of Sagittarius A* and produced a much clearer image.
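
The studies' actual pipeline is far larger (millions of simulation-derived images), but the train-on-synthetic, infer-on-real pattern they describe can be sketched compactly. Everything below is illustrative rather than the authors' code: a toy convolutional network learns to estimate a spin-like parameter from noisy synthetic ring images, then is pointed at a held-out image standing in for a real observation.

```python
# Minimal sketch of the train-on-synthetic, infer-on-real pattern.
# The image model, network, and names are all illustrative assumptions.
import torch
import torch.nn as nn

def synthetic_observation(spin: torch.Tensor, size: int = 32) -> torch.Tensor:
    """Toy stand-in for a simulated EHT image: a bright ring whose
    brightness asymmetry scales with a normalized spin parameter,
    plus atmosphere-style noise."""
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    r = torch.sqrt(xx**2 + yy**2)
    ring = torch.exp(-((r - 0.6) ** 2) / 0.02)           # photon-ring-like annulus
    asym = 1.0 + spin * xx                               # spin-dependent asymmetry
    return ring * asym + 0.3 * torch.randn(size, size)   # noisy "observation"

class SpinRegressor(nn.Module):
    """Tiny CNN that maps an image to a spin estimate in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x).squeeze(-1)

model = SpinRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                                  # train on synthetic data only
    spins = torch.rand(64)
    imgs = torch.stack([synthetic_observation(s) for s in spins]).unsqueeze(1)
    loss = nn.functional.mse_loss(model(imgs), spins)
    opt.zero_grad(); loss.backward(); opt.step()

real_image = synthetic_observation(torch.tensor(0.85))   # stand-in for real data
print("estimated spin:", model(real_image[None, None]).item())
```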

"It is very difficult to deal with data from the Event Horizon Telescope," coauthor of the main study Michael Janssen, an astrophysicist at Radboud University in the Netherlands, told Live Science. "A neural network is ideally suited to solve this problem."

The AI enhancement suggested that the supermassive black hole is rotating at somewhere between 80 and 90 percent of its maximum possible speed, which is blindingly fast, as these objects can spin at a significant fraction of the speed of light. Its rotation axis, in fact, appears to be pointing towards the Earth. The AI model also revealed that the black hole's emissions come from its accretion disk — the glowing disk of hot matter swirling just outside its event horizon — and not from an energetic outburst called a jet that's produced by the black hole's absurdly powerful magnetic fields.

Read More ...

Latin AI
While AI development is concentrated in countries like China, Japan, the U.S., and European Union members, a dozen Latin American countries are also collaborating to launch Latam-GPT in September. It is the first large artificial intelligence language model trained to understand the region's diverse cultures and linguistic nuances, Chilean officials said.

This open-source project, steered by Chile's state-run National Center for Artificial Intelligence (CENIA) alongside over 30 regional institutions, seeks to significantly increase the uptake and accessibility of AI across Latin America.

Chilean Science Minister Aisen Etcheverry said the project "could be a democratizing element for AI," envisioning its application in schools and hospitals with a model that reflects the local culture and language.

Developed starting in January 2023, Latam-GPT seeks to overcome inaccuracies and performance limitations of global AI models predominantly trained on English.

Officials said that it was meant to be the core technology for developing applications like chatbots, not a direct competitor to consumer products like ChatGPT.

A key goal is preserving Indigenous languages, with an initial translator already developed for Rapa Nui, Easter Island's native language.

The project plans to extend this to other Indigenous languages for applications like virtual public service assistants and personalized education systems.

The model is based on Llama 3 AI technology and is trained using a regional network of computers, including facilities at Chile's University of Tarapaca and cloud-based systems.
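
Latam-GPT's weights were not yet published at the time of writing, but an application consuming a Llama-derived checkpoint would plausibly look like the following Hugging Face `transformers` sketch; the model identifier is a placeholder, not a real repository:

```python
# Hedged sketch of consuming a Llama-derived checkpoint via transformers.
# "CENIA/latam-gpt" is a hypothetical model id used for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CENIA/latam-gpt"  # placeholder identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "¿Cuál es la lengua originaria de Rapa Nui?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```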

Regional development bank CAF and Amazon Web Services have supported it.

Read More ...

What Are The Protocols That Govern AI?

Posted by Kirhat | Monday, June 23, 2025 | | 0 comments »

AI Protocols
In the world of advanced technology, rules are important. Much like everything else in the world, technologies abide by certain standards.

With the boom in personal computing came USB, a standard for transferring data between devices. With the rise of the internet came IP addresses, numerical labels that identify every device online. With the advent of email came SMTP, a framework for routing email across the internet.

These are protocols — the invisible scaffolding of the digital realm — and with every technological shift, new ones emerge to govern how things communicate, interact, and operate.

As the world enters an era shaped by AI, it will need to draw up new ones. But AI goes beyond the usual parameters of screens and code. It forces developers to rethink fundamental questions about how technological systems interact across the virtual and physical worlds.

How will humans and AI coexist? How will AI systems engage with each other? And how will we define the protocols that manage a new age of intelligent systems?

Across the industry, startups and tech giants alike are busy developing protocols to answer these questions. Some govern the present in which humans still largely control AI models. Others are building for a future in which AI has taken over a significant share of human labor.

"Protocols are going to be this kind of standardized way of processing non-deterministic information," Antoni Gmitruk, the chief technology officer of Golf, which helps clients deploy remote servers aligned with Anthropic's Model Context Protocol, told BI. Agents, and AI in general, are "inherently non-deterministic in terms of what they do and how they behave."

When AI behavior is difficult to predict, the best response is to imagine possibilities and test them through hypothetical scenarios. Does everything need a protocol? Definitely not.

The AI boom marks a turning point, reviving debates over how knowledge is shared and monetized. McKinsey & Company calls it an "inflection point" in the fourth industrial revolution — a wave of change that it says began in the mid-2010s and spans the current era of "connectivity, advanced analytics, automation, and advanced-manufacturing technology."

Moments like this raise a key question: How much innovation belongs to the public and how much to the market? Nowhere is that clearer than in the AI world's debate over the value of open-source versus closed models.

"I think we will see a lot of new protocols in the age of AI," said Tiago Sada, the chief product officer at Tools for Humanity, the company building the technology behind Sam Altman's World. However, "I don't think everything should be a protocol."

Read More ...

Apple Researchers Point Out Dangers Of AI Expectations

Posted by Kirhat | Saturday, June 21, 2025 | | 0 comments »

Apple Researchers
Researchers at Apple showed they are not afraid to face the wrath of AI supporters when they released an eyebrow-raising paper that throws cold water on the "reasoning" capabilities of the latest, most powerful large language models.

In the paper, a team of machine learning experts makes the case that the AI industry is grossly overstating the ability of its top AI models, including OpenAI's o3, Anthropic's Claude 3.7, and Google's Gemini.

In particular, the researchers assail the claims of companies like OpenAI that their most advanced models can now "reason" — a supposed capability that the Sam Altman-led company has increasingly leaned on over the past year for marketing purposes — which the Apple team characterizes as merely an "illusion of thinking."

It's a particularly noteworthy finding, considering Apple has been accused of falling far behind the competition in the AI space. The company has chosen a far more careful path to integrating the tech in its consumer-facing products — with some seriously mixed results so far.

In theory, reasoning models break down user prompts into pieces and use sequential "chain of thought" steps to arrive at their answers. But now, Apple's own top minds are questioning whether frontier AI models simply aren't as good at "thinking" as they're being made out to be.

"While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood," the team wrote in its paper.

The authors — who include Samy Bengio, the director of Artificial Intelligence and Machine Learning Research at the software and hardware giant — argue that the existing approach to benchmarking "often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality."

By using "controllable puzzle environments," the team estimated the AI models' ability to "think" — and made a seemingly damning discovery.

"Through extensive experimentation across diverse puzzles, we show that frontier [large reasoning models] face a complete accuracy collapse beyond certain complexities," they wrote.

Thanks to a "counter-intuitive scaling limit," the AIs' reasoning ability "declines despite having an adequate token budget."

Put simply, even with sufficient training, the models struggle with problems beyond a certain threshold of complexity — the result of "an 'overthinking' phenomenon," in the paper's phrasing.

The finding is reminiscent of a broader trend. Benchmarks have shown that the latest generation of reasoning models is more prone to hallucinating, not less, indicating the tech may now be heading in the wrong direction in a key way.

Exactly how reasoning models choose which path to take remains surprisingly murky, the Apple researchers found.

"We found that LRMs have limitations in exact computation," the team concluded in its paper. "They fail to use explicit algorithms and reason inconsistently across puzzles."

The researchers claim their findings raise "crucial questions" about the current crop of AI models' "true reasoning capabilities," undercutting a much-hyped new avenue in the burgeoning industry.

That's despite tens of billions of dollars being poured into the tech's development, with the likes of OpenAI, Google, and Meta constructing enormous data centers to run increasingly power-hungry AI models.

Could the Apple researchers' finding be yet another canary in the coalmine, suggesting the tech has "hit a wall"?

Or is the company trying to hedge its bets, calling out competitors that are outperforming it as it lags behind, as some have suggested?

It's certainly a surprising conclusion, considering Apple's precarious positioning in the AI industry: at the same time that its researchers are trashing the tech's current trajectory, it's promised a suite of Apple Intelligence tools for its devices like the iPhone and MacBook.

"These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning," the paper reads.

Read More ...

FBI Warns iPhone Users About Latest Scam

Posted by Kirhat | Thursday, June 19, 2025 | | 0 comments »

iPhone Scam
The Federal Bureau of Investigation (FBI) has just issued a new warning for iPhone users regarding a text-message scam that has been bombarding users lately.

According to Forbes, messaging attacks on iPhone and Android are up more than 700 percent this month. One malicious text that has been making the rounds involves bad actors posing as the Department of Motor Vehicles (DMV) and demanding money for unpaid tolls or fines at the threat of possible loss of license or jail time.

These DMV texts are more dangerous than the previous unpaid-toll messages that have been popping up on people's phones for more than a year, according to Guardio.

"These scam texts lead to phishing websites designed to steal people’s credit card information and make unauthorized charges," Guardio told Forbes.

Last week, WREG reported that the FBI is investigating the scheme. FBI Supervisory Special Agent David Palmer told WREG that the DMV messages are a "copycat" of the toll scam.

"It costs next to nothing for them to use these algorithms to send these messages and calls out, but in return, they can achieve getting your personal information, putting malware on your phone, which then can go in and steal information from your device, or collect your payment information," Palmer said.

Palmer added that upon receiving one of the texts, he immediately picked up on wording clues that gave away that it was a con.

"A couple of things that I noticed immediately, on it, is the text message I received said it was from the North Tennessee Department of Motor Vehicles. So you know, obviously, there is no north or south Tennessee, a red flag immediately and also looking at the sender, the message I received was from email address @catlover.com, obviously that is not a government address," Palmer said.

The FBI added that real government agencies will not contact you in this manner. The bureau also advises users not to click on links they receive in text messages from unknown sources and to delete the texts "immediately."
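
The clues Palmer and the FBI describe are mechanical enough to express as simple heuristics. The toy checker below (illustrative patterns only, not an FBI tool) flags a non-government sender domain, a nonexistent agency name, and an embedded link:

```python
# Toy red-flag checker for DMV-style scam texts. Patterns are illustrative
# and deliberately incomplete; this is not an FBI or carrier tool.
import re

BOGUS_AGENCY = re.compile(
    r"\b(north|south)\s+\w+\s+department of motor vehicles\b", re.IGNORECASE)

def red_flags(sender: str, body: str) -> list:
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if not domain.endswith(".gov"):
        flags.append(f"sender domain '{domain}' is not a government address")
    if BOGUS_AGENCY.search(body):
        flags.append("names an agency that does not exist")
    if re.search(r"https?://", body):
        flags.append("contains a link; the FBI says delete, don't click")
    return flags

print(red_flags(
    "dmv@catlover.com",
    "North Tennessee Department of Motor Vehicles: pay https://bad.example now"))
```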

This crop of DMV scam messages has been reported around the country, including in Tennessee, Arizona, New York, Minnesota, California, Florida, Georgia, Illinois, Ohio, Oregon, Texas, and Washington, D.C.

Read More ...