Despite Its Growth, AI Development Lacks Transparency

Posted by Kirhat | Tuesday, July 02, 2024

[Image: AI Transparency]
According to a report by Eileen Yu, senior contributing editor at ZDNET, transparency is still lacking around how foundation models are trained, and this gap can fuel growing tension with users as more organizations look to adopt artificial intelligence (AI).

In Asia-Pacific, excluding China, IDC projects that spending on AI will grow at a compound annual rate of 28.9 percent, from US$25.5 billion in 2022 to US$90.7 billion by 2027. The research firm estimates that 81 percent of this spending will be directed toward predictive and interpretative AI applications.
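The 28.9 percent figure only squares with those endpoints as a compound annual growth rate: US$25.5 billion growing to US$90.7 billion is a roughly 256 percent total increase. A minimal sanity-check sketch in Python (assuming a five-year 2022-to-2027 compounding window, which is inferred from the endpoints rather than stated by IDC):

```python
# Sanity check on IDC's projection: 25.5 -> 90.7 (US$ billions) is a ~256%
# total increase, so the quoted 28.9% must be an annual (compound) rate.
start, end = 25.5, 90.7   # AI spending, US$ billions (2022 and 2027)
years = 2027 - 2022       # assumed 5-year compounding window

cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # -> 28.9%
```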

So while there is much hype around generative AI, this segment will account for just 19 percent of the region's AI expenditure, posited Chris Marshall, an IDC Asia-Pacific vice president. The research highlights a market that needs a broader approach to AI, one that spans beyond generative AI, Marshall said at the Intel AI Summit held in Singapore in May 2024.

IDC noted, however, that 84 percent of Asia-Pacific organizations believe that tapping generative AI models will offer a significant competitive edge for their businesses. These enterprises hope to achieve gains in operational efficiency and employee productivity, improve customer satisfaction, and develop new business models, the research firm added.

IDC also expects the majority of organizations in the region to increase edge IT spending in 2024, with 75 percent of enterprise data projected to be generated and processed at the edge, outside of traditional data centers and the cloud, by 2025.

"To truly bring AI everywhere, the technologies used must provide accessibility, flexibility, and transparency to individuals, industries, and society at large," Alexis Crowell, Intel's Asia-Pacific Japan CTO, said in a statement. "As we witness increasing growth in AI investments, the next few years will be critical for markets to build out their AI maturity foundation in a responsible and thoughtful manner."

Industry players and governments have often touted the importance of building trust and transparency in AI, and of ensuring consumers know that AI systems are "fair, explainable, and safe." When ZDNET asked whether there was currently sufficient transparency around how open large language models (LLMs) and foundation models are trained, however, Crowell said: "No, not enough."

She pointed to a study by researchers from Stanford University, MIT, and Princeton that assessed the transparency of 10 major foundation models, in which even the top-scoring platform managed a score of only 54 percent. "That's a failing mark," she said during a media briefing at the summit.
