Hong Kong PolyU’s top AI scientist Yang Hongxia eyes ‘last mile of generative AI’

Chinese artificial intelligence scientist Yang Hongxia, a professor at Hong Kong Polytechnic University (PolyU), is seeking to democratise large language models (LLMs) by empowering hospitals and various enterprises to train their own AI systems.

Yang, who previously worked on AI models at ByteDance and Alibaba Group Holding‘s Damo Academy, said in a recent interview with the South China Morning Post that her newly formed start-up, InfiX.ai, envisioned a world in which various businesses could train their own “domain-specific” LLMs, which would complement commercially available AI models from Big Tech firms and start-up developers. Alibaba owns the Post.

According to InfiX.ai’s landing page on the developer platforms GitHub and Hugging Face, the start-up’s research would “eventually lead to decentralised generative AI – a future where everyone can access, contribute to and benefit from AI equally”.

“Over the next five years, I expect consumers as well as enterprises, particularly small and medium-sized enterprises, to have their own domain-specific models,” said Yang, who serves as associate dean of the university’s Faculty of Computing and Mathematical Sciences and executive director of the PolyU Academy for AI.

She said InfiX.ai, which had a US$250 million valuation after its initial funding round, had a mission to build “the last mile of generative AI”, making AI applications accessible to everyone.

That echoed the vision of Thinking Machines Lab, a start-up founded by former OpenAI chief technology officer Mira Murati. This AI research and product unicorn – reportedly in talks for a new funding round that would value the firm at about US$50 billion – said it was focused on “building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals”.

Among its various endeavours, InfiX.ai developed methods to create highly capable AI systems that required minimal computational resources, “making advanced AI accessible to organisations of all sizes through techniques like FP8 precision training, edge AI deployment and privacy-preserving solutions”, according to the company.
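For illustration only, below is a minimal sketch of what FP8 mixed-precision training can look like in practice, using NVIDIA’s open-source Transformer Engine library; it is a generic example with placeholder layer sizes and hyperparameters, not InfiX.ai’s implementation.

```python
# Illustrative sketch of FP8 mixed-precision training with NVIDIA's open-source
# Transformer Engine. Generic example only; not InfiX.ai's code. Layer sizes,
# batch size and learning rate are placeholders.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

model = te.Linear(1024, 1024, bias=True).cuda()      # FP8-capable linear layer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
fp8_recipe = recipe.DelayedScaling()                  # default delayed-scaling FP8 recipe

x = torch.randn(32, 1024, device="cuda")
target = torch.randn(32, 1024, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):  # matmuls run in FP8
    loss = torch.nn.functional.mse_loss(model(x), target)

loss.backward()       # gradients and optimizer states stay in higher precision
optimizer.step()
```

The appeal of this approach is that the heaviest operations run in 8-bit floating point, cutting memory use and compute cost, while master weights and optimizer states are kept in higher precision to preserve training stability.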

Yang Hongxia serves as the associate dean at the Faculty of Computing and Mathematical Sciences of Hong Kong Polytechnic University. Photo: Handout

The similar goals of Yang and Murati reflect efforts across the AI industry to broaden the technology’s adoption and to expand the scope of innovation in the most cost-effective way for enterprises.

While a number of Big Tech firms and AI unicorns – start-ups valued at more than US$1 billion – have made generative AI breakthroughs, Yang said InfiX.ai aimed to enable various institutions, with private troves of data from their industries, to develop their own domain-specific models “with the minimum of computing resources”.

She said open-source models, such as those from DeepSeek, were trained without an industry’s specific domain data and therefore could only be deployed for “inference”, with widespread hallucinations – incorrect or misleading results.

Yang said the existing foundational LLMs had made technical breakthroughs in maths problem solving, code generation and various general tasks, but lacked the training to solve highly specific problems in fields such as healthcare, for example cancer treatment. The pre-training of these models is often based on general data from the internet, without any domain-specific context.

InfiX.ai provided continued pre-training for LLMs by incorporating specific industry knowledge and enterprise data, according to Yang.
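As a rough illustration of what continued pre-training on domain data typically involves, the sketch below uses the open-source Hugging Face Transformers library; the base model name and corpus path are placeholders and do not reflect InfiX.ai’s actual setup.

```python
# Illustrative sketch of continued pre-training on a domain corpus with Hugging Face
# Transformers. The model name ("gpt2") and file "domain_corpus.txt" are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                                   # stand-in for any open-weight base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Domain corpus, e.g. de-identified clinical notes or internal documents (placeholder path).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-ckpt", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
```

The key point is that the same next-token objective used in pre-training is simply continued on the organisation’s own text, so the model absorbs domain knowledge without being built from scratch.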

A published author of many papers on LLMs, Yang said individuals and businesses would eventually have access to their own models, a shift that would parallel the proliferation of personal computers and smartphones. Centralised development of foundational LLM technologies, meanwhile, would continue in a manner akin to supercomputers operated in national laboratories.

In the paper InfiMed-ORBIT: Aligning LLMs on Open-Ended Complex Tasks via Rubric-Based Incremental Training, Yang and her co-authors wrote that reinforcement learning in LLMs often failed in open-ended domains like medical consultation.

The development of generative AI, according to Yang, had entered the third stage of application, in which Chinese AI players could pursue further innovations. “China’s production performs better because we have a lot of consumers … and that’s the truth,” Yang said.

In the first half of 2025, China saw a massive uptick in generative AI adoption to 515 million users, most of whom preferred domestic AI models, according to a report released last month by the China Internet Network Information Centre.

This article originally appeared in the South China Morning Post (SCMP), the most authoritative voice reporting on China and Asia for more than a century. Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.