Media OutReach
IMDA Refreshes Skills Framework for Media and Continues to Support Virtual Production Capabilities and Training
– New company-led apprenticeship programme with media companies to expand job and training opportunities in the industry
– 28 Virtual Production projects supported, and 650 media professionals trained to date under the Virtual Production Innovation fund
SINGAPORE – Media OutReach Newswire – 4 December 2024 – The Infocomm Media Development Authority (IMDA) has worked closely with the media industry to refresh the Skills Framework for Media, which provides up-to-date sector information, job roles, and existing and emerging skills for media practitioners in new tech areas such as Virtual Production (VP) and Generative Artificial Intelligence (GenAI). IMDA is also introducing a new company-led apprenticeship programme with media companies to provide locals with more job and training opportunities in these new tech areas.
2. IMDA has been advancing VP applications in Singapore’s media industry since 2023. To date, 28 VP content projects have been supported and 650 media professionals trained in VP through these content projects and workshops, backed by the S$30 million VP Innovation Fund announced last year. These updates were shared by Mr Tan Kiat How, Singapore’s Senior Minister of State for Digital Development and Information (MDDI) and Ministry of National Development (MND), at the opening of the Asia TV Forum and Market (ATF) today, an event of the Singapore Media Festival (SMF). Hosted by IMDA, the SMF celebrates its 11th edition with the theme “Make It Here”, inspiring the region’s media community to create, connect, and realise their visions.
Refreshed Guide for the Future of Media Careers
3. As Singapore’s media market expands, employment opportunities are also on the rise. There were 24,960 media professionals employed across the economy in 2023, reflecting a compound annual growth rate of 7% since 2018[1]. The refreshed Skills Framework for Media provides a comprehensive roadmap for these media professionals, charting the future of media careers and talent development. The framework was developed by IMDA in partnership with SkillsFuture Singapore (SSG), industry associations such as the Singapore Association of Motion Picture Professionals, and Institutions of Higher Learning (IHLs), after extensive consultation with around 150 media representatives across industry, training providers, IHLs and freelancers. This ensures the framework meets the needs of a dynamic media landscape.
4. The refreshed Skills Framework identifies 195 job roles across 9 tracks, with 230 technical skills and competencies spanning existing and emerging areas of media such as VP, GenAI, content production and production technical services. Media practitioners can use the Skills Framework to upskill and remain relevant in today’s media landscape, while employers and training providers can tap on it to structure learning and training opportunities. The framework was first launched in 2018 jointly by IMDA, SSG and Workforce Singapore (WSG), in consultation with Singapore’s media industry.
5. In line with the new skills added to the refreshed framework, including VP, IMDA will also offer more company-led on-the-job training opportunities through apprenticeships with media companies. As a start of this new initiative, IMDA will partner with seven media companies to offer over 70 apprenticeship opportunities across content production, business management and technical roles, further deepening practical skills development and ensuring talent is industry-ready.
6. In his opening speech, SMS Tan Kiat How said, “The Asia TV Forum and Market and the Singapore Media Festival are platforms for networking and collaborations. As Asia’s entertainment content industry grows, Singapore will be your partner to tell our stories to the world, and for the world to discover the talents and gems in Asia. Today, we are taking an important step to do so by investing in the future of media – our media professionals – so that they are equipped with the right skills, technology, and platforms to excel in this dynamic industry.”
New Virtual Production Projects and Talent Supported
7. The use of VP in Singapore’s media industry continues to progress with the launch of three full-scale VP studios that can support international projects developed with VP technology: Aux Infinite Studios, Oceanus Media Global and X3D Studio, all of which also provide VP training. Next year, media professionals can look forward to specialised training opportunities from local and overseas VP experts for job roles such as VP supervisors.
8. The 28 supported VP content projects have leveraged VP technology to open creative possibilities and overcome physical limitations. For example, film director Ian Wee from Reelisations Pte Ltd tapped on VP in his latest content project “Time Apart” to execute challenging time lapse sequences across different time periods. Ian was a participant of the National Film and Television School (NFTS) Certificate in Virtual Production course in April 2023. In another example, Glenn Chan from Sonder Films used 3D scanning technology to develop and integrate 3D CGI characters into virtual environments for his short-form VP project “The Old World”. Glenn was a participant of the Aux-XON SG x Korea VP Masterclass conducted earlier this year.
9. For more details on the Singapore Media Festival and the Asia TV Forum and Market, please visit www.imda.gov.sg/smf. To read the latest Skills Framework for Media, visit https://www.imda.gov.sg/how-we-can-help/media-manpower-plan/skills-framework-for-media-sfw-for-media.
Hashtag: #SGMediaFest
https://www.imda.gov.sg/smf
https://www.linkedin.com/company/imdasg
https://www.instagram.com/imdasg
The issuer is solely responsible for the content of this announcement.
About the Singapore Media Festival
The Singapore Media Festival, hosted by the Infocomm Media Development Authority (IMDA), proudly returns for its 11th edition as one of Asia’s premier international media industry platforms. From 28 November to 8 December 2024, Singapore will be the focal point for Asia’s media community, showcasing diverse media innovations, forging industry deals, and presenting Singapore’s world-class content. This year’s festival, themed “Make It Here,” aims to inspire the region’s media talent to create, connect, and realise their visions. The event will bring together media professionals, industry leaders, creators, and consumers through the Singapore International Film Festival (SGIFF), Asia TV Forum & Market (ATF), Singapore Comic Con (SGCC), and Nas Summit Asia (NAS).
For more information, please visit:
www.imda.gov.sg/smf.
About Asia TV Forum & Market (ATF) 2024
3 December 2024: The ATF Leaders Dialogue
4 – 6 December 2024: Market & Conference
Now in its 25th edition, Asia TV Forum & Market (ATF) – the region’s co-production and entertainment content market and conference – is the proven industry platform to acquire knowledge, network, buy, sell, finance, distribute and co-produce across all platforms. It is the premier stage in Asia to engage with the entertainment industry’s top players from around the world. It’s where the best minds meet, and the future of Asia’s content is shaped.
For more information, please visit www.asiatvforum.com
About Infocomm Media Development Authority
The Infocomm Media Development Authority (IMDA) leads Singapore’s digital transformation by developing a vibrant digital economy and an inclusive digital society. As Architects of Singapore’s Digital Future, we foster growth in Infocomm Technology and Media sectors in concert with progressive regulations, harnessing frontier technologies, and developing local talent and digital infrastructure ecosystems to establish Singapore as a digital metropolis.
For more news and information, visit
www.imda.gov.sg or follow IMDA on LinkedIn (IMDAsg) and Instagram (@imdasg).
ACE ROBOTICS Open-Sources Real-Time Generative World Model Kairos 3.0-4B
- A native world model built from the ground up for embodied intelligence, Kairos 3.0-4B delivers exceptional physics-consistent deep understanding and cross-embodiment generalization, enabling a single “brain” to drive robots of multiple form factors.
- Kairos 3.0-4B leverages a unified “multi-modal understanding-generation-prediction” architecture for physical-level deep understanding, precise action control, and long-horizon dynamic interaction: its 7-minute coherent interaction videos set a new industry benchmark.
- As a lightweight 4B-parameter model, Kairos 3.0-4B outperforms mainstream embodied world models while delivering industry-leading inference efficiency. It achieves real-time edge generation on the THOR platform with a 1:1.5 ratio of generation time to video duration, delivering leading performance across both cloud and edge environments.
- Kairos 3.0-4B achieves top-ranking accuracy across multiple authoritative benchmarks. Furthermore, leveraging model capabilities and inference tooling, its inference speed is 72 times faster than Cosmos 2.5, setting a new global performance record for embodied world models.
SHANGHAI, CHINA – Media OutReach Newswire – 13 March 2026 – ACE ROBOTICS announced the open-source release of Kairos 3.0-4B, the industry’s first native world model for embodied intelligence to realize unified “multi-modal understanding-generation-prediction” within a single architecture. As the technical cornerstone of the company’s “Human-Centric” ACE Embodied Intelligence R&D Paradigm, Kairos 3.0-4B is designed from the ground up for real-world robotic operation — integrating physical laws, human behavior, and real robot actions to deliver physics-consistent deep understanding of the real world.
The prevailing approach to embodied world models has largely involved retrofitting general-purpose large language or vision models with motion interfaces. Kairos 3.0-4B takes a fundamentally different path. Rather than appending motion capabilities onto existing model architectures, it is built from the architectural level around the fundamental physical and causal laws that govern real-world environments, constructing a unified world-understanding framework capable of cross-embodiment generalization. By embedding causal reasoning chains directly into its decision-making process, the model transcends behavioral imitation and achieves what ACE ROBOTICS defines as physical-level deep understanding — enabling robots to not only know what to do, but to understand why. Its core breakthrough lies in the deep integration of three categories of data: real robot interaction data, structured human behavioral data, and chain-of-thought reasoning data, effectively breaking down multi-source data barriers and significantly improving the reuse efficiency of real-world data.
A landmark achievement of this release is Kairos 3.0-4B’s real-time edge deployment capability. Deployed on the NVIDIA Jetson Thor T5000 platform at 517 TFLOPs, it is the world’s first embodied world model to achieve real-time generation on edge hardware, generating video 1.5x faster than real time on the THOR platform, and the first capable of directly driving physical robot bodies for real-world task execution through native edge deployment. The model issues full-body control commands spanning upper limbs, fingers, and lower limbs without intermediate control layers, enabling robots to move from “capable of performing” to genuinely “capable of working.”
Kairos 3.0-4B also delivers a breakthrough in long-horizon interaction. By combining its unified architecture with Agent-based hierarchical planning and a self-reflective iterative optimization mechanism, the model generates coherent future-state predictions up to 7 minutes in length while maintaining full scene coherence and physical fidelity throughout — setting a new industry benchmark for long-horizon embodied interaction and opening new pathways for embodied intelligence training and deployment.
On the A800 GPU benchmark, Kairos 3.0-4B’s inference speed surpasses NVIDIA Cosmos 2.5 by 72 times, setting a new global performance record for embodied world models. This performance is delivered with a lightweight footprint of just 4B parameters and 23.5GB of VRAM — a fraction of Cosmos 2.5’s 70.2GB requirement — demonstrating that efficiency and capability need not be in tension and fundamentally challenging the assumption that larger parameters are a prerequisite for superior performance. The model has also achieved top rankings across three authoritative global benchmarks: PAI-Bench-robot, co-developed by Georgia Tech and CMU; WorldModelBench-robot TI2V, introduced at CVPR 2025; and NVIDIA GEAR Lab’s DreamGen Bench, outperforming all evaluated models on physical consistency and instruction-following metrics.
Supporting seamless cross-embodiment deployment across single-arm, dual-arm, and dexterous hand configurations with no additional per-embodiment training required, Kairos 3.0-4B is compatible with major hardware platforms including Agilex PIPER, Unitree G1, and Galaxy G1. Kairos 3.0-4B is now available on Github (https://github.com/kairos-agi/kairos-sensenova) and Hugging Face (https://huggingface.co/kairos-agi/kairos-sensenova-common).
Hashtag: #ACEROBOTICS
https://www.linkedin.com/company/acerobotics/
https://x.com/ace_robotics
The issuer is solely responsible for the content of this announcement.
About ACE ROBOTICS – Equipping robots with intelligent “brains” and engaging “souls”
ACE ROBOTICS is a pioneering robotics company dedicated to advancing the field of embodied intelligence. Founded by SenseTime co-founder Wang Xiaogang, the company has brought together a team of young, world-class AI scientists and industry experts to focus on embodied intelligence. Through breakthrough technological innovations and deep insights into embodied intelligence scenarios, we aim to empower robots with the ability to autonomously understand and explore the physical world, thereby accelerating their commercial implementation.
The company pioneered the ACE R&D paradigm and built a vision-based “environmental data engine, real-world cognition, embodied interaction generalization” technology chain. Using full spatiotemporal and multi-perspective environmental capture as its engine, along with Kairos 3.0 – China’s first open-source and commercially applicable world model – plus the Embodied Foundation Model as its technical backbone, ACE ROBOTICS addresses core industry challenges such as data scarcity, common sense gaps, poor generalization, and limited versatility. Simultaneously, the company unveiled its flagship A1 Embodied Super Brain Module, accelerating the large-scale commercial deployment of embodied intelligence across diverse scenarios.
ACE ROBOTICS is both a technology pioneer and an ecosystem builder. Through strategic cooperation with top hardware manufacturers, cloud service providers, and vertical scenario partners, we have broken through the “model-hardware-scenario” industrial deadlock, providing standardized and customized solutions that are driving the development of China’s embodied intelligence industry.
OPPO and Google Partner to Redefine Productivity for Foldable Devices with Next-Gen AI Stylus Experience
Kai Tang, President of Software Engineering at OPPO, said: “OPPO’s close collaboration with partners like Google Cloud enables us to bring the latest and most advanced AI experiences to our users. Featuring powerful AI capabilities, we have evolved the traditional stylus into the innovative OPPO AI Pen, marking a significant leap in efficiency for the foldable smartphone experience.”
AI Chart and AI Image: Next-Level Productivity with OPPO AI Pen
The upcoming Find N6 will launch together with the OPPO AI Pen, featuring exclusive AI Chart and AI Image functions built with Google Cloud’s cutting-edge AI capabilities.
While taking notes or sketching ideas with a stylus helps capture inspiration quickly, translating handwritten drafts into polished, professional formats has always been a challenge. With AI Chart built with Gemini Pro, users can press the dedicated side button on the OPPO AI Pen and simply circle their handwritten notes to instantly generate a clean, editable digital table, allowing for faster information organization in meetings, planning sessions, and daily work.
Beyond text and charts, the AI Image feature, built with Nano Banana, further expands creative possibilities by transforming simple doodles into refined artwork. Users can even provide specific text prompts to guide the AI, reimagining their drawings in any style, from classic oil painting textures to modern digital art.
Enhanced Cross-Ecosystem Sharing
OPPO is also collaborating with Google to bring Android™ Quick Share compatibility with Apple devices to Find N6. In the coming weeks, users will be able to send photos, videos, and files directly to Apple devices—with no additional apps required, advancing cross-ecosystem connection.
Next-Level Productivity on Find N6
These advanced productivity-enhancing features of ColorOS 16 will be fully integrated into the upcoming OPPO Find N6, delivering a high-performance and seamless experience from the very first touch. By combining OPPO’s advanced hardware with Google’s AI capabilities, Find N6 is set to turn the foldable device into a true mobile workstation, empowering users to create, collaborate, and communicate like never before.
Disclaimer: AI Chart is built with Gemini 2.5 Pro, while AI Image is built with Nano Banana (Gemini 2.5 Flash Image).
*Google, Android and Quick Share are trademarks of Google LLC.
Hashtag: #OPPO
The issuer is solely responsible for the content of this announcement.
“Created for Ease”: ECOVACS Brand Campaign Honors Caregivers Across the APAC Region
Hashtag: #ECOVACS
The issuer is solely responsible for the content of this announcement.
About ECOVACS ROBOTICS