
Accelerate 2026: Driving AI automation across IT and networks
This year’s TM Forum Accelerate demonstrated how communications service providers (CSPs) and their partners are working to build AI-native operations that deliver business value securely and at scale.
The event brought together more than 200 business leaders, architects, and technology experts for four days of intensive collaboration that centered on applying the Open Digital Architecture (ODA) roadmap to AI-driven automation across IT and networks.
As a result, TM Forum members not only advanced specific technical work across autonomous networks, AI and data, Open APIs, composable IT, and skills transformation; they also showed how they are aligning strategy, architecture, governance, and execution so that AI can be deployed at scale.
One of the standout trends at Accelerate 2026 was the shift from AI experimentation towards building enterprise-grade AI systems that deliver business value.
Project Foundation is an important base for TM Forum members’ AI-native projects because it sets out to deliver the industry’s first AI-Native ODA Canvas Sandbox. This is a secure, collaborative, Kubernetes-orchestrated environment where CSPs, hyperscalers, and technology partners can co-develop, integrate, and test interoperable AI agents aligned with the TM Forum AI-Native Blueprint (see below).
At Accelerate, participants made significant progress with Project Foundation, including the establishment of a shared terminology and the definition of detailed requirements for each Canvas use case. This will feed into plans to build a reference implementation, including AI & Data foundations based on the AI-Native Blueprint and ODA Canvas extensions, in time for DTW Ignite, which takes place 23-25 June in Copenhagen.
Concurrently, members advanced TM Forum’s AI-Native Blueprint, which addresses the core issues associated with moving AI into production environments: trust, security, governance, lifecycle management, and accountability. At Accelerate, participants tackled these challenges during workstreams on Model-as-a-Service (MODaaS), Data Product Lifecycle Management (DPLM), and secure agent interactions.
Systems of trust and security are essential to wide-scale AI deployment, and addressing trust is integral to TM Forum’s work on Agentic Interaction Security within the AI-Native Blueprint.
Discussions at Accelerate showed a growing consensus for AI-native security frameworks that are practical, actionable, and embedded in standards from the outset.
During the week, members explored how to operationalize trust in agentic AI. This included work on aligning the security, data, and AI domains, and on using collaboration assets to accelerate security sign-off.
The DPLM sessions examined how data can be consumed as a product in an ODA environment, complete with ownership, lifecycle controls, and consumer context.
Representatives from CSPs taking part confirmed that TM Forum’s DPLM model aligns closely with practices in production environments. In addition, as part of the push to make data models machine-readable and operational, participants proposed integrating DPLM with SID and Open APIs.
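As a rough illustration of where that integration could lead, the sketch below shows one way a machine-readable data product descriptor might reference SID entities and be published to a catalog over a REST call. The field names, lifecycle states, and endpoint are assumptions made for this example, not published TM Forum definitions.

```python
# Illustrative sketch only: the field names, lifecycle states, and registry
# endpoint below are assumptions, not published TM Forum DPLM or SID definitions.
from dataclasses import dataclass, field, asdict
from typing import List

import requests


@dataclass
class DataProduct:
    id: str
    name: str
    owner: str                  # accountable data product owner
    lifecycle_state: str        # assumed states: "draft", "published", "retired"
    sid_entities: List[str] = field(default_factory=list)  # SID entities the product draws on
    consumer_context: str = ""  # intended consumers and usage constraints


def publish(product: DataProduct, registry_url: str) -> None:
    """Register the data product with a hypothetical catalog endpoint."""
    response = requests.post(f"{registry_url}/dataProduct", json=asdict(product), timeout=10)
    response.raise_for_status()


churn_features = DataProduct(
    id="dp-001",
    name="customer-churn-features",
    owner="data-office@example-csp.com",
    lifecycle_state="published",
    sid_entities=["Customer", "CustomerAccount", "ProductUsage"],
    consumer_context="AI model training; anonymised, EU data residency",
)
# publish(churn_features, "https://csp.example.com/data-catalog/v1")  # hypothetical registry
```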
Notable progress was also made on developing a foundation for AI-native networks and platforms in the MODaaS workstream. By introducing a standard SLA template for AI models, members began defining how models can be sourced, governed, and operated at enterprise grade. Debates around hyperscaler dependency, transparency in black-box models, and alignment with standards such as NIST 2.0 highlighted the urgency of creating a shared operating model for AI services.
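To make the idea of a standard SLA template more concrete, here is a minimal sketch of the kind of fields such a template might capture, such as latency and availability targets, drift thresholds, and governance requirements. The field names and the breach check are illustrative assumptions, not the template defined in the MODaaS workstream.

```python
# Illustrative sketch of a model SLA record; the fields and thresholds are
# assumptions for discussion, not the MODaaS template agreed in the workstream.
from dataclasses import dataclass
from typing import List


@dataclass
class ModelSLA:
    model_id: str
    provider: str                  # e.g. in-house, hyperscaler, third party
    p95_latency_ms: float          # inference latency target
    availability_pct: float        # monthly availability target
    max_drift_score: float         # threshold that triggers a retraining review
    explainability_required: bool  # mitigates "black-box" concerns
    audit_log_retention_days: int  # supports governance and accountability


def breached_clauses(sla: ModelSLA, observed_latency_ms: float, observed_availability_pct: float) -> List[str]:
    """Return the SLA clauses breached by the observed operational metrics."""
    issues = []
    if observed_latency_ms > sla.p95_latency_ms:
        issues.append("latency")
    if observed_availability_pct < sla.availability_pct:
        issues.append("availability")
    return issues


churn_model_sla = ModelSLA("churn-predictor-v3", "in-house", 150.0, 99.9, 0.2, True, 365)
print(breached_clauses(churn_model_sla, observed_latency_ms=210.0, observed_availability_pct=99.95))
# ['latency']
```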
Crucially, members also advanced the convergence of TM Forum’s core frameworks, SID and the AI-Native Blueprint, within the ODA environment to create a unified foundation for enterprise AI.
A major outcome was a shared commitment to embedding real, high-impact industry use cases from the CIT&E Mission directly into the Project Foundation pipeline. This will ensure that TM Forum standards are shaped by practical implementation, architecture, and execution, helping members move from AI vision to operational reality.
Discussions also covered onboarding AI agents through a declarative framework, with governance of models, data, and security built in. This reflects an overall trend to integrate AI into ODA rather than treating it as a collection of isolated tools.
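As a minimal sketch of what declarative onboarding could look like, the example below captures an agent’s identity, the data products it consumes, and its security scopes in a single descriptor that an operator could validate before admitting the agent to the Canvas. The schema and field names are assumptions for illustration, not an agreed TM Forum specification.

```python
# Minimal sketch of a declarative agent-onboarding descriptor; the schema and
# mandatory fields are assumptions, not an agreed TM Forum specification.
REQUIRED_FIELDS = {"name", "version", "owner", "data_products", "security_scopes"}

agent_descriptor = {
    "name": "fault-triage-agent",
    "version": "0.1.0",
    "owner": "network-ops@example-csp.com",
    "data_products": ["alarm-stream", "topology-snapshot"],     # consumed as governed data products
    "security_scopes": ["read:alarms", "propose:remediation"],  # no direct write access to the network
    "model_sla_ref": "sla/fault-triage-v1",                     # links to a MODaaS-style SLA record
}


def validate(descriptor: dict) -> None:
    """Reject onboarding requests that omit mandatory governance metadata."""
    missing = REQUIRED_FIELDS - set(descriptor)
    if missing:
        raise ValueError(f"descriptor missing fields: {sorted(missing)}")


validate(agent_descriptor)  # passes; removing "security_scopes" would raise ValueError
```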
Autonomous Networks (AN) workstreams were among the busiest at Accelerate 2026. Here again, the emphasis was on demonstrating business value and strengthening alignment across TM Forum assets, with ODA as a common basis for the AI transformation of both networks and IT.
Participants agreed that scalable Level 4 autonomy depends on grounding AN scenarios in ODA components, APIs, and value streams. They pointed out that the benefits of greater alignment include reduced service loss, improved mean time to repair (MTTR), fewer manual interventions, and more predictable service performance. Importantly, greater alignment will also facilitate replication across markets and domains, which promises greater efficiency and lower costs.
Measurement and benchmarking tools are essential for capturing business value. Participants made progress on enhancing AN level assessments with key effectiveness indicators (KEIs) and key capability indicators (KCIs). The aim is to move beyond simply measuring levels of autonomy and to deliver objective indicators of its business value, such as operational savings, energy efficiency, and customer impact.
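To illustrate the kind of calculation such indicators imply, the snippet below derives two example value measures, MTTR improvement and the reduction in manual interventions, from before-and-after operational data. The metric names and formulas are illustrative assumptions, not the KEI/KCI definitions being developed in the workstream.

```python
# Illustrative only: these metric names and formulas are assumptions, not the
# KEI/KCI definitions being developed in the Autonomous Networks workstream.

def percent_improvement(before: float, after: float) -> float:
    """Relative improvement where lower values are better (e.g. MTTR, manual actions)."""
    return 100.0 * (before - after) / before


baseline = {"mttr_minutes": 180.0, "manual_interventions_per_week": 42.0}
with_autonomy = {"mttr_minutes": 95.0, "manual_interventions_per_week": 11.0}

kei_report = {
    "mttr_improvement_pct": percent_improvement(
        baseline["mttr_minutes"], with_autonomy["mttr_minutes"]
    ),
    "manual_intervention_reduction_pct": percent_improvement(
        baseline["manual_interventions_per_week"],
        with_autonomy["manual_interventions_per_week"],
    ),
}
print(kei_report)
# {'mttr_improvement_pct': 47.2..., 'manual_intervention_reduction_pct': 73.8...}
```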
Discussions also covered the use of digital twins and the future of agent-to-agent communication. Digital twins are increasingly seen as essential tools for prediction and optimization, but participants made it clear there is no single “universal twin”. Instead, multiple domain-specific twins must be governed carefully to manage cost, accuracy, and security, especially as they become embedded in autonomous decision loops.
Teams worked across multiple strands of Open API development and usage during the week, including how to balance a rapidly expanding Open API portfolio with practical deployment and monetization requirements.
Achievements included agreement on tightening Gen5 Open API rules and formalizing API specialization.
There was also progress in the development of Open APIs to support wholesale broadband and Open Gateway business models.
In parallel, work on ODA Components, Canvas, and Conformance reinforced members’ commitment to modular, composable architectures.
AI-native transformation is as much an organizational challenge as a technical one, and skills and culture remain among the biggest obstacles to its success.
With this in mind, the TechCo: People & Culture sessions validated new AI components against TM Forum’s Digital Talent Maturity Model (DTMM) to identify gaps and overlaps. This allowed participants to establish a common path towards a unified DTMM and AI framework.
The Autonomous Networks Upskilling Hub, meanwhile, tackled the industry-wide challenge of scaling skills fast enough to realize AN value, proposing a shared, vendor-agnostic foundation complemented by differentiated offerings.
Finally, the soft launch of Pathways for Progress showed how employees can be rewarded for skills development, career progression, and collaboration.