
Tips and Tricks on How to Use an AI Assistant for Orchestration Workflows

AI Can Help, but Not Fully Replace an Engineer

The experiment confirmed that AI works best as a collaborative assistant. Engineers need to supply architectural context, constraints, and intent; the AI provides synthesis, structure, and acceleration, all of which must be monitored and double-checked. When given clear, detailed information, the AI produced high-quality orchestration scaffolds. However, when something is omitted, it usually makes incorrect assumptions (such as assuming that LSO uses a TM Forum API until the engineer specifically explained its structure and use with examples).

The overall experience, however, validated the idea that automation can augment human expertise, but accountability must remain with humans.

AI Excels at Service Design and Blueprinting

One of the strongest outcomes was the AI’s exceptional ability to perform high-level service design and blueprinting. It demonstrated remarkable intrinsic knowledge of TM Forum Open Digital Architecture (ODA) and Apache Airflow concepts without any documentation supplied during prompting.

The assistant could generate full service design hierarchies and workflow structures that closely mirrored industry best practices. It could also explain the reasoning behind a design and compare alternative designs, which is very helpful during this process.

However, it required explicit guidance for component-specific logic, especially when the component is not a popular choice, such as Maat and LSO in our example. Once the required information was provided (the JSON specification of the Maat APIs), the AI consistently produced correct, standards-aligned orchestration flows. If the information cannot be provided in sufficient detail, preferably in a standardised format, expect a lot of trial and error before it gets things right.
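To illustrate the kind of scaffold the assistant produced, the sketch below models a service workflow as a task-dependency graph and derives a valid execution order. The task names (and the L2-circuit scenario) are purely illustrative assumptions, not the actual GP4L workflow or Maat API:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph for a point-to-point circuit service; every task
# maps to the set of tasks that must complete before it (names invented
# for illustration, not taken from the GP4L or Maat specifications).
TASK_GRAPH = {
    "validate_intent": set(),
    "create_maat_service_object": {"validate_intent"},
    "reserve_endpoints": {"create_maat_service_object"},
    "configure_network": {"reserve_endpoints"},
    "verify_and_activate": {"configure_network"},
}

# static_order() yields each task only after all of its dependencies.
execution_order = list(TopologicalSorter(TASK_GRAPH).static_order())
print(execution_order)
```

A DAG-shaped structure like this is exactly what an Airflow workflow encodes, which is why a clear dependency description in the prompt translates so directly into a correct scaffold.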

Excellent Process Guidance and Adaptability

The AI proved valuable not just for code generation but for guiding the overall development process.

It accurately identified missing dependencies, prioritised task order, and maintained logical sequencing across multiple iterations.

When initial requirements changed, the assistant adapted smoothly with just a few updated prompts, restructuring the workflow while preserving earlier design principles.

Iterative Prompting Significantly Improves Implementation

Incremental, feedback-driven prompting proved to be the most effective strategy.

When provided with error traces, the AI typically proposed correct fixes or optimisations in the next iteration.

This granular prompting approach, refining one task at a time, produced reliable and explainable improvements while maintaining version traceability.

In essence, debugging became a conversational process in which each loop (Prompt → AI → Draft → Test → Refine) acted as a controlled learning cycle.
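The loop can be sketched mechanically: run checks on the current draft, feed the resulting error list back as the next prompt, and stop when the checks pass. The check and refine functions below are toy stand-ins (the real "Refine" step is the AI responding to the error traces), with invented task names:

```python
def run_checks(draft: list[str]) -> list[str]:
    """Stand-in for the 'Test' step: report what the draft workflow lacks."""
    required = ["validate_input", "provision", "rollback_handler"]
    return [f"missing task: {t}" for t in required if t not in draft]

def refine(draft: list[str], errors: list[str]) -> list[str]:
    """Stand-in for the AI 'Refine' step: add the tasks the errors name."""
    return draft + [e.removeprefix("missing task: ") for e in errors]

draft = ["provision"]                 # initial AI draft
iterations = 0
while (errors := run_checks(draft)):  # Prompt -> AI -> Draft -> Test -> Refine
    draft = refine(draft, errors)
    iterations += 1
```

The point is the control structure, not the checks themselves: each iteration is driven by concrete error output, which is what made the AI's fixes reliable and traceable.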

Works Best with Clear Standards and Well-Documented APIs

The AI’s best results came when working within well-defined standards and structured data models.

Providing the assistant with a JSON schema for the Maat data model and API patterns for each component led to dramatically better outcomes.

With these, the AI understood data dependencies, validated object hierarchies, and adhered to proper naming and payload structures.

Without them, it tended to invent plausible but incorrect API calls or attributes.

In other words, it is essential to provide clear, structured information, not just large amounts of unstructured information.
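A minimal sketch of why a supplied schema helps: with the data model in hand, payloads can be checked before any API call is generated, so invented attributes are caught immediately. The schema fragment below is an invented stand-in; the real Maat JSON specification is far richer and uses its own field names:

```python
# Hypothetical fragment of a Maat-style service schema (field names are
# illustrative assumptions, not the actual Maat data model).
SCHEMA = {
    "required": ["serviceId", "serviceType", "endpoints"],
    "types": {"serviceId": str, "serviceType": str, "endpoints": list},
}

def validate_payload(payload: dict, schema: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload is valid."""
    problems = [f"missing field: {f}" for f in schema["required"] if f not in payload]
    problems += [
        f"wrong type for {f}: expected {t.__name__}"
        for f, t in schema["types"].items()
        if f in payload and not isinstance(payload[f], t)
    ]
    return problems

good = {"serviceId": "svc-001", "serviceType": "l2-circuit", "endpoints": []}
bad = {"serviceId": 42}
```

Giving the AI this kind of machine-readable contract is what turned plausible-but-wrong API calls into correct, validated ones.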

Prompt Quality Determines Workflow Quality

The link between prompt design and output quality was undeniable.

Holistic prompts that clearly described intent, context, and constraints produced orchestration scaffolds needing minimal correction.

Conversely, vague instructions or missing data references led to incomplete or misaligned workflows.

In short: yes, you would benefit from learning prompt engineering.

Starting from Scratch Beats Reusing Existing Code

Attempts to have the AI extend manually written workflows proved inefficient.

In contrast, starting from a clean slate and defining intent and policies from the beginning enabled the AI to reason freely and produce coherent, standards-compliant designs.
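What "intent reuse" looks like in practice can be sketched as a small declarative spec from which tasks are derived, rather than code that is patched. Every field and policy name below is an invented illustration, not the GP4L intent format:

```python
# Hypothetical intent: the engineer states *what* is wanted and under
# which policies; the workflow is derived from it (all names illustrative).
INTENT = {
    "service": "point-to-point L2 circuit",
    "constraints": {"bandwidth_mbps": 1000, "protected": True},
    "policies": ["validate-before-provision", "rollback-on-failure"],
}

def derive_tasks(intent: dict) -> list[str]:
    """Toy derivation: map declared policies onto workflow tasks."""
    tasks = ["create_service"]
    if "validate-before-provision" in intent["policies"]:
        tasks.insert(0, "validate_intent")
    if "rollback-on-failure" in intent["policies"]:
        tasks.append("register_rollback")
    return tasks
```

Because the intent, not the generated code, is the reusable artefact, changing a policy line regenerates a consistent workflow instead of requiring manual surgery on old code.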

The GP4L team concluded that intent reuse is far more powerful than code reuse.

Standardisation Enables Reuse and Collaboration

Adhering to the TM Forum ODA framework and using Maat as the single source of truth ensured interoperability and prevented architectural drift.

Standards compliance also allowed AI-generated workflows to be shared, compared, and reused across different environments without modification.

This approach can be used as a foundation for building a community repository of reusable orchestration templates.

Smaller Teams Gain the Most

Smaller NRENs and institutional IT teams benefit disproportionately from this approach.

AI assistance compensates for the lack of specialised manpower and provides immediate access to structured, standards-based orchestration practices.

By lowering the entry barrier, the methodology effectively serves as a capacity-building accelerator for resource-limited organisations.

Impact

Ultra-Fast Design and Prototyping

The most immediate impact of introducing AI assistance into orchestration design is speed. Workflows that previously took days to structure and document can now be scaffolded in minutes. The AI delivers complete orchestration blueprints with all the bells and whistles, including validation, rollback, telemetry hooks, and logging placeholders. This transforms workflow creation from a time-intensive engineering process into a rapid, iterative design exercise.
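The rollback-and-logging pattern those blueprints included can be sketched generically: run each step, record its undo action, and unwind in reverse order on failure. This is a minimal stand-alone illustration of the pattern, not the generated Airflow code itself; step names are invented:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

def run_with_rollback(steps) -> bool:
    """Run (name, do, undo) steps; on failure, undo completed steps in reverse."""
    completed = []
    for name, do, undo in steps:
        try:
            log.info("running %s", name)  # telemetry/logging placeholder
            do()
            completed.append((name, undo))
        except Exception as exc:
            log.error("%s failed (%s); rolling back", name, exc)
            for _, undo_fn in reversed(completed):
                undo_fn()
            return False
    return True

trace = []
steps = [
    ("reserve", lambda: trace.append("reserve"), lambda: trace.append("undo_reserve")),
    ("configure", lambda: 1 / 0, lambda: trace.append("undo_configure")),  # fails
]
ok = run_with_rollback(steps)
```

In the real blueprints the same structure appears as compensating tasks and trigger rules inside the Airflow DAG; having it scaffolded automatically is a large part of the speed gain.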

Lowering the Barrier to Entry

The methodology lowers the barrier for less experienced automation engineers and newcomers to orchestration frameworks. Team members no longer need to master every aspect of Apache Airflow or understand the internal logic of each component before contributing. By focusing on intent expression rather than syntax, the process empowers network engineers, researchers, and operations staff to participate meaningfully in workflow creation. This makes orchestration development more inclusive and scalable across institutions.

Faster Service Rollout and Innovation

The acceleration in design and prototyping translates directly into faster service rollout. This rapid turnaround enables more agile experimentation, faster iteration on new service models, and earlier feedback from real users. In practice, it shortens the innovation loop for network services, making it easier to deploy, refine, and scale new capabilities.

AI-Assisted, Standards-Compliant Orchestration at Scale

Because the methodology builds directly on TM Forum Open Digital Architecture and the Maat data model, every generated workflow adheres to shared design principles and API conventions. This opens the door to cross-organisational collaboration, where orchestration templates can be exchanged, reused, and extended across NRENs and partner environments. It marks a shift from isolated automation efforts to a community-driven ecosystem of AI-generated, standards-aligned orchestration logic—scalable, transparent, and reproducible.