
Example of Using an AI-assistant to Develop an Orchestration Workflow

Phase 1 – From Intent to Workflow Scaffold

Objective

The first phase of the AI-assisted workflow experiment focuses on generating a complete orchestration scaffold for a real service use case: provisioning a Layer-2 point-to-point circuit in the GP4L testbed.

The aim was to verify whether an AI copilot could design the service blueprint, together with the overall logic and dependencies of a provisioning workflow, from scratch and without any manual human coding.

Choosing an AI-assistant

Before starting a full experiment with a single AI assistant, the GP4L team tested three leading assistants: ChatGPT, Google Gemini, and Microsoft Copilot.

Each was evaluated using sample prompts for the same L2-circuit orchestration task. The comparison showed that ChatGPT consistently produced the best results: it delivered the most coherent orchestration scaffolds, correctly applied the referenced TM Forum standards and the Maat data model, and responded flexibly to iterative prompts.

Based on these tests, ChatGPT-4o was selected as the primary assistant for all subsequent tests.

From Scratch vs Continuing Work in Progress

Initially, the team started testing with a scenario where the provisioning workflow had already been developed manually and was used as a prompt for the AI assistant, asking it to develop other related workflows such as service termination or service modification.

However, the results showed right away that this is very hard to achieve. Given the provisioning workflow, the AI assistant could not fully understand the implementation, nor correctly identify each workflow step and the actions performed inside it. It required a full explanation of almost every line of code to understand what was happening, and even then it could not produce a usable workflow implementation for a different task such as termination, instead proposing a flow with incorrect and illogical steps.

For these reasons, the team concluded that the best way to approach the problem is to start from a clean slate: beginning a conversation with the AI assistant without providing any "hints" in the form of existing code. This allows the AI to reason about the orchestration logic at the conceptual level and generate workflows that are structurally sound, logically consistent, and fully aligned with the required standards.

The Holistic Prompt

When first engaging the AI, the high-level context is provided describing:

  • The service that is to be implemented.
  • The workflow that is needed.
  • The orchestration environment, defined by Apache Airflow and its folder structure for service DAGs.
  • The components available, each exposing REST APIs.
  • The standards to be respected: compliance with TM Forum.

This information was provided as the holistic prompt:

Help me develop a software solution that will orchestrate the creation of L2 circuits. I would like to use Airflow for the orchestration processes and Maat as a single source of truth. I also want to use Ansible to define the configuration changes for the devices. All of these tools are available in different containers. The solution should have a simple GUI in which I can choose the start and end point of the circuit and its parameters. The L2 circuit data model and processes should be TM Forum compliant. Tell me how should I proceed with this task?

This prompt invited the AI to reason globally about the workflow and components, and to create a step-by-step approach to solving the problem that can be easily followed.

AI-Generated Output

The AI produced the following output, which includes a step-by-step orchestration plan, task prioritisation, useful tips, and references to related standards.

High-Level Step-by-Step Plan:

  1. Define the Data Model (based on TM Forum)
    • Use TMF634 (Resource Inventory) and TMF638 (Service Inventory) as guidance.
    • Key Entities you need to define in Maat:
      • Service: L2 Circuit
      • Resources: Physical Ports, Logical Ports, VLANs
      • Relationships: which resource supports which service
    • Attributes for the L2 Circuit:
      • Start and End Point (device, port)
      • Bandwidth
      • VLAN ID
      • QoS Parameters (optional)
      • Encapsulation (e.g., QinQ)
      • Protection Type (optional)
  2. Prepare Ansible Playbooks
    • Template Playbooks for L2 provisioning per device type (Cisco, Juniper, etc.)
    • Parameters driven: ports, VLANs, encapsulation, etc.
    • Use Jinja2 templates for flexibility.
    • Make sure Playbooks can be run non-interactively from Airflow.
  3. Design Airflow DAGs
    • Each L2 Circuit request should trigger a DAG that:
      • Validates user input (start and end points exist, resources available)
      • Fetches device and port information from Maat
      • Reserves resources (update Maat to “reserved” state)
      • Generates configuration (render Jinja templates with parameters)
      • Applies configuration (trigger Ansible playbooks)
      • Verifies status (optional)
      • Activates service (update Maat to “active” state)
  4. Develop a Simple GUI
    • Technology: Lightweight Flask or FastAPI app (could be containerized)
    • GUI screens:
      • Start L2 Circuit Wizard: Select Start Device/Port, End Device/Port
      • Specify Parameters: Bandwidth, VLAN, QoS
      • Submit Request: triggers the Airflow DAG via API
      • Monitor Status: shows progress (pull from Airflow or Maat)
  5. Containerize Everything
    • Each component runs in a container:
      • Maat container (already available)
      • Airflow container (or set of containers: webserver, scheduler, worker)
      • Ansible container
      • GUI container
    • Use Docker Compose or Kubernetes for orchestration between them (depends how heavy you want to go).
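The parameter-driven configuration generation described in step 2 (and the "Generates configuration" task in step 3) can be illustrated with a minimal sketch. For simplicity it uses Python's stdlib string.Template instead of Jinja2, and the Cisco-style configuration snippet is entirely hypothetical:

```python
from string import Template

# Hypothetical per-device-type template; a real playbook would use a
# Jinja2 template, but stdlib string.Template keeps this sketch
# dependency-free.
CISCO_L2_TEMPLATE = Template(
    "interface $port\n"
    " switchport mode trunk\n"
    " switchport trunk allowed vlan $vlan_id\n"
    " bandwidth $bandwidth_kbps\n"
)

def render_l2_config(port: str, vlan_id: int, bandwidth_kbps: int) -> str:
    """Render a device configuration snippet from L2 circuit parameters."""
    return CISCO_L2_TEMPLATE.substitute(
        port=port, vlan_id=vlan_id, bandwidth_kbps=bandwidth_kbps
    )

print(render_l2_config("GigabitEthernet0/1", 210, 100000))
```

The same pattern scales to one template per device type (Cisco, Juniper, etc.), with the workflow supplying only the parameters.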

First Tasks to Start With

Step | Description | Priority
1 | Extend Maat data model to store L2 Circuit services and resource mappings | Highest
2 | Develop prototype Ansible Playbooks (for basic port/VLAN configuration) | High
3 | Define Airflow DAG structure (dummy version first, print tasks) | High
4 | Prototype GUI form with endpoint/parameters input | High
5 | Set up container networking for Airflow-Maat-Ansible | High

Tips

  • Mock first: Before touching live devices, use mock inventory and test with dummy devices.
  • Use Airflow Variables and Connections to store credentials and dynamic parameters.
  • Secure communications: especially if you expose APIs.
  • Log everything: Airflow and Ansible logs are critical for troubleshooting.
  • Error handling: Always prepare rollback mechanisms in case a configuration fails.
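The rollback tip can be sketched as a simple undo-stack pattern: each provisioning step registers its own undo action, and on failure the actions are replayed in reverse. This is an illustrative pattern, not code from the experiment:

```python
class RollbackStack:
    """Collect undo actions as a provisioning run progresses; on failure,
    replay them in reverse to return inventory and devices to a clean state."""

    def __init__(self):
        self._undo = []

    def register(self, undo_action):
        self._undo.append(undo_action)

    def rollback(self):
        while self._undo:
            self._undo.pop()()  # LIFO: undo the most recent change first

# Simulated provisioning run; a journal records what happened.
journal = []
stack = RollbackStack()
try:
    journal.append("reserve ports")
    stack.register(lambda: journal.append("release ports"))
    journal.append("push config")
    stack.register(lambda: journal.append("remove config"))
    raise RuntimeError("device rejected configuration")  # simulated failure
except RuntimeError:
    stack.rollback()

print(journal)
```

Because undo actions are registered only for steps that actually completed, a failure midway never tries to undo work that was never done.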

TM Forum References:

  • TMF634: Resource Inventory Management API
  • TMF638: Service Inventory Management API
  • TMF641: Service Ordering Management API (optional if you want circuit “ordering”)
  • TMF640: Service Activation and Configuration API

I can help you draft the first example Airflow DAG, design the GUI screens, or template an Ansible Playbook if you’d like to proceed step-by-step!

Human Review and Adjustment

The complete answer to the initial holistic prompt was valid and the reasoning provided sound. The AI assistant correctly laid out the steps needed for the implementation, except for the last one, "containerization", which is not needed because it is already in place. This becomes clear to the assistant in subsequent prompts, and it adjusts appropriately without any problems. The provided tips and references to related standards are correct and, more importantly, indicative of a solution that covers all implications, including security by design and error handling using rollback.

Service Design

The holistic prompting continues by providing the intent and the necessary information for using Maat as the single source of truth, with the prompt:

Lets start with the highest priority task, data model for L2 circuit. Maat supports a dynamic json based schema. The default schema that comes with Maat is attached to this message, one file for resources and one for services. Suggest a data model for the L2 circuit based on this information. Ask me if you need more information.

L2 Circuit Data Model Proposal for Maat

The AI assistant provided a correct proposal for designing the necessary resources and services:

  • Physical Resource - Switch/router ports involved in the L2 circuit (Start Port, End Port)
  • Logical Resource - The L2 Circuit as a logical entity that “connects” two ports (including a relationship to the ports)
  • Resource Facing Service (RFS) representing the L2 circuit activation with a related supporting resource

Note: The service access points (SAPs) were added to the design only after explicitly prompting with a question: "shouldn't the L2 circuit service be modelled with two service access points in the service layer?"

After this prompt, the AI assistant added two service access points as related entities to the RFS and defined each SAP as a logical resource.

It has also been shown that the AI assistant takes the TM Forum standards into account for each decision. When prompted with a question on a given proposal, such as "why use the relatedEntity and not service relationship between the circuit and the SAPs?", the AI assistant provides a very detailed and valid response, correctly grounded in TM Forum best practices.

Aspect | relatedEntity | serviceRelationship
Meaning | Generic link to an entity (Resource, Service, Party, Place, etc.) | Specifically links two Services
Context | Used when pointing to an entity that is supporting or enabling the service (but is not itself a service) | Used when one service depends on another service or they are bundled/composed
Typical Usage | Links to things like physical ports, SAPs, customer accounts, places | Links a CFS to an RFS; service bundles to component services; chaining services
Relation Type | role attribute describes the relationship (e.g., “accessPoint”, “billingAccount”, “location”) | relationshipType attribute describes service dependency (e.g., “dependsOn”, “bundled”, “composedOf”)

Final Answer:

  • If SAPs are modeled as Resources or Entities (recommended) → Use relatedEntity with role accessPoint
  • If SAPs were modeled as full-blown Services (not recommended for L2 SAPs) → Then you would use serviceRelationship

But in your architecture — modeling SAPs as Logical Resources makes more sense (simpler, cleaner, and matches networking reality better).
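Following this recommendation, an L2 circuit service record in Maat could carry its SAPs as relatedEntity items with role accessPoint. A hypothetical payload sketch, with field names chosen for illustration and not taken from the actual Maat schema:

```python
import json

# Hypothetical L2 circuit service record; field names are illustrative,
# not the actual Maat schema used in the experiment.
l2_circuit = {
    "id": "l2c-001",
    "serviceType": "L2Circuit",
    "state": "reserved",
    "serviceCharacteristic": [
        {"name": "vlanId", "value": 210},
        {"name": "bandwidth", "value": "1Gbps"},
    ],
    "relatedEntity": [
        {"id": "sap-a", "role": "accessPoint", "@referredType": "LogicalResource"},
        {"id": "sap-b", "role": "accessPoint", "@referredType": "LogicalResource"},
    ],
}

# Navigating from the circuit to its access points via the role attribute.
access_points = [e["id"] for e in l2_circuit["relatedEntity"] if e["role"] == "accessPoint"]
print(json.dumps(access_points))
```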

Additional Tip:

If you want full alignment with TM Forum, you could also create a ResourceRelationship between:

  • The SAP resource
  • And the physical port resource (physical device and port)

This way the model becomes very clean and navigable from all layers.

Similarly, its reasoning is sound when answering validation prompts such as: "what if I want to model a SAP as a full blown service?".

When and why model a SAP as a full-blown Service?

You should model a SAP as a Service if:

  • You want separate lifecycle management for each SAP (design → reserved → active → terminated).
  • You want to instantiate, monitor, modify, or terminate SAPs independently of the main service (e.g., bandwidth change on a SAP without impacting the full circuit).
  • You want SAPs to expose features (like QoS, VLAN tagging, access control) individually.
  • You want a reusable SAP service (e.g., one SAP connected to multiple circuits, useful for Multipoint/EVPN services).

What changes if SAP is modeled as a Service?

Aspect | SAP as Logical Resource | SAP as Service
Entity Type | LogicalResource (TMF639) | Service (TMF638)
Relationship to L2 Circuit | relatedEntity with role accessPoint | serviceRelationship with role accessPoint
Lifecycle | Attached to the main service | Has its own lifecycle (design, reserve, activate, modify, terminate)
Parameters | Simple (device, port, vlan) | Rich (bandwidth, QoS profiles, ACLs, monitoring)
Operations | Usually passive, part of a bigger transaction | Can be actively managed, upgraded, modified separately

When to choose which model?

  • If you want simplicity now and faster deployment → Model SAPs as Logical Resources
  • If you want flexibility, independent SAP management, and scalability later → Model SAPs as full-blown Services

Final Decision Matrix for You:

Question | Answer | Recommendation
Will SAPs change dynamically (e.g., upgrade bandwidth independently)? | Yes | SAP as Service
Will SAPs have different administrative owners? | Maybe | SAP as Service
Will SAPs participate in multipoint services (EVPN, etc.)? | Future | SAP as Service
Is it enough to just provision L2 circuits quickly for now? | Yes | SAP as Logical Resource
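The matrix above reads as a simple predicate: a "yes" to any of the first three questions points to modelling SAPs as full-blown services. A sketch of that reading, purely for illustration:

```python
def recommend_sap_model(dynamic_saps: bool, separate_owners: bool,
                        multipoint_planned: bool) -> str:
    """Encode the decision matrix: any 'yes' to dynamic changes, separate
    ownership, or future multipoint participation favours SAP as Service;
    otherwise Logical Resources suffice for quick L2 provisioning."""
    if dynamic_saps or separate_owners or multipoint_planned:
        return "SAP as Service"
    return "SAP as Logical Resource"

print(recommend_sap_model(False, False, False))  # → SAP as Logical Resource
```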

Using this guidance, it is very easy for the engineer to understand and use the appropriate modelling.

Based on the decisions taken by the engineer, the AI assistant can then provide a correct JSON schema for extending the original Maat data model with the new service and resources.

Our experiments have also shown that, at this point, the AI assistant is able to compare its proposal to another design and provide a correct and extensive comparison and a proposal for alignment.

The final service model to be used is as follows:

[Physical Port] ← (supportingResource) [SAP Service] ← (accessPoint) [L2 Circuit Service] → (accessPoint) → [SAP Service] ← (supportingResource) ← [Physical Port]
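The navigability of this layered model can be illustrated with a small in-memory graph. Entity names follow the diagram above; the attribute names are hypothetical, not the actual Maat schema:

```python
# In-memory sketch of the final service model; link attribute names
# (accessPoint, supportingResource) mirror the diagram, other details
# are illustrative.
entities = {
    "l2-circuit": {"type": "Service", "accessPoint": ["sap-a", "sap-b"]},
    "sap-a": {"type": "Service", "supportingResource": "port-a"},
    "sap-b": {"type": "Service", "supportingResource": "port-b"},
    "port-a": {"type": "PhysicalResource"},
    "port-b": {"type": "PhysicalResource"},
}

def physical_ports(circuit_id: str) -> list[str]:
    """Navigate service layer -> SAP services -> physical ports."""
    saps = entities[circuit_id]["accessPoint"]
    return [entities[sap]["supportingResource"] for sap in saps]

print(physical_ports("l2-circuit"))
```

Every layer can be reached from every other by following one link at a time, which is what makes the model "clean and navigable from all layers".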

Workflow Blueprint

In the last part of phase 1, the work on the orchestration starts. This is prompted by the AI assistant itself:

Now you have a fully extended, clean, TM Forum-compliant, QinQ-ready model. Would you now like me to move on and show you a minimal Airflow DAG outline that can automatically create SAP Services and L2 Circuit Service from form input?

At this point, the engineer just needs to make sure that the AI assistant is aware of the exact orchestrator to be used and of any requirements. Thus, the prompt for this part should be as follows:

Ok, now we can move on to orchestration. I want processes that will be easily reusable for other services as well. We will be using Airflow version 2.11.

Based on this input, the AI assistant correctly reasons the blueprint of the workflow:

Overall Approach for Reusable Orchestration Processes

We will split orchestration into small reusable process blocks, each responsible for a logical unit of work. Then compose them into Airflow DAGs depending on the service you want to create.

  • Modular
  • Reusable
  • Service-agnostic (generic steps + parameters)

Reusable Process Building Blocks (Reusable Airflow Tasks)

Step | Purpose | Inputs | Outputs | Reusable for Other Services?
validate_request | Validate input parameters from GUI/API | serviceType, endpoints, params | Validated request context | Yes (L2 circuits, L3 VPNs, etc.)
fetch_physical_resources | Fetch port/device info from Maat | port IDs or names | resource objects (ports, devices) | Yes
create_sap_service | Create SAP (AccessPointService) instances | port info, customer_vlan_id | SAP Service IDs | Yes (general access points)
create_service_instance | Create the main service (e.g., L2CircuitService) | SAP IDs, service parameters | Service instance ID | Yes (works for any service type)
reserve_resources | Update Maat to reserve ports if needed | port IDs | Reservation status | Yes
configure_network | Trigger Ansible to configure network devices | validated params | config applied | Yes
activate_service | Mark service as active in Maat | service ID | service state updated | Yes
monitor_service (optional) | Start monitoring the service (e.g., status polling) | service ID | monitoring setup | Yes
rollback (optional) | Undo partial changes on failure | context | rollback complete | Yes
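A service-agnostic building block such as validate_request can be sketched as a plain callable that dispatches on serviceType, so the same task serves L2 circuits, L3 VPNs, and future services. The validator contents below are hypothetical:

```python
# Registry of per-service validators; the task logic itself stays generic.
# Required parameter names are made up for illustration.
VALIDATORS = {
    "L2Circuit": lambda p: {"endpointA", "endpointZ", "vlanId"} <= p.keys(),
    "L3VPN": lambda p: {"sites", "routeTarget"} <= p.keys(),
}

def validate_request(service_type: str, params: dict) -> dict:
    """Generic validate_request block: dispatch on serviceType and return
    a validated request context for downstream tasks."""
    validator = VALIDATORS.get(service_type)
    if validator is None or not validator(params):
        raise ValueError(f"invalid {service_type} request")
    return {"serviceType": service_type, "params": params, "validated": True}

ctx = validate_request(
    "L2Circuit",
    {"endpointA": "rtr1:Gi0/1", "endpointZ": "rtr2:Gi0/2", "vlanId": 210},
)
print(ctx["validated"])  # → True
```

Adding a new service then means registering a new validator, not writing a new task.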

The initial DAG template provided is:

from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime

# Note: the *_function callables referenced below are placeholders to be
# implemented in phase 2; provide_context is deprecated (and a no-op)
# from Airflow 2.0 onward.

with DAG(
    dag_id="create_l2_circuit",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,
    catchup=False,
    description="Reusable DAG to create L2 Circuit Service",
) as dag:

    validate_request = PythonOperator(
        task_id="validate_request",
        python_callable=validate_request_function,
        provide_context=True,
    )

    fetch_physical_resources = PythonOperator(
        task_id="fetch_physical_resources",
        python_callable=fetch_physical_resources_function,
        provide_context=True,
    )

    create_sap_a = PythonOperator(
        task_id="create_sap_a",
        python_callable=create_sap_function,
        op_kwargs={"sap_role": "A"},
        provide_context=True,
    )

    create_sap_b = PythonOperator(
        task_id="create_sap_b",
        python_callable=create_sap_function,
        op_kwargs={"sap_role": "B"},
        provide_context=True,
    )

    create_l2_circuit = PythonOperator(
        task_id="create_l2_circuit_service",
        python_callable=create_l2_circuit_function,
        provide_context=True,
    )

    reserve_resources = PythonOperator(
        task_id="reserve_resources",
        python_callable=reserve_resources_function,
        provide_context=True,
    )

    configure_network = PythonOperator(
        task_id="configure_network",
        python_callable=configure_network_function,
        provide_context=True,
    )

    activate_service = PythonOperator(
        task_id="activate_service",
        python_callable=activate_service_function,
        provide_context=True,
    )

    # Task dependencies
    validate_request >> fetch_physical_resources
    fetch_physical_resources >> [create_sap_a, create_sap_b]
    [create_sap_a, create_sap_b] >> create_l2_circuit
    create_l2_circuit >> reserve_resources
    reserve_resources >> configure_network
    configure_network >> activate_service

Note: It has been concluded that providing the correct version of the orchestrator greatly helps the AI assistant to produce correct outputs when it comes to code for the workflow.
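In line with the "mock first" tip from the initial plan, the step callables can be exercised locally before being wired into Airflow. A minimal dry-run sketch with hypothetical stub implementations, using a plain dict in place of Airflow's XCom, run in the same order as the task dependencies above:

```python
# Hypothetical stubs for the DAG callables; each mutates a shared context
# dict that stands in for Airflow's XCom.
def validate_request_function(ctx): ctx["validated"] = True
def fetch_physical_resources_function(ctx): ctx["ports"] = ["port-a", "port-b"]
def create_sap_function(ctx, sap_role): ctx.setdefault("saps", []).append(f"sap-{sap_role}")
def create_l2_circuit_function(ctx): ctx["circuit"] = "l2c-001"
def reserve_resources_function(ctx): ctx["state"] = "reserved"
def configure_network_function(ctx): ctx["configured"] = True
def activate_service_function(ctx): ctx["state"] = "active"

# Execute in dependency order: validate -> fetch -> SAPs -> circuit ->
# reserve -> configure -> activate.
ctx = {}
validate_request_function(ctx)
fetch_physical_resources_function(ctx)
create_sap_function(ctx, "A")
create_sap_function(ctx, "B")
create_l2_circuit_function(ctx)
reserve_resources_function(ctx)
configure_network_function(ctx)
activate_service_function(ctx)

print(ctx["state"])  # → active
```

Once the stubs behave as expected against mock inventory, each can be replaced by its real implementation without touching the DAG structure.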

In essence, the combination of vibe coding and holistic prompting allowed engineers to focus on what the workflow should achieve while the AI handled the first draft of how to realise it. The assistant also acted as an exemplary guide throughout the whole process, which is particularly valuable for less experienced engineers, who can learn quickly with this approach.

Phase 2 - Granular prompting

Once the service and workflow blueprints are in place, the next step is to work on each step of the workflow, generating and verifying the code using more granular prompting with very short loops.

During this phase, the AI assistant helps the engineer implement the workflow step by step. As in the previous phase, the interactive conversation is guided by the AI assistant and it is able to identify the correct steps to proceed.

It can provide correct guidance on:

  • setting up reusable connections to the other components in Airflow
  • separating the how to use information from the code using configuration files
  • using an operator for communicating with Maat
  • adding error handling and rollback
  • ensuring the implementation of security best practices

It is interesting to note that for standard components such as Airflow and Ansible, the AI assistant did not need any additional input to understand their capabilities and use them appropriately. For non-standard components such as Maat and LSO, the AI assistant needed extra information in order to fully understand how to define correct API calls to these components. Once provided with the JSON definition of the Resource and Service APIs for Maat, the API calls for this component and the operator developed for interacting with Maat were produced quite easily, with very few iterations. However, in the case of LSO there was no formal specification of the API available, and only examples of API calls could be provided as input to the assistant. This made producing a correct API call much more difficult, and many more iterations were required to get it to work.
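The value of a formal API specification can be seen in how directly a call can be assembled from it. A sketch of building (not sending) a Maat service-creation request; the endpoint path, base URL, and payload fields are purely hypothetical, as the real ones come from the JSON definition of Maat's Service API:

```python
from urllib.parse import urljoin

MAAT_BASE_URL = "http://maat:8080/"  # hypothetical container endpoint

def build_create_service_request(service_type: str, characteristics: dict):
    """Assemble the URL and payload for a Maat service-creation call.
    Path and field names are illustrative, not Maat's actual API."""
    url = urljoin(MAAT_BASE_URL, "api/services")
    payload = {
        "serviceType": service_type,
        "serviceCharacteristic": [
            {"name": k, "value": v} for k, v in characteristics.items()
        ],
    }
    return url, payload

url, payload = build_create_service_request("L2Circuit", {"vlanId": 210})
print(url)
```

With only example calls available, as for LSO, each of these details has to be inferred and iterated on, which explains the much longer feedback loops observed.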

The performance in this phase, however, is much lower compared to phase 1, where the outputs were always correct, sometimes only needing enhancement with additional prompts. In phase 2, the AI assistant can produce code that does not work when copy-pasted into the Git repository for the specific task of the workflow. In these cases, the error reported in the log when running the code was fed back to the assistant, which analysed it and suggested improved code. This approach worked in most cases, sometimes combined with additional prompting describing the error in more detail. Eventually, the team was able to produce working workflow code without any manual intervention or bug fixes by the engineer.