AI-Assisted Workflows: Experimental Setup¶
To explore AI-assisted orchestration in a realistic setting, we implemented the approach in a representative digital twin of the GP4L lab environment, corresponding to the setup described in *Orchestrated service provisioning in GP4L*.
Core Components¶

The experimental environment consists of the following building blocks:
- Maat as the single source of truth
- Maat holds the service and resource inventories, including endpoints, logical services, and relationships to underlying devices. For this experiment we were interested in logical services (L2 circuits) and their mapping to device ports and VLANs, as well as basic service constraints (capacity, QoS requirements).
- Airflow as orchestration engine
- Apache Airflow is used to execute the workflows generated with the help of the AI assistant. Every experiment produces one or more Airflow DAGs that interact with Maat to validate and retrieve device/service data, or call other APIs to provision a service.
- GP4L digital twin / lab topology
- The workflows are tested against a small but realistic GP4L lab topology using ContainerLab.
- AI assistant environment
- The LLM is accessed through an interactive environment (chat-style interface), where the engineer provides prompts, context, and constraints. Relevant snippets of Maat’s data model, API specs, and existing DAG examples are provided as prompt context. The AI returns service designs, workflow scaffolds, and incremental patches.
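As a rough illustration of the kind of data the workflows read from Maat, the sketch below models an L2-circuit inventory record and its mapping to device ports in plain Python; the field names and structure are our own assumptions for illustration, not Maat's actual schema or API.

```python
# Hypothetical sketch of an L2-circuit inventory record and lookup helpers.
# Field names are illustrative only, NOT Maat's actual data model.
INVENTORY = {
    "l2-circuit-001": {
        "type": "l2-point-to-point",
        "endpoints": [
            {"device": "site-a-sw1", "port": "eth1", "vlan": 100},
            {"device": "site-b-sw1", "port": "eth3", "vlan": 100},
        ],
        "constraints": {"capacity_mbps": 1000, "qos_class": "best-effort"},
    }
}


def lookup_service(service_id: str) -> dict:
    """Return the inventory record for a logical service (raises KeyError if absent)."""
    return INVENTORY[service_id]


def device_ports(service_id: str) -> list[tuple[str, str]]:
    """List the (device, port) pairs backing a logical service."""
    return [(ep["device"], ep["port"]) for ep in lookup_service(service_id)["endpoints"]]
```

In the real setup these lookups are HTTP calls against Maat's API rather than an in-memory dictionary, but the shape of the question — "which device ports and VLANs realize this logical service?" — is the same.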
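The generated Airflow DAGs are, at their core, ordered task graphs. The dependency structure of such a workflow can be sketched with the standard library alone; the task names below are hypothetical examples, and a real generated DAG would express the same dependencies with Airflow operators.

```python
from graphlib import TopologicalSorter

# Hypothetical task graph for an L2-circuit provisioning workflow.
# Each key maps a task to the set of tasks that must complete before it.
deps = {
    "validate_inputs": set(),
    "fetch_inventory": {"validate_inputs"},       # query Maat for ports/VLANs
    "reserve_vlans": {"fetch_inventory"},
    "configure_endpoint_a": {"reserve_vlans"},
    "configure_endpoint_b": {"reserve_vlans"},
    "verify_service": {"configure_endpoint_a", "configure_endpoint_b"},
}

# A valid execution order respecting all dependencies.
order = list(TopologicalSorter(deps).static_order())
```

Airflow schedules such a graph itself, running independent tasks (like the two endpoint configurations) in parallel; the topological sort here only makes the ordering constraints explicit.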
The Example Service¶
For the first round of experiments, we focused on a single, well-defined use case:
Provision a Layer 2 point-to-point circuit between two sites.
This service is simple enough to be understandable at a glance, but rich enough to capture all the important orchestration steps. Keeping the use case fixed allowed us to clearly observe what the AI does well, where it needs more guidance, and how reusable the resulting workflows are.
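For concreteness, the intent of such a circuit can be captured in a small data structure; the fields and validation rules below are our own illustration, not a GP4L or Maat schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class L2CircuitIntent:
    """Intent for a point-to-point Layer 2 circuit (illustrative fields only)."""
    site_a: str
    site_b: str
    vlan: int
    capacity_mbps: int

    def validate(self) -> None:
        """Reject obviously malformed intents before any orchestration starts."""
        if self.site_a == self.site_b:
            raise ValueError("endpoints must be distinct sites")
        if not 1 <= self.vlan <= 4094:
            raise ValueError("VLAN ID out of range")


intent = L2CircuitIntent(site_a="site-a", site_b="site-b", vlan=100, capacity_mbps=1000)
intent.validate()
```

An intent like this is what the holistic prompt describes in prose, and what the generated workflow must ultimately realize on the devices.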
Workflow of an Experiment¶
Each experiment follows the same loop:
- Prepare context, where we provide the AI assistant with:
- A short description of the network
- A description of all available components that should be used
- A description of the requirements and policies, such as standards and rules
- Run Phase 1: holistic prompt → initial scaffold
- The engineer writes a holistic prompt describing the service intent, constraints, and environment.
- The AI responds with a high-level orchestration plan (steps and dependencies), a first Airflow DAG containing placeholder tasks and comments, and a service design blueprint.
- Deploy and parse the scaffold
- The generated DAG is placed into the Airflow environment and checked for import/parse errors.
- Iterate Phase 2
- Using granular prompts, the engineer and AI then refine one task at a time, testing each change in the digital twin until the workflow can successfully provision and tear down the L2 service end-to-end.
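The Phase 2 loop above can be sketched as iterating over tasks, applying one refinement at a time and re-testing in the twin. The `refine` and `run_in_twin` helpers below are hypothetical stand-ins for the prompt-and-test cycle, with canned behavior so the control flow is visible.

```python
def refine(task: str, attempt: int) -> str:
    """Hypothetical stand-in: ask the AI for a patched implementation of one task."""
    return f"{task}-v{attempt}"


def run_in_twin(implementation: str) -> bool:
    """Hypothetical stand-in: deploy the patched task to the digital twin and test it.
    Here we pretend the second attempt always passes."""
    return implementation.endswith("-v2")


def refine_workflow(tasks: list[str], max_attempts: int = 3) -> dict[str, str]:
    """Refine one task at a time until each passes in the twin."""
    accepted = {}
    for task in tasks:
        for attempt in range(1, max_attempts + 1):
            candidate = refine(task, attempt)
            if run_in_twin(candidate):
                accepted[task] = candidate
                break
        else:
            raise RuntimeError(f"{task} did not converge in {max_attempts} attempts")
    return accepted


result = refine_workflow(["reserve_vlans", "configure_endpoints", "verify_service"])
```

The key property of the real loop is the same as in this sketch: each granular prompt changes exactly one task, and a change is only accepted once it passes in the digital twin.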
This controlled setup provides a repeatable testbed for evaluating AI-assisted orchestration: the service, topology, and tools remain fixed, so differences in outcomes can be attributed to the prompting strategy and refinement process rather than to changing infrastructure.