AI-Assisted Workflow Development in GP4L

Why Experiment with AI-Assisted Orchestration

One of the persistent challenges in modern network management is bridging the gap between high-level service intent (“I need a secure, low-latency connection between two sites”) and the executable orchestration workflows that actually provision and monitor that service across domains.

Traditionally, creating these workflows requires deep technical knowledge, long design cycles, and detailed understanding of each component and API involved. Every new service or specific network device variation might require days, or even weeks, of workflow engineering.

Many smaller National Research and Education Networks (NRENs), campus networks, and institutional IT groups aspire to adopt automation and orchestration, yet lack the staffing or in-house expertise that extensive manual implementation requires. Building and maintaining production-grade workflows and modular components demands time, specialised skills, and resources that small-scale environments often cannot spare.

This is one of the main reasons why the GP4L team has started experimenting with a new approach: an AI-assisted development process in which a large language model acts as a copilot for workflow engineers.

By demonstrating how AI-assisted workflow generation can lower this barrier, GP4L hopes to show that orchestration can be adopted by any team, regardless of its size or level of experience. In this sense, the work presented in this lab focuses on achieving faster capacity building in orchestration.

It offers a practical guide for smaller NRENs and institutions to modernise their operations without the steep learning curve traditionally associated with network automation frameworks.

The Approach

Rather than designing orchestration workflows manually from scratch, the orchestration designers and developers communicate the service intent, context, and policies in natural language to an LLM. The AI assistant then translates this intent into a service design and workflow scaffold, which can then be iteratively refined, verified, and operationalised.

In essence, the approach is about using “vibe coding” to develop orchestration workflows. This entails a conversational, iterative form of development that focuses on expressing intent and verifying understanding rather than on typing syntax.

The AI loop

This means that AI is used as a creative assistant capable of generating service blueprints and structured orchestration logic, as well as suggesting improvements, while all critical design and validation decisions remain with the human expert.

To achieve this, the work follows a two-step methodology, each phase supported by a distinct prompting strategy.

Phase 1 – From Intent to Workflow Scaffold (“Holistic Prompting”)

  • The engineer provides a holistic prompt: a complete, context-rich description of the service intent, environment, policies, and technical boundaries.
  • The AI interprets this as a design challenge, producing a high-level orchestration plan, a dependency graph, and an executable workflow scaffold with placeholder tasks and explanatory comments.
  • The holistic prompt enables the AI to view the system as a whole, thus ensuring the resulting workflow reflects the architecture, standards, and data model defined in Maat.
  • This mirrors an architect’s first design pass: broad, integrated, and policy-aware.
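To make the idea of a scaffold concrete, the sketch below shows the kind of output a holistic prompt might yield: an ordered set of placeholder tasks, each with an explanatory comment, wired into a minimal workflow runner. All task and function names here are hypothetical illustrations, not the actual GP4L codebase.

```python
# Illustrative Phase 1 output: placeholder tasks plus an ordered workflow.
# Names (validate_intent, reserve_endpoints, ...) are hypothetical.
from typing import Callable

def validate_intent(state: dict) -> dict:
    # TODO: check the service intent against the service data model
    state["intent_valid"] = True
    return state

def reserve_endpoints(state: dict) -> dict:
    # TODO: call the inventory API to reserve both endpoints
    state["endpoints"] = ["site-a", "site-b"]
    return state

def provision_connection(state: dict) -> dict:
    # TODO: push configuration to the network controller
    state["status"] = "provisioned"
    return state

# The scaffold wires the placeholder tasks into an ordered workflow.
WORKFLOW: list[Callable[[dict], dict]] = [
    validate_intent,
    reserve_endpoints,
    provision_connection,
]

def run_workflow(intent: dict) -> dict:
    state = dict(intent)
    for step in WORKFLOW:
        state = step(state)  # each step receives and returns shared state
    return state
```

Each placeholder body is then filled in during Phase 2, one task at a time.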

Phase 2 – Iterative Refinement and Validation (“Granular Prompting”)

  • Once the scaffold exists, the process shifts to granular prompting.
  • Here, the engineer and AI focus on one function, one API call, or one step at a time.
  • The human provides precise input, such as a requirement, a log excerpt, an error trace, or an API snippet.
  • The AI proposes a minimal patch or enhancement.
  • Each update is reviewed and tested before integration.
  • This keeps evolution incremental, auditable, and safe.
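A single granular-prompting round might look like the sketch below: the engineer pastes an error trace (say, a `KeyError` on a missing field), and the AI proposes a minimal patch to one task only. The function, field names, and default value are illustrative assumptions, not taken from GP4L.

```python
# One granular patch to one task: validate input instead of failing on it.
def set_bandwidth(state: dict) -> dict:
    # Before the patch, the field was read directly and raised
    # KeyError when absent; the patch validates and applies a default.
    bandwidth = state.get("intent", {}).get("bandwidth")
    if bandwidth is None:
        bandwidth = 1_000  # illustrative default, in Mbit/s
    if not isinstance(bandwidth, int) or bandwidth <= 0:
        raise ValueError(f"invalid bandwidth: {bandwidth!r}")
    state["bandwidth_mbps"] = bandwidth
    return state
```

Because the change is confined to one function, it can be reviewed and unit-tested in isolation before being merged back into the workflow.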

This approach is part of GP4L’s broader vision to make network automation declarative, transparent, and explainable, reducing engineering overhead while maintaining strict human oversight. If service logic can be expressed in clear intent statements and then scaffolded automatically under human supervision, smaller teams can achieve results that previously required large engineering departments.

Design requirements

Before diving into vibe coding, the GP4L team defined clear technical and architectural requirements for this explorative work:

  1. Standards-compliant workflows and services
    • All artefacts must align with TM Forum Open Digital Architecture.
    • This ensures that any generated workflow can interoperate with other TMF-compliant platforms and be shared without modification.
  2. Reusable
    • Generated components must be written in a way that allows them to be used as templates for other services or environments.
    • The focus is on creating blueprints and patterns.
  3. Modular
    • Each task in the workflow is isolated, self-contained, and replaceable.
    • This modularity supports version control, unit testing, and parallel development of different components.
  4. Easily adaptable
    • Configuration parameters, API endpoints, and credentials must remain external to the code so that the same workflow can be deployed in other testbeds or production environments with minimal adjustment.
  5. Human-in-the-loop
    • Every AI-generated output requires explicit human review and approval before deployment.

In the next section, we describe the example environment used for our tests and how we implemented the first phase of our approach.