
Configure the environment for GP4L service provisioning

Prepare the GP4L digital twin in Containerlab

Step 1. Create gp4l Docker network

sudo docker network create --subnet=172.16.26.0/24 gp4l

Step 2. Deploy GP4L Containerlab topology

Copy containerlab/gp4l.clab.yml from the provided repo into the freeRtr-containerlab directory, then deploy:

cd freeRtr-containerlab
sudo containerlab deploy --topo gp4l.clab.yml
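
Once the lab is up, the management addresses used in the telnet step below can be read from containerlab. A sketch of extracting them from the JSON output (the JSON field names here are assumptions based on containerlab's inspect output; verify against your version):

```shell
# After deployment, list the nodes and their management (OOB) addresses:
#   sudo containerlab inspect --topo gp4l.clab.yml
# The same data is available as JSON with '-f json'; a helper to pull out
# the management IPs from it:
extract_mgmt_ips() {
  python3 -c '
import json, sys
doc = json.load(sys.stdin)
for node in doc.get("containers", []):
    print(node.get("name"), node.get("ipv4_address"))
'
}
# Usage: sudo containerlab inspect --topo gp4l.clab.yml -f json | extract_mgmt_ips
```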

Step 3. Apply configuration to GP4L topology

Using the .txt files in the containerlab/ directory of the provided repo, apply the corresponding configuration to each of the four switches (repeat the steps below for each switch):

  • Log in to the switch using telnet, e.g. telnet 172.16.26.151. (Get each switch's actual OOB IP address from containerlab, e.g. with sudo containerlab inspect --topo gp4l.clab.yml.)
  • Do the following twice to apply the configuration (twice is needed because some configuration lines depend on others having already been applied):
    • Enter conf t
    • Paste the configuration
    • Press Enter.
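
The paste-twice procedure can be scripted. A sketch (the config file name is a placeholder; driving telnet non-interactively may need pacing, so pasting by hand as described above remains the reliable path):

```shell
# Hypothetical helper: build the payload that gets pasted into a switch,
# repeated twice to satisfy the configuration dependencies noted above.
build_payload() {          # $1 = path to the switch's .txt config file
  for pass in 1 2; do
    echo "conf t"          # enter configuration mode
    cat "$1"               # the configuration itself
    echo                   # the final Enter
  done
}
# Example (file name and IP are placeholders):
#   build_payload containerlab/sw1.txt | telnet 172.16.26.151
```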

Step 4. Configure HTTP API in GP4L devices

curl -X POST -H "Content-Type: application/json" -d @/home/ubuntu/gp4l_service_provisioning/lso/http_api.json http://localhost:8000/api/playbook/ -v

This runs the playbook specified by playbook_name in lso/http_api.json, which is configure_http_api.

The result can be traced through the HTTP server, or by inspecting the LSO container logs:

sudo docker container logs $(sudo docker container ls | grep "lso:latest" | awk '{print $1}')

Populate Maat

Populate Maat with the physical resources of the four GP4L switches. From the maat directory in the provided repo run:

sudo apt install jq
./post_gp4l_topology.sh

Note that the schemas must already be running for this step to succeed.

You can now view the current resources from Maat using:

curl http://127.0.0.1:8080/resourceInventoryManagement/v4.0.0/resource | python3 -m json.tool

To view the services use:

curl http://127.0.0.1:8080/serviceInventoryManagement/v4.0.0/service | python3 -m json.tool
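
The raw dumps above can be long. A small filter can summarize them; a sketch, assuming the TMF-style list payloads Maat returns (objects with id and name fields; adjust if your instance differs):

```shell
# Summarize a TMF-style inventory list: print the entry count, then one
# name (or id) per line. Field names are assumptions; adjust as needed.
summarize() {
  python3 -c '
import json, sys
items = json.load(sys.stdin)
print(f"entries: {len(items)}")
for item in items:
    print(item.get("name", item.get("id", "?")))
'
}
# e.g. for resources:
#   curl -s http://127.0.0.1:8080/resourceInventoryManagement/v4.0.0/resource | summarize
echo '[{"id":"1","name":"sw1-eth1"}]' | summarize   # prints "entries: 1" then "sw1-eth1"
```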

If needed, you can delete everything from Maat. From the maat directory run:

python3 delete_all_resources.py to delete all resources, and

python3 delete_all_services.py to delete all services.

Configure and launch Airflow

Step 1. Configure Airflow

Edit airflow.cfg in the Airflow installation directory. Find the following parameters in this file and edit them as follows:

web_server_port = 8081
access_control_allow_headers = *
access_control_allow_methods = POST, GET, OPTIONS, DELETE
access_control_allow_origins = *
auth_backends = airflow.api.auth.backend.basic_auth
The port change from 8080 to 8081 avoids a clash with the Maat container's port.
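
The edits above can also be applied non-interactively. A sketch using sed (the helper name is ours; point it at your airflow.cfg):

```shell
# Apply the Airflow settings listed above, in place. $1 = path to airflow.cfg.
configure_airflow_cfg() {
  sed -i \
    -e 's|^web_server_port = .*|web_server_port = 8081|' \
    -e 's|^access_control_allow_headers = .*|access_control_allow_headers = *|' \
    -e 's|^access_control_allow_methods = .*|access_control_allow_methods = POST, GET, OPTIONS, DELETE|' \
    -e 's|^access_control_allow_origins = .*|access_control_allow_origins = *|' \
    -e 's|^auth_backends = .*|auth_backends = airflow.api.auth.backend.basic_auth|' \
    "$1"
}
# Usage: configure_airflow_cfg ~/airflow/airflow.cfg
```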

Step 2. Copy the DAGs to the Airflow directory

From the airflow/dags directory in the provided repo run:

mkdir -p ~/airflow/dags
cp * ~/airflow/dags

Alternatively, set the AIRFLOW_HOME environment variable to the repo's airflow directory, so that Airflow picks up its dags/ subdirectory.
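
As a sketch of the no-copy alternative, Airflow's standard AIRFLOW__SECTION__KEY environment overrides can point the DAG folder straight at the checkout (the path is an example based on the repo path used earlier; adjust to where you cloned it):

```shell
# Point Airflow at the repo's DAG directory without copying anything.
# The checkout path below is an example.
export AIRFLOW__CORE__DAGS_FOLDER=/home/ubuntu/gp4l_service_provisioning/airflow/dags
```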

Step 3. Launch Airflow

The following command initializes the database and prints the user/password it creates, which you will need to log in later. Leave it running.

airflow standalone

You can restart Airflow (stop and rerun the command above) at any point to pick up changes in the airflow/dags directory.

Install, configure, and launch Maat GUI

This step is optional: service provisioning workflows can be triggered from either the Airflow UI or the Maat UI. A Maat UI image adapted for GP4L is used.

First, clone the Maat GUI repository:

git clone ssh://git@bitbucket.software.geant.org:7999/ossbss/maat-gui.git

In the cloned repository, edit the auth settings in src/inventory3-ui/containers/gp4lL2CircuitForm.jsx (line 168) and src/inventory3-ui/containers/gp4lL2CircuitRemoveForm.jsx (line 56) to match your Airflow environment. An Airflow user with permission to trigger DAGs, other than the default admin user, is needed.

Then inside the docker/ directory of the cloned repository, run sudo make build to build the Maat GUI image.

Finally, from inside the bin/ directory of the cloned repository, run sudo ./production up to start the container adapted for GP4L.