NVIDIA Air Demo

Overview

Env Diagram

Guide

  1. Using NVIDIA Air 2.0 (air-ngc.nvidia.com), create a new simulation from the topology.json file in this repository, keeping ZTP disabled
    • Keep OOB enabled and start the simulation
  2. Once the simulation is active, enable SSH in the Services tab
    • The provided access information lets you SSH to the Hedgehog Fabric control node, from which you can SSH to all switches and servers by hostname
    • SSH keys are automatically provisioned
    • Use the admin username for switches (password: HHFab.Admin!)
    • Use the ubuntu username for servers (password: nvidia)
    • e.g. ssh admin@leaf-su00-r0 or ssh ubuntu@server-su00-n00
  3. Install Hedgehog Fabric
    • SSH to the control node (e.g. ssh -p 22176 ubuntu@dc5d2f73.workers.ngc.air.nvidia.com); the default password is nvidia
    • Clone this repository
    • Prepare the node for installing Hedgehog Fabric by running ./0_prepare_control.sh
    • Log out and back in (to pick up the updated PATH and hostname)
    • Install Hedgehog Fabric by running ./1_install_control.sh; it installs K8s and a bundle of other software, downloading about 1GB of artifacts, so it can take 10-20 minutes to complete
    • You should see INF Control node installation complete when it's done
    • If it fails, run /opt/bin/k3s-uninstall.sh and then re-run ./1_install_control.sh
    • Run ./2_setup_servers.sh to configure rail IPs on all servers
  4. Wait for switches to get provisioned
    • The switches discover the control node and perform ZTP through DHCP, so it may take another 10-15 minutes before they are ready
    • Check switch status with kubectl get ag and wait until the APPLIEDG column equals the CURRENTG column for every switch
  5. Naming/IPs
    • spines: spine-s[spine]
      • e.g. spine-s00
    • leafs: leaf-su[SU]-r[rail]
      • e.g. leaf-su00-r0
    • servers: server-su[SU]-n[serverInSU]
      • e.g. server-su00-n00
    • each VPC gets a /13 subnet (one /16 per rail)
      • e.g. vpc-00 gets 10.0.0.0/13, with 10.5.0.0/16 as the rail-5 subnet
    • rail IPs: 10.[VPC&rail].[SU].[server]/31
      • where VPC&rail is the /16 index from the previous step - e.g. 5 is vpc-00/rail-5 and 8 is vpc-01/rail-0
      • e.g. 10.5.0.0/31 - vpc-00/rail-5, server-su00-n00/eth6
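The addressing scheme above can be expressed as a small helper. This is a sketch for illustration only (rail_ip is a hypothetical function, not part of this repository):

```shell
# Compute a server's /31 rail IP per the scheme above:
# second octet = vpc * 8 + rail, third octet = SU, fourth octet = server.
# Hypothetical helper, not part of this repository.
rail_ip() {
  local vpc=$1 rail=$2 su=$3 server=$4
  echo "10.$(( vpc * 8 + rail )).${su}.${server}/31"
}

rail_ip 0 5 0 0   # server-su00-n00/eth6 -> 10.5.0.0/31
rail_ip 0 5 1 0   # server-su01-n00/eth6 -> 10.5.1.0/31
```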

Summary for control node

Run on the node:

git clone https://github.com/githedgehog/nvidia-air-demo
cd nvidia-air-demo
./0_prepare_control.sh

Relogin and run:

cd nvidia-air-demo
./1_install_control.sh
./2_setup_servers.sh

Example: ready switches

ubuntu@control-1:~/nvidia-air-demo$ kubectl get ag
NAME           ROLE          DESCR   APPLIED   APPLIEDG   CURRENTG   VERSION    REBOOTREQ
leaf-su00-r0   server-leaf           14m       12         12         v0-air-4   
leaf-su00-r1   server-leaf           8m9s      12         12         v0-air-4   
leaf-su00-r2   server-leaf           16m       13         13         v0-air-4   
leaf-su00-r3   server-leaf           23m       13         13         v0-air-4   
leaf-su01-r0   server-leaf           17m       19         19         v0-air-4   
leaf-su01-r1   server-leaf           12m       19         19         v0-air-4   
leaf-su01-r2   server-leaf           20m       19         19         v0-air-4   
leaf-su01-r3   server-leaf           25m       19         19         v0-air-4   
spine-s00      spine                 10m       9          9          v0-air-4   
spine-s01      spine                 9m4s      9          9          v0-air-4 
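While waiting, switches that are still converging can be spotted by filtering the kubectl get ag output for rows where APPLIEDG and CURRENTG differ. A sketch, assuming the DESCR and REBOOTREQ columns are empty as in the output above (so the generations land in whitespace-split fields 4 and 5):

```shell
# awk program: print switches whose applied generation (field 4) lags the
# current generation (field 5). Assumes empty DESCR/REBOOTREQ columns.
# Usage: kubectl get ag --no-headers | awk "$pending"
pending='$4 != $5 { print $1 " pending (" $4 "/" $5 ")" }'

# Demo on a captured row from a still-converging switch:
printf 'leaf-su00-r0 server-leaf 14m 11 12 v0-air-4\n' | awk "$pending"
# -> leaf-su00-r0 pending (11/12)
```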

Example: connectivity between servers in different SUs

ubuntu@control-1:~$ ssh server-su00-n00 "ip a | grep /31"
    inet 10.0.0.0/31 scope global eth1
    inet 10.1.0.0/31 scope global eth2
    inet 10.2.0.0/31 scope global eth3
    inet 10.3.0.0/31 scope global eth4
    inet 10.4.0.0/31 scope global eth5
    inet 10.5.0.0/31 scope global eth6
    inet 10.6.0.0/31 scope global eth7
    inet 10.7.0.0/31 scope global eth8
ubuntu@control-1:~$ ssh server-su01-n00 "ip a | grep /31"
    inet 10.0.1.0/31 scope global eth1
    inet 10.1.1.0/31 scope global eth2
    inet 10.2.1.0/31 scope global eth3
    inet 10.3.1.0/31 scope global eth4
    inet 10.4.1.0/31 scope global eth5
    inet 10.5.1.0/31 scope global eth6
    inet 10.6.1.0/31 scope global eth7
    inet 10.7.1.0/31 scope global eth8
ubuntu@control-1:~/nvidia-air-demo$ ssh server-su00-n00 "ping -c 2 10.5.1.0"
PING 10.5.1.0 (10.5.1.0) 56(84) bytes of data.
64 bytes from 10.5.1.0: icmp_seq=1 ttl=62 time=2.50 ms
64 bytes from 10.5.1.0: icmp_seq=2 ttl=62 time=1.70 ms

--- 10.5.1.0 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.697/2.097/2.497/0.400 ms
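The single ping above can be extended to all eight rails. A minimal dry-run sketch that only prints the commands (remove the echo to execute them); per the scheme above, server-su01-n00's vpc-0 addresses are 10.[rail].1.0:

```shell
# Ping server-su01-n00 on every rail from server-su00-n00.
# Dry run: only prints the commands; remove 'echo' to execute.
for rail in 0 1 2 3 4 5 6 7; do
  echo ssh server-su00-n00 "ping -c 1 10.${rail}.1.0"
done
```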

Example: Tenants (VPCs) are Isolated

server-su00-n00 is in vpc-0 and server-su00-n02 is in vpc-1. Both servers are in the same SU but in different tenants, so they are isolated.

ubuntu@server-su00-n00:~$ ip -oneline a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host noprefixroute \       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.200.100/24 metric 100 brd 192.168.200.255 scope global dynamic eth0\       valid_lft 572sec preferred_lft 572sec
2: eth0    inet6 fe80::e20:12ff:fefe:100/64 scope link \       valid_lft forever preferred_lft forever
3: eth1    inet 10.0.0.0/31 scope global eth1\       valid_lft forever preferred_lft forever
4: eth2    inet 10.1.0.0/31 scope global eth2\       valid_lft forever preferred_lft forever
5: eth3    inet 10.2.0.0/31 scope global eth3\       valid_lft forever preferred_lft forever
6: eth4    inet 10.3.0.0/31 scope global eth4\       valid_lft forever preferred_lft forever
7: eth5    inet 10.4.0.0/31 scope global eth5\       valid_lft forever preferred_lft forever
8: eth6    inet 10.5.0.0/31 scope global eth6\       valid_lft forever preferred_lft forever
9: eth7    inet 10.6.0.0/31 scope global eth7\       valid_lft forever preferred_lft forever
10: eth8    inet 10.7.0.0/31 scope global eth8\       valid_lft forever preferred_lft forever
ubuntu@server-su00-n02:~$ ip -o a
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
1: lo    inet6 ::1/128 scope host noprefixroute \       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.200.102/24 metric 100 brd 192.168.200.255 scope global dynamic eth0\       valid_lft 1291sec preferred_lft 1291sec
2: eth0    inet6 fe80::e20:12ff:fefe:102/64 scope link \       valid_lft forever preferred_lft forever
3: eth1    inet 10.8.0.0/31 scope global eth1\       valid_lft forever preferred_lft forever
4: eth2    inet 10.9.0.0/31 scope global eth2\       valid_lft forever preferred_lft forever
5: eth3    inet 10.10.0.0/31 scope global eth3\       valid_lft forever preferred_lft forever
6: eth4    inet 10.11.0.0/31 scope global eth4\       valid_lft forever preferred_lft forever
7: eth5    inet 10.12.0.0/31 scope global eth5\       valid_lft forever preferred_lft forever
8: eth6    inet 10.13.0.0/31 scope global eth6\       valid_lft forever preferred_lft forever
9: eth7    inet 10.14.0.0/31 scope global eth7\       valid_lft forever preferred_lft forever
10: eth8    inet 10.15.0.0/31 scope global eth8\       valid_lft forever preferred_lft forever

Attempt a ping

ubuntu@server-su00-n00:~$ ping -c 5 10.8.0.0
PING 10.8.0.0 (10.8.0.0) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
From 10.0.0.1 icmp_seq=4 Destination Host Unreachable

--- 10.8.0.0 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4134ms
ubuntu@server-su00-n02:~$ ping -c 5 10.0.0.0
PING 10.0.0.0 (10.0.0.0) 56(84) bytes of data.
From 10.8.0.1 icmp_seq=1 Destination Host Unreachable
From 10.8.0.1 icmp_seq=2 Destination Host Unreachable
From 10.8.0.1 icmp_seq=3 Destination Host Unreachable
From 10.8.0.1 icmp_seq=4 Destination Host Unreachable

--- 10.0.0.0 ping statistics ---
5 packets transmitted, 0 received, +4 errors, 100% packet loss, time 4089ms

Example: inspecting switch configuration

ubuntu@control-1:~/nvidia-air-demo$ ssh admin@leaf-su00-r0 "nv show qos roce"
Welcome to NVIDIA Cumulus (R) Linux (R)
                    operational  applied 
------------------  -----------  --------
state               enabled      enabled 
mode                lossless     lossless
...

Example: route summarization

admin@leaf-su01-r3:mgmt:~$ nv show vrf vpc-0 router bgp address-family ipv4-unicast route brief 

PathCount - Number of paths present for the prefix, MultipathCount - Number of
paths that are part of the ECMP, DestFlags - * - bestpath-exists, w - fib-wait-
for-install, s - fib-suppress, i - fib-installed, x - fib-install-failed

Prefix       PathCount  MultipathCount  DestFlags
-----------  ---------  --------------  ---------
10.0.0.0/24  2          1               *        
10.0.1.0/24  2          1               *        
10.1.0.0/24  2          1               *        
10.1.1.0/24  2          1               *        
10.2.0.0/24  2          1               *        
10.2.1.0/24  2          1               *        
10.3.0.0/24  2          1               *        
10.3.1.0/24  1          1               *        
10.3.1.0/31  1          1               *        
10.3.1.2/31  1          1               *        
10.4.0.0/24  2          1               *        
10.4.1.0/24  2          1               *        
10.5.0.0/24  2          1               *        
10.5.1.0/24  2          1               *        
10.6.0.0/24  2          1               *        
10.6.1.0/24  2          1               *        
10.7.0.0/24  2          1               *        
10.7.1.0/24  1          1               *        
10.7.1.0/31  1          1               *        
10.7.1.2/31  1          1               *        

admin@leaf-su01-r3:mgmt:~$ nv show vrf vpc-0 router rib ipv4 route brief

Flags - * - selected, q - queued, o - offloaded, i - installed, S - fib-
selected, x - failed

Route        Protocol   Distance  Uptime   NHGId  Metric  Flags
-----------  ---------  --------  -------  -----  ------  -----
0.0.0.0/0    kernel     255       0:27:41  42     8192    *Si  
10.0.0.0/24  bgp        20        0:26:24  122    0       *Si  
10.0.1.0/24  bgp        20        0:27:30  104    0       *Si  
10.1.0.0/24  bgp        20        0:27:30  101    0       *Si  
10.1.1.0/24  bgp        20        0:27:30  105    0       *Si  
10.2.0.0/24  bgp        20        0:27:30  102    0       *Si  
10.2.1.0/24  bgp        20        0:27:27  109    0       *Si  
10.3.0.0/24  bgp        20        0:27:30  103    0       *Si  
10.3.1.0/24  bgp        200       0:27:36  54     0       *Si  
10.3.1.0/31  connected  0         0:27:41  33     0       *Si  
10.3.1.1/32  local      0         0:27:41  33     0       *Si  
10.3.1.2/31  connected  0         0:27:41  35     0       *Si  
10.3.1.3/32  local      0         0:27:41  35     0       *Si  
10.4.0.0/24  bgp        20        0:26:24  122    0       *Si  
10.4.1.0/24  bgp        20        0:27:30  104    0       *Si  
10.5.0.0/24  bgp        20        0:27:30  101    0       *Si  
10.5.1.0/24  bgp        20        0:27:30  105    0       *Si  
10.6.0.0/24  bgp        20        0:27:30  102    0       *Si  
10.6.1.0/24  bgp        20        0:27:27  109    0       *Si  
10.7.0.0/24  bgp        20        0:27:30  103    0       *Si  
10.7.1.0/24  bgp        200       0:27:36  54     0       *Si  
10.7.1.0/31  connected  0         0:27:41  34     0       *Si  
10.7.1.1/32  local      0         0:27:41  34     0       *Si  
10.7.1.2/31  connected  0         0:27:41  36     0       *Si  
10.7.1.3/32  local      0         0:27:41  36     0       *Si  

admin@leaf-su01-r3:mgmt:~$ nv show int | grep 10
swp9         up            up           1G     9216   swp       server-su01-n00         0c:20:12:fe:01:4f  IPv4 Address:                  10.3.1.1/31
swp10        up            up           1G     9216   swp       server-su01-n00         0c:20:12:fe:01:57  IPv4 Address:                  10.7.1.1/31
swp11        up            up           1G     9216   swp       server-su01-n01         0c:20:12:fe:01:5f  IPv4 Address:                  10.3.1.3/31
swp12        up            up           1G     9216   swp       server-su01-n01         0c:20:12:fe:01:67  IPv4 Address:                  10.7.1.3/31
swp13        up            up           1G     9216   swp       server-su01-n02         0c:20:12:fe:01:6f  IPv4 Address:                 10.11.1.1/31
swp14        up            up           1G     9216   swp       server-su01-n02         0c:20:12:fe:01:77  IPv4 Address:                 10.15.1.1/31
swp15        up            up           1G     9216   swp       server-su01-n03         0c:20:12:fe:01:7f  IPv4 Address:                 10.11.1.3/31
swp16        up            up           1G     9216   swp       server-su01-n03         0c:20:12:fe:01:87  IPv4 Address:                 10.15.1.3/31

Move a Server from vpc-0 to vpc-1

Every logical connection from a server is represented as a Connection object. For a server to be attached to a VPC, each of its Connection objects must be attached to the VPC via a VPCAttachment. To see the Connection objects associated with a server:

ubuntu@control-1:~/nvidia-air-demo$ kubectl get connections | grep su00-n00
server-su00-n00-r0-leaf-su00-r0   unbundled   52m
server-su00-n00-r1-leaf-su00-r1   unbundled   52m
server-su00-n00-r2-leaf-su00-r2   unbundled   52m
server-su00-n00-r3-leaf-su00-r3   unbundled   52m
server-su00-n00-r4-leaf-su00-r0   unbundled   52m
server-su00-n00-r5-leaf-su00-r1   unbundled   52m
server-su00-n00-r6-leaf-su00-r2   unbundled   52m
server-su00-n00-r7-leaf-su00-r3   unbundled   52m

The connections are currently attached to vpc-0 in the default subnet, vpc-0/default:

ubuntu@control-1:~/nvidia-air-demo$ kubectl get vpcattachments | grep su00-n00
vpc-0-server-su00-n00-r0-leaf-su00-r0   vpc-0/default   server-su00-n00-r0-leaf-su00-r0                57m
vpc-0-server-su00-n00-r1-leaf-su00-r1   vpc-0/default   server-su00-n00-r1-leaf-su00-r1                57m
vpc-0-server-su00-n00-r2-leaf-su00-r2   vpc-0/default   server-su00-n00-r2-leaf-su00-r2                57m
vpc-0-server-su00-n00-r3-leaf-su00-r3   vpc-0/default   server-su00-n00-r3-leaf-su00-r3                57m
vpc-0-server-su00-n00-r4-leaf-su00-r0   vpc-0/default   server-su00-n00-r4-leaf-su00-r0                57m
vpc-0-server-su00-n00-r5-leaf-su00-r1   vpc-0/default   server-su00-n00-r5-leaf-su00-r1                57m
vpc-0-server-su00-n00-r6-leaf-su00-r2   vpc-0/default   server-su00-n00-r6-leaf-su00-r2                57m
vpc-0-server-su00-n00-r7-leaf-su00-r3   vpc-0/default   server-su00-n00-r7-leaf-su00-r3                57m

To move server su00-n00 from vpc-0 to vpc-1:

  1. Delete the existing VPCAttachments. Copy the following YAML to the control node as old_attachments.yaml, then run kubectl delete -f old_attachments.yaml:
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.0.0.0/31
  name: vpc-0-server-su00-n00-r0-leaf-su00-r0
  namespace: default
spec:
  connection: server-su00-n00-r0-leaf-su00-r0
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.1.0.0/31
  name: vpc-0-server-su00-n00-r1-leaf-su00-r1
  namespace: default
spec:
  connection: server-su00-n00-r1-leaf-su00-r1
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.2.0.0/31
  name: vpc-0-server-su00-n00-r2-leaf-su00-r2
  namespace: default
spec:
  connection: server-su00-n00-r2-leaf-su00-r2
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.3.0.0/31
  name: vpc-0-server-su00-n00-r3-leaf-su00-r3
  namespace: default
spec:
  connection: server-su00-n00-r3-leaf-su00-r3
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.4.0.0/31
  name: vpc-0-server-su00-n00-r4-leaf-su00-r0
  namespace: default
spec:
  connection: server-su00-n00-r4-leaf-su00-r0
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.5.0.0/31
  name: vpc-0-server-su00-n00-r5-leaf-su00-r1
  namespace: default
spec:
  connection: server-su00-n00-r5-leaf-su00-r1
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.6.0.0/31
  name: vpc-0-server-su00-n00-r6-leaf-su00-r2
  namespace: default
spec:
  connection: server-su00-n00-r6-leaf-su00-r2
  subnet: vpc-0/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.7.0.0/31
  name: vpc-0-server-su00-n00-r7-leaf-su00-r3
  namespace: default
spec:
  connection: server-su00-n00-r7-leaf-su00-r3
  subnet: vpc-0/default
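As an alternative to pasting the YAML, the same eight attachments can be deleted by name, since rail r always connects to leaf-su00-r(r mod 4) in the listing above. A dry-run sketch that only prints the commands:

```shell
# Delete the eight vpc-0 attachments for server-su00-n00 by name.
# Rail r connects to leaf-su00-r(r % 4), matching the listing above.
# Dry run: only prints the commands; remove 'echo' to execute.
for rail in 0 1 2 3 4 5 6 7; do
  echo kubectl delete vpcattachment \
    "vpc-0-server-su00-n00-r${rail}-leaf-su00-r$(( rail % 4 ))"
done
```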
  2. After the VPCAttachments have been deleted, attach the connections to the new VPC, vpc-1. As above, copy the following YAML to the control node as new_attachments.yaml, then run kubectl create -f new_attachments.yaml:
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.8.0.4/31
  name: vpc-1-server-su00-n00-r0-leaf-su00-r0
  namespace: default
spec:
  connection: server-su00-n00-r0-leaf-su00-r0
  subnet: vpc-1/default
---

apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.9.0.4/31
  name: vpc-1-server-su00-n00-r1-leaf-su00-r1
  namespace: default
spec:
  connection: server-su00-n00-r1-leaf-su00-r1
  subnet: vpc-1/default
---

apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.10.0.4/31
  name: vpc-1-server-su00-n00-r2-leaf-su00-r2
  namespace: default
spec:
  connection: server-su00-n00-r2-leaf-su00-r2
  subnet: vpc-1/default
---

apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.11.0.4/31
  name: vpc-1-server-su00-n00-r3-leaf-su00-r3
  namespace: default
spec:
  connection: server-su00-n00-r3-leaf-su00-r3
  subnet: vpc-1/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.12.0.4/31
  name: vpc-1-server-su00-n00-r4-leaf-su00-r0
  namespace: default
spec:
  connection: server-su00-n00-r4-leaf-su00-r0
  subnet: vpc-1/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.13.0.4/31
  name: vpc-1-server-su00-n00-r5-leaf-su00-r1
  namespace: default
spec:
  connection: server-su00-n00-r5-leaf-su00-r1
  subnet: vpc-1/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.14.0.4/31
  name: vpc-1-server-su00-n00-r6-leaf-su00-r2
  namespace: default
spec:
  connection: server-su00-n00-r6-leaf-su00-r2
  subnet: vpc-1/default
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.15.0.4/31
  name: vpc-1-server-su00-n00-r7-leaf-su00-r3
  namespace: default
spec:
  connection: server-su00-n00-r7-leaf-su00-r3
  subnet: vpc-1/default
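The eight manifests above follow a regular pattern (second octet of the p2p-link is 8 + rail, leaf index is rail mod 4, host address is .4/31), so new_attachments.yaml can also be generated. A sketch under those assumptions:

```shell
# Generate new_attachments.yaml for server-su00-n00 in vpc-1, following
# the pattern of the manifests above: second octet = 8 + rail,
# leaf index = rail % 4, host address .4/31.
for rail in 0 1 2 3 4 5 6 7; do
  leaf=$(( rail % 4 ))
  cat <<EOF
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  annotations:
    fabric.githedgehog.com/p2p-link: 10.$(( 8 + rail )).0.4/31
  name: vpc-1-server-su00-n00-r${rail}-leaf-su00-r${leaf}
  namespace: default
spec:
  connection: server-su00-n00-r${rail}-leaf-su00-r${leaf}
  subnet: vpc-1/default
EOF
done > new_attachments.yaml
# Then: kubectl create -f new_attachments.yaml
```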
  3. Finally, configure the servers with the correct IP addresses by copying, pasting, and running this script on the control node:
echo -e "\nMoving server server-su00-n00 from vpc-0 to vpc-1, staying in same rail"

SSHPASS='nvidia' sshpass -e ssh-copy-id -o StrictHostKeyChecking=accept-new -i ~/.ssh/id_rsa.pub ubuntu@server-su00-n00

cat <<'EOF' | ssh ubuntu@server-su00-n00 bash
hostname


# Clear out vpc-0 config
echo "Flushing Old Config"

sudo ip r f 10.0.0.0/8
sudo ip r f 10.0.0.0/24 nexthop via 10.0.0.1
sudo ip r f 10.1.0.0/24 nexthop via 10.1.0.1
sudo ip r f 10.2.0.0/24 nexthop via 10.2.0.1
sudo ip r f 10.3.0.0/24 nexthop via 10.3.0.1
sudo ip r f 10.4.0.0/24 nexthop via 10.4.0.1
sudo ip r f 10.5.0.0/24 nexthop via 10.5.0.1
sudo ip r f 10.6.0.0/24 nexthop via 10.6.0.1
sudo ip r f 10.7.0.0/24 nexthop via 10.7.0.1

# Set vpc-1 config

echo "Setting New Config"

sudo ip link set dev eth1 up
sudo ip a flush dev eth1
sudo ip a a 10.8.0.4/31 dev eth1 # leaf 1

sudo ip link set dev eth2 up
sudo ip a flush dev eth2
sudo ip a a 10.9.0.4/31 dev eth2 # leaf 2

sudo ip link set dev eth3 up
sudo ip a flush dev eth3
sudo ip a a 10.10.0.4/31 dev eth3 #leaf 3

sudo ip link set dev eth4 up
sudo ip a flush dev eth4
sudo ip a a 10.11.0.4/31 dev eth4 # leaf 4

sudo ip link set dev eth5 up
sudo ip a flush dev eth5
sudo ip a a 10.12.0.4/31 dev eth5 # leaf 1

sudo ip link set dev eth6 up
sudo ip a flush dev eth6
sudo ip a a 10.13.0.4/31 dev eth6 # leaf 2

sudo ip link set dev eth7 up
sudo ip a flush dev eth7
sudo ip a a 10.14.0.4/31 dev eth7 # leaf 3

sudo ip link set dev eth8 up
sudo ip a flush dev eth8
sudo ip a a 10.15.0.4/31 dev eth8 # leaf 4


sudo ip r a 10.8.0.0/24 nexthop via 10.8.0.5

sudo ip r a 10.0.0.0/8 nexthop via 10.8.0.5 nexthop via 10.9.0.5 nexthop via 10.10.0.5 nexthop via 10.11.0.5 nexthop via 10.12.0.5 nexthop via 10.13.0.5 nexthop via 10.14.0.5 nexthop via 10.15.0.5

sudo ip r a 10.9.0.0/24 nexthop via 10.9.0.5

sudo ip r a 10.10.0.0/24 nexthop via 10.10.0.5

sudo ip r a 10.11.0.0/24 nexthop via 10.11.0.5

sudo ip r a 10.12.0.0/24 nexthop via 10.12.0.5

sudo ip r a 10.13.0.0/24 nexthop via 10.13.0.5

sudo ip r a 10.14.0.0/24 nexthop via 10.14.0.5

sudo ip r a 10.15.0.0/24 nexthop via 10.15.0.5

EOF
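The hard-coded addresses in the script above all follow the scheme from the Guide (second octet = vpc*8 + rail, leaf side of each /31 = host + 1), so the per-interface commands can also be generated. A sketch (emit_rail_config is hypothetical; it prints the commands for review and leaves out the /8 ECMP route from the script above):

```shell
# Emit per-rail interface/route commands for a server, per the scheme
# above: second octet = vpc * 8 + rail; the leaf side of each /31 is
# host + 1. Hypothetical helper; prints only, pipe through ssh to apply.
emit_rail_config() {
  local vpc=$1 su=$2 host=$3 rail dev octet
  for rail in 0 1 2 3 4 5 6 7; do
    dev="eth$(( rail + 1 ))"
    octet=$(( vpc * 8 + rail ))
    echo "sudo ip link set dev ${dev} up"
    echo "sudo ip a flush dev ${dev}"
    echo "sudo ip a a 10.${octet}.${su}.${host}/31 dev ${dev}"
    echo "sudo ip r a 10.${octet}.${su}.0/24 nexthop via 10.${octet}.${su}.$(( host + 1 ))"
  done
}

emit_rail_config 1 0 4   # server-su00-n00 joining vpc-1
```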
