ASBR nodes and egress peers configuration

Summary

This chapter describes the configuration of Autonomous System Border Routers (ASBRs) and Egress Peers, which can be used for Egress Peer Engineering (EPE).

This config section is not mandatory and you can leave it empty if:

1) Not using EPE

2) Using EPE but without Null endpoints

3) Using EPE but without bandwidth or affinity constraints for egress peers

Bandwidth and Affinity for Egress Peers

OSPF and IS-IS can advertise Traffic Engineering Extensions such as available bandwidth and admin group (also known as link affinity or color). These extensions can be used to define constraints for MPLS-TE policies.

Traffic Dictator extends this logic to Egress Peer Engineering. Refer to the diagram:

The operator can configure an SR-TE policy to send traffic from R1 to AS100 only using blue links, even if a specific prefix has a preferred path via AS200. SR-TE policy configuration details and examples are in the EPE and Null-endpoint policies chapter, while this chapter focuses on assigning bandwidth and affinities to egress peers.

At the moment there are no standards regulating bandwidth or affinity advertisements with BGP Peer SID, so Traffic Dictator provides a special config syntax.

Configuring affinities for Egress Peers

traffic-eng nodes
   !
   node 2.2.2.2
      !
      neighbor 10.100.16.101
         affinity BLUE
         affinity YELLOW

This adds affinities “BLUE” and “YELLOW” to neighbor 10.100.16.101 of node 2.2.2.2. The node name should match the BGP router ID of the relevant ASBR, as it is advertised in the BGP Peer SID or in the BGP-LU session. The neighbor IPv4 or IPv6 address should correspond to the neighbor IP in the Peer SID or to the /32 prefix in BGP-LU (/128 for IPv6). Traffic Dictator ignores prefixes that are not /32 or /128.
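
To illustrate the matching rule above, here is a minimal Python sketch (hypothetical helper names, not Traffic Dictator's actual implementation) that keeps only /32 and /128 host prefixes when correlating a configured neighbor IP with BGP-LU routes:

import ipaddress

def is_host_prefix(prefix):
    # Only /32 (IPv4) and /128 (IPv6) prefixes identify a single neighbor.
    net = ipaddress.ip_network(prefix, strict=False)
    return net.prefixlen == (32 if net.version == 4 else 128)

def match_egress_peer(configured_neighbor, lu_prefixes):
    # Return the BGP-LU host prefix corresponding to the configured neighbor,
    # skipping any prefix that is not /32 or /128.
    neighbor = ipaddress.ip_address(configured_neighbor)
    for prefix in lu_prefixes:
        if is_host_prefix(prefix) and \
           ipaddress.ip_network(prefix, strict=False).network_address == neighbor:
            return prefix
    return None

print(match_egress_peer("10.100.16.101", ["10.100.16.0/24", "10.100.16.101/32"]))
# -> 10.100.16.101/32 (the /24 is ignored)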

Affinity names should be the same as configured under affinity-set:

traffic-eng affinities
   !
   affinity-set BLUE_ONLY
      constraint include-all
      name BLUE

Affinity-map configuration does not matter for egress peers; it is used only to map affinity names to the bit values advertised in IGP extensions.
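
The hypothetical Python sketch below makes this distinction concrete: IGP links advertise admin-group bits that must be translated through the affinity-map, while egress peers are matched on the configured affinity names directly (illustrative only, not Traffic Dictator's code):

# Hypothetical sketch, not Traffic Dictator's actual code.
affinity_map = {"BLUE": 0, "YELLOW": 1, "ORANGE": 2}      # name -> admin-group bit

def igp_link_matches_include_all(link_admin_group, required_names):
    # IGP links advertise a bitmask, so names must be mapped to bits first.
    return all(link_admin_group & (1 << affinity_map[n]) for n in required_names)

def egress_peer_matches_include_all(peer_affinity_names, required_names):
    # Egress peers are configured with names, so no affinity-map is needed.
    return set(required_names) <= set(peer_affinity_names)

print(igp_link_matches_include_all(0b011, ["BLUE", "YELLOW"]))        # True
print(egress_peer_matches_include_all(["BLUE", "YELLOW"], ["BLUE"]))  # True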

Configuring bandwidth for Egress Peers

traffic-eng nodes
   !
   node 2.2.2.2
      !
      neighbor 10.100.16.101
         bandwidth 10 gbps

This configuration assigns a bandwidth of 10 gbps to neighbor 10.100.16.101 of node 2.2.2.2. Similarly to bandwidth advertised in IGP extensions, Egress Peer bandwidth is split into 8 priorities, and SR-TE policies with bandwidth requirements will book the required bandwidth at the relevant priority.
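
A simplified sketch of this bookkeeping, assuming each policy books bandwidth at a single priority (illustrative only; the actual accounting inside Traffic Dictator may differ):

# Illustrative sketch, not Traffic Dictator's actual code.
class EgressPeerBandwidth:
    def __init__(self, bandwidth_bps):
        # As with IGP unreserved bandwidth, capacity is tracked per priority (0-7).
        self.unreserved = [bandwidth_bps] * 8

    def book(self, priority, amount_bps):
        # Book bandwidth for a policy at the given priority, if enough is left.
        if self.unreserved[priority] < amount_bps:
            return False
        self.unreserved[priority] -= amount_bps
        return True

peer = EgressPeerBandwidth(10_000_000_000)     # "bandwidth 10 gbps"
print(peer.book(0, 3_000_000_000))             # True
print(peer.unreserved[0])                      # 7000000000 left at priority 0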

Configuring bandwidth-group for Egress Peers

When using EPE with bandwidth reservations, there can be a situation where multiple egress peers share the same physical bandwidth. For example, IPv4 and IPv6 sessions with the same neighbor:

traffic-eng nodes
   !
   node 2.2.2.2
      !
      neighbor 10.100.16.101
         affinity BLUE
         affinity YELLOW
         bandwidth 10 gbps
         bandwidth-group 101
      !
      neighbor 2001:100:16::101
         affinity BLUE
         affinity YELLOW
         bandwidth 10 gbps
         bandwidth-group 101

Another example is when multiple ASBRs are connected to the same Internet Exchange Point (IXP) over a shared L2 network and peer with the same neighbors over the IXP LAN.

traffic-eng nodes
   !						 
   node 5.5.5.5
      !
      neighbor 10.100.18.103
         affinity ORANGE
         affinity YELLOW
         bandwidth 10 gbps
         bandwidth-group 103
   !
   node 6.6.6.6
      !
      neighbor 10.100.18.103
         affinity ORANGE
         affinity YELLOW
         bandwidth 10 gbps
         bandwidth-group 103

In both of these scenarios, you can configure a shared bandwidth-group ID for multiple egress peers, which can be under the same ASBR or under different ones. When reserving bandwidth for a policy, Traffic Dictator will reserve bandwidth on all egress peers sharing the same bandwidth-group.
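
The hypothetical sketch below shows the effect of a shared group, with priorities omitted for brevity: a reservation made through any member is deducted from every peer carrying the same bandwidth-group ID (illustrative only, not Traffic Dictator's code):

# Illustrative sketch, not Traffic Dictator's actual code. Priorities omitted.
peers = {
    ("2.2.2.2", "10.100.16.101"):    {"group": 101, "unreserved": 10_000_000_000},
    ("2.2.2.2", "2001:100:16::101"): {"group": 101, "unreserved": 10_000_000_000},
}

def reserve(peer_key, amount_bps):
    group = peers[peer_key]["group"]
    # All peers sharing the bandwidth-group share the same physical capacity,
    # so the reservation is applied to every member of the group.
    members = [k for k, v in peers.items() if v["group"] == group] if group else [peer_key]
    for k in members:
        peers[k]["unreserved"] -= amount_bps

reserve(("2.2.2.2", "10.100.16.101"), 4_000_000_000)
print(peers[("2.2.2.2", "2001:100:16::101")]["unreserved"])   # 6000000000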

Note: The second scenario (the same neighbor IP under different ASBRs) is supported only with BGP Peer SID, not with BGP-LU routes. This is because BGP-LU advertises the neighbor IP as the prefix NLRI, which would be identical in this case, so TD would use only the best BGP route. BGP Peer SID does not have this limitation.
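
The hypothetical sketch below illustrates why: BGP-LU routes are keyed by prefix NLRI alone, while Peer SIDs are keyed by node and neighbor (illustrative only):

# Illustrative sketch of why the shared-IXP case needs BGP Peer SID.
# BGP-LU routes are keyed by prefix NLRI only, so the same neighbor /32
# advertised by two ASBRs collapses to a single best route.
lu_routes = {}
for asbr, prefix in [("5.5.5.5", "10.100.18.103/32"), ("6.6.6.6", "10.100.18.103/32")]:
    lu_routes[prefix] = asbr          # second advertisement replaces the first

# BGP Peer SIDs are keyed by (node, neighbor), so both egress peers are kept.
peer_sids = {}
for asbr, neighbor in [("5.5.5.5", "10.100.18.103"), ("6.6.6.6", "10.100.18.103")]:
    peer_sids[(asbr, neighbor)] = "peer-sid"

print(len(lu_routes), len(peer_sids))   # 1 2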

Correlating affinity and bandwidth with Peer SID and LU routes

When calculating EPE policies, Traffic Dictator will correlate Egress Peer bandwidth and affinity configuration with the actual Peer SID and LU routes received from peers, and use all of the available information to calculate a policy.

For example:

  • When resolving a policy endpoint IP, Traffic Dictator finds a Peer SID with the relevant neighbor IP. If the policy has affinity or bandwidth constraints, TD will check the “traffic-eng nodes” config section to see if there is a relevant egress peer and apply the constraints.
    • If no such egress peer is configured, TD will assume the operator wants to apply constraints only within the IGP domain and will use the Peer SID anyway
    • If an egress peer is configured, it MUST match the specified constraints
  • When resolving Null endpoints, Traffic Dictator will first go through the “traffic-eng nodes” configuration and prepare a list of matching endpoints. Then it will look for matching Peer SIDs and eventually build the policy to the endpoint closest to the headend (by IGP or TE metric, as configured); see the sketch below
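
The hypothetical Python sketch below summarizes this correlation logic; the data structures and function names are illustrative, not the actual algorithm:

# Hypothetical sketch of the correlation logic described above.
def constraints_match(peer, constraints):
    return set(constraints["affinities"]) <= set(peer["affinities"]) and \
           peer["unreserved"] >= constraints["bandwidth"]

def resolve_endpoint(endpoint_ip, peer_sids, egress_peers, constraints):
    # Policy with an explicit endpoint: find the Peer SID for that neighbor IP.
    for sid in peer_sids:
        if sid["neighbor"] != endpoint_ip:
            continue
        peer = egress_peers.get((sid["node"], sid["neighbor"]))
        if peer is None:
            return sid    # no egress peer configured: constraints apply within the IGP domain only
        if constraints_match(peer, constraints):
            return sid    # a configured egress peer must match the constraints
    return None

def resolve_null_endpoint(peer_sids, egress_peers, constraints, metric_to_node):
    # Null endpoint: collect the matching configured egress peers first, then
    # pick the matching Peer SID closest to the headend by IGP or TE metric.
    matching = {k for k, p in egress_peers.items() if constraints_match(p, constraints)}
    candidates = [s for s in peer_sids if (s["node"], s["neighbor"]) in matching]
    return min(candidates, key=lambda s: metric_to_node[s["node"]], default=None)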

To check Egress Peers, use “show topology epe”:

TD1#show topology epe

  Topology EPE

  BGP Peer SIDs

  2.2.2.2                        [E][B][I0][N[c65002][b0][q2.2.2.2]][R[c101][b0][q100.1.1.1]][L[i10.100.16.2][n10.100.16.101]]
                                 [E][B][I0][N[c65002][b0][q2.2.2.2]][R[c101][b0][q100.1.1.1]][L[i2001:100:16::2][n2001:100:16::101]]
  ---

  Configured egress nodes

  2.2.2.2                        10.100.16.101                 
                                 Affinities:                    ['BLUE', 'YELLOW']
                                 Shared bandwidth group:        101
                                 Unrsv-bw priority 0:           10000000000
                                 Unrsv-bw priority 1:           10000000000
                                 Unrsv-bw priority 2:           10000000000
                                 Unrsv-bw priority 3:           10000000000
                                 Unrsv-bw priority 4:           10000000000
                                 Unrsv-bw priority 5:           10000000000
                                 Unrsv-bw priority 6:           10000000000
                                 Unrsv-bw priority 7:           10000000000
                                 2001:100:16::101              
                                 Affinities:                    ['BLUE', 'YELLOW']
                                 Shared bandwidth group:        101
                                 Unrsv-bw priority 0:           10000000000
                                 Unrsv-bw priority 1:           10000000000
                                 Unrsv-bw priority 2:           10000000000
                                 Unrsv-bw priority 3:           10000000000
                                 Unrsv-bw priority 4:           10000000000
                                 Unrsv-bw priority 5:           10000000000
                                 Unrsv-bw priority 6:           10000000000
                                 Unrsv-bw priority 7:           10000000000


ASBR node IP for EPE-only policies

Traffic Dictator supports EPE-only policies. These are intended for networks that are:

  • Not using Segment Routing
  • Using Segment Routing but with no need for Traffic Engineering within the network
  • Using Segment Routing with another controller for Traffic Engineering and not yet ready to fully migrate to Traffic Dictator


EPE-only policies offer a lightweight Egress Peer Engineering capability without the need to export IGP information to Traffic Dictator.

In this example, Traffic Dictator learns the EPE labels from R2 using BGP-LS or BGP-LU, correlates the available egress peers with the configured constraints, and installs a policy to R1 using BGP-LU.

With full Traffic Engineering, TD would have the IGP topology information and would use the first-hop IP as the BGP-LU nexthop. In this scenario, however, that information is not available, and MPLS forwarding within the network is handled by another protocol (e.g. LDP). Traffic Dictator therefore requires an IP for R2 that it can set as the BGP-LU nexthop; that IP will be resolved over the existing MPLS control plane, such as LDP.

Configuration example:

traffic-eng nodes
   !
   node 2.2.2.2
      ipv4 address 2.2.2.2
      ipv6 address 2002::2

This config applies strictly to EPE-only policies; it is ignored for all other policies.
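
A minimal sketch of the idea, with hypothetical names rather than Traffic Dictator's actual logic: for EPE-only policies the configured node address is used as the BGP-LU nexthop, while policies built with full TE information would use the first-hop IP instead:

# Hypothetical sketch, not Traffic Dictator's actual code.
node_config = {"2.2.2.2": {"ipv4": "2.2.2.2", "ipv6": "2002::2"}}

def bgp_lu_nexthop(asbr, address_family, epe_only, first_hop_ip=None):
    if epe_only:
        # No IGP topology: use the configured ASBR node IP; the headend
        # resolves it over the existing MPLS control plane (e.g. LDP).
        return node_config[asbr][address_family]
    # With full Traffic Engineering, the first-hop IP would be used instead.
    return first_hop_ip

print(bgp_lu_nexthop("2.2.2.2", "ipv4", epe_only=True))   # 2.2.2.2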

EPE-only policies can be installed only using BGP-LU and do not support BGP SR-TE, because SR-TE policies cannot resolve recursively over LDP or RSVP.

This also means EPE-only policies cannot have a color and instead need a service-loopback to map services to them. Refer to the EPE-only policies chapter for configuration details.