Summary
This chapter describes how to configure Egress Peer Engineering (EPE) policies and advertise them to routers using BGP SR-TE and BGP-LU.
Like regular SR-TE policies, EPE policies can use dynamic or explicit paths and anycast SIDs (within the IGP domain), can have multiple segment lists, and can be installed directly to a peer or sent to a peer group of route reflectors with a headend route target.
In other words, EPE policies are just like normal SR-TE policies but with an EPE label added to them. Refer to the section about regular SR-TE policies for more details.
Requirements
While the IGP topology is always advertised to Traffic Dictator using BGP-LS, the EPE topology can be advertised using either BGP-LS or BGP-LU.
The standard and preferred way to advertise the EPE topology to Traffic Dictator is to configure routers to generate BGP Peer SIDs and advertise them via BGP-LS, together with the IGP topology.
If a router does not support BGP Peer SID, the alternative is to have the router generate one BGP-LU route per egress peer and advertise those routes to Traffic Dictator.
Important: when using BGP-LU to advertise EPE routes to TD, you must set up a separate BGP session between TD and each egress ASBR that generates LU routes. LU routes must also be filtered out in route-maps on all other routers, so that TD receives only one EPE route for a given prefix, namely the route from the ASBR that originated it.
BGP Peer SID and BGP-LS are not impacted by this limitation. You can generate Peer SIDs on multiple egress ASBRs and have one router (e.g. a route reflector) advertise all of them to TD.
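As an illustration, the filtering on a non-originating router might look like the following IOS-style sketch. The route-map and prefix-list names are hypothetical, the prefix is an example, and the exact syntax varies by vendor; this is a sketch of the idea, not a definitive configuration.

```
ip prefix-list EPE-LU-ROUTES seq 10 permit 102.11.11.11/32
!
! Block EPE LU routes from being re-advertised towards TD,
! so only the originating ASBR delivers them.
route-map EPE-LU-BLOCK deny 10
 match ip address prefix-list EPE-LU-ROUTES
route-map EPE-LU-BLOCK permit 20
!
router bgp 65002
 address-family ipv4
  neighbor 192.168.0.111 route-map EPE-LU-BLOCK out
```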
When using EPE with affinity or bandwidth constraints, or with Null endpoints, make sure the relevant ASBR nodes and egress peers are configured in the corresponding config section.
Simple EPE policy with BGP SR-TE
Consider the following topology:
The network is running IS-IS L2. To create an IPv6 policy that steers traffic from R1 to ISP5 using only blue links, we can configure the following policy:
traffic-eng
 policies
  !
  policy R1_ISP5_BLUE_ONLY_IPV6
   headend 1.1.1.1 topology-id 101
   endpoint 2001:100:20::105 color 104
   binding-sid 15104
   priority 7 7
   install direct srte 2001:192::101
   !
   candidate-path preference 100
    metric igp
    affinity-set BLUE_ONLY
    bandwidth 100 mbps
Affinity-set BLUE_ONLY and the relevant affinity mapping are configured to match the links with affinity 0x1 as advertised by the IGP.
traffic-eng
 affinities
  affinity-map name BLUE bit-position 0
  !
  affinity-set BLUE_ONLY
   constraint include-all name BLUE
While not strictly required for the simple EPE case, it is also useful to add egress peer configuration if we want to make bandwidth reservations:
traffic-eng
 nodes
  !
  node 11.11.11.11
   !
   neighbor 2001:100:20::105
    affinity BLUE
    bandwidth 10 gbps
With this config, Traffic Dictator will look up the endpoint 2001:100:20::105. If no node with such an IGP prefix is found, TD checks BGP-LS and BGP-LU routes for a relevant EPE route. In this case, a matching prefix with a BGP Peer SID is found:
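The resolution order just described (IGP prefix first, then BGP-LS Peer SIDs, then BGP-LU EPE routes) can be sketched as follows. The function name and data structures are illustrative assumptions, not Traffic Dictator's actual API:

```python
def resolve_endpoint(endpoint, igp_nodes, bgp_ls_peer_sids, bgp_lu_routes):
    """Return (endpoint_type, data) from the first table that resolves it.

    igp_nodes:        list of {"router-id": ..., "prefixes": {...}}
    bgp_ls_peer_sids: {peer_address: peer_sid_label}
    bgp_lu_routes:    {peer_address: lu_label}
    """
    # 1. Prefer a node in the IGP topology.
    for node in igp_nodes:
        if endpoint in node["prefixes"]:
            return ("igp-node", node["router-id"])
    # 2. Otherwise look for a BGP-LS Peer SID for this peer address.
    if endpoint in bgp_ls_peer_sids:
        return ("egress-peer", bgp_ls_peer_sids[endpoint])
    # 3. Finally fall back to a BGP-LU EPE route.
    if endpoint in bgp_lu_routes:
        return ("egress-peer", bgp_lu_routes[endpoint])
    return ("unresolved", None)

# The example endpoint resolves to the Peer SID 24015 seen in the
# BGP-LS output below.
print(resolve_endpoint("2001:100:20::105", [], {"2001:100:20::105": 24015}, {}))
```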
TD1#show bgp link-state [E][B][I0][N[c65002][b0][q11.11.11.11]][R[c105][b0][q100.1.1.1]][L[i2001:100:20::11][n2001:100:20::105]] detail
BGP-LS routing table information
Router identifier 111.111.111.111, local AS number 65001
Prefix codes: E link, V node, T IP reacheable route, S SRv6 SID, u/U unknown,
              I Identifier, N local node, R remote node, L link, P prefix,
              S SID, L1/L2 ISIS level-1/level-2, O OSPF, D direct, S static/peer-node,
              a area-ID, l link-ID, t topology-ID, s ISO-ID, c confed-ID/ASN,
              b bgp-identifier, r router-ID, s SID, i if-address, n nbr-address,
              o OSPF Route-type, p IP-prefix, d designated router address
BGP routing table entry for [E][B][I0][N[c65002][b0][q11.11.11.11]][R[c105][b0][q100.1.1.1]][L[i2001:100:20::11][n2001:100:20::105]]
 NLRI Type: link
 Protocol: BGP
 Identifier: 0
 Local Node Descriptor:
  AS Number: 65002
  BGP Identifier: 0.0.0.0
  BGP Router Identifier: 11.11.11.11
 Remote Node Descriptor:
  AS Number: 105
  BGP Identifier: 0.0.0.0
  BGP Router Identifier: 100.1.1.1
 Link Descriptor:
  Local Interface Address IPv6: 2001:100:20::11
  Neighbor Interface Address IPv6: 2001:100:20::105
Paths: 2 available, best #1
 Last modified: June 10, 2024 11:22:36
 65002
  192.168.0.101 from 192.168.0.101 (1.1.1.1)
   Origin igp, metric 0, localpref 100, weight 0, valid, external, best
   Link-state:
    Peer-SID: 24015
 Last modified: June 10, 2024 11:22:36
 65002
  2001:192::101 from 2001:192::101 (1.1.1.1)
   Origin igp, metric 0, localpref 100, weight 0, valid, external, not best reason: Higher IP
   Link-state:
    Peer-SID: 24015
Verifying EPE policy
TD1#show traffic-eng policy R1_ISP5_BLUE_ONLY_IPV6 detail
Detailed traffic-eng policy information:

Traffic engineering policy "R1_ISP5_BLUE_ONLY_IPV6"
 Valid config, Active
 Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
 Endpoint 2001:100:20::105, color 104
 Endpoint type: Egress peer, Topology-id: 101, Protocol: isis, Router-id: 0011.0011.0011.00
 Setup priority: 7, Hold priority: 7
 Reserved bandwidth bps: 100000000
 Install direct, protocol srte, peer 2001:192::101
 Binding-SID: 15104
 ENLP not configured, ENLP active: "none", *Warning: ENLP set to "none" because this is an EPE policy
 Candidate paths:
  Candidate-path preference 100
   Path config valid
   Metric: igp
   Path-option: dynamic
   Affinity-set: BLUE_ONLY
    Constraint: include-all
    List: ['BLUE']
    Value: 0x1
   This path is currently active
   Calculation results:
    Aggregate metric: 40
    Topologies: ['101']
    Segment lists: [17005, 17010, 24015, 24015]
 Policy statistics:
  Last config update: 2024-06-10 11:21:05,375
  Last recalculation: 2024-06-10 12:21:06.340
  Policy calculation took 0 miliseconds
The output shows that endpoint 2001:100:20::105 has been resolved to an egress peer connected to node 0011.0011.0011.00.
The segment list is [17005, 17010, 24015, 24015]: the R5 node SID, the R10 node SID, the R10 Adj SID towards R11 over the blue link, and the R11 Peer SID towards ISP5.
Since the policy is configured with "install direct srte 2001:192::101", it will be advertised to BGP peer 2001:192::101 over the IPv6 SR-TE address family, provided such a peer is up and the AF has been negotiated.
Refer to https://vegvisir.ie/regular-sr-te-policies/#Advertising_SR-TE_policy_to_headend for more options to advertise the policy to headend.
ENLP override
Note also the line 'ENLP not configured, ENLP active: "none", *Warning: ENLP set to "none" because this is an EPE policy'.
Explicit Null Label Policy (ENLP) instructs the SR-TE headend whether to push an explicit null label at the bottom of the segment list. Explicit null can be useful to preserve QoS markings up to the last hop, or to forward IPv4 traffic over IPv6 policies and vice versa.
The following article has a good explanation of Explicit Null in Segment Routing: https://routingcraft.net/explicit-null-in-segment-routing/
Traffic Dictator always overrides ENLP to "none" for EPE policies, regardless of what is configured. This is because the traffic leaves the MPLS network, and a labeled packet could be dropped by the egress peer.
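The override logic amounts to a one-line rule, sketched below. The function name and the treatment of an unconfigured value are illustrative assumptions, not Traffic Dictator internals:

```python
def effective_enlp(configured, is_epe):
    """Return the ENLP value that actually takes effect for a policy.

    EPE policies are forced to "none" regardless of configuration;
    other policies use the configured value (here assumed to default
    to "none" when nothing is configured).
    """
    if is_epe:
        return "none"
    return configured if configured is not None else "none"

print(effective_enlp("push-ipv4", is_epe=True))   # EPE: always overridden
print(effective_enlp("push-ipv4", is_epe=False))  # non-EPE: as configured
```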
Null Endpoint policies
Consider the following topology:
There is a requirement to build a path from R1 to the closest exit matching yellow color.
Relevant policy config:
traffic-eng
 policies
  !
  policy R1_NULL_YELLOW_ONLY_IPV6
   headend 1.1.1.1 topology-id 101
   endpoint :: color 106
   binding-sid 15106
   priority 7 7
   install direct srte 2001:192::101
   !
   candidate-path preference 100
    metric igp
    affinity-set YELLOW_ONLY
    bandwidth 100 mbps
traffic-eng
 affinities
  affinity-map name YELLOW bit-position 1
  !
  affinity-set YELLOW_ONLY
   constraint include-all name YELLOW
The endpoint is :: (the IPv6 Null endpoint). In this case, Traffic Dictator will do the following:
1. Check the configured egress peers and prepare a list of peers matching the affinity and bandwidth constraints. In this topology, these are <R6, ISP2> and <R11, ISP4>.
2. Build a path to each candidate and choose the one with the lowest IGP metric as best. In this case, the path via R6 is preferred.
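The two steps above can be sketched as follows. The peer records, affinity sets, and metric values are made up for the example; this is an illustration of the selection logic, not Traffic Dictator code:

```python
def pick_best_exit(peers, required_affinity, required_bw, path_metric):
    """Filter egress peers by affinity and bandwidth, then pick the
    candidate whose computed path has the lowest aggregate IGP metric."""
    candidates = [p for p in peers
                  if required_affinity in p["affinities"]
                  and p["unreserved-bw"] >= required_bw]
    if not candidates:
        return None
    return min(candidates, key=lambda p: path_metric[p["name"]])

peers = [
    {"name": "R6->ISP2",  "affinities": {"YELLOW"}, "unreserved-bw": 10e9},
    {"name": "R11->ISP4", "affinities": {"YELLOW"}, "unreserved-bw": 10e9},
]
# Hypothetical aggregate IGP metrics of the computed paths from R1.
metrics = {"R6->ISP2": 20, "R11->ISP4": 40}

best = pick_best_exit(peers, "YELLOW", 100e6, metrics)
print(best["name"])  # the exit via R6 wins on IGP metric
```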
Verify the policy:
TD1#show traffic-eng policy R1_NULL_YELLOW_ONLY_IPV6 detail
Detailed traffic-eng policy information:

Traffic engineering policy "R1_NULL_YELLOW_ONLY_IPV6"
 Valid config, Active
 Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
 Endpoint ::, color 106
 Endpoint type: Egress peer, Topology-id: 101, Protocol: isis, Router-id: 0006.0006.0006.00
 Setup priority: 7, Hold priority: 7
 Reserved bandwidth bps: 100000000
 Install direct, protocol srte, peer 2001:192::101
 Binding-SID: 15106
 ENLP not configured, ENLP active: "none", *Warning: ENLP set to "none" because this is an EPE policy
 Candidate paths:
  Candidate-path preference 100
   Path config valid
   Metric: igp
   Path-option: dynamic
   Affinity-set: YELLOW_ONLY
    Constraint: include-all
    List: ['YELLOW']
    Value: 0x2
   This path is currently active
   Calculation results:
    Aggregate metric: 20
    Topologies: ['101']
    Segment lists: [17003, 17006, 24012]
 Policy statistics:
  Last config update: 2024-06-10 11:21:05,376
  Last recalculation: 2024-06-10 13:21:08.551
  Policy calculation took 0 miliseconds
EPE policies with BGP-LU
As with regular BGP-LU policies, EPE policies can be installed via BGP-LU instead of BGP SR-TE. Consider the following topology:
Policy config:
traffic-eng
 policies
  !
  policy R1_ISP5_BLUE_ONLY_IPV4
   headend 1.1.1.1 topology-id 101
   endpoint 10.100.20.105 service-loopback 102.11.11.11
   priority 7 7
   install direct labeled-unicast 192.168.0.101
   !
   candidate-path preference 100
    metric te
    affinity-set BLUE_ONLY
    bandwidth 100 mbps
The service-loopback must be configured on the egress ASBR, in this case R11. It MUST NOT be advertised into the IGP, but it MUST be advertised via BGP-LU to the headend (R1) with a low LOCAL_PREF, so that it is less preferred than the policy advertised by Traffic Dictator. The next hop of BGP routes can then be changed to the service-loopback, to achieve behaviour similar to automated steering in SR-TE. For details, refer to the chapter https://vegvisir.ie/bgp-lu-policies
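On R11, that might look like the following IOS-style sketch. The route-map name, LOCAL_PREF value, interface number, and neighbor address are hypothetical, and the exact syntax varies by vendor; the point is that the loopback is originated into BGP-LU with a low LOCAL_PREF and kept out of the IGP:

```
interface Loopback102
 ! service-loopback; NOT advertised into IS-IS
 ip address 102.11.11.11 255.255.255.255
!
route-map SERVICE-LO-LOW-PREF permit 10
 set local-preference 50
!
router bgp 65002
 network 102.11.11.11 mask 255.255.255.255 route-map SERVICE-LO-LOW-PREF
 ! advertise with a label towards the headend
 neighbor 192.168.0.1 send-label
```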
Verify the policy:
TD1#show traffic-eng policy R1_ISP5_BLUE_ONLY_IPV4 detail
Detailed traffic-eng policy information:

Traffic engineering policy "R1_ISP5_BLUE_ONLY_IPV4"
 Valid config, Active
 Headend 1.1.1.1, topology-id 101, Maximum SID depth: 6
 Endpoint 10.100.20.105, service-loopback 102.11.11.11
 Endpoint type: Egress peer, Topology-id: 101, Protocol: isis, Router-id: 0011.0011.0011.00
 Setup priority: 7, Hold priority: 7
 Reserved bandwidth bps: 100000000
 Install direct, protocol labeled-unicast, peer 192.168.0.101
 Candidate paths:
  Candidate-path preference 100
   Path config valid
   Metric: te
   Path-option: dynamic
   Affinity-set: BLUE_ONLY
    Constraint: include-all
    List: ['BLUE']
    Value: 0x1
   This path is currently active
   Calculation results:
    Aggregate metric: 2000
    Topologies: ['101']
    Segment lists: [900010, 100002, 100001]
    BGP-LU next-hop: 10.100.3.5
 Policy statistics:
  Last config update: 2024-06-17 08:56:35,584
  Last recalculation: 2024-06-17 09:56:35.630
  Policy calculation took 0 miliseconds
Unlike the similar SR-TE policy, the label stack does not include the node SID of R5, because R5 is directly connected and the policy has next hop 10.100.3.5, which points to R5. The rest of the label stack is: the R10 node SID, the R10 Adj SID towards R11 over the blue link, and the R11 EPE label towards ISP5.
Null endpoint and BGP-LU
While the Null endpoint is specific to SR-TE policies, Traffic Dictator also allows configuring a BGP-LU policy with a Null endpoint. This makes sense if you consider that the prefix advertised in BGP-LU is the service-loopback, which plays a role similar to a color in SR-TE. Put another way, BGP-LU policies always do "any-endpoint" steering, like CO bits 10 in SR-TE.
Consider the following topology:
Policy config:
traffic-eng
 policies
  !
  policy R1_NULL_YELLOW_ONLY_IPV4_LU
   headend 1.1.1.1 topology-id 101
   endpoint 0.0.0.0 service-loopback 100.6.6.6
   priority 7 7
   install direct labeled-unicast 192.168.0.101
   !
   candidate-path preference 100
    metric igp
    affinity-set YELLOW_ONLY
    bandwidth 100 mbps
The policy is configured to send traffic to the EPE endpoint closest to R1, using only yellow links.
Verify:
TD1#show traf pol R1_NULL_YELLOW_ONLY_IPV4_LU detail
Detailed traffic-eng policy information:

Traffic engineering policy "R1_NULL_YELLOW_ONLY_IPV4_LU"
 Valid config, Active
 Headend 1.1.1.1, topology-id 101, Maximum SID depth: 6
 Endpoint 0.0.0.0, service-loopback 100.6.6.6
 Endpoint type: Egress peer, Topology-id: 101, Protocol: isis, Router-id: 0006.0006.0006.00
 Setup priority: 7, Hold priority: 7
 Reserved bandwidth bps: 100000000
 Install direct, protocol labeled-unicast, peer 192.168.0.101
 Candidate paths:
  Candidate-path preference 100
   Path config valid
   Metric: igp
   Path-option: dynamic
   Affinity-set: YELLOW_ONLY
    Constraint: include-all
    List: ['YELLOW']
    Value: 0x2
   This path is currently active
   Calculation results:
    Aggregate metric: 200
    Topologies: ['101']
    Segment lists: [900006, 100000]
    BGP-LU next-hop: 10.100.21.3
 Policy statistics:
  Last config update: 2024-06-17 10:58:58,894
  Last recalculation: 2024-06-17 10:58:59.205
  Policy calculation took 0 miliseconds
For a Null endpoint with BGP-LU to work properly, you must configure the same service-loopback on all egress ASBRs that can be potential exit points for the policy. The same ASBR can, of course, carry many service-loopbacks used by different policies.
Bandwidth reservations
When a policy has a bandwidth constraint, Traffic Dictator performs bandwidth reservations. It uses the configured setup priority to check the bandwidth constraint, and the hold priority to decide whether an existing policy can be preempted to free bandwidth for a higher-priority policy. Priority 0 is the highest and 7 is the lowest; a policy's setup priority cannot be better (numerically lower) than its hold priority.
You can check bandwidth reservations using the "show topology" command. "show topology epe" shows bandwidth reservations specifically for egress peers.
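The admission and preemption rules can be sketched as follows. The function names and data layout are illustrative, and the per-priority bandwidth values mirror the example output below (200 Mbps already reserved at priority 7 on a 10 Gbps link); this is not Traffic Dictator code:

```python
def admit(unrsv_bw_by_prio, setup_prio, request_bps):
    """A policy is admitted if the bandwidth still unreserved at its
    setup priority covers the request."""
    return unrsv_bw_by_prio[setup_prio] >= request_bps

def can_preempt(existing_hold_prio, new_setup_prio):
    """An existing policy can be preempted only by a policy whose setup
    priority is better (numerically lower) than its hold priority."""
    return new_setup_prio < existing_hold_prio   # 0 is best, 7 is worst

# 10 Gbps link; 200 Mbps already reserved at priority 7.
link = {p: 10_000_000_000 for p in range(8)}
link[7] = 9_800_000_000

print(admit(link, 7, 100_000_000))   # enough room left at priority 7
print(can_preempt(7, 0))             # prio-0 setup may preempt prio-7 hold
print(can_preempt(0, 7))             # prio-7 setup may not preempt prio-0 hold
```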
TD1#show topology epe
---
6.6.6.6
 10.100.17.102
  Affinities: ['ORANGE', 'YELLOW']
  Shared bandwidth group: 102
  Unrsv-bw priority 0: 10000000000
  Unrsv-bw priority 1: 10000000000
  Unrsv-bw priority 2: 10000000000
  Unrsv-bw priority 3: 10000000000
  Unrsv-bw priority 4: 10000000000
  Unrsv-bw priority 5: 10000000000
  Unrsv-bw priority 6: 10000000000
  Unrsv-bw priority 7: 9800000000
  Policies priority 0: []
  Policies priority 1: []
  Policies priority 2: []
  Policies priority 3: []
  Policies priority 4: []
  Policies priority 5: []
  Policies priority 6: []
  Policies priority 7: ['R1_NULL_YELLOW_ONLY_IPV6', 'R1_NULL_YELLOW_ONLY_IPV4']
 2001:100:17::102
  Affinities: ['ORANGE', 'YELLOW']
  Shared bandwidth group: 102
  Unrsv-bw priority 0: 10000000000
  Unrsv-bw priority 1: 10000000000
  Unrsv-bw priority 2: 10000000000
  Unrsv-bw priority 3: 10000000000
  Unrsv-bw priority 4: 10000000000
  Unrsv-bw priority 5: 10000000000
  Unrsv-bw priority 6: 10000000000
  Unrsv-bw priority 7: 9800000000
  Policies priority 0: []
  Policies priority 1: []
  Policies priority 2: []
  Policies priority 3: []
  Policies priority 4: []
  Policies priority 5: []
  Policies priority 6: []
  Policies priority 7: ['R1_NULL_YELLOW_ONLY_IPV6', 'R1_NULL_YELLOW_ONLY_IPV4']
Note that in this example the same two policies reserve bandwidth on both the IPv4 and IPv6 peers. This is because they are configured with the same shared bandwidth group, as both peers use the same link. Refer to: https://vegvisir.ie/asbr-nodes-and-egress-peers-configuration/#Configuring_bandwidth-group_for_Egress_Peers