Summary
This chapter describes the configuration of typical SR-TE policies that start and end within the same IGP domain, with the endpoint being a node.
Simple policy with dynamic path
Consider the following topology:
The network is running IS-IS L2. To create an IPv6 policy that steers traffic from R1 to R11 using only blue links, we can configure the following policy:
traffic-eng policies
!
policy R1_R11_BLUE_ONLY_IPV6
headend 1.1.1.1 topology-id 101
endpoint 2002::11 color 101
binding-sid 15101
priority 7 7
install direct srte 2001:192::101
!
candidate-path preference 100
metric igp
affinity-set BLUE_ONLY
bandwidth 100 mbps
Affinity-set BLUE_ONLY and the relevant affinity mapping are configured to match the links with affinity 0x1 as advertised by the IGP.
traffic-eng affinities
affinity-map
name BLUE bit-position 0
!
affinity-set BLUE_ONLY
constraint include-all
name BLUE
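To make the mapping concrete, here is a minimal Python sketch (illustrative only, not Traffic Dictator code; the affinity_map dictionary and function names are assumptions) of how affinity names with bit positions resolve to a bitmask and how include-all / include-any constraints could be evaluated against a link's advertised affinity value:

# Illustrative sketch: resolve affinity names to a bitmask and evaluate
# include-all / include-any constraints against a link's affinity value.
affinity_map = {"BLUE": 0, "YELLOW": 1, "ORANGE": 2}   # name -> bit-position

def affinity_value(names, amap=affinity_map):
    """Build a bitmask from affinity names, e.g. ['BLUE'] -> 0x1."""
    value = 0
    for name in names:
        value |= 1 << amap[name]
    return value

def link_matches(link_affinity, constraint, names):
    """Check a link's advertised affinity value against a constraint."""
    value = affinity_value(names)
    if constraint == "include-all":    # link must have all listed bits set
        return link_affinity & value == value
    if constraint == "include-any":    # link must have at least one listed bit
        return link_affinity & value != 0
    raise ValueError(constraint)

# BLUE_ONLY resolves to 0x1 and matches a link advertising affinity 0x1.
assert affinity_value(["BLUE"]) == 0x1
assert link_matches(0x1, "include-all", ["BLUE"])
# BLUE_OR_ORANGE (include-any) resolves to 0x5, as shown later in this chapter.
assert affinity_value(["BLUE", "ORANGE"]) == 0x5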
Verifying SR-TE policy
TD1#show traffic-eng policy R1_R11_BLUE_ONLY_IPV6 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R11_BLUE_ONLY_IPV6"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
Endpoint 2002::11, color 101
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0011.0011.0011.00
Setup priority: 7, Hold priority: 7
Reserved bandwidth bps: 100000000
Install direct, protocol srte, peer 2001:192::101
Policy index: 11, SR-TE distinguisher: 16777227
Binding-SID: 15101
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: igp
Path-option: dynamic
Affinity-set: BLUE_ONLY
Constraint: include-all
List: ['BLUE']
Value: 0x1
This path is currently active
Calculation results:
Aggregate metric: 40
Topologies: ['101']
Segment lists:
[17005, 17010, 24015]
Policy statistics:
Last config update: 2024-09-05 16:48:27,660
Last recalculation: 2024-09-05 16:50:20.473
Policy calculation took 0 miliseconds
There is a lot of information. Key things are:
- Endpoint 2002::11 has been resolved to IS-IS system-ID 0011.0011.0011.00 in topology-id 101 (same as headend)
- Constraint has been resolved to 0x1
- Candidate-path preference 100 is active
- The calculated segment list is [17005, 17010, 24015]: R5 node SID, R10 node SID and R10 adj SID towards R11 over the blue link. The segment list is within the MSD limit of R1, so it is allowed.
- 100 Mbps of bandwidth has been reserved (this can be verified with "show topology" output).
Advertising SR-TE policy to headend
As the install method has been configured as "direct srte 2001:192::101", Traffic Dictator will create an SR-TE route with the NO_ADVERTISE community and send it to peer 2001:192::101, provided the peer is up and has the IPv6-SRTE AF negotiated.
Luckily, in this case such a peer exists:
TD1#show bgp summary
BGP summary information
Router identifier 111.111.111.111, local AS number 65001
Neighbor        V  AS     MsgRcvd  MsgSent  InQ  OutQ  Up/Down  State        Received NLRI  Active AF
192.168.0.101   4  65002  379      286      0    0     4:44:40  Established  164            IPv4-LU, LS
2001:192::101   4  65002  377      414      0    0     4:44:36  Established  164            IPv4-SRTE, IPv6-LU, IPv6-SRTE, LS
Verify that the route has been created and sent to the peer:
TD1#show bgp ipv6 srte [192][16777227][101][2002::11]
BGP-SRTE routing table information
Router identifier 111.111.111.111, local AS number 65001
BGP routing table entry for [192][16777227][101][2002::11]
Paths: 1 available, best #1
Last modified: September 05, 2024 16:50:20
Local, inserted
- from - (0.0.0.0)
Origin igp, metric 0, localpref -, weight 0, valid, -, best
Endpoint 2002::11, Color 101, Distinguisher 16777227
Tunnel encapsulation attribute: SR Policy
Policy name: R1_R11_BLUE_ONLY_IPV6
Preference: 100
Binding SID: 15101
Segment lists:
[17005, 17010, 24015], Weight 1
TD1#show bgp neighbors 2001:192::101 ipv6 srte advertised-routes | fgrep [192][16777227][101][2002::11]
*>+ [192][16777227][101][2002::11] - 0 - 0 i
Note: Since SR-TE NLRI uses some special symbols, use fgrep or grep -F to filter them instead of regular grep.
SR-TE distinguisher
The BGP SR-TE NLRI has a 4-byte field called "distinguisher". It is similar to the route distinguisher in MPLS VPN and EVPN and is used to keep otherwise identical NLRI distinct from the BGP perspective.
Traffic Dictator uses the distinguisher for 2 purposes:
- Separate policies with different headend but same endpoint and color.
- Facilitate redundancy when multiple controllers advertise the same policy with different distinguishers.
When generating the SR-TE distinguisher, Traffic Dictator takes the first byte from the configured global value (under "router general"), and the next 3 bytes are auto-generated from the policy index. For example, if the configured distinguisher is 1 and the first policy has index 0, the distinguisher will be 16777216 (0x01000000), and so on.
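The arithmetic is easy to reproduce. Below is a small Python sketch (an illustration, not TD source; the assumption that the [96]/[192] prefix in the show output is the NLRI length in bits is mine) that derives the distinguisher and the NLRI key displayed by "show bgp ... srte":

# Sketch: derive the SR-TE distinguisher and the NLRI key shown in
# "show bgp ipv4/ipv6 srte" from the configured global value and policy index.
import ipaddress

def srte_distinguisher(global_value, policy_index):
    """First byte = configured global value, lower 3 bytes = policy index."""
    return ((global_value & 0xFF) << 24) | (policy_index & 0xFFFFFF)

def srte_nlri_key(global_value, policy_index, color, endpoint):
    ep = ipaddress.ip_address(endpoint)
    length_bits = (4 + 4 + (16 if ep.version == 6 else 4)) * 8
    return f"[{length_bits}][{srte_distinguisher(global_value, policy_index)}][{color}][{ep}]"

print(srte_distinguisher(1, 0))                # 16777216 (0x01000000)
print(srte_nlri_key(1, 11, 101, "2002::11"))   # [192][16777227][101][2002::11]
print(srte_nlri_key(1, 0, 3, "1.1.1.1"))       # [96][16777216][3][1.1.1.1]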
Indirect install to peer-group
The following policy has been configured as “install indirect”:
traffic-eng policies
!
policy R11_R1_BLUE_OR_ORANGE_IPV4
headend 11.11.11.11 topology-id 101
endpoint 1.1.1.1 color 3
binding-sid 15003
priority 5 5
install indirect srte peer-group R1
!
candidate-path preference 100
metric igp
affinity-set BLUE_OR_ORANGE
bandwidth 100 mbps
Relevant peer group config:
traffic-eng peer-groups
!
peer-group R1
neighbor 2001:192::101
In this example, Traffic Dictator will advertise the policy to 2001:192::101 (or to all members of the peer group, when applicable), will not attach the NO_ADVERTISE community, and will set the route-target to 11.11.11.11.
TD1#show bgp ipv4 srte [96][16777216][3][1.1.1.1] detail
BGP-SRTE routing table information
Router identifier 111.111.111.111, local AS number 65001
BGP routing table entry for [96][16777216][3][1.1.1.1]
Paths: 1 available, best #1
Last modified: September 05, 2024 16:50:20
Local, inserted
- from - (0.0.0.0)
Origin igp, metric 0, localpref -, weight 0, valid, -, best
Endpoint 1.1.1.1, Color 3, Distinguisher 16777216
Extended Community: Route-Target-IP:11.11.11.11:0
Tunnel encapsulation attribute: SR Policy
Policy name: R11_R1_BLUE_OR_ORANGE_IPV4
Preference: 100
Binding SID: 15003
Segment lists:
[16910, 16025, 16001], Weight 1
Indirect install is a very powerful technique whereby you can configure Traffic Dictator to peer only with route reflectors, and leverage the existing BGP infrastructure to distribute policies to all headends.
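As a rough illustration of the difference between the two install modes described above, here is a hedged Python sketch (the data model, function name and peer-group layout are assumptions made for the example, not TD internals):

# Sketch: how "install direct" vs "install indirect" could map to BGP
# attributes and target peers, following the behaviour described above.
def build_srte_route(install, headend, target, peer_groups):
    route = {"communities": [], "ext_communities": [], "advertise_to": []}
    if install == "direct":
        # Direct: advertise only to the configured peer and attach
        # NO_ADVERTISE so the headend does not propagate the route further.
        route["communities"].append("NO_ADVERTISE")
        route["advertise_to"] = [target]
    elif install == "indirect":
        # Indirect: advertise to all peer-group members (e.g. route
        # reflectors), no NO_ADVERTISE, route-target identifies the headend.
        route["ext_communities"].append(f"Route-Target-IP:{headend}:0")
        route["advertise_to"] = list(peer_groups[target])
    return route

peer_groups = {"R1": ["2001:192::101"]}
print(build_srte_route("indirect", "11.11.11.11", "R1", peer_groups))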
Policy priority
Traffic Dictator allows you to set the setup and hold priority for each policy. Value 0 is the highest and 7 is the lowest. The setup priority cannot be higher (lower numerical value) than the hold priority.
When a policy has a bandwidth constraint, TD will use the setup priority to check available bandwidth and the hold priority when reserving it. Therefore, a policy with a higher setup priority can kick out another policy whose hold priority is lower (higher numerical value) than the setup priority of the first policy, forcing the preempted policy to be recalculated over a different path, or to fail if there is not enough bandwidth.
Set priority:
TD1(config-traffic-eng-policies-policy)#priority ?
  <0-7>  Setup priority (lower is better)
TD1(config-traffic-eng-policies-policy)#priority 0 ?
  <0-7>  Hold priority (lower is better)
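The admission and preemption logic described above can be sketched in a few lines of Python (illustrative only; the link structure and function names are assumptions):

# Sketch: admission uses the setup priority, preemption candidates are
# policies holding the link at a numerically higher (worse) hold priority.
def can_admit(link, bw, setup_prio):
    """Is there enough unreserved bandwidth at the setup priority?"""
    return link["unrsv_bw"][setup_prio] >= bw

def preemptable(existing_policies, setup_prio):
    """Policies that may be kicked out by a policy with this setup priority."""
    return [p for p in existing_policies if p["hold_prio"] > setup_prio]

link = {"unrsv_bw": [10_000_000_000] * 8}
print(can_admit(link, 100_000_000, 7))                    # True
print(preemptable([{"name": "low", "hold_prio": 7}], 5))  # [{'name': 'low', 'hold_prio': 7}]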
Binding SID
Binding SID, or BSID, is one of the methods to steer traffic into an SR-TE policy. When the policy headend receives a packet whose topmost MPLS label equals the BSID, that packet is sent via the SR-TE policy. Regardless of whether you use this method, a BSID is mandatory for any SR-TE policy. It must be unique per policy and is allocated from the headend Segment Routing Local Block (SRLB) range.
Configure BSID:
TD1(config-traffic-eng-policies-policy)#binding-sid ?
  <16-1048575>  Binding SID for SRTE policy
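A trivial Python sketch of the two constraints mentioned above (range taken from the CLI help, uniqueness per policy); the function and set names are made up for the example:

# Sketch: a BSID must be inside the configurable label range and unique
# per policy on the headend.
BSID_RANGE = range(16, 1048576)          # <16-1048575> from the CLI help

def validate_bsid(bsid, bsids_in_use):
    if bsid not in BSID_RANGE:
        raise ValueError(f"binding-sid {bsid} is outside the allowed range")
    if bsid in bsids_in_use:
        raise ValueError(f"binding-sid {bsid} is already used by another policy")
    bsids_in_use.add(bsid)

in_use = {15101, 15102}
validate_bsid(15003, in_use)             # accepted and recorded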
ENLP
Explicit Null Label Policy (ENLP) instructs the SR-TE headend whether or not to impose explicit null for different types of traffic. The default behaviour is that IPv6 traffic sent over an IPv4 policy (via color-only steering) will have IPv6 exp-null (label 2) imposed, and IPv4 traffic sent over an IPv6 policy will have IPv4 exp-null (label 0) imposed. It is possible to change this behaviour to impose exp-null for IPv4 prefixes, IPv6 prefixes or both, or to never impose it.
See also: https://routingcraft.net/explicit-null-in-segment-routing/
Configure ENLP:
TD1(config-traffic-eng-policies-policy)#enlp ?
  ipv4  Set explicit null for IPv4 prefixes
  ipv6  Set explicit null for IPv6 prefixes
  both  Set explicit null for all prefixes
  none  Do not set explicit null
Note: if the policy endpoint has been found to be an Egress peer, ENLP will always be set to "none", regardless of configuration. This is because traffic will go outside of the MPLS network, and having it labeled can lead to the egress peer dropping packets.
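Putting the defaults and the egress-peer exception together, here is a small Python decision-function sketch (assuming the default behaviour as described above; not TD's implementation):

# Sketch: pick an explicit-null label based on ENLP config, payload address
# family, policy address family, and the egress-peer exception.
IPV4_EXPLICIT_NULL = 0
IPV6_EXPLICIT_NULL = 2

def explicit_null_label(payload_af, policy_af, enlp, endpoint_is_egress_peer):
    if endpoint_is_egress_peer or enlp == "none":
        return None                                  # never impose explicit null
    if enlp in ("ipv4", "both") and payload_af == 4:
        return IPV4_EXPLICIT_NULL
    if enlp in ("ipv6", "both") and payload_af == 6:
        return IPV6_EXPLICIT_NULL
    if enlp is None:                                 # default: cross-AF steering only
        if payload_af == 6 and policy_af == 4:
            return IPV6_EXPLICIT_NULL
        if payload_af == 4 and policy_af == 6:
            return IPV4_EXPLICIT_NULL
    return None

print(explicit_null_label(6, 4, None, False))        # 2 (IPv6 explicit null)
print(explicit_null_label(6, 4, None, True))         # None (egress peer)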
Endpoint-override
It is possible to configure a candidate path to override the policy endpoint when this candidate path is active. The color (for SR-TE) or service-loopback (for LU) remains the same. The logic is that each policy corresponds to a service which should be treated according to the specified intent, but the destination to which traffic is forwarded can change based on available network resources.
Configure endpoint-override:
TD1(config-traffic-eng-policies-policy-cpath)#endpoint-override ?
  <ipv4|ipv6>  Egress node or egress peer IP address
For more details and use-cases see: https://vegvisir.ie/2024/06/20/traffic-dictator-v1-1-release-notes/#New_feature_endpoint-override
Anycast SID in policies
Another interesting thing about the previous policy (R11_R1_BLUE_OR_ORANGE_IPV4) is that it uses anycast SID.
The constraint requests either blue or orange links. Refer to the diagram:
The path happens to be ECMP across 2 paths, but we also need to exclude the yellow links, so some SIDs are required.
Traffic Dictator determines that it needs ECMP across 2 segment lists, and that R9 and R10 share anycast SID 910 while R2 and R5 share anycast SID 25. Therefore it uses the anycast SIDs:
TD1#show traffic-eng policy R11_R1_BLUE_OR_ORANGE_IPV4 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R11_R1_BLUE_OR_ORANGE_IPV4"
Valid config, Active
Headend 11.11.11.11, topology-id 101, Maximum SID depth: 10
Endpoint 1.1.1.1, color 3
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0001.0001.0001.00
Setup priority: 5, Hold priority: 5
Reserved bandwidth bps: 100000000
Install indirect, protocol srte, peer-group R1
Install peer list:
2001:192::101
Policy index: 0, SR-TE distinguisher: 16777216
Binding-SID: 15003
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: igp
Path-option: dynamic
Affinity-set: BLUE_OR_ORANGE
Constraint: include-any
List: ['BLUE', 'ORANGE']
Value: 0x5
This path is currently active
Calculation results:
Aggregate metric: 40
Topologies: ['101']
Segment lists:
[16910, 16025, 16001]
Policy statistics:
Last config update: 2024-09-05 16:48:27,559
Last recalculation: 2024-09-05 16:50:20.468
Policy calculation took 0 miliseconds
If there were no anycast SIDs in this topology, the policy would use 2 segment lists, which is not optimal as that would consume more TCAM entries on routers.
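The idea can be sketched as follows (illustrative Python; the anycast SID table and function are assumptions for the example): if the set of ECMP nodes at a hop is fully covered by an anycast SID, one segment list with that SID can replace several per-node segment lists.

# Sketch: find an anycast SID whose owner set covers all ECMP nodes at a hop.
anycast_sids = {16910: {"R9", "R10"}, 16025: {"R2", "R5"}}

def covering_anycast_sid(ecmp_nodes, sids=anycast_sids):
    for sid, owners in sids.items():
        if set(ecmp_nodes) <= owners:
            return sid
    return None                      # no covering SID -> multiple segment lists

print(covering_anycast_sid({"R2", "R5"}))   # 16025
print(covering_anycast_sid({"R2", "R7"}))   # None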
Multiple segment lists
While this is suboptimal and should be avoided, sometimes multiple segment lists are required to steer traffic over a desired path.
TD1#show run | sec R1_R11_YELLOW_OR_ORANGE_IPV6
traffic-eng policies
!
policy R1_R11_YELLOW_OR_ORANGE_IPV6
headend 1.1.1.1 topology-id 101
endpoint 2002::11 color 102
binding-sid 15102
priority 6 6
install direct srte 2001:192::101
!
candidate-path preference 100
metric igp
affinity-set YELLOW_OR_ORANGE
bandwidth 100 mbps
traffic-eng affinities
affinity-map
name BLUE bit-position 0
name YELLOW bit-position 1
name ORANGE bit-position 2
!
affinity-set YELLOW_OR_ORANGE
constraint include-any
name ORANGE
name YELLOW
See the diagram:
R1, R3 and R4 are on a broadcast segment with an IS-IS pseudonode. We need to include the 2 paths via R3 and R4, plus the orange path via R2. Such a peculiar requirement results in 3 segment lists being used:
TD1#show traffic-eng policy R1_R11_YELLOW_OR_ORANGE_IPV6 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R11_YELLOW_OR_ORANGE_IPV6"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
Endpoint 2002::11, color 102
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0011.0011.0011.00
Setup priority: 6, Hold priority: 6
Reserved bandwidth bps: 100000000
Install direct, protocol srte, peer 2001:192::101
Policy index: 17, SR-TE distinguisher: 16777233
Binding-SID: 15102
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: igp
Path-option: dynamic
Affinity-set: YELLOW_OR_ORANGE
Constraint: include-any
List: ['ORANGE', 'YELLOW']
Value: 0x6
This path is currently active
Calculation results:
Aggregate metric: 40
Topologies: ['101']
Segment lists:
[24011, 17007, 17011]
[17002, 17011]
[24015, 17011]
Policy statistics:
Last config update: 2024-09-05 16:48:27,661
Last recalculation: 2024-09-05 16:50:20.469
Policy calculation took 0 miliseconds
1. [24011, 17007, 17011]: R1 adj SID to R4, R7 node SID, R11 node SID
2. [17002, 17011]: R2 node SID, R11 node SID
3. [24015, 17011]: R1 adj SID to R3, R11 node SID
The same segment lists are advertised via BGP SR-TE:
TD1#show bgp ipv6 srte [192][16777233][102][2002::11] detail
BGP-SRTE routing table information
Router identifier 111.111.111.111, local AS number 65001
BGP routing table entry for [192][16777233][102][2002::11]
Paths: 1 available, best #1
Last modified: September 05, 2024 16:50:20
Local, inserted
- from - (0.0.0.0)
Origin igp, metric 0, localpref -, weight 0, valid, -, best
Endpoint 2002::11, Color 102, Distinguisher 16777233
Tunnel encapsulation attribute: SR Policy
Policy name: R1_R11_YELLOW_OR_ORANGE_IPV6
Preference: 100
Binding SID: 15102
Segment lists:
[24011, 17007, 17011], Weight 1
[17002, 17011], Weight 1
[24015, 17011], Weight 1
While in this particular case multiple segment lists are necessary to meet the policy constraints, the network designer should avoid them and rely on anycast SIDs whenever possible.
Explicit path
To steer traffic via specific links or nodes, or to exclude links or nodes, you can use an explicit path.
traffic-eng policies
!
policy R1_R9_EP_STRICT_IPV4
headend 1.1.1.1 topology-id 101
endpoint 9.9.9.9 color 8
binding-sid 15008
priority 4 4
install direct srte 2001:192::101
!
candidate-path preference 100
explicit-path R2_R6_R9_STRICT
bandwidth 100 mbps
Explicit path config:
traffic-eng explicit-paths
!
explicit-path R2_R6_R9_STRICT
index 10 strict 10.100.1.2
index 20 strict 10.100.4.6
index 30 strict 10.100.10.9
This policy does not use affinities but forces traffic via R2-R6-R9, as shown on the diagram below:
TD1#show traffic-eng policy R1_R9_EP_STRICT_IPV4 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R9_EP_STRICT_IPV4"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
Endpoint 9.9.9.9, color 8
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0009.0009.0009.00
Setup priority: 4, Hold priority: 4
Reserved bandwidth bps: 100000000
Install direct, protocol srte, peer 2001:192::101
Policy index: 18, SR-TE distinguisher: 16777234
Binding-SID: 15008
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: igp
Path-option: explicit
Explicit path name: R2_R6_R9_STRICT
This path is currently active
Calculation results:
Aggregate metric: 30
Topologies: ['101']
Segment lists:
[16002, 16009]
Policy statistics:
Last config update: 2024-06-09 13:21:29,818
Last recalculation: 2024-06-09 16:24:41.371
Policy calculation took 0 miliseconds
Note that an explicit path is not the same as configuring an explicit segment list, as is possible on some routers. Traffic Dictator will still check the IGP topology and use only the required segments. In this example, the R6 SID is not used because it is not required: the shortest path from R2 to R9 satisfies the policy constraints.
It is possible to use various constraints in an explicit path. For example:
traffic-eng policies
!
policy R1_R11_EXCLUDE_SOME_IPV6
headend 1.1.1.1 topology-id 101
endpoint 2002::11 color 110
binding-sid 15110
priority 4 4
install direct srte 2001:192::101
!
candidate-path preference 100
explicit-path EXCLUDE_SOME_IPV6
bandwidth 100 mbps
traffic-eng explicit-paths
!
explicit-path EXCLUDE_SOME_IPV6
index 10 exclude 2002::4
index 20 loose 2002::7
index 30 exclude 2001:100:13::11
index 40 exclude 2001:100:14::11
This instructs the policy to do the following:
1. Get to R7 (2002::7) excluding R4 (2002::4)
2. Get to R11 (2002::11), excluding the R9-R11 link (2001:100:13::11) and one of the R10-R11 links (2001:100:14::11).
See the topology diagram:
Verify:
TD1#show traffic-eng policy R1_R11_EXCLUDE_SOME_IPV6 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R11_EXCLUDE_SOME_IPV6"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
Endpoint 2002::11, color 110
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0011.0011.0011.00
Setup priority: 4, Hold priority: 4
Reserved bandwidth bps: 100000000
Install direct, protocol srte, peer 2001:192::101
Policy index: 15, SR-TE distinguisher: 16777231
Binding-SID: 15110
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: igp
Path-option: explicit
Explicit path name: EXCLUDE_SOME_IPV6
This path is currently active
Calculation results:
Aggregate metric: 50
Topologies: ['101']
Segment lists:
[17005, 17007, 17010, 24015]
Policy statistics:
Last config update: 2024-06-09 13:21:29,817
Last recalculation: 2024-06-09 16:25:43.528
Policy calculation took 0 miliseconds
The resulting segment list is: R5 node SID, R7 node SID, R10 node SID, and the R10-R11 adj SID for the blue link.
Anycast IP in explicit path
A loose index in an explicit path can also refer to an anycast IP.
traffic-eng policies
!
policy R1_R11_EP_LOOSE_IPV4
headend 1.1.1.1 topology-id 101
endpoint 11.11.11.11 color 9
binding-sid 15009
priority 4 4
install direct srte 2001:192::101
!
candidate-path preference 100
explicit-path R25_LOOSE
bandwidth 100 mbps
traffic-eng explicit-paths
!
explicit-path R25_LOOSE
index 10 loose 202.0.2.5
202.0.2.5 is an anycast IP shared between R2 and R5. Traffic Dictator will resolve it to the following path:
Note how this is different from just using blue or orange links, because the path after R5 will also split into ECMP to reach R11.
Policy result:
TD1#show traffic-eng policy R1_R11_EP_LOOSE_IPV4 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R11_EP_LOOSE_IPV4"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 10
Endpoint 11.11.11.11, color 9
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0011.0011.0011.00
Setup priority: 4, Hold priority: 4
Reserved bandwidth bps: 100000000
Install direct, protocol srte, peer 2001:192::101
Policy index: 12, SR-TE distinguisher: 16777228
Binding-SID: 15009
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: igp
Path-option: explicit
Explicit path name: R25_LOOSE
This path is currently active
Calculation results:
Aggregate metric: 40
Topologies: ['101']
Segment lists:
[16025, 16011]
Policy statistics:
Last config update: 2024-06-09 13:21:29,817
Last recalculation: 2024-06-09 16:25:43.525
Policy calculation took 2 miliseconds
While in this particular example anycast SID 25 is used in the segment list, there is no strict relation between an anycast IP in the explicit path and an anycast SID. The explicit path IP is used to determine the path, and the segment list is calculated independently to steer traffic along that path.
For instance, if I add the anycast IP shared between R9 and R10 as another loose hop in the same explicit path, the segment list will not change, as the path is still the same and there is no need for an extra SID:
TD1#conf
TD1(config)#traffic-eng explicit-paths
TD1(config-traffic-eng-explicit-paths)#explicit-path R25_LOOSE
TD1(config-traffic-eng-explicit-paths-ep)#index 20 loose 202.0.9.10
Updated policy:
TD1#show traffic-eng policy R1_R11_EP_LOOSE_IPV4 detail | grep -A1 Segment
Segment lists:
[16025, 16011]
There are 2 rules regarding explicit path indexes (see the sketch after this list):
1. A strict index cannot go after an exclude index. This is because "exclude" means we run SPF to the next loose index or the endpoint, excluding the specified links or nodes. A strict index after an exclude would result in undefined behaviour.
2. A strict index cannot go after a loose index which resolves to an anycast IP. Similarly, that would result in undefined behaviour, because to process a strict index TD needs to know exactly which node is the current one.
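A simplified validation sketch of these two rules in Python (it only checks the immediately preceding index, which is a simplification of the behaviour described above; the representation of an explicit path as a list of tuples is assumed):

# Sketch: reject a strict index placed after an exclude index, or after a
# loose index that resolves to an anycast IP.
def validate_explicit_path(indexes, is_anycast):
    """indexes: list of ('strict' | 'loose' | 'exclude', address) in order."""
    prev_kind, prev_addr = None, None
    for kind, addr in indexes:
        if kind == "strict":
            if prev_kind == "exclude":
                raise ValueError("strict index cannot follow an exclude index")
            if prev_kind == "loose" and is_anycast(prev_addr):
                raise ValueError("strict index cannot follow a loose anycast index")
        prev_kind, prev_addr = kind, addr

path = [("exclude", "2002::4"), ("loose", "2002::7"),
        ("exclude", "2001:100:13::11"), ("exclude", "2001:100:14::11")]
validate_explicit_path(path, is_anycast=lambda addr: False)   # no error raised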
Disjoint group (path diversity)
Starting from 1.4, it is possible to configure disjoint-groups under a candidate-path. A disjoint-group ensures that multiple policies in the same group never use the same links. A common use case is having 2 redundant traffic flows with a guarantee that if any link in the network fails, one of the traffic flows will be unaffected.
Config model:
traffic-eng policies
policy [name]
!
candidate-path preference <0-4294967295>
disjoint-group <1-65535>
Example:
In this topology, policies “R1_R5” and “R1_R5_disjoint” are configured with the same disjoint-group. Config:
traffic-eng policies
!
policy R1_R5
headend 1.1.1.1 topology-id 101
endpoint 5.5.5.5 color 101
binding-sid 966668
priority 7 7
install direct srte 192.168.123.101
!
candidate-path preference 100
metric te
disjoint-group 100
bandwidth 1 gbps
!
policy R1_R5_disjoint
headend 1.1.1.1 topology-id 101
endpoint 5.5.5.5 color 102
binding-sid 966670
priority 7 7
install direct srte 192.168.123.101
!
candidate-path preference 100
metric te
disjoint-group 100
bandwidth 1 gbps
Now let’s check the detailed policy output:
knecht#show traffic-eng policy R1_R5 detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R5"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 6
Endpoint 5.5.5.5, color 101
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0005.0005.0005.00
Setup priority: 7, Hold priority: 7
Reserved bandwidth bps: 1000000000
Install direct, protocol srte, peer 192.168.123.101
Policy index: 5, SR-TE distinguisher: 16777221
Binding-SID: 966668
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: te
Disjoint-group: 100
Path-option: dynamic
This path is currently active
Calculation results:
Aggregate metric: 1000
Topologies: ['101']
Segment lists:
[900003, 900005]
Policy statistics:
Last config update: 2025-03-21 08:51:57,634
Last recalculation: 2025-03-21 08:51:58.285
Policy calculation took 0 miliseconds
knecht#show traffic-eng policy R1_R5_disjoint detail
Detailed traffic-eng policy information:
Traffic engineering policy "R1_R5_disjoint"
Valid config, Active
Headend 1.1.1.1, topology-id 101, Maximum SID depth: 6
Endpoint 5.5.5.5, color 102
Endpoint type: Node, Topology-id: 101, Protocol: isis, Router-id: 0005.0005.0005.00
Setup priority: 7, Hold priority: 7
Reserved bandwidth bps: 1000000000
Install direct, protocol srte, peer 192.168.123.101
Policy index: 6, SR-TE distinguisher: 16777222
Binding-SID: 966670
Candidate paths:
Candidate-path preference 100
Path config valid
Metric: te
Disjoint-group: 100
Path-option: dynamic
This path is currently active
Calculation results:
Aggregate metric: 1000
Topologies: ['101']
Segment lists:
[900002, 900005]
Policy statistics:
Last config update: 2025-03-21 08:51:57,634
Last recalculation: 2025-03-21 08:51:58.285
Policy calculation took 0 miliseconds
Note that these 2 policies have different segment lists, even though the candidate-path options are the same. This is because TD keeps track of the links used by policies in a disjoint-group, and when calculating another policy with the same disjoint-group, those links are excluded when running CSPF.
Disjoint-groups are supported with SR-TE and RSVP-TE policies; it is also possible to put SR-TE and RSVP-TE policies in the same disjoint-group.
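The bookkeeping described above can be sketched as follows (illustrative Python with an assumed link and CSPF representation, not TD code):

# Sketch: links used by already-calculated policies of a disjoint-group are
# excluded from the link set handed to CSPF for the next policy in the group.
from collections import defaultdict

used_links_by_group = defaultdict(set)      # disjoint-group id -> used link ids

def calculate_disjoint_policy(policy_name, group, topology_links, cspf):
    allowed = [l for l in topology_links if l not in used_links_by_group[group]]
    path_links = cspf(allowed)               # stand-in for the real CSPF run
    used_links_by_group[group].update(path_links)
    return path_links

links = ["R1-R2", "R1-R3", "R2-R5", "R3-R5"]
fake_cspf = lambda allowed: allowed[:2]      # placeholder path computation
print(calculate_disjoint_policy("R1_R5", 100, links, fake_cspf))           # ['R1-R2', 'R1-R3']
print(calculate_disjoint_policy("R1_R5_disjoint", 100, links, fake_cspf))  # ['R2-R5', 'R3-R5']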
Limitations
1. When calculating a policy with a disjoint-group, TD doesn’t use ECMP. This is a deliberate design decision: if the path has ECMP links, different disjoint policies can simply use different links.
2. Disjoint-groups are not supported with the explicit path option.
PCEP as an installation protocol
BGP SR-TE is the recommended protocol for advertising SR-TE policies to the network.
However, starting from 1.3, PCEP is also supported. The advantage of PCEP is that it is stateful, i.e. the PCC (router) reports policy status to the PCE (controller).
To use PCEP to install policies, change the install protocol from “srte” to “pcep”:
router pcep
!
neighbor 192.168.0.101
!
traffic-eng policies
!
policy R1_R9_EP_STRICT_IPV4
headend 1.1.1.1 topology-id 101
endpoint 9.9.9.9 color 109
binding-sid 15008
priority 4 4
install direct pcep 192.168.0.101
!
candidate-path preference 100
explicit-path R2_R6_R9_STRICT
bandwidth 100 mbps
Refer to the PCEP configuration section for more details.
Note that only one segment list is supported per PCEP policy due to protocol limitations. Also, in order for TD to advertise the SR-TE color via PCEP, the sr-policy capability must be advertised in Open messages.
Bandwidth reservations
When a policy has a bandwidth constraint, Traffic Dictator performs bandwidth reservations. It uses the configured setup priority to check the bandwidth constraint, and the hold priority to check whether the policy can be kicked out to reserve bandwidth for a higher-priority policy. Priority 0 is the highest and 7 is the lowest. Also, the setup priority cannot be higher than the hold priority for the same policy.
You can check bandwidth reservations using the “show topology” command. The output is very long; a relevant excerpt:
TD1#show topology
---
ISIS links
0004.0004.0004.00 [E][L2][I101][N[c65002][b0][s0004.0004.0004.00]][R[c65002][b0][s0007.0007.0007.00]][L[i10.100.6.4][n10.100.6.7][i2001:100:6::4][n2001:100:6::7]]
IGP metric: 10
TE metric: 500
Affinity: 0x2
Max-bw: 10000000000
Unrsv-bw priority 0: 10000000000
Unrsv-bw priority 1: 10000000000
Unrsv-bw priority 2: 10000000000
Unrsv-bw priority 3: 10000000000
Unrsv-bw priority 4: 10000000000
Unrsv-bw priority 5: 10000000000
Unrsv-bw priority 6: 9800000000
Unrsv-bw priority 7: 9600000000
Policies priority 0: []
Policies priority 1: []
Policies priority 2: []
Policies priority 3: []
Policies priority 4: []
Policies priority 5: []
Policies priority 6: ['R1_R11_YELLOW_OR_ORANGE_IPV4', 'R1_R11_YELLOW_OR_ORANGE_IPV6']
Policies priority 7: ['R1_ISP4_ANY_COLOR_IPV6', 'R1_ISP4_ANY_COLOR_IPV4']
In this example, each policy reserves 100 Mbps, but policies at priority 6 also consume bandwidth at lower priorities, which is why priority 7 has less bandwidth available.
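The accounting in the excerpt can be reproduced with a short Python sketch (the link dictionary layout is an assumption; the reservation rule follows the output above: reserving at hold priority p reduces unreserved bandwidth at p and at all numerically higher priorities):

# Sketch: reserve bandwidth at a hold priority and update per-priority
# unreserved bandwidth the same way the "show topology" excerpt shows.
def reserve(link, policy_name, bw, hold_prio):
    for p in range(hold_prio, 8):
        link["unrsv_bw"][p] -= bw
    link["policies"][hold_prio].append(policy_name)

link = {"max_bw": 10_000_000_000,
        "unrsv_bw": [10_000_000_000] * 8,
        "policies": {p: [] for p in range(8)}}
for name in ("R1_R11_YELLOW_OR_ORANGE_IPV4", "R1_R11_YELLOW_OR_ORANGE_IPV6"):
    reserve(link, name, 100_000_000, 6)
for name in ("R1_ISP4_ANY_COLOR_IPV6", "R1_ISP4_ANY_COLOR_IPV4"):
    reserve(link, name, 100_000_000, 7)
print(link["unrsv_bw"][6], link["unrsv_bw"][7])    # 9800000000 9600000000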
A big advantage of Segment Routing over RSVP-TE is the support for ECMP and anycast. One caveat: as of the current version, Traffic Dictator reserves 100% of the requested bandwidth on every link in an ECMP set, even though in reality traffic will be balanced across them.