Traffic Dictator v1.1 Release Notes

Summary

Traffic Dictator version 1.1 was released on 20.06.2024. This article describes the changes in the new version.

New feature: endpoint-override

Each candidate path can now change the policy endpoint, while the color (for SR-TE) or service loopback (for LU) remains the same. The logic is that each policy corresponds to a service that should be treated according to the specified intent, but the destination traffic is forwarded to can change based on available network resources.

Configuration example:

traffic-eng policies
   !
   policy R1_ASBR1_PEER2_ORANGE_30GBPS
      headend 1.1.1.1 topology-id 101
      endpoint 10.100.13.102 color 200
      binding-sid 967002
      priority 7 7
      install direct srte 192.168.0.101
      !
      candidate-path preference 200
         affinity-set INCLUDE_ORANGE
         bandwidth 30 gbps
      !
      candidate-path preference 100
         endpoint-override 10.100.12.102
         affinity-set INCLUDE_ORANGE
         bandwidth 30 gbps

This policy has endpoint 10.100.13.102, but if candidate path 200 fails, the next candidate path (preference 100) reroutes traffic to another endpoint, 10.100.12.102.

When an override is active, “show traffic-eng policy” marks the endpoint with an asterisk (*):

TD1#show traffic-eng policy R1_ASBR1_PEER2_ORANGE_30GBPS
Traffic-eng policy information
Status codes: * valid, > active, e - EPE only, s - admin down, m - multi-topology
Endpoint codes: * active override
       Policy name                             Headend             Endpoint            Color/Service loopback   Protocol             Reserved bandwidth        Priority   Status/Reason
   *>  R1_ASBR1_PEER2_ORANGE_30GBPS            1.1.1.1             *10.100.12.102      200                      SR-TE/direct         30000000000               7/7        Active

Detailed output with endpoint-override active, after the primary path has failed:

TD1#show traffic-eng policy R1_ASBR1_PEER2_ORANGE_30GBPS detail 
Detailed traffic-eng policy information:

Traffic engineering policy "R1_ASBR1_PEER2_ORANGE_30GBPS"

    Valid config, Active
    Headend 1.1.1.1, topology-id 101, Maximum SID depth: 6
    Endpoint 10.100.13.102, color 200
        Active endpoint override: 10.100.12.102
        Endpoint type: Egress peer, Topology-id: 101, Protocol: isis, Router-id: 0008.0008.0008.00

    Setup priority: 7, Hold priority: 7
    Reserved bandwidth bps: 30000000000
    Install direct, protocol srte, peer 192.168.0.101
    Binding-SID: 967002
    ENLP not configured, ENLP active: "none", *Warning: ENLP set to "none" because this is an EPE policy

    Candidate paths:
        Candidate-path preference 200
            Path config valid
            Metric: igp
            Path-option: dynamic
            Affinity-set: INCLUDE_ORANGE
                Constraint: include-all
                List: ['ORANGE']
                Value: 0x4
            Path failed, reason: Suitable endpoint not found in topology 101
        Candidate-path preference 100
            Path config valid
            Endpoint override set: 10.100.12.102
            Metric: igp
            Path-option: dynamic
            Affinity-set: INCLUDE_ORANGE
                Constraint: include-all
                List: ['ORANGE']
                Value: 0x4
            This path is currently active

    Calculation results:
        Aggregate metric: 110
        Topologies: ['101']
        Segment lists:
            [900008, 100000]

    Policy statistics:
        Last config update: 2024-06-20 10:21:03,698
        Last recalculation: 2024-06-20 10:51:41.119
        Policy calculation took 0 miliseconds

Practical applications for endpoint-override

This feature can be useful in Egress Peer Engineering when you want some policies to reroute to another exit point if the primary exit no longer has enough bandwidth after a link failure. For example:

Assume all links have 100 Gbps of capacity and each policy reserves 30 Gbps. If the ASBR2 link to AS200 fails, some policies can be rerouted over the remaining blue link, but there is not enough bandwidth to accommodate them all, so some policies should be rerouted via the IXP or IP transit.

This can easily be achieved with null-endpoint policies without endpoint-override, but if the peering partner advertises different sets of prefixes over different private peering sessions, a null endpoint can cause routing loops. A safer way to achieve the goal is therefore to configure endpoint-override so that a policy can reroute to another endpoint, as sketched below. Add-path and color-only steering can help map BGP routes to the respective SR-TE policies.
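A rough sketch of such a policy, following the syntax shown above (the policy name, the addresses and the IXP peer are hypothetical, and the binding-sid/install statements are omitted for brevity): the primary candidate path keeps the default endpoint towards ASBR2, while the fallback overrides the endpoint to the IXP peer.

traffic-eng policies
   !
   policy R1_ASBR2_PEER_BLUE_30GBPS
      headend 1.1.1.1 topology-id 101
      endpoint 10.100.23.102 color 300
      priority 7 7
      !
      candidate-path preference 200
         bandwidth 30 gbps
      !
      candidate-path preference 100
         endpoint-override 10.100.31.103
         bandwidth 30 gbps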

A regular policy with a node endpoint can also reroute to an egress peer, or vice versa. For example:

In this example, dark fibre connections are preferred for traffic between Site 1 and Site 2, but both sites also announce their prefixes to the Internet, so they can communicate via IP transit (although this path is less preferred). If one of the dark fibre links (blue) fails and the remaining link doesn’t have enough bandwidth for all traffic, some traffic can be rerouted via IP transit, as in the sketch below.
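A similar hedged sketch (again with hypothetical names and addresses): the primary candidate path carries traffic to the Site 2 node endpoint over the dark fibre, while the fallback overrides the endpoint to the IP transit egress peer.

traffic-eng policies
   !
   policy SITE1_SITE2_DARK_FIBRE_20GBPS
      headend 1.1.1.1 topology-id 101
      endpoint 2.2.2.2 color 400
      priority 7 7
      !
      candidate-path preference 200
         affinity-set INCLUDE_DARK_FIBRE
         bandwidth 20 gbps
      !
      candidate-path preference 100
         endpoint-override 10.200.1.1
         bandwidth 20 gbps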

Bug fixes and improvements

1. Fixed the bug where link bandwidth was displayed inaccurately for links with bandwidth of 25 Gbps and higher. This is not a TD issue per se, but a consequence of the imprecise IEEE 754 encoding: RFC 3630 / RFC 5305 / RFC 5329 encode link bandwidth as a 32-bit float, and when those RFCs were written, nobody imagined bandwidths of 25 Gbps or more, so many implementations now display such values inaccurately. Traffic Dictator now rounds the received bandwidth up to the nearest 1 Gbps when the link bandwidth is above 10 Gbps.
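To see the encoding problem in isolation, here is a small Python sketch (my own illustration, not TD code) that packs a bandwidth value into the 32-bit IEEE 754 float used by the TE extensions, which carry bytes per second on the wire, and decodes it back:

import struct

def te_bandwidth_roundtrip(bps):
    """Encode bandwidth the way RFC 3630 / RFC 5305 TE attributes do
    (32-bit IEEE 754 float, bytes per second), then decode it back."""
    encoded = struct.pack('!f', bps / 8.0)    # value as sent on the wire
    decoded = struct.unpack('!f', encoded)[0]
    return decoded * 8

for gbps in (10, 25, 40, 100):
    bps = gbps * 10**9
    got = te_bandwidth_roundtrip(bps)
    print(f"{gbps:>3} Gbps -> {got:.0f} bps (error {bps - got:+.0f} bps)")

10 and 40 Gbps happen to survive the round trip exactly, but 25 Gbps comes back as 24999999488 bps and 100 Gbps as 99999997952 bps: the 24-bit mantissa of a single-precision float simply cannot represent these values.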

Note: this fix can potentially cause a problem when the following two conditions occur at the same time: (1) SR-TE and RSVP-TE are used in parallel and both make bandwidth reservations, so Traffic Dictator receives links with partially reserved bandwidth; and (2) RSVP-TE reserves less than 1 Gbps of bandwidth on links faster than 10 Gbps. I think such a situation is extremely unlikely, so the trade-off is worth it.
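To make the corner case concrete, here is a hypothetical Python approximation of the rounding (an assumption about the fix’s behaviour, not TD’s actual code), showing how a sub-1-Gbps RSVP-TE reservation on a fast link would be erased:

import math

GBPS = 10**9

def round_received_bw(bps):
    # Assumed behaviour of the v1.1 fix: links above 10 Gbps are
    # rounded up to the next whole gigabit.
    if bps > 10 * GBPS:
        return math.ceil(bps / GBPS) * GBPS
    return bps

# 25 Gbps received as 24999999488 bps is restored correctly:
print(round_received_bw(24_999_999_488))  # 25000000000
# But a 40 Gbps link with 0.5 Gbps reserved by RSVP-TE advertises
# 39.5 Gbps unreserved, and rounding snaps it back to 40 Gbps:
print(round_received_bw(39_500_000_000))  # 40000000000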

2. Fixed the bug where, if the operator deleted all SR-TE policies with the command “no traffic-eng policies”, the Policy Engine was not correctly informed about the deletion and kept calculating some of the deleted policies.

3. Fixed the bug where deleting a single egress peer affinity under “traffic-eng nodes” removed all affinities for that peer. It is now possible to add and delete affinities one by one.

4. Fixed various CLI display issues (mostly cosmetic).

Download

You can download the new version of Traffic Dictator from the Downloads page.
