Topic
The purpose of this article is to explain how the licensed throughput rate is enforced on a BIG-IP Virtual Edition (VE) system.
Description
The BIG-IP VE product license determines the maximum allowed throughput rate. When calculating throughput, the BIG-IP VE system accounts for packets ingressing and egressing the system separately. Additionally, the licensed throughput rate for ingress and egress is enforced separately. For example, if you have a 200 Mbps license, ingress into the Traffic Management Microkernel (TMM) has a limit of 200 Mbps and egress from TMM also has a limit of 200 Mbps.
Ingress packets to be processed by the system are queued in a buffer that is polled frequently. The traffic shaper admits ingress traffic to the rest of the system from this buffer at the licensed rate. This is sometimes enough to shape the traffic with no further action. If bursts of traffic arrive, the buffer begins to fill, and at a threshold the system begins to drop some packets to signal to the sending TCP stacks that congestion is occurring. Normally, drops are very light and TCP retransmissions replace the lost packets. Light drops are a normal part of the traffic shaper mechanism and are more common with lower licensed rates and higher numbers of TMM threads. Drops are tied to buffer usage, not to any measurement or logging of ingress traffic over a period, although there is a clear relationship between the two.
For ingress, the system is very similar in behavior to a network switch, accepting traffic over a high-bandwidth link and passing it on through a lower-bandwidth bottleneck link.
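To make the ingress behavior concrete, the following Python sketch models a polled buffer that admits bytes at a licensed rate and begins dropping packets once a fill threshold is crossed. It is a simplified illustration of the general technique, not the BIG-IP VE implementation; the class name, buffer size, threshold, and polling interval are all hypothetical.

from collections import deque

# Simplified model of an ingress shaper: a polled buffer that admits traffic at
# the licensed rate and starts dropping packets once a fill threshold is crossed.
# All names and sizes here are hypothetical and chosen for illustration only.

LICENSED_BPS = 200_000_000 // 8                       # 200 Mbps license, in bytes per second
BUFFER_LIMIT_BYTES = 256 * 1024                       # hypothetical buffer capacity
DROP_THRESHOLD_BYTES = int(BUFFER_LIMIT_BYTES * 0.8)  # hypothetical fill level where drops begin


class IngressShaperModel:
    def __init__(self):
        self.buffer = deque()        # lengths of packets waiting to be admitted
        self.buffered_bytes = 0
        self.drops = 0

    def enqueue(self, packet_len):
        """Queue an arriving packet; drop it if the buffer is past the threshold."""
        if self.buffered_bytes + packet_len > DROP_THRESHOLD_BYTES:
            self.drops += 1          # the drop signals congestion; TCP senders retransmit
            return False
        self.buffer.append(packet_len)
        self.buffered_bytes += packet_len
        return True

    def poll(self, interval_s):
        """Admit up to the licensed byte budget for one polling interval."""
        budget = int(LICENSED_BPS * interval_s)
        admitted = 0
        while self.buffer and self.buffer[0] <= budget:
            pkt = self.buffer.popleft()
            self.buffered_bytes -= pkt
            budget -= pkt
            admitted += pkt
        return admitted

During a burst, enqueue() succeeds until the threshold is reached, after which a small fraction of packets is dropped while poll() continues to drain the buffer at the licensed rate.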
Throughput for egress traffic is determined by the number of bytes sent over predefined, sliding time windows. If the number of bytes sent exceeds the limit for a given time window, the rate shaper begins limiting the egress traffic.
The rate shaper throttles throughput to the licensed rate at the packet level, rather than at the connection level. Ingress and egress packets are dropped as part of the limiting process; no per-connection management is performed.
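The egress side can be pictured as a per-window byte budget enforced packet by packet. The Python sketch below illustrates that sliding-window idea under assumed values; the one-second window, the class name, and the limits are hypothetical, and this is not the actual BIG-IP VE rate shaper.

import time
from collections import deque

# Simplified sliding-window model of egress shaping: bytes sent within the last
# WINDOW_S seconds are summed, and a packet that would push the total past the
# licensed budget for the window is dropped. Window length and names are hypothetical.

LICENSED_BPS = 200_000_000 // 8    # 200 Mbps license in bytes per second
WINDOW_S = 1.0                     # hypothetical sliding-window length


class EgressShaperModel:
    def __init__(self):
        self.sent = deque()        # (timestamp, packet_len) records inside the window
        self.window_bytes = 0
        self.drops = 0

    def try_send(self, packet_len, now=None):
        """Return True if the packet fits the window budget; otherwise drop it."""
        now = time.monotonic() if now is None else now
        # Expire records that have slid out of the window.
        while self.sent and now - self.sent[0][0] > WINDOW_S:
            _, old_len = self.sent.popleft()
            self.window_bytes -= old_len
        # Enforce the byte budget at the packet level; no per-connection state is kept.
        if self.window_bytes + packet_len > LICENSED_BPS * WINDOW_S:
            self.drops += 1
            return False
        self.sent.append((now, packet_len))
        self.window_bytes += packet_len
        return True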
Important: For BIG-IP versions prior to 12.1.0, promiscuous traffic received on a BIG-IP interface is counted when determining the throughput rate. Starting with BIG-IP 12.1.0, the throughput calculation no longer includes promiscuous traffic; for example, failover, heartbeat, mirroring, ConfigSync, and monitor traffic are no longer counted as throughput traffic. Regular BIG-IP traffic is still counted, including ARP, broadcasts, UDP, and TCP.
Note: The licensed bandwidth is dynamically distributed among TMM threads several hundred times a second. At one extreme, this allows a single connection to consume most of the bandwidth on an otherwise quiet BIG-IP VE, while bandwidth is still distributed fairly and evenly during typical busy periods. Before BIG-IP 11.6.1, each TMM thread had a fixed, equal share of the bandwidth.

Beginning in BIG-IP 11.4.0, when the throughput rate exceeds 75 percent of the maximum allowed licensed throughput rate, a notification similar to the following example is logged in the /var/log/ltm file:

01010045:5: Bandwidth utilization is 8 Mbps, exceeded 75% of Licensed 10 Mbps

Note: These log messages are based on half-second intervals, so judge them collectively rather than individually. Whether the BIG-IP VE system is congested or busy over meaningful time periods is usually better determined from the throughput graphs, which average over longer periods: 10 seconds for three-hour graphs, 30 seconds for 24-hour graphs, one minute for seven-day graphs, and 10 minutes for 30-day graphs. Bursts of traffic may register in the logs but appear increasingly lower in the graphs because they are averaged over increasingly longer periods as the data ages. It is also normal for flows to use the available bandwidth, shaped to the licensed limit, as long as the BIG-IP VE system is not congested.
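As a worked example of why the half-second log messages and the longer-period graphs can look different, the short Python calculation below averages a single logged burst over each graph's sample period. The 10 Mbps license, the 8 Mbps half-second burst, and the sample periods come from this article; the scenario itself (an otherwise idle system) is assumed for illustration.

# A half-second burst at 8 Mbps on a 10 Mbps license exceeds the 75% logging
# threshold, but the same burst averaged over each graph's sample period looks
# much smaller. The burst scenario (otherwise idle traffic) is assumed.

licensed_mbps = 10
burst_mbps = 8
burst_seconds = 0.5

print(f"Logged: {burst_mbps / licensed_mbps:.0%} of the license over a half-second interval")

burst_megabits = burst_mbps * burst_seconds
for sample_period_s, graph in [(10, "3-hour graph"), (30, "24-hour graph"),
                               (60, "7-day graph"), (600, "30-day graph")]:
    avg_mbps = burst_megabits / sample_period_s   # burst averaged over the graph's sample period
    print(f"{graph:13s} ({sample_period_s:>3} s samples): about {avg_mbps:.3f} Mbps")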
Recommendations
Determine licensed throughput

To determine the maximum allowed throughput rate for a BIG-IP VE system, perform the following procedure:

Impact of procedure: Performing the following procedure should not have a negative impact on your system.

View dropped ingress/egress packets

To view the number of ingress or egress packets that have been dropped, perform the following procedure:

Impact of procedure: Performing the following procedure should not have a negative impact on your system.

The output of the command appears similar to the following example:

Note: The following output is from an idle system that has not experienced any ingress or egress packet drops.

# tmctl -i -d blade tmm/if_shaper -w 180
shaper_tid ingress_max ingress_mbo ingress_red ingress_drops ingress_shaped_bytes ingress_unshaped_bytes egress_shaped_bytes egress_unshaped_bytes egress_mbo egress_drops
---------- ----------- ----------- ----------- ------------- -------------------- ---------------------- ------------------- --------------------- ---------- ------------
         1           0           0           0             0                    0                      0                   0                     0          0            0

Note: It is typically most useful to calculate a packet drop rate between two samples. This table does not record the number of packets through the system (though the tmm_stat table does), but a quick way to approximate a drop rate is to divide ingress_shaped_bytes by 1500 to get a probably low estimate of the packets that have passed through the shaper, and then express ingress_drops divided by that packet count as a percentage. Drops greater than about 1% are usually considered high, although this approximation is not suited to every situation. Unshaped traffic, such as a ConfigSync operation, does not count against your license.

Note: A correctly sized system may have a non-zero number of drops. A light rate of packet drops is a normal part of traffic shaper operation and is not detrimental to TCP traffic. Heavy dropping, particularly when coupled with obviously high use in the throughput graphs, likely indicates a need for increased bandwidth. It is not typical, but if, for example, traffic arrives in a substantially synchronized way every minute at the top of the minute, you may need to over-provision bandwidth to meet latency and drop requirements at those times.
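The following Python sketch works through the drop-rate approximation described in the note above, using two hypothetical samples of the ingress_shaped_bytes and ingress_drops counters (for example, from two runs of the tmctl command some minutes apart). The counter values and variable names are assumptions for illustration; only the divide-by-1500 estimate and the roughly 1% guideline come from this article.

# Approximate ingress packet drop rate between two samples of the if_shaper
# counters. The sample values below are hypothetical; the 1500-byte divisor
# (a deliberately low packet estimate) and the ~1% guideline follow the note above.

BYTES_PER_PACKET_ESTIMATE = 1500

# Hypothetical counter values from a first and a second tmctl sample.
sample1 = {"ingress_shaped_bytes": 1_200_000_000, "ingress_drops": 900}
sample2 = {"ingress_shaped_bytes": 1_950_000_000, "ingress_drops": 4_100}

shaped_bytes = sample2["ingress_shaped_bytes"] - sample1["ingress_shaped_bytes"]
drops = sample2["ingress_drops"] - sample1["ingress_drops"]

approx_packets = shaped_bytes / BYTES_PER_PACKET_ESTIMATE    # probably an undercount
drop_rate_pct = drops / approx_packets * 100

print(f"~{approx_packets:,.0f} packets through the shaper, {drops} drops (~{drop_rate_pct:.2f}%)")
if drop_rate_pct > 1:
    print("Drop rate is above the ~1% rule of thumb; heavier dropping may warrant more bandwidth.")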
Supplemental Information