As customers migrate to network fabrics based on Virtual Extensible LAN/Ethernet Virtual Private Network (VXLAN/EVPN) technology, questions about the implications for application performance, Quality of Service (QoS) mechanisms, and congestion avoidance often arise. This blog post addresses some of the common areas of confusion and concern, and touches on a few best practices for maximizing the value of using Cisco Nexus 9000 switches for data center fabric deployments by leveraging the available Intelligent Buffering capabilities.
What Is the Intelligent Buffering Capability in Nexus 9000?
Cisco Nexus 9000 series switches implement an egress-buffered shared-memory architecture, as shown in Figure 1. Each physical interface has 8 user-configurable output queues that contend for shared buffer capacity when congestion occurs. A buffer admission algorithm called Dynamic Buffer Protection (DBP), enabled by default, ensures fair access to the available buffer among any congested queues.

In addition to DBP, two key features – Approximate Fair Drop (AFD) and Dynamic Packet Prioritization (DPP) – help to speed initial flow establishment, reduce flow-completion time, avoid congestion buildup, and maintain buffer headroom for absorbing microbursts.
AFD uses built-in hardware capabilities to separate individual 5-tuple flows into two categories – elephant flows and mouse flows:
- Elephant flows are longer-lived, sustained-bandwidth flows that can benefit from congestion control signals such as Explicit Congestion Notification (ECN) Congestion Experienced (CE) marking, or random discards, that influence the windowing behavior of Transmission Control Protocol (TCP) stacks. The TCP windowing mechanism controls the transmission rate of TCP sessions, backing off the transmission rate when ECN CE markings, or unacknowledged sequence numbers, are observed (see the "More Information" section for additional details).
- Mouse flows are shorter-lived flows that are unlikely to benefit from TCP congestion control mechanisms. These flows consist of the initial TCP 3-way handshake that establishes the session, along with a relatively small number of additional packets, and are then terminated. By the time any congestion control is signaled for the flow, the flow is already complete.
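To make the elephant/mouse distinction concrete, here is a minimal Python sketch of the idea. It is purely illustrative – the byte threshold is an assumption, and the actual Nexus 9000 classification happens in hardware with platform-specific, tunable thresholds:

```python
from collections import defaultdict

# Illustrative threshold -- the real hardware thresholds are
# platform-specific and user-tunable; this value is an assumption.
ELEPHANT_BYTE_THRESHOLD = 1_000_000  # bytes observed within the measurement window

flow_bytes = defaultdict(int)  # per-flow byte counters, keyed by the 5-tuple

def observe_packet(five_tuple, size_bytes):
    """Accumulate per-flow byte counts for the current measurement window."""
    flow_bytes[five_tuple] += size_bytes

def is_elephant(five_tuple):
    """A flow is treated as an 'elephant' once its sustained volume
    crosses the threshold; anything below remains a 'mouse'."""
    return flow_bytes[five_tuple] >= ELEPHANT_BYTE_THRESHOLD
```

A short-lived handshake-plus-a-few-packets flow never crosses the threshold, which is why mouse flows are never candidates for AFD marking or discards.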
As shown in Figure 2, with AFD, elephant flows are further characterized according to their relative bandwidth utilization – a high-bandwidth elephant flow has a higher probability of experiencing ECN CE marking, or discards, than a lower-bandwidth elephant flow. A mouse flow has a zero probability of being marked or discarded by AFD.

For readers familiar with the older Weighted Random Early Detect (WRED) mechanism, you can think of AFD as a form of "bandwidth-aware WRED." With WRED, any packet (regardless of whether it's part of a mouse flow or an elephant flow) is potentially subject to marking or discards. In contrast, with AFD, only packets belonging to sustained-bandwidth elephant flows may be marked or discarded – with higher-bandwidth elephants more likely to be impacted than lower-bandwidth elephants – while a mouse flow is never impacted by these mechanisms.
Additionally, AFD marking or discard probability for elephants increases as the queue becomes more congested. This behavior ensures that TCP stacks back off well before all the available buffer is consumed, avoiding further congestion and ensuring that abundant buffer headroom still remains to absorb instantaneous bursts of back-to-back packets on previously uncongested queues.
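The relationship between flow bandwidth, queue occupancy, and marking probability can be captured in a toy model. The formula below is an assumption for illustration only – it is not the actual ASIC algorithm – but it shows the two properties described above: mice are never touched, and marking probability grows with both excess bandwidth and queue fill:

```python
def afd_mark_probability(flow_rate_bps, fair_rate_bps, queue_fill, is_elephant):
    """Toy model of AFD behavior (illustrative, not the ASIC algorithm):
    - mouse flows are never marked or discarded,
    - elephants at or below the computed fair rate are left alone,
    - elephants above the fair rate are marked with a probability that
      grows with their excess bandwidth, scaled by queue occupancy (0..1)."""
    if not is_elephant or flow_rate_bps <= fair_rate_bps:
        return 0.0
    excess = 1.0 - fair_rate_bps / flow_rate_bps  # fraction above fair share
    return min(1.0, excess * queue_fill)
```

Because the probability scales with queue occupancy, elephants see pressure early, well before the shared buffer is exhausted – which is what preserves headroom for microbursts.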
DPP, another hardware-based capability, promotes the initial packets in a newly observed flow to a higher priority queue than they would have traversed "naturally." Take for example a new TCP session establishment, consisting of the TCP 3-way handshake. If any of these packets sit in a congested queue, and therefore experience additional delay, it can materially affect application performance.
As shown in Figure 3, instead of enqueuing these packets in their originally assigned queue, where congestion is potentially more likely, DPP will promote these initial packets to a higher-priority queue – a strict priority (SP) queue, or simply a higher-weighted Deficit Weighted Round-Robin (DWRR) queue – which results in expedited packet delivery with a very low chance of congestion.

If the flow continues beyond a configurable number of packets, packets are no longer promoted – subsequent packets in the flow traverse the originally assigned queue. Meanwhile, other newly observed flows can be promoted and enjoy the benefit of faster session establishment and flow completion for short-lived flows.
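A simplified software model of this promotion logic might look like the following – the threshold value and queue identifiers are illustrative assumptions, not switch defaults:

```python
from collections import defaultdict

DPP_PACKET_THRESHOLD = 120  # assumed promotion threshold (configurable on the switch)

packets_seen = defaultdict(int)  # per-flow packet counters, keyed by the 5-tuple

def select_queue(five_tuple, assigned_queue, priority_queue):
    """The first packets of a newly observed flow are promoted to the
    higher-priority queue; once the flow exceeds the threshold, later
    packets revert to the originally assigned queue."""
    packets_seen[five_tuple] += 1
    if packets_seen[five_tuple] <= DPP_PACKET_THRESHOLD:
        return priority_queue
    return assigned_queue
```

Short-lived flows (such as a handshake plus a few packets) complete entirely within the promoted window, which is where the flow-completion-time benefit comes from.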
AFD and UDP Traffic
One frequently asked question about AFD is whether it's appropriate to use it with User Datagram Protocol (UDP) traffic. AFD by itself doesn't distinguish between different protocol types; it only determines whether a given 5-tuple flow is an elephant or not. We generally state that AFD should not be enabled on queues that carry non-TCP traffic. That's an oversimplification, of course – for example, a low-bandwidth UDP application would never be subject to AFD marking or discards because it would never be flagged as an elephant flow in the first place.
Recall that AFD can either mark traffic with ECN, or it can discard traffic. With ECN marking, collateral damage to a UDP-enabled application is unlikely. If ECN CE is marked, either the application is ECN-aware and would adjust its transmission rate, or it would ignore the marking completely. That said, AFD with ECN marking won't help much with congestion avoidance if the UDP-based application is not ECN-aware.
On the other hand, if you configure AFD in discard mode, sustained-bandwidth UDP applications may suffer performance issues. UDP doesn't have any built-in congestion-management mechanisms – discarded packets would simply never be delivered and would not be retransmitted, at least not based on any UDP mechanism. Because AFD is configurable on a per-queue basis, it's better in this case to simply classify traffic by protocol, and ensure that traffic from high-bandwidth UDP-based applications always uses a non-AFD-enabled queue.
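Conceptually, the classification decision is as simple as this sketch – the queue names are hypothetical placeholders to show the idea, not an actual device configuration:

```python
# IP protocol numbers
TCP, UDP = 6, 17

def classify_queue(ip_protocol):
    """Assumed queue plan: steer UDP traffic to a queue with AFD disabled,
    so sustained-bandwidth UDP applications are never subject to AFD
    discards; TCP traffic uses an AFD-enabled queue. Queue names here
    are illustrative only."""
    if ip_protocol == UDP:
        return "queue-no-afd"
    return "queue-afd"
```

On a real switch this would be expressed with classification and queuing policies applied per class, but the per-protocol separation shown here is the key point.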
What Is a VXLAN/EVPN Fabric?
VXLAN/EVPN is one of the fastest growing data center fabric technologies in recent memory. VXLAN/EVPN consists of two key elements: the data-plane encapsulation, VXLAN; and the control-plane protocol, EVPN.
You can find abundant details and discussions of these technologies on cisco.com, as well as from many other sources. While an in-depth discussion is outside the scope of this blog post, when talking about QoS and congestion management in the context of a VXLAN/EVPN fabric, the data-plane encapsulation is the focus. Figure 4 illustrates the VXLAN data-plane encapsulation, with emphasis on the inner and outer DSCP/ECN fields.

As you can see, VXLAN encapsulates overlay packets in IP/UDP/VXLAN "outer" headers. Both the inner and outer headers contain the DSCP and ECN fields.
With VXLAN, a Cisco Nexus 9000 switch serving as an ingress VXLAN tunnel endpoint (VTEP) takes a packet originated by an overlay workload, encapsulates it in VXLAN, and forwards it into the fabric. In the process, the switch copies the inner packet's DSCP and ECN values to the outer headers when performing encapsulation.
Transit devices such as fabric spines forward the packet based on the outer headers to reach the egress VTEP, which decapsulates the packet and transmits it unencapsulated to the final destination. By default, both the DSCP and ECN fields are copied from the outer IP header into the inner (now decapsulated) IP header.
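The DSCP/ECN copy behavior at encapsulation and decapsulation can be modeled in a few lines of Python – a sketch of the default behavior described above, not the switch implementation:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class IPHeader:
    dscp: int  # 6-bit Differentiated Services Code Point
    ecn: int   # 2-bit ECN field (0b11 = Congestion Experienced)

@dataclass(frozen=True)
class VxlanPacket:
    outer: IPHeader  # outer IP header of the IP/UDP/VXLAN wrapper
    inner: IPHeader  # original overlay packet's IP header

def encapsulate(inner: IPHeader) -> VxlanPacket:
    """Ingress VTEP: copy the inner packet's DSCP/ECN into the outer header."""
    return VxlanPacket(outer=IPHeader(inner.dscp, inner.ecn), inner=inner)

def decapsulate(pkt: VxlanPacket) -> IPHeader:
    """Egress VTEP (default behavior): copy outer DSCP/ECN back into the
    inner header -- so an ECN CE mark applied to the outer header in
    transit survives decapsulation and reaches the endpoint."""
    return replace(pkt.inner, dscp=pkt.outer.dscp, ecn=pkt.outer.ecn)
```

This copy-out/copy-in behavior is what allows congestion signals applied inside the fabric to reach the overlay endpoints, as discussed in the next section.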
In the process of traversing the fabric, overlay traffic may pass through multiple switches, each implementing QoS and queuing policies defined by the network administrator. These policies might simply be default configurations, or they might consist of more complex policies such as classifying different applications or traffic types, assigning them to unique classes, and controlling the scheduling and congestion management behavior for each class.
How Do the Intelligent Buffer Capabilities Work in a VXLAN Fabric?
Given that the VXLAN data plane is an encapsulation, packets traversing fabric switches consist of the original TCP, UDP, or other protocol packet inside an IP/UDP/VXLAN wrapper. That leads to the question: how do the Intelligent Buffer mechanisms behave with such traffic?
As discussed earlier, sustained-bandwidth UDP applications could potentially suffer from performance issues if traversing an AFD-enabled queue. However, we should make a very key distinction here – VXLAN is not a "native" UDP application, but rather a UDP-based tunnel encapsulation. While there is no congestion awareness at the tunnel level, the original tunneled packets can carry any kind of application traffic – TCP, UDP, or virtually any other protocol.
Thus, for a TCP-based overlay application, if AFD either marks or discards a VXLAN-encapsulated packet, the original TCP stack still receives ECN-marked packets or misses a TCP sequence number, and these mechanisms will cause TCP to reduce the transmission rate. In other words, the original goal is still achieved – congestion is avoided by causing the applications to reduce their rate.
Similarly, high-bandwidth UDP-based overlay applications would respond just as they would to AFD marking or discards in a non-VXLAN environment. If you have high-bandwidth UDP-based applications, we recommend classifying based on protocol and ensuring those applications get assigned to non-AFD-enabled queues.
As for DPP, while TCP-based overlay applications will benefit most, especially for initial flow setup, UDP-based overlay applications can benefit as well. With DPP, both TCP and UDP short-lived flows are promoted to a higher priority queue, speeding flow-completion time. Therefore, enabling DPP on any queue, even those carrying UDP traffic, should provide a positive impact.
Key Takeaways
VXLAN/EVPN fabric designs have gained significant traction in recent years, and ensuring excellent application performance is paramount. Cisco Nexus 9000 Series switches, with their hardware-based Intelligent Buffering capabilities, ensure that even in an overlay application environment, you can maximize the efficient utilization of available buffer, minimize network congestion, speed flow-establishment and flow-completion times, and avoid drops due to microbursts.
More Information
You can find more information about the technologies discussed in this blog at www.cisco.com: