What is the VIPRION® and how is it different from the F5® BIG-IP®?
The VIPRION is BIG-IP! VIPRION is a chassis-based, more powerful, and more fault-tolerant platform that runs F5's BIG-IP Traffic Management Operating System® (TMOS®) software – but it's still BIG-IP at the core.
The chassis runs on blades – giving it added redundancy, scaling, and horsepower – but it's all the same F5 BIG-IP modules you've come to know and trust for delivering your applications across the globe securely. The VIPRION blades can work together as a single, very powerful cluster, distributing the load of a single virtual server over multiple blades. Not only does this increase capacity and the ability to scale, it also provides redundancy in the event of a blade failure.
VIPRION Blade Clustering
To understand how the chassis-and-blade architecture improves performance and redundancy, we start with the blade clustering concept F5 has dubbed "SuperVIP®" cluster technology. This is the core piece of technology that coordinates all of the blades into a single high-performance system. It essentially spreads processing power over all the active slots, also known as "cluster members" – i.e. each slot/blade is considered a cluster member.
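The idea of spreading one virtual server's traffic across every active slot can be sketched with a toy model. This is illustrative only – it is not F5's actual traffic disaggregation algorithm – but it shows the principle: hash each connection's 4-tuple and map it deterministically to a cluster member.

```python
# Toy model (NOT F5's real algorithm): spread connections across the
# active blades ("cluster members") by hashing the connection 4-tuple.
import hashlib

def pick_cluster_member(src_ip, src_port, dst_ip, dst_port, active_slots):
    """Deterministically map a connection to one of the active slots."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return active_slots[digest % len(active_slots)]

# With 4 blades installed, every connection lands on exactly one slot,
# and the same connection always maps to the same slot.
slots = [1, 2, 3, 4]
member = pick_cluster_member("10.0.0.5", 49152, "203.0.113.10", 443, slots)
assert member in slots
```

The key property is determinism: packets belonging to the same connection always reach the same blade, while different connections spread across all blades.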
VIPRION Cluster Synchronization
Cluster Synchronization is an automated process in which the primary blade propagates the BIG-IP software configuration to all secondary blades. This happens not only when routine changes are made, but also when a new blade is added to the system.
Primary and Secondary Blades – The “Quorum” process
The Primary Blade is elected as part of the power-on boot-up process called "quorum"; all other blades then become Secondary Blades. You can always tell physically which blade is the primary, as its green LED marked "Pri" will be lit. Once logged into the GUI, you can tell whether you're on the primary blade by looking just below the familiar Date / Time / User / Role heading: if the message "You are currently logged in to a secondary slot!" is absent, you're on the primary. See below:
From the CLI prompt you can see a few pieces of information exclusive to the VIPRION chassis, including the blade number, the slot/blade cluster status, and whether you're connected to the primary or a secondary blade.
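For illustration, here is roughly what that looks like from the shell. The hostname is an example, and the exact prompt format varies slightly by TMOS version:

```
# The portion between the first colons shows the slot and role:
# S1 = slot 1, P = primary (an S here would mean secondary).
[root@viprion1:/S1-green-P:Active:In Sync] config #

# tmsh can also show the state of every slot in the cluster:
(tmos)# show sys cluster
```

The `show sys cluster` output lists the floating cluster IP, each cluster member's management address, and each slot's availability state.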
Note, you can't choose or force which blade is elected primary. Since the floating cluster management IP always connects you to the primary blade, the automatic election process is not an issue. So what does the VIPRION primary blade actually do? Here are the main tasks of the primary blade in a VIPRION chassis:
- Holds the cluster floating management IP address (more on management addresses later)
- Collects and logs information from all blades
- Initially accepts client application traffic to all VIPs and utilizes all blades in the cluster to process the traffic before sending it to back-end pool members
- Receives and distributes all configuration tasks and files. Note – if you manually access one of the secondary blades on its direct management IP and make a configuration change, it will be instantly overwritten by the primary blade's configuration. This is why it's important to always administer your VIPRION from the floating cluster management IP.
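The overwrite behavior in the last bullet can be sketched with a toy model. This is illustrative only – the `Blade` class and `cluster_sync` function below are not F5 APIs – but it captures why edits made directly on a secondary blade don't stick:

```python
# Illustrative sketch only: the primary blade's configuration is the
# source of truth, and cluster synchronization pushes it to every
# secondary blade, replacing whatever is there.
class Blade:
    def __init__(self, slot):
        self.slot = slot
        self.config = {}

def cluster_sync(primary, secondaries):
    for blade in secondaries:
        blade.config = dict(primary.config)  # overwrite, not merge

primary = Blade(1)
secondaries = [Blade(2), Blade(3)]
primary.config["virtual /Common/app_vs"] = {"destination": "10.0.0.100:443"}
cluster_sync(primary, secondaries)

# A change made directly on a secondary blade...
secondaries[0].config["virtual /Common/rogue_vs"] = {"destination": "10.0.0.200:80"}
# ...is lost on the next synchronization.
cluster_sync(primary, secondaries)
assert "virtual /Common/rogue_vs" not in secondaries[0].config
```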
VIPRION Connection Mirroring
To ensure users don't have to re-establish connections if a blade fails or is swapped, administrators should consider "connection mirroring," which keeps in-process connections intact and processed by the remaining available blades.
Like the non-chassis-based BIG-IP boxes, VIPRIONs are able to "mirror" connections between HA pairs – referred to as INTER-cluster mirroring. Additionally, VIPRIONs introduce a new way to mirror connections between blades within a chassis, called INTRA-cluster mirroring. Intra- and inter-cluster mirroring are mutually exclusive – i.e. you use one or the other, not both. If you want to mirror SSL connections, make sure you're running TMOS v12 or higher; the ability to mirror SSL connections was added in version 12.
Intra-Cluster Mirroring – Mirrors connections and persistence records between blades in the same chassis. It's important to note that only FastL4 virtual server connections can be mirrored intra-cluster.
Inter-Cluster Mirroring – Mirrors connections and persistence records to another cluster – i.e. another chassis. The hardware platform and slot/blade configuration must match exactly. Additionally, if vCMP® is in use, core allocation and guests must match as well.
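The constraints above can be summarized in a short sketch. This is illustrative only – `validate_mirroring` is not an F5 API – but it encodes the two rules just described:

```python
# Illustrative check of the mirroring rules described above:
# 1) intra- and inter-cluster mirroring are mutually exclusive;
# 2) intra-cluster mirroring only applies to FastL4 virtual servers.
def validate_mirroring(intra_cluster, inter_cluster, vs_type):
    if intra_cluster and inter_cluster:
        raise ValueError("intra- and inter-cluster mirroring are mutually exclusive")
    if intra_cluster and vs_type != "fastl4":
        raise ValueError("intra-cluster mirroring requires a FastL4 virtual server")
    return True

assert validate_mirroring(intra_cluster=True, inter_cluster=False, vs_type="fastl4")
assert validate_mirroring(intra_cluster=False, inter_cluster=True, vs_type="standard")
```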
How is the VIPRION different from other F5 Appliances?
The key difference between the VIPRION and other F5 hardware appliances, like the iSeries® or Standard platforms, is that the other appliances are not modular and cannot scale past their maximum limits. Outside of the power supplies and the SFPs, you can't physically add anything to an iSeries or Standard platform appliance. The VIPRION, by contrast, with its chassis-and-slot architecture, allows you to add more blades – expanding capacity and redundancy within the same chassis. You can also easily swap fan trays or even the LCD panel.
Why would you ever need a VIPRION? If you're an ISP or a growing enterprise that's no stranger to scale or bandwidth-hungry applications, the VIPRION can offer you some serious throughput, scaling, and redundancy options. When second best won't do and you need state-of-the-art hardware pushed to the limits, with the ability to scale on demand, the VIPRION offers the highest capacity, throughput, and performance of any ADC on the market today. It allows organizations to scale fast without costly environment modifications. Pop another blade in, connect it to your network, and poof – more capacity, throughput, and redundancy.
iSeries vs VIPRION
The table below compares F5's top-of-the-line non-modular appliance, the i11800, against the top-of-the-line B4450 VIPRION blade – both as a single blade and in a fully populated VIPRION chassis.
High-End F5 Comparison – VIPRION vs iSeries

| | i11800 Appliance | Single B4450 Blade | F5 VIPRION 4800 Chassis w/ 8 B4450 Blades |
| --- | --- | --- | --- |
| Throughput | L4 – 160 Gbps, L7 – 80 Gbps | L4 – 160 Gbps, L7 – 160 Gbps | — |
| L7 Requests Per Second | — | — | — |
| Max L4 Connections | 140 Million | 180 Million | 1.44 Billion |
| Max Hardware Compression | — | — | — |
| SSL Performance | ECC†: 48K TPS (ECDSA P-256), RSA: 80K TPS (2K keys), 40 Gbps bulk encryption* | ECC: 80K TPS (ECDSA P-256), RSA: 160K TPS (2K keys), 80 Gbps bulk encryption* | ECC: 640K TPS (ECDSA P-256), RSA: 1.2M TPS (2K keys), 640 Gbps bulk encryption* |
| Hardware DDoS Protection | 130M SYN cookies | 115M SYN cookies | 920M SYN cookies |
| Available vCPU per slot/appliance | — | — | — |
| Possible vCPU allocation per guest per slot/appliance | 1, 2, 4, 6, 8 | 2, 4, 6, 8, 12, 24 | 2, 4, 6, 8, 12, 24 (for each of the 8 blades) |
What's the moral of the story here? Though the standalone hardware appliance posts some impressive numbers, the VIPRION blade outperforms it by roughly 2x in all-around throughput and SSL. But the VIPRION really shines once you start filling up the chassis – you essentially get 8x the throughput of a single B4450 blade. Though the overall vCMP guest capacity is lower for a single blade than for the high-end appliance (12 vs 32), the B4450 blade has more cores (48 vs 32) and lets you assign roughly three times as many cores to a single guest (24 vs 8).
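As a quick arithmetic check, using only figures from the comparison table above, the chassis column is essentially the single-blade column multiplied by eight:

```python
# Sanity check of the chassis column: a fully populated 4800 chassis
# scales a single B4450 blade's figures by roughly 8x.
blade = {
    "l4_connections": 180_000_000,   # 180 Million
    "ecc_tps": 80_000,               # 80K ECC TPS (ECDSA P-256)
    "bulk_gbps": 80,                 # 80 Gbps bulk encryption
    "syn_cookies": 115_000_000,      # 115M SYN cookies
}
chassis = {key: value * 8 for key, value in blade.items()}

assert chassis["l4_connections"] == 1_440_000_000  # 1.44 Billion
assert chassis["ecc_tps"] == 640_000               # 640K TPS
assert chassis["bulk_gbps"] == 640                 # 640 Gbps
assert chassis["syn_cookies"] == 920_000_000       # 920M SYN cookies
# RSA scales the same way: 160K x 8 = 1.28M TPS, which the table
# above lists rounded to 1.2M.
```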
What is vCMP?
Before we can talk about vCMP and the VIPRION, we need to ensure you have a general understanding of what vCMP is – Virtual Clustered Multiprocessing™. vCMP allows you to deploy multiple virtual BIG-IP instances on a single platform. It is available on all VIPRIONs, as well as on select appliances from the 5000 series and up. On licensed systems, vCMP appears as an additional module to provision when you initially deploy the BIG-IP. Once you provision vCMP, you essentially turn the BIG-IP into a hypervisor, and it becomes the "Host" system. On the host system you create and deploy virtual BIG-IP "Guests" and assign each a number of logical cores, but you do not provision or work with any of the other modules on the host – i.e. all the other modules become unavailable to the host and can only be provisioned on the guests directly. You also configure all layer 2 logic on the host, assigning VLANs and trunks to each guest, but all layer 3 addressing is configured directly on the guests – other than the initial guest management IP. Likewise, you only configure HA / DSC® on guests, never on hosts.
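Per-guest core assignments come in fixed sizes rather than arbitrary counts. As a toy illustration (the allowed sets below are taken from the comparison table earlier; the `can_allocate` function is not an F5 API):

```python
# Illustrative only: valid per-guest core allocations per platform,
# taken from the vCPU allocation row of the comparison table above.
I11800_CORES_PER_GUEST = {1, 2, 4, 6, 8}
B4450_CORES_PER_GUEST = {2, 4, 6, 8, 12, 24}

def can_allocate(platform_sizes, requested_cores):
    """A guest can only be given one of the platform's fixed core sizes."""
    return requested_cores in platform_sizes

# The B4450 blade supports a 24-core guest; the i11800 tops out at 8.
assert can_allocate(B4450_CORES_PER_GUEST, 24)
assert not can_allocate(I11800_CORES_PER_GUEST, 24)
```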
vCMP and the VIPRION
There is a key difference in how vCMP functions on a VIPRION chassis vs. the iSeries and standard non-modular platforms. Essentially, vCMP on a VIPRION with multiple blades allows you to create guests whose cores span multiple blades. This lets you take advantage of the VIPRION's multi-blade clustering technology for multiple independent guests.
VIPRION Specific Glossary
- Cluster – Primarily, all the active slots in the chassis working simultaneously as one system to process application traffic. More broadly, you can think of the VIPRION cluster as all the resources that make up the system, including blades, power supplies, fans, the LCD, and the system ID (Annunciator) card.
- Cluster Member – An enabled physical or virtual slot that contains an active blade.
- Cluster Member IP address – The individual management IP address of each blade. Each blade receives its own cluster member IP address.
- Cluster IP address – The floating management IP address of the primary designated slot. Connecting to this IP to manage the VIPRION will automatically connect you to whichever slot is elected as “Primary”.
- Primary Slot – Initially accepts application traffic. The floating cluster IP address is assigned to the primary slot.
- Secondary Slots – Any slot that is not the primary slot
- Primary Blade – The blade in the primary slot
- Secondary Blades – Any blade in a secondary slot
- Cluster Synchronization – Occurs when a new blade is added to the system; the primary blade automatically propagates the BIG-IP system configuration to all secondary blades, bringing newly powered-on blades into the SuperVIP® cluster.
- Quorum – The process of electing primary and secondary blades; it occurs while the chassis boots, aka a "Full Cluster Start-up". To establish a quorum, all blades must agree on:
- Cluster configuration
- Which blades are powered up
Understanding VIPRION management IPs
Each blade has its own unique management IP, but only the primary blade holds the floating cluster management IP address. Administrators make changes via the floating cluster IP on the primary blade; any changes made from secondary blades will be overwritten. Typically, you would only access the non-floating IP addresses of individual blades to troubleshoot an issue. Because the management IP floats, you need to ensure all blades' management ports are connected and on the same management subnet, so that you can still reach the new primary blade on the floating IP if a blade fails. This management subnet should be out-of-band – NOT on any of the self-IP subnets. Keeping the management subnet separate from the self-IP production subnets also ensures monitor traffic is not routed over the management subnet.
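The floating cluster management address lives under `sys cluster` in tmsh. As a sketch (the IP address below is an example from documentation ranges, not a recommendation):

```
# Assign the floating cluster management IP (example address):
tmsh modify sys cluster default address 192.0.2.10/24

# Review the cluster address and each member's management IP:
tmsh list sys cluster default
```

Whichever blade is currently primary answers on that floating address, which is why it survives a primary-blade failure.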
Understanding VIPRION Interface Naming and Numbering Conventions
Like other BIG-IP platforms, you have the ability to set individual interface settings like media type, duplex, flow control, and active/disabled state. The VIPRION names its ports by: SLOT / ASIC CHIP or PORT GROUPING / INTERFACE. For example:
1/1.1 = The 1st 10Gbps interface on the 1st ASIC on the blade in slot 1
2/1.2 = The 2nd 10Gbps interface on the 1st ASIC on the blade in slot 2
2/1.5 = The 5th interface on the 1st ASIC on the blade in slot 2
1/1.4 = An example of a port that is part of a bundled interface – this would be the 4th 10Gbps interface within a 40Gbps bundled interface on the blade in slot 1, accessed via a breakout (aka "squid") cable. The image below shows how this looks on a B2250 blade with a breakout cable:
You'll notice my reference to the "squid" cable above. It's important to note that on platforms that support QSFP+ interface ports, you can use each port as a single "bundled" 40GbE port. By default the ports are in "unbundled" mode – i.e. four 10GbE SFP+ ports.
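The SLOT / GROUP . INTERFACE convention is easy to parse programmatically. A small illustrative helper (not an F5 API; interface names like "2/1.5" are taken from the examples above):

```python
# Parse a VIPRION interface name of the form "slot/group.port",
# e.g. "2/1.5" = 5th interface on the 1st ASIC/port group, slot 2.
def parse_viprion_interface(name):
    slot, rest = name.split("/")
    group, port = rest.split(".")
    return {"slot": int(slot), "group": int(group), "port": int(port)}

assert parse_viprion_interface("2/1.5") == {"slot": 2, "group": 1, "port": 5}
assert parse_viprion_interface("1/1.1") == {"slot": 1, "group": 1, "port": 1}
```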
List of VIPRION Chassis Types and Cards
There are two families of VIPRION chassis and blades. The C4000 series chassis use the B4000 series blades, and the C2000 series chassis use the B2000 series blades. It's important to note that blades for the C4000 and the C2000 are not interchangeable – each blade can only be used in its own chassis family.