What is the VIPRION® and how is it different from the F5® BIG-IP®?
What is the VIPRION? How is it different from F5’s BIG-IP? The VIPRION is BIG-IP! The VIPRION is a chassis-based, more powerful, and more fault-tolerant appliance that runs the BIG-IP Traffic Management Operating System® (TMOS®) software, but it’s still BIG-IP at the core.
The chassis runs on blades, giving it added redundancy, scale, and horsepower, but it’s all the same F5 BIG-IP modules you’ve come to know and trust for delivering your applications across the globe securely. The VIPRION blades can work together as one high-performance cluster, distributing the load of a single virtual server over multiple blades. Not only does this increase capacity and the ability to scale, it also provides redundancy in the event of a blade failure.
VIPRION Blade Clustering
To understand how the chassis and blade architecture improves performance and redundancy, we start with the concept of blade clustering, which F5 has dubbed “SuperVIP®” cluster technology. This is the core piece of technology that coordinates all of the blades into a single high-performance system. It essentially spreads processing across all of the active slots, also known as “cluster members” (each slot/blade is considered a cluster member).
VIPRION Cluster Synchronization
Cluster Synchronization is an automated process in which the primary blade propagates the BIG-IP software configuration to all secondary blades. This happens not only when routine changes are made, but also when a new blade is added to the system.
Primary and Secondary Blades – The “Quorum” process
The Primary Blade is elected as part of the power-on boot-up process called “quorum”; all other blades are then Secondary Blades. You can always physically tell which blade is the primary blade because its green LED marked “Pri” will be lit. Once logged into the GUI, you can tell whether you’re logged into the primary blade by looking right below the familiar date/time and user/role heading and checking for the absence of the message “You are currently logged in to a secondary slot!” See below:
From the CLI prompt you can see a few pieces of information exclusive to the VIPRION chassis, including the blade number, the slot/blade cluster status, and whether you’re connected to the primary or a secondary blade.
Note that you can’t choose or force which blade is elected primary. Since the floating cluster management IP always connects you to the primary blade, the automatic election process isn’t an issue. What does the VIPRION primary blade actually do? Here are the main tasks of the primary blade in a VIPRION chassis (a scripted way to pull the cluster details follows the list):
- Holds the cluster Floating IP management address (more on management address later)
- Collects and logs information from all blades
- Initially accepts client application traffic to all VIPs and utilizes all blades in the cluster to process it before sending it on to back-end pool members
- Receives and distributes all configuration tasks and files. Note: if you were to access one of the secondary blades directly on its individual management IP and make a configuration change, it would instantly be overwritten by the primary blade’s configuration. This is why it’s important to always administer your VIPRION from the floating cluster management IP.
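If you prefer scripting these checks over eyeballing the GUI, the iControl REST API exposes the same information tmsh shows. Below is a minimal Python sketch (using the `requests` library) that pulls the cluster details from the chassis; the `/mgmt/tm/sys/cluster` endpoint and field names are assumptions based on the usual tmsh-to-REST mapping of `list sys cluster`, and the address and credentials are placeholders, so verify against your TMOS version before relying on it.

```python
# Minimal sketch: query a VIPRION's cluster details over iControl REST.
# Assumptions: /mgmt/tm/sys/cluster mirrors "tmsh list sys cluster";
# the management IP and credentials below are placeholders.
import requests

CHASSIS = "https://192.0.2.10"      # floating cluster management IP (placeholder)
AUTH = ("admin", "admin-password")  # placeholder credentials

resp = requests.get(
    f"{CHASSIS}/mgmt/tm/sys/cluster",
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
)
resp.raise_for_status()

# Collection responses carry their entries under "items".
for cluster in resp.json().get("items", []):
    print("Cluster:", cluster.get("name"), "address:", cluster.get("address"))
```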
VIPRION Connection Mirroring
To ensure users don’t have to re-establish connections if a blade fails or is swapped, administrators should consider using “connection mirroring” so that in-process connections remain intact and continue to be processed by the available blades.
Like the non-chassis-based BIG-IP boxes, VIPRIONs are able to mirror connections between HA pairs, referred to as “INTER-cluster mirroring”. Additionally, VIPRIONs introduce a new way to mirror connections between blades within a chassis, called “INTRA-cluster mirroring”. Intra- and inter-cluster mirroring are mutually exclusive: you use one or the other, not both. If you want to mirror SSL connections, make sure you’re running TMOS v12 or higher, as that capability was added in version 12.
Intra-Cluster Mirroring – Mirrors connections and persistence records between blades within the same chassis. It’s important to note that only connections on FastL4 virtual servers can be mirrored intra-cluster.
Inter-Cluster Mirroring – Mirrors connections and persistence records to another cluster, i.e. another chassis. The hardware platform and slot/blade configuration must match exactly. Additionally, if vCMP® is being used, the core allocation and guests have to match as well.
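As a rough illustration of how you might toggle between the two modes and enable mirroring on a virtual server, here is a hedged Python sketch against iControl REST. To the best of my knowledge this is exposed through the db variable `statemirror.clustermirroring` (values `within` for intra-cluster and `between` for inter-cluster) and the `mirror` property on the virtual server, but treat those names as assumptions and the host, credentials, and virtual server name as placeholders; confirm against your TMOS version.

```python
# Sketch: select intra-cluster mirroring and enable mirroring on one virtual server.
# Assumptions: db key statemirror.clustermirroring ("within" / "between") and the
# "mirror" property on ltm virtual; host, credentials, and VS name are placeholders.
import requests

CHASSIS = "https://192.0.2.10"      # floating cluster management IP (placeholder)
AUTH = ("admin", "admin-password")  # placeholder credentials

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab only; use a trusted certificate in production

# 1. Choose the mirroring mode chassis-wide: "within" = intra-cluster, "between" = inter-cluster.
session.patch(
    f"{CHASSIS}/mgmt/tm/sys/db/statemirror.clustermirroring",
    json={"value": "within"},
).raise_for_status()

# 2. Enable connection mirroring on a specific (FastL4) virtual server.
session.patch(
    f"{CHASSIS}/mgmt/tm/ltm/virtual/my_fastl4_vs",  # hypothetical virtual server name
    json={"mirror": "enabled"},
).raise_for_status()
```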
How is the VIPRION different from other F5 Appliances?
The key difference between the VIPRION and other F5 hardware appliances, like the iSeries® or Standard platforms, is that the other appliances are not modular and cannot scale past their fixed limits. Outside of the power supplies and the SFPs, you can’t physically add anything to the iSeries or Standard platform appliances. In contrast, the VIPRION’s chassis-and-slot architecture lets you add more blades, expanding capacity and redundancy within the same chassis. You also have the ability to easily swap fan trays or even the LCD panel.
Why VIPRION?
Why would you ever need a VIPRION? If you’re an ISP or a growing enterprise that’s no stranger to scale or bandwidth-hungry applications, the VIPRION can offer you some serious throughput, scaling, and redundancy options. When second best won’t do, and you need state-of-the-art hardware pushed to the limits with the ability to scale on demand, the VIPRION offers the highest capacity, throughput, and performance of any ADC on the market today. The VIPRION lets organizations scale fast without costly environment modifications. Pop another blade in, connect it to your network, and poof: more capacity, throughput, and redundancy.
iSeries vs VIPRION
The table below compares the top-of-the-line F5 non-modular appliance, the i11800, against the modular top-of-the-line B4450 VIPRION blade, both as a single blade and in a maxed-out VIPRION chassis.
High-End F5 Comparison – VIPRION vs iSeries
| | F5 i11800 | F5 VIPRION 4450 Blade | F5 VIPRION 4800 Chassis w/ 8 4450 Blades |
|---|---|---|---|
| Throughput | L4: 160 Gbps / L7: 80 Gbps | L4: 160 Gbps / L7: 160 Gbps | 1.28 Tbps |
| L7 Requests Per Second | 5.5 Million | 5 Million | 40 Million |
| Max L4 Concurrent Connections | 140 Million | 180 Million | 1.44 Billion |
| Max Hardware Compression | 40 Gbps | 80 Gbps | 640 Gbps |
| Hardware SSL | ECC†: 48K TPS (ECDSA P-256); RSA: 80K TPS (2K keys); 40 Gbps bulk encryption* | ECC: 80K TPS (ECDSA P-256); RSA: 160K TPS (2K keys); 80 Gbps bulk encryption* | ECC: 640K TPS (ECDSA P-256); RSA: 1.2M TPS (2K keys); 640 Gbps bulk encryption* |
| Hardware DDoS Protection (SYN Cookies Per Second) | 130 Million | 115 Million | 920 Million |
| Max vCMP Virtualization Guests | 32 | 12 | 96 |
| Available vCPUs Per Slot/Appliance | 32 | 48 | 384 |
| Possible vCPU Allocation Per Guest Per Slot/Appliance | 1, 2, 4, 6, 8 | 2, 4, 6, 8, 12, 24 | 2, 4, 6, 8, 12, 24 for each of the 8 blades |
What’s the moral of the story here? Though the standalone hardware appliance has some impressive numbers, the VIPRION blade outperforms the appliance by roughly 2x in overall throughput and SSL. BUT the VIPRION really shines once you start filling up the chassis: you essentially get 8x the throughput of a single B4450 blade. Though the overall vCMP guest capacity of a single blade is lower than the high-end appliance’s (12 vs 32), the B4450 blade has more cores (48 vs 32) and lets you assign about three times as many cores to a single guest (24 vs 8).
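To make the scaling math explicit, a fully populated C4800 is essentially eight B4450 blades’ worth of capacity added together. The quick Python check below multiplies the single-blade figures from the table above by eight to show where the chassis column comes from; the numbers are simply the ones quoted in the table.

```python
# The chassis column is essentially the single-blade column scaled by 8 blades.
per_blade = {
    "L7 throughput (Gbps)":                 160,  # -> 1280 Gbps = 1.28 Tbps
    "L7 requests/sec (millions)":             5,  # -> 40 million
    "Max L4 concurrent connections (millions)": 180,  # -> 1440 million = 1.44 billion
    "Hardware compression (Gbps)":            80,  # -> 640 Gbps
    "SYN cookies/sec (millions)":            115,  # -> 920 million
}

for metric, value in per_blade.items():
    print(f"{metric}: {value} x 8 blades = {value * 8}")
```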
What is vCMP?
Before we can talk about vCMP and the VIPRION, we need to ensure you have a general understanding of what vCMP is: Virtualized Clustered Multiprocessing™. vCMP allows you to deploy multiple virtual BIG-IP instances on a single platform. vCMP is available on all VIPRIONs, as well as on select appliances in the 5000 series and above. On licensed systems, vCMP appears as an additional module to provision when you initially deploy the BIG-IP. Once you provision vCMP, you essentially turn the BIG-IP into a hypervisor, and it becomes the “host” system. On the host system you create and deploy virtual BIG-IP “guests” and assign each a number of logical cores, but you do not provision or work with any of the other modules on the host system; all the other modules become unavailable to the host and are only available for provisioning on the guests directly. You also configure all the layer 2 logic on the host and assign VLANs and trunks to each guest, but all the layer 3 addressing is configured directly on the guests, other than the initial guest management IP. Likewise, you only configure HA / DSC® on guests, never on the host.
vCMP and the VIPRION
There is a key difference in how vCMP functions on a VIPRION chassis versus the iSeries and standard non-modular platforms. Essentially, vCMP on a VIPRION with multiple blades allows you to create guests whose cores span multiple blades. This lets you take advantage of the VIPRION’s multi-blade clustering technology for multiple independent guests.
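For a concrete picture, here is a hedged Python sketch of creating a guest that spans two slots via iControl REST. The `/mgmt/tm/vcmp/guest` endpoint and property names (`slots`, `coresPerSlot`, `vlans`, `managementIp`, `state`) follow the usual tmsh-to-REST mapping of `create vcmp guest`, but treat them as assumptions and the names and addresses as placeholders; check the vCMP documentation for your platform and TMOS version.

```python
# Sketch: deploy a vCMP guest spanning two blades (slots) on a VIPRION host.
# Endpoint and property names are assumed from the tmsh "vcmp guest" component;
# the IPs, VLAN names, and guest name are placeholders.
import requests

HOST = "https://192.0.2.10"         # vCMP host's floating cluster management IP (placeholder)
AUTH = ("admin", "admin-password")  # placeholder credentials

guest = {
    "name": "guest-app1",             # hypothetical guest name
    "slots": 2,                       # span the guest across two blades
    "coresPerSlot": 2,                # cores allocated on each of those blades
    "vlans": ["/Common/external", "/Common/internal"],  # L2 is assigned by the host
    "managementIp": "192.0.2.50/24",  # initial guest management address (placeholder)
    "state": "deployed",
}

resp = requests.post(f"{HOST}/mgmt/tm/vcmp/guest", json=guest, auth=AUTH, verify=False)
resp.raise_for_status()
print("Created guest:", resp.json().get("name"))
```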
VIPRION Specific Glossary
- Cluster – Primarily, all of the active slots in the chassis working simultaneously as one system to process application traffic. Additionally, you can think of the VIPRION cluster as all of the resources that make up the system: blades, power supplies, fans, LCD, and the system ID (annunciator) card.
- Cluster Member – An enabled physical or virtual slot that contains an active blade.
- Cluster Member IP address – The individual management IP address of each blade. Each blade receives its own cluster member IP address.
- Cluster IP address – The floating management IP address of the primary designated slot. Connecting to this IP to manage the VIPRION will automatically connect you to whichever slot is elected as “Primary”.
- Primary Slot – Initially accepts application traffic. The floating cluster IP address is assigned to the primary slot.
- Secondary Slots – Any slot that is not the primary slot
- Primary Blade – The blade in the primary slot
- Secondary Blades – Any blade in a secondary slot
- Cluster Synchronization – Occurs when configuration changes are made and when a new blade is added to the system; the primary blade automatically propagates the BIG-IP system configuration to all the secondary blades, bringing a newly powered-on blade into the SuperVIP® cluster.
- Quorum – The process of electing primary and secondary blades; it occurs during boot-up of the chassis, aka a “Full Cluster Start-up”. To establish a quorum, all blades agree on:
  - Time
  - Cluster configuration
  - Which blades are powered up
Understanding VIPRION management IPs
Each blade has its own unique management IP, but only the primary blade holds the floating cluster management IP address. Administrators make changes from the floating cluster IP address on the primary blade; any changes made from a secondary blade will be overwritten. Typically, you would only access the non-floating IP address of an individual blade to troubleshoot some type of issue. Because of this floating management IP design, you’ll need to ensure all blades’ management ports are connected and on the same management subnet, so that you can still reach the new primary blade on the floating IP if a blade fails. This management subnet should be out of band, NOT on any of the self-IP subnets; keeping it separate from the production self-IP subnets ensures monitor traffic is not routed over the management subnet.
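A quick way to verify that every blade’s management port is reachable on that out-of-band subnet, along with the floating cluster IP, is a simple TCP check against the management interface. The sketch below just attempts a TCP connection to port 443 of each address; the addresses themselves are placeholders for your own cluster member and floating IPs, and port 443 assumes HTTPS management access.

```python
# Simple reachability check for VIPRION management addresses.
# All IPs below are placeholders; port 443 assumes HTTPS management access.
import socket

MGMT_ADDRESSES = {
    "floating cluster IP": "192.0.2.10",
    "blade 1 member IP":   "192.0.2.11",
    "blade 2 member IP":   "192.0.2.12",
}

for label, ip in MGMT_ADDRESSES.items():
    try:
        with socket.create_connection((ip, 443), timeout=3):
            print(f"{label} ({ip}): reachable")
    except OSError as exc:
        print(f"{label} ({ip}): NOT reachable ({exc})")
```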
Understanding VIPRION Interface Naming and Numbering Conventions
Like other BIG-IP platforms, you have the ability to set individual interface settings like media type, duplex, flow control, and enabled/disabled state. The VIPRION names ports as SLOT / ASIC CHIP (or PORT GROUPING) . INTERFACE. For example:
1/1.1 = The 1st 10Gbps interface on the 1st ASIC on the blade in slot 1
2/1.2 = The 2nd 10Gbps interface on the 1st ASIC on the blade in slot 2
2/1.5 = The 5th interface on the 1st ASIC on the blade in slot 2
1/1.4 = An example where the port is part of a bundled interface: the 4th 10Gbps interface within a 40Gbps bundled interface on the blade in slot 1, accessed via a breakout (aka “squid”) cable. The image below shows how this would look on a B2250 blade with a breakout cable:
You’ll notice my reference to the “squid cable” above. It’s important to note that on platforms that support QSFP+ interface ports, you can use each port as a single “bundled” 40GbE port. By default the ports are in “unbundled” mode, i.e. four 10GbE SFP+ ports.
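Since the slot/group.port convention comes up constantly in VIPRION configs and logs, here is a small Python helper that splits an interface name into its parts. It simply encodes the naming convention described above, and the sample names are the ones used in the examples.

```python
# Parse a VIPRION interface name of the form "<slot>/<port group>.<port>".
from typing import NamedTuple

class ViprionInterface(NamedTuple):
    slot: int        # chassis slot / blade number
    port_group: int  # ASIC chip or port grouping on the blade
    port: int        # interface number within that group

def parse_interface(name: str) -> ViprionInterface:
    slot_part, port_part = name.split("/")
    group, port = port_part.split(".")
    return ViprionInterface(int(slot_part), int(group), int(port))

for example in ("1/1.1", "2/1.2", "2/1.5", "1/1.4"):
    iface = parse_interface(example)
    print(f"{example} -> slot {iface.slot}, group {iface.port_group}, port {iface.port}")
```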
List of VIPRION Chassis Types and Cards
There are two groups of VIPRION chassis and blades: the C4000 series chassis use the B4000 series blades, and the 2000 series chassis use the B2000 series blades. It’s important to note that the blades for the VIPRION C4000 and the 2000 are not interchangeable; each blade can only be used in its respective chassis group.
Pavel says
“2/1.5 = The 5th interface on the 2nd ASIC on the blade in slot 2” looks slightly incorrect according to the previous descriptions.
“2/1.5 = The 5th interface on the 1st ASIC on the blade in slot 2” looks better.
Austin Geraci says
Nice catch, Pavel! – updated!
Daniel Silva says
Hi Austin, thank you very much for articles like these. I’d like to ask a question: I can’t find a reference for connecting two B4450s through a 40Gb port (for failover). Which cable is supported? Can it be transceiver to transceiver (8-fiber parallel 40G/100G transceivers, SR4, PSM)? The idea is to connect one card to the other, between two chassis, to build the failover between them. Thanks a lot.
Austin Geraci says
You’ll want to find the right QSFP breakout cable for your platform, sometimes referred to as the “squid cable”; it has 4 ends of the cable going into the F5 QSFP. While we’ve seen generic QSFP breakout cables work, you absolutely need the F5 QSFP+ optical transceiver. We’ve tried this with other QSFPs, like Cisco, and they do NOT work; it has to be the F5-branded SFPs.
Do note: all those QSFP ports are the same across F5 platforms, with the same cabling options, i.e. QSFP 40G or QSFP breakout 4x10G.
Also, if you want to use a QSFP 40G port as 4x 10G with a breakout, you can “unbundle” the port in the interface settings to switch it from being something like a “1.0” port to “1.1, 1.2, 1.3, 1.4” ports. Good Luck!
Daniel Silva says
Hi Austin, thank you very much for the reply. Now I have the B4450 cards; I only have 40GbE and 100GbE ports, so I want to make the failover connection through a 40GbE port. I have the F5-UPG-QSFP+PSM4 on both cards. My question is what type of cable I can use to connect both cards to each other, from (F5-UPG-QSFP+PSM4) to (F5-UPG-QSFP+PSM4); I am not sure what type of MTP/MPO patch cord I should use.
Thanks
Austin Geraci says
I’m not sure how much the cable would matter as long as it fits the SFPs. Let us know what you went with. Good Luck!
Tulio Ribeiro says
Thank you for your article, it is very enlightening!
In fact, I provisioned 2 vCMP guests on a VIPRION 2400. vcmp01 has 2 HT (virtual) cores, one on blade 1 and one on blade 2. vcmp02 has 2 HT (virtual) cores, both on blade 1. I noticed that vcmp02 has twice as much memory as vcmp01; is that correct? Can I increase the memory on vcmp01?
Austin Geraci says
Are the blades the same model number? If you have the same model blades, I would expect the memory allocation to be similar. Memory is allocated as a chunk with each core allocated, and the amount each core provides is dependent on how much memory is available on the host divided by the total number of cores.
Take a look at the dmesg log output of each blade to see how much physical memory is installed on each blade.
motahar arabdashti says
Thanks for your article; it was so good and helpful.
Is this possible: can we have two chassis (C4800) with multiple blades (B4450), one vCMP guest and one cluster on each chassis, and still use inter-cluster mirroring?
Thanks, I appreciate it!
Austin Geraci says
Glad you liked the article! Could you clarify your question for me, please? Are you asking if you can use vCMP on one of your blades and the other blade as bare metal, making that bare-metal (non-vCMP) blade part of an HA pair with the other chassis while still using inter-cluster mirroring?