Storage Bandwidth Performance Metrics In Cloud Space

When talking about storage bandwidth performance in cloud computing servers, it helps to know how the measurement actually works. In object storage, bandwidth is usually measured as the number of bytes read from the bucket. Typically, these are the files that your application code downloads (in most cases, uploads are not charged).

So, if you take each file's size and multiply it by the number of times it was read, you end up with the bandwidth you used. How do vendors persuade prospects to buy? They paint an attractive picture; motivation trumps accuracy. Selling add-on storage by getting customers to focus on an impressive bandwidth number scratches many customers' itches.
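As a quick illustration of that usage calculation, here is a minimal Python sketch. The object names, sizes, and read counts are hypothetical placeholders; the point is simply each object's size multiplied by its read count, summed across the bucket.

```python
# A minimal sketch of the calculation described above: multiply each object's
# size by the number of times it was read to estimate billable read bandwidth.
# The object names, sizes, and counts here are hypothetical placeholders.

# (size_in_bytes, times_read) per object in the bucket
objects = {
    "site/logo.png": (48_000, 12_500),
    "media/intro.mp4": (220_000_000, 340),
    "exports/report.pdf": (1_800_000, 95),
}

total_bytes = sum(size * reads for size, reads in objects.values())
print(f"Estimated read bandwidth used: {total_bytes / 1e9:.2f} GB")
```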

The truth behind those impressive numbers is different. Application performance often hinges on how well your storage can serve data to end clients. For this reason, you must design or choose your storage tier correctly in terms of both IOPS and throughput, the two figures that rate the speed and the bandwidth of any given cloud storage, respectively.

It is vital to plan according to manufacturer and developer recommendations, as well as real-world benchmarks, to maximize your storage (and, by extension, application) performance. Look at peak IOPS and throughput ratings, read/write ratios, RAID penalties, and physical latency. With that said, let's now learn more about storage bandwidth performance in detail.

What Storage Bandwidth Performance Is All About

According to the DARKReading team, storage bandwidth performance is about the connectivity between servers and the storage they are attached to. When it comes to understanding it, you have two challenges to deal with. The first and most obvious is whether the storage gets data to the application or user fast enough.

The second and less obvious is whether the applications, and the hardware those applications run on, take full advantage of that bandwidth. Most data centers can get more than enough storage bandwidth: there is 8Gb Fibre Channel, 10Gb FCoE, and 10Gb Ethernet. The first step should be to make sure that the application can actually take advantage of that performance.

That is, if you even need to upgrade to it; if not, then don't. This is one of the reasons why smaller environments are served just fine by iSCSI, even on 1Gb connections: for them, that bandwidth is good enough. Understanding the speed at which I/O is coming out of the server to the storage connection is therefore the key, and something worth focusing on.

Learn More: Why Service Quality Is An Important Aspect In A Virtual Environment

Again, as mentioned earlier in this entry, some of the utilities built into the OS can give you this information. As the environment gets more complicated, with server virtualization for example, we suggest using tools that can give you a more accurate determination, such as those from Akorri, Virtual Instruments, or Tek-Tools, to mention a few.

In addition to the application, you also need to examine the server hardware itself. Does it have the CPU power and memory needed to get data requests to and from the storage network? In most cases, the answer is yes; if you have an application that needs high I/O, you have probably already upgraded the server hardware. In the case of virtualized server environments, can that storage bandwidth be channeled or better allocated?

Well, server virtualization changes all the rules here. Instead of a single server accessing storage or the network through a single interface, you now have multiple servers all accessing storage simultaneously. Optimizing bandwidth in virtual environments is therefore often better served when priorities can be assigned to each virtual machine.

The Main Storage Bandwidth Performance Metrics In Cloud Computing 

Realistically, NetApp is one storage bandwidth performance solution provider that we can borrow some notes from. It offers a Cloud Volumes Platform that contains the essential tools for business computing needs, from cloud data migration and protection to optimization and governance. You'll get exactly what your company needs!

More so, it helps you capture the full potential of cloud technology. Whatever your cloud approach, you can expect a non-disruptive cloud adoption experience, flexible deployment across environments, and enterprise-grade data protection, all within the Cloud Volumes Platform's reach. Their service solutions cover almost everyone.

That aside, let's look at the key storage bandwidth performance metrics to consider next. But first, note that with the growth of solid-state storage and cloud data storage services, evaluating storage systems can be complex. Nonetheless, there are key storage performance metrics and definitions that IT teams can readily make use of.

Particularly, to simplify comparisons between the various technologies and their key suppliers. With that in mind, we'll look at some of the most useful storage performance metrics, including but not limited to capacity, throughput and read/write capability, IOPS, latency, and hardware longevity as measured by failure rates.

Moreover, assessing any investment in storage is a question of balancing cost, performance, and capacity. The metrics covered below are capacity; throughput and read/write speed; input/output operations per second (IOPS) and latency; mean time between failures (MTBF) and terabytes written; and form factors and connectivity. Some of these are useful for assessing on-premises storage, and some for the cloud.

1. Storage Bandwidth Capacity Metrics

All storage systems have a capacity measurement. Storage hardware today is largely measured in gigabytes (GB) or terabytes (TB). Older systems measured in megabytes (MB) have largely fallen out of use, though megabytes remain a useful unit in areas such as cache memory. In decimal terms, one gigabyte of storage is 1,000MB, and a terabyte (TB) is 1,000GB.

A petabyte (PB) contains 1,000 terabytes of data, and large storage systems are often described as working at the "petabyte scale"; a petabyte is enough to host an MP3 file that would play for 2,000 years. It is worth noting that although most storage suppliers round capacities to the nearest thousand, counting in decimal kilobytes, some systems are different.

Simply because some of them use binary units based on powers of two. By this definition, a kibibyte (KiB) is 1,024 or 2¹⁰ bytes, a mebibyte (MiB) is 1,024² (2²⁰) bytes, and a gibibyte (GiB) is 1,024³ (2³⁰) bytes. In practice, suppliers mostly quote decimal units, based on powers of 10, from terabytes upwards.

Storage capacities can apply to individual drives or solid-state subsystems, as well as to arrays, volumes, or even system-wide capacity, such as on a storage area network or the provisioned storage in a cloud instance.
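To see why the two conventions matter in practice, here is a small Python sketch contrasting decimal and binary units. The "1 TB" drive is just an illustrative example, but the conversion explains the familiar gap between advertised and reported capacity.

```python
# Decimal (GB, TB) vs binary (GiB, TiB) units, and why a drive marketed as
# "1 TB" shows up as roughly 931 GiB in the operating system.

DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12, "PB": 10**15}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40, "PiB": 2**50}

advertised_tb = 1                          # a drive marketed as 1 TB (decimal)
bytes_total = advertised_tb * DECIMAL["TB"]

print(f"{advertised_tb} TB = {bytes_total / BINARY['GiB']:.1f} GiB")  # ~931.3 GiB
```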

2. Throughput/Read/Write Storage Metrics

Raw storage is of little use unless the data can be moved in or out of a Central Processing Unit (CPU) or other processing systems. Throughput measures the number of bits a system can read or write per second. Solid-state systems, in particular, will have different read and write speeds, with write speeds typically lower.

The application determines which of the two metrics matters more. For example, an application such as an industrial camera will need storage media with fast write speeds, whereas an archival database will be more focused on reads. However, suppliers might use calculations based on average block sizes to market their systems, and this can be misleading.

Calculating throughput (or IOPS, see below) based on either an "average" or a small block size will give a very different set of values from the same system's performance under real-world workloads. Manufacturers also distinguish between random and sequential read and write speeds. Sequential read or write speed is how quickly a given storage device can read or write a series of contiguous blocks of data.

This is a useful measure for large files or streams of data, such as a video stream or a backup. Random read and write is often a more realistic guide to real-world performance, especially for local storage on a PC or server. SSDs hold a particularly strong performance advantage over spinning disks for random reads and writes.
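The block-size caveat comes down to a simple relationship: throughput is roughly IOPS multiplied by the block size of each request. The sketch below, using an illustrative 550 MB/s headline figure rather than any specific datasheet number, shows how many operations per second that throughput would actually require at different block sizes.

```python
# Throughput ~= IOPS x block size, so a headline throughput measured with large
# blocks says little about small-block performance. Figures are illustrative.

def iops_at(throughput_mb_s: float, block_size_kb: int) -> float:
    """How many I/O operations per second are needed to sustain a throughput
    at a given block size."""
    return throughput_mb_s * 1000 / block_size_kb

quoted_throughput = 550  # MB/s, an assumed sequential rating for the example
for block_kb in (4, 64, 1024):
    print(f"{block_kb:>5} KB blocks -> {iops_at(quoted_throughput, block_kb):>9,.0f} IOPS required")
```

At 4KB blocks, that headline figure would demand 137,500 IOPS, which small random workloads rarely come close to achieving.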

3. IOPS Plus Latency Storage Metrics

Input/output operations per second (IOPS) is another "speed" measurement: the higher the IOPS, the better the performance of the drive or storage system. A typical spinning disk delivers IOPS in the range of 50 to 200, although this can be improved significantly with RAID and cache memory. SSDs can be 1,000 times faster or more. Higher IOPS does, however, mean higher prices.

IOPS measurements will also vary with the amount of data being written or read, as is also the case for throughput. Latency is how quickly the input/output (I/O) request is carried out. Some analysts advise that latency is the most important metric for storage systems, in terms of real-world application performance.

The Storage Networking Industry Association (SNIA) describes it as "the heartbeat of a solid-state disk." The latency for a hard disk drive (HDD) system is typically between 10ms and 20ms (milliseconds); for solid-state storage, it should be just a few milliseconds or less. In practical terms, applications will expect about 1ms latency.
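IOPS and latency are tied together: with one outstanding request, a device that takes a given latency per I/O can complete at most one divided by that latency operations per second, and deeper queues raise that ceiling. The figures below are illustrative examples in line with the HDD and SSD ranges above, not measurements of any specific device.

```python
# Rough ceiling on IOPS implied by per-request latency and queue depth.
# Latency values are illustrative, not taken from a datasheet.

def max_iops(latency_ms: float, queue_depth: int = 1) -> float:
    """Upper bound on IOPS for a given per-I/O latency and queue depth."""
    return queue_depth / (latency_ms / 1000)

print(f"HDD at 10 ms,  QD 1 : {max_iops(10):>9,.0f} IOPS")
print(f"SSD at 0.1 ms, QD 1 : {max_iops(0.1):>9,.0f} IOPS")
print(f"SSD at 0.1 ms, QD 32: {max_iops(0.1, 32):>9,.0f} IOPS")
```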

4. MTBF Plus TBW Performance Metrics

Mean time between failures (MTBF) is a key reliability metric across most of the industry, including IT. For storage devices, this will usually mean the number of powered-on hours it will operate before failure. Failure in the case of storage media will normally mean data recovery and replacement because drives are not repairable.

Storage subsystems such as RAID arrays will have a different MTBF because drives can be replaced. A hard drive might have a typical MTBF of 300,000 hours, although newer technologies push this as high as 1,200,000 hours, or roughly 137 years of continuous operation. Some manufacturers are moving away from MTBF; Seagate, for example, uses an Annualized Failure Rate (AFR) metric instead.

AFR sets out to predict the percentage of drives that will fail in the field in a given year due to a "supplier cause," which excludes customer-side issues such as damage from a power outage. Solid-state storage systems, with their different physical characteristics, are also measured by endurance.

Equally, total terabytes written (TBW) sets out the lifespan of a solid-state drive (SSD), while drive writes per day (DWPD) states how many times the entire drive can be rewritten each day over its rated life. Manufacturers will usually state these metrics in their hardware warranties. Endurance also varies by flash generation, in addition to everything else.
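Two back-of-the-envelope conversions tie these reliability metrics together: an MTBF implies an approximate annualized failure rate (assuming a constant failure rate), and a DWPD rating implies a TBW figure over the warranty period. The drive capacity and warranty length below are illustrative values, not any vendor's published figures.

```python
# Illustrative conversions between the reliability metrics discussed above.
import math

HOURS_PER_YEAR = 8766  # average year, including leap years

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Approximate AFR implied by an MTBF, assuming a constant failure rate."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def terabytes_written(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
    """TBW endurance implied by a DWPD rating over the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

print(f"MTBF 1,200,000 h        -> AFR ~{annualized_failure_rate(1_200_000):.2%}")
print(f"1 DWPD, 3.84 TB, 5 yr   -> ~{terabytes_written(1, 3.84, 5):,.0f} TBW")
```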

5. Form Factors Plus Connectivity Metrics

For SSDs, single-level cell (SLC) flash has generally been the most durable, with multi-level cell (MLC), triple-level cell (TLC), and quad-level cell (QLC) packing more bits into each cell and trading durability for capacity. However, flash manufacturing has improved the durability of all types through technologies such as enterprise multi-level cell (eMLC) designs.

Although not performance metrics per se, storage buyers will need to consider how equipment connects to the host system and shares data. The typical form factor for laptops, now also common in storage arrays, is the 2.5in SSD, although larger, 3.5in drive bays remain available for HDDs. These drives use Serial ATA (SATA) or, for enterprise applications, SAS interfaces.

M.2 uses a PCI Express Mini Card-style format to interface with the host hardware. U.2 connectors are more commonly used on 2.5in SSDs and, unlike M.2, they can be hot-swapped. NVMe is an interface that allows storage, usually NAND flash, to connect to a host's PCIe bus; U.2 devices can also use the NVMe interface.

The Key Computing Volumes In Storage Bandwidth Performance 

Remember the old days, before the cloud was a reality? Back then you would buy a storage system that included a package of appropriately-sized and tested hardware and its corresponding storage software. Your storage performance was guaranteed out-of-the-box, literally. Actually, this is the way NetApp started, building easy-to-use, purpose-built storage appliances.

Those storage appliances were something like toasters, which is why the company was called Network Appliance. Today, NetApp not only sells these FAS storage systems but has also extended its industry-leading storage OS into Cloud Volumes ONTAP, a software-defined storage management platform for AWS, Azure, and Google Cloud.

This offers great flexibility, easy consumption, a jump start into the cloud, and the ability to run more cloud workloads that rely on traditional storage capabilities. Sounds good so far, right? Well, almost! Unlike the engineered FAS systems, where very smart storage experts picked a good mix of CPU power, network bandwidth, memory, and storage capacity and bundled it all together in small, medium, and large sizes to satisfy most workloads, now you are the person who has to size the underlying infrastructure.

How can you find the right sizing to ensure you get the best storage performance? Let's give you a helping hand and look at the basics of Cloud Volumes ONTAP performance sizing in detail.

Bandwidth (= Large Sequential IO)

For instance, pick an Amazon EC2 instance type with enough front-end network performance and memory (the more, the better). Choose an Amazon EBS-optimized instance and consider the available bandwidth to Amazon EBS. Then add enough Amazon EBS volumes and capacity to drive that bandwidth.
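The deliverable bandwidth is capped by the narrowest point in that data path. Here is a hypothetical sizing sketch; the instance and volume limits are assumed placeholders, not published AWS figures, so check the current EC2 and EBS documentation when sizing for real.

```python
# Hypothetical bandwidth sizing: the throughput you can drive is bounded by
# the narrowest link in the path. All limits below are assumed placeholders.

instance_network_mb_s = 1250   # front-end network limit of the instance (assumed)
instance_ebs_mb_s = 1000       # EBS-optimized bandwidth of the instance (assumed)
per_volume_mb_s = 250          # throughput per EBS volume (assumed)
volume_count = 6

aggregate_volume_mb_s = per_volume_mb_s * volume_count
deliverable = min(instance_network_mb_s, instance_ebs_mb_s, aggregate_volume_mb_s)

print(f"Aggregate volume throughput: {aggregate_volume_mb_s} MB/s")
print(f"Deliverable bandwidth: ~{deliverable} MB/s (the narrowest link wins)")
```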

IOPS (= Small Random/Sequential IO)

As with bandwidth, start by picking a suitable Amazon EC2 instance: choose an Amazon EBS-optimized instance and consider the available bandwidth to Amazon EBS. Then add enough Amazon EBS volumes and capacity to achieve the required IOPS. Start with the requirement and size along the data path, eliminating bottlenecks along the way.
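The same bottleneck reasoning applies to IOPS: start from the workload's requirement and check each hop. Again, every limit in this sketch is an assumed placeholder rather than an actual EC2 or EBS specification.

```python
# Hypothetical IOPS sizing along the data path; all limits are assumptions.
import math

required_iops = 40_000            # workload requirement (assumed)
instance_ebs_iops_limit = 55_000  # EBS IOPS ceiling of the instance (assumed)
per_volume_iops = 16_000          # provisioned IOPS per EBS volume (assumed)

volumes_needed = math.ceil(required_iops / per_volume_iops)
achievable = min(instance_ebs_iops_limit, volumes_needed * per_volume_iops)

print(f"Volumes needed: {volumes_needed}")
print(f"Achievable IOPS: {achievable:,} (requirement: {required_iops:,})")
```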

Storage (= Best Virtual Infrastructures)

Important to realize: virtual infrastructures are not as precisely defined as a given hardware box, so expect variations in the results you achieve. Therefore, plan for some headroom, and always do a proof of concept to test whether expectations meet reality! When it comes to I/O in the storage system, most systems today have multiple connections.

These connections lead up to the switch infrastructure, and in the case of clustered storage, they scale as more storage is added. It is important that there is plenty of bandwidth to the storage controller, since it is likely to receive storage requests from many servers and virtual machines simultaneously.

Why Latency Is More Important Than Bandwidth

In reality, most storage managers overbuy storage system bandwidth, and it is typically not the primary issue in the performance loop. The disadvantage is that you are often paying for bandwidth that you will never use, or at least not until some future date. Ideally, you should buy just the bandwidth you need today and upgrade as the need arises.

This is one of the deliverables of clustered storage systems, like those from 3PAR, Isilon (NAS), and HP's LeftHand Networks, because bandwidth can grow as you need it, providing maximum flexibility and CAPEX optimization. Clustered storage also plays a big role in addressing the next area of performance concern: the storage controller itself.

Bandwidth is a simple number that people think they understand. The bigger the number, the faster the storage, right? Well, nope! Leaving aside that many consumer bandwidth numbers are bogus — link speed is not storage speed — actual performance is rarely dependent on pure bandwidth. Bandwidth is a convenient metric, easily measured.

Related Resource: What A Two-Factor Authentication (2FA) Is All About For Beginners

However, bandwidth itself is not a critical factor in storage performance. What most storage performance tools measure is bandwidth with large requests. Why? Because small requests don't use much bandwidth. But why doesn't the CPU issue more I/O requests to soak up that unused bandwidth? Because every I/O takes time and resources to complete, including context switches, memory management, metadata updates, and more.

Suffice it to say, there are a lot of small requests even if you're editing huge video files. That's because, behind the scenes, the virtual memory system (with help from the CPU's Memory Management Unit, or MMU) is constantly swapping out the least-used pages and swapping in whatever data or program segments your workload requires.

These pages are typically 4KB on Windows and 16KB on recent Macs with Apple silicon. If you have a lot of physical memory, there's less paging initially after a reboot; but over time, as you run more programs and open more tabs, physical memory fills up and the swapping starts. Thus, much of the I/O traffic to storage isn't under your direct control, and it consists of small requests that use little bandwidth.
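If you want to see the paging granularity on your own machine, Python's standard mmap module exposes the system page size directly, as in this short sketch.

```python
# Query the paging granularity of the machine this runs on.
import mmap

print(f"System page size: {mmap.PAGESIZE} bytes")
# Typically 4096 on most x86 systems and 16384 on Apple silicon Macs.
```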

What Really Matters In Simple Storage Bandwidth Terms

Latency. How quickly does a storage device service a request? There’s an obvious reason for latency’s importance, and another subtle — but nearly as important — reason. Let’s start with the most obvious one. Say you had a storage device with infinite bandwidth but each access took 10 milliseconds. That device could handle 100 accesses per second (1000ms/10ms = 100).

If the average access was 16KB, you would have a total throughput of just 1,600KB per second, far less than the nominal 480 Mbits/sec USB 2.0 offers, wasting an almost infinite amount of bandwidth. A 10ms access is around what the average hard drive delivers, which is why storage vendors packaged up hundreds, even thousands, of HDDs to maximize the number of accesses.
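The arithmetic behind that example is worth spelling out: a device that needs 10ms per access completes only 100 accesses per second, so even with unlimited link bandwidth the data actually moved at 16KB per access is tiny.

```python
# Worked arithmetic for the "infinite bandwidth, 10 ms access" example above.

access_time_ms = 10
avg_access_kb = 16

accesses_per_sec = 1000 / access_time_ms            # 100 accesses per second
throughput_kb_s = accesses_per_sec * avg_access_kb  # 1,600 KB/s
throughput_mbit_s = throughput_kb_s * 8 / 1000      # ~12.8 Mbit/s

print(f"{accesses_per_sec:.0f} accesses/s x {avg_access_kb} KB = "
      f"{throughput_kb_s:,.0f} KB/s (~{throughput_mbit_s:.1f} Mbit/s), "
      f"far below USB 2.0's nominal 480 Mbit/s")
```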

But that was the bad old days. Today's high-performance SSDs have latencies well into the microsecond range, meaning they can handle as many I/Os as a million-dollar storage array did 15 years ago. Only with a near-infinite stream of 16KB accesses would you be limited by the bandwidth of the connection. The subtle reason for latency's importance is more complicated.

Let's say you have 100 storage devices with a 10ms access time, and that your CPU is issuing 10,000 I/Os per second (IOPS). Your 100 storage devices can handle 10,000 IOPS, so no problem, right? Wrong. Since each I/O takes 10ms, your CPU is juggling 100 uncompleted I/Os at any moment. Drop the latency to 1ms and the CPU has only 10 uncompleted I/Os in flight.
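This is Little's Law at work: the number of I/Os in flight equals the I/O rate multiplied by the time each I/O takes. The sketch below reproduces the figures from the example above.

```python
# Little's Law: outstanding I/Os = IOPS x latency (in seconds).

def outstanding_ios(iops: int, latency_ms: float) -> float:
    return iops * (latency_ms / 1000)

print(f"10,000 IOPS at 10 ms -> {outstanding_ios(10_000, 10):.0f} I/Os in flight")
print(f"10,000 IOPS at  1 ms -> {outstanding_ios(10_000, 1):.0f} I/Os in flight")
```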

Learn More: Wayback Machine | The No #1 Data Internet Archive For Webpages

If there's an I/O burst, the number of uncompleted I/Os can cause the page map to exceed available onboard memory and force the system to start paging, which, since paging is already slow, is a Bad Thing. The problem with latency as a performance metric is twofold: it's not easy to measure, and few understand its importance. Yet people have been buying and using lower-latency interfaces for decades.

Most of them probably didn't even know why those interfaces felt better than cheaper, nominally-as-fast alternatives. For example, FireWire's advantage over USB 2, even though the bandwidth numbers were roughly comparable, was latency: USB 2, at a nominal 480 Mbits/sec, used a polling access protocol with higher latency.

In other words, a FireWire 800 hard drive would always seem snappier than the same drive over USB, because of the lower-latency protocol. You could boot a Mac from a USB 2 drive, but running apps was dead slow. Similarly, Thunderbolt has always been optimized for latency, which is perhaps the main reason it costs more than comparable USB devices.

Summary Notes:

Before we conclude, it's good to mention that, in terms of storage bandwidth performance, data is your most vital asset. Given any Cloud Volumes Platform, it must therefore be optimally stored and managed throughout its lifecycle. Keep in mind that as cloud technology evolves, its needs are also becoming more sophisticated by the day.

That evolution demands countless actions and processes working harmoniously across different environments. Behind the Cloud Volumes Platform is NetApp technology that we can borrow ideas from. One thing is for sure: it's an application-driven infrastructure that powers it up, making it a one-stop shop for advanced storage and modernized data management.

Resource Reference: What Machine Learning (ML) Entails | Its Key Algorithm Types

Its main solutions cover the hybrid multi-cloud and the broader computing technology landscape. It's an integrated set of innovative storage infrastructures and intelligent data services, deployed and managed on your choice of cloud, whether public or private, through an advanced API-driven control plane with comprehensive oversight.

In addition, with high levels of integrability and adaptability, the platform delivers application-driven storage anywhere, seamlessly addressing data needs and eliminating siloed operations. It also spares you from juggling different tools managed by different teams with different workflows and APIs. Their services cover almost everything.

In a nutshell, with NetApp hybrid cloud solutions, you'll meet your business goals faster and more efficiently. On the same note, their industry-specific solutions, such as those for Electronic Design Automation, Media & Entertainment, Healthcare, and Oil & Gas, help you accelerate your cloud journey. Just give it a try, and then share your thoughts.

You now have an idea, right? In case there's something else you need more help with, you can always Consult Us and let us know. Our team of Web Tech Experts Taskforce will be more than glad to see you through, bearing in mind that you and your business are the main reason we are here in the first place. Until next time, thanks for your time!

