
Posted by Jamie Godbout

Advancing Automation Technology - How to Specify Shared Storage

June 9, 2020

The Advancing Automation Technology series takes an in-depth look at how the latest technology is being used to develop state-of-the-art control systems. We will select various software and hardware packages in use today and provide best-practice techniques for applying them, along with guidance on when each topic applies.

How to Specify Shared Storage

Shared storage is required for any virtual system that needs failover or high availability. There are a couple of options available in the shared storage world. The least expensive option is a direct attached storage (DAS) solution. DAS does have its limitations, in that you can only connect the storage array to a limited number of servers, perhaps 2 or 3 depending on the manufacturer. However, we only need 2 or 3 servers at most to connect to the storage array, so this is acceptable. The DAS unit connects to a SAS card in the server and contains an array of hard drives where the virtual machines will be stored.

The other option is a network attached storage (NAS) solution. NAS is similar to DAS, but is connected over an Ethernet network instead of a direct connection. The benefit is that more servers can access the storage arrays. Unfortunately, NAS solutions tend to be very expensive, somewhere in the $50K to $100K range.

Shared storage offers several inherent benefits as well. These devices generally come with their own built-in redundancy systems, so business continuity is achieved within the device. They are also very scalable, so as your storage requirements increase you can usually just add additional arrays to expand your available storage.


Hardware Requirements

I don’t have any typical hardware requirements when it comes to the SAN. As it turns out, every time I go to the market to look for a new storage array, the technology has changed and it seems like I have to start from scratch. So let’s cover the most general requirements that will be common to whatever system you choose to employ.

The most important requirements to consider are storage capacity and performance. Typical SANs are used for enterprise-level storage that requires huge amounts of space, from 10TB to thousands of terabytes. Our control system is likely to need between 3TB and 5TB at most, and you will want your virtual machines stored on high-performance drives.

This brings us to the next important requirement: performance. Virtual machines need to read and write to the storage array constantly. This read/write performance is measured in IOPS (input/output operations per second). The more IOPS you have available, the faster your virtual machines will respond. Hard disks offer very few IOPS compared to solid state drives (SSDs), because there is a mechanical operation the drive must perform to seek the requested data. You may see around 1,200 IOPS for a small array of hard disks, but the same array and capacity with SSDs might give you 80,000 IOPS. My advice is that, if you can afford the premium, it is much better to go with an SSD array to store the virtual machines, and use hard disks for file storage.

There are a couple of tricks you can use to get the best performance out of hard disks. Select fast drives, no less than 10,000 RPM, though I prefer the 15,000 RPM variety if possible. Also select several smaller drives rather than a few large ones. You will roughly double the performance by selecting eight 300GB drives instead of four 600GB drives. The reason is that more IOPS can take place in parallel when there are more drives in the system. Each I/O operation on a drive is serial, so if you only have a couple of drives the data must pass through the same few pipes. If you increase the number of pipes, the data can get in and out much more quickly.
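To see how drive type and drive count dominate the math, here is a minimal sketch. The per-drive IOPS figures are illustrative ballpark values chosen to match the examples above, not vendor specifications:

```python
# Rough IOPS estimate for a storage array. Per-drive figures are
# illustrative ballpark values, not vendor specifications.
PER_DRIVE_IOPS = {
    "10k_rpm_hdd": 125,
    "15k_rpm_hdd": 150,
    "ssd": 10_000,
}

def array_iops(drive_type: str, drive_count: int) -> int:
    """IOPS scale roughly linearly with drive count, because each
    drive can seek and serve requests independently (more "pipes")."""
    return PER_DRIVE_IOPS[drive_type] * drive_count

# Same 2.4TB of raw capacity, two ways:
print(array_iops("15k_rpm_hdd", 8))  # eight 300GB drives -> 1200 IOPS
print(array_iops("15k_rpm_hdd", 4))  # four 600GB drives  ->  600 IOPS
# The same eight-drive array built from SSDs:
print(array_iops("ssd", 8))          # -> 80000 IOPS
```

The linear scaling is an approximation; controller overhead, caching, and RAID write penalties all shift the real numbers, but the eight-drives-beat-four conclusion holds.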

The other performance-enhancing trick is to use RAID10 instead of the common RAID5 array. RAID5 is commonly used because it allows a hard drive to fail, and when that drive is replaced the array can be rebuilt without loss of data. The downside is that the array has to calculate a parity bit used to rebuild the array when a hard drive fails, and that calculation limits the number of IOPS available to the storage array. RAID10 is a mirrored/striped array and provides the same single-drive fault tolerance as RAID5, but without the IOPS penalty. The downside of RAID10 is that it is the most expensive option, because the available storage is cut in half. If you have 2.4TB of hard drives in your array, RAID10 will give you about 1.2TB of usable storage. The other 1.2TB is used as a mirror, so that when a drive fails the mirrored copy can be accessed and then copied to the replacement drive after it is installed.
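The capacity trade-off can be checked with a bit of arithmetic. This sketch assumes equal-size drives and the simple textbook overhead for each RAID level:

```python
def usable_capacity_tb(raw_tb: float, raid_level: str, drive_count: int) -> float:
    """Usable capacity after RAID overhead, assuming equal-size drives."""
    if raid_level == "RAID5":
        # One drive's worth of space across the array holds parity.
        return raw_tb * (drive_count - 1) / drive_count
    if raid_level == "RAID10":
        # Every block is mirrored, so half the raw space is usable.
        return raw_tb / 2
    raise ValueError(f"unsupported RAID level: {raid_level}")

raw = 2.4  # e.g. eight 300GB drives
print(usable_capacity_tb(raw, "RAID10", 8))  # 1.2TB, as in the example above
print(usable_capacity_tb(raw, "RAID5", 8))   # about 2.1TB, but with the parity penalty
```

So RAID5 keeps most of the raw capacity while RAID10 keeps half; the article's recommendation trades that capacity for IOPS.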

Software Requirements

Most shared storage solutions will come with their own software packages or will provide some sort of embedded web page to help you configure the device. See your manufacturer’s literature for the specific software packages that may be available.


If you need high availability or a fault-tolerant system for your FactoryTalk Batch control system, then some sort of shared storage solution will be necessary. The virtual machines are stored on the storage array, and the servers provide the computing power. That way, if a server fails, its workload is transferred to another server in the host cluster.

Shared storage technology is constantly evolving, but the requirements to focus on are capacity and performance. A typical control system may require 2TB to 4TB of storage; anything less than 2TB will be difficult to implement with a shared storage solution.

Performance is driven by the available IOPS, which are determined by drive speed, type, configuration, and quantity.

Click here to return to Part I of What Do You Need to Implement a FactoryTalk Batch Solution.

Click here to see my previous post discussing the difference between a FactoryTalk Batch solution and a traditional PLC project.

If you have any questions concerning your process control system or require some assistance, please feel free to reach out to our Controls and Automation experts through our Help Desk.

About the Author:

Jamie has left Hallam-ICS to pursue other endeavors.  If you have questions about this article or other Ignition questions, contact Tom Toppin, Process Controls Engineer. 

About Hallam-ICS:

Hallam-ICS is an engineering and automation company that designs MEP systems for facilities and plants, engineers control and automation solutions, and ensures safety and regulatory compliance through arc flash studies, commissioning, and validation. Our offices are located in Massachusetts, Connecticut, New York, Vermont and North Carolina and our projects take us world-wide.  Contact Us

Topics: Process Control and Plant Automation
