5 major storage automation trends in 2022

Storage automation has been around for a couple of decades. It started as a way to save storage administrators time on moving data, provisioning or disabling storage, dividing disks into volumes and LUNs (logical unit numbers), and a host of other manual tasks that consumed their day.

These capabilities have gradually been integrated into storage products, with vendors such as EMC and NetApp among the leaders in advancing storage automation. These days, the number and sophistication of automation features have multiplied significantly.

Here are five of the top trends in storage automation:

1. Automation of larger data archives

Automation was already valuable when storage administrators only had to deal with gigabytes (GB) of storage. But as capacities grew to terabyte (TB) scale and beyond, the need for automation expanded exponentially.

“Customers are looking for efficient, scalable NAS solutions designed to store petabytes (PB) of capacity,” said Brian Henderson, director of marketing for unstructured data storage products at Dell Technologies.

“Studies have shown that a single administrator can manage petabytes of storage in these environments, thanks to advanced and powerful storage management capabilities, such as replication, performance management, data management, and snapshots.

“These modern NAS solutions must provide a variety of interfaces, such as a CLI, web UI, scripting, and APIs, to automate many of these tasks, using tools that provide enterprise-grade management, reporting, monitoring, and troubleshooting capabilities.”
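To illustrate the API-driven approach Henderson describes, here is a minimal sketch of provisioning a volume through a REST endpoint instead of a management UI. The endpoint path, payload fields, and snapshot policy names are hypothetical, not any vendor's actual API schema.

```python
# Sketch of scripted storage provisioning via a REST API.
# The "/api/v1/volumes" endpoint and payload fields are hypothetical;
# each NAS platform defines its own API schema.
import json
import urllib.request


def build_volume_request(name: str, capacity_gb: int, snapshots: bool = True) -> dict:
    """Assemble a provisioning payload instead of clicking through a UI."""
    if capacity_gb <= 0:
        raise ValueError("capacity must be positive")
    return {
        "volume": name,
        "capacity_gb": capacity_gb,
        "snapshot_policy": "daily" if snapshots else "none",
    }


def provision(base_url: str, payload: dict) -> urllib.request.Request:
    """Wrap the payload in a POST request; the caller decides when to send it."""
    return urllib.request.Request(
        f"{base_url}/api/v1/volumes",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Because the request is built in code, the same provisioning step can be repeated for hundreds of volumes, reviewed before sending, or dropped into a larger automation pipeline.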

2. Automation of object storage

The use of object storage has exploded in recent years. As more data has been loaded into object repositories, demand for automation features has increased. Major object storage workloads include content storage and applications, both on-premises and in the public cloud.

Users want object storage to provide the functionality they expect from network-attached storage (NAS) and storage area network (SAN) environments. This includes integration with accelerated processing technologies, quality of service (QoS), integration with artificial intelligence (AI) software stacks, support for higher-performance storage tiers leveraging flash media, ease of deployment in the cloud, and automated data lifecycle management, according to IDC analyst Eric Burgener.

DataCore Swarm object storage software, for example, is designed from the ground up to securely manage billions of files and petabytes of information. Swarm provides a foundation for archiving, accessing, and analyzing hyperscale data while ensuring data integrity and eliminating hardware dependencies. This is achieved through liberal use of automation.

3. Ease of use in storage

The cloud has brought a consumer mindset to storage. People now want their cloud resources to work just as if they were using a personal service on their tablet. This mindset, in turn, has found its way into the entire storage field.

People want the extra storage, services, and capacity now. Storage and cloud administrators therefore need easy-to-use features, which can only come from higher levels of automation.

Easier management of clusters, for example, makes life easier for a storage administrator. That’s why investments in automation and orchestration tools, such as Ansible and Kubernetes for easier data lifecycle management, are on the rise.

“CIOs are trying to reduce the number of independent storage silos by consolidating workloads onto fewer high-density platforms,” Burgener said.

“This creates new requirements to support multiple access methods, storage tiering, flexible QoS controls, and automation that works in bare metal, virtualized and containerized environments.”

4. Hybrid storage

Storage arrays and NAS filers used to contain hard disk drives (HDDs) only. But the advent of flash has led to equipment holding large amounts of storage that mixes flash and HDDs in a hybrid arrangement. In more and more cases, deployments are going all-flash.

Storage personnel, therefore, must handle many more configurations, configuration changes, and requests to add flash, and must have ways to provision quickly in these hybrid and increasingly flash-dominated environments.
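One common form this automation takes is an age-based tiering rule that decides where data should live. The sketch below is illustrative only: the tier names and the 30- and 365-day cutoffs are assumptions for the example, not any vendor's policy.

```python
# Hypothetical tiering rule for a hybrid flash/HDD environment:
# recently used data stays on flash, older data drops to HDD or archive.
# The 30- and 365-day cutoffs are illustrative assumptions.
def tier_for(age_days: float, hot_cutoff: float = 30.0, cold_cutoff: float = 365.0) -> str:
    if age_days < 0:
        raise ValueError("age cannot be negative")
    if age_days < hot_cutoff:
        return "flash"    # hot data stays on the fast tier
    if age_days < cold_cutoff:
        return "hdd"      # warm data moves to spinning disk
    return "archive"      # cold data leaves the array entirely


def plan_migrations(ages_by_volume: dict[str, float]) -> dict[str, str]:
    """Map each volume name to its target tier based on data age."""
    return {name: tier_for(age) for name, age in ages_by_volume.items()}
```

Running a rule like this on a schedule replaces the manual churn of individually deciding which datasets deserve the flash tier.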

The vendor community has responded with software up to the task. NetApp, for example, has continually updated its ONTAP 9 data management software to provide greater levels of functionality and automation. These capabilities bring greater simplicity and flexibility to cloud and data center storage. The software also helps IT implement a range of storage architectures, including hardware storage systems, software-defined storage (SDS), and the cloud.

NetApp FAS storage arrays, for example, leverage this software to build storage infrastructures that balance and automate the provisioning of capacity and performance. The arrays are optimized to simplify deployment and operations, while retaining the flexibility to manage future growth and cloud integration. The FAS family offers unified capabilities for SAN, NAS, and object workloads.

5. Enhanced functions

Today it could be argued that storage alone is no longer enough for a vendor. Just as NetApp has evolved from NAS filers to automatically provisioning any type of storage and orchestrating it in the cloud and on-premises, other vendors have realized the need to integrate more features, many far beyond traditional storage concepts. This includes areas such as security, ransomware protection, and data protection.

FalconStor, for example, went far beyond its original backup roots to include data protection, archiving, and secure data containers that can leverage the various capabilities offered by leading object storage offerings, both on-premises and in the cloud.

FalconStor StorSafe added automation to enhance its object storage metadata management capabilities and surface the most applicable data. It leverages the immutable storage of WORM-compliant offerings to provide a perpetual, always-available archive. By splitting data into fragments and automatically dispersing them across the cluster, availability increases; a data center breach that results in the theft of a machine causes no data loss, and a complete dataset cannot be mounted from the stolen hardware.
