
Sunday, May 9, 2021

Here's why DNA storage won't replace tape anytime soon

In the world of archival storage, tape is the undisputed king. While advances in technologies such as DNA and glass storage offer a glimpse into the future, there's currently no alternative capable of rivalling tape for reliability, longevity and cost.

That said, businesses still face a number of challenges when it comes to managing and preserving data over the long term, whether that data lives in the cloud or on-premise.

TechRadar Pro spoke to David Trachy, Senior Director of Emerging Markets at storage firm Spectra Logic, to find out how hybrid perpetual storage could solve some of the trickiest data problems facing companies today.

What does the future look like for flash? What impact will this have on the storage industry?

The fastest growing technology in the storage market continues to be NAND flash. Its durability and speed find favor in both the consumer and enterprise segments, but the key innovation focus for the future of the flash market lies in seeking greater capacity. Though transitioning from planar (2D) to 3D NAND looked highly promising at the time, further capacity gains from that approach are proving unviable: increasing the number of bits stored per cell concurrently decreases the number of times each cell can be programmed, limiting how far it can be pushed. Another option to increase flash capacity is to decrease the cell size. But given that 19 nanometers (nm) is as small as the industry plans on producing, and we are already at 20 nm on the flash roadmap, this also looks like a dead end.

The greatest opportunity to achieve flash capacity gains is by increasing the number of layers on a chip; however, there are complex issues with building 100-plus-layer parts. For this reason and others, no vendors are talking about building past 136 layers in a single-stack part. So, we predict that future capacity gains in flash will primarily be achieved by string stacking, a technique in which multiple multi-layer flash dies are connected together to create a flash chip with more layers. This may result in smaller cost decreases for flash going forward. System and cloud providers will take advantage of the zone-based interface (enabling the physical placement of data into zones matching the performance needs of the data) to get longer life, better performance and greater capacity out of their flash assets.
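To make the zone-based idea concrete, here is a minimal, hypothetical sketch -- not a real ZNS driver; the Zone class and the "hot"/"cold" labels are invented for illustration -- of routing data into zones that match its access pattern:

```python
# Illustrative sketch only: models the idea of placing data into zones
# that match its performance and lifetime needs.
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str           # e.g. "hot" (frequently rewritten) or "cold" (archival)
    capacity_gb: int
    used_gb: int = 0
    objects: list = field(default_factory=list)

    def append(self, obj_id: str, size_gb: int) -> bool:
        # Zoned devices are append-only within a zone until the zone is reset.
        if self.used_gb + size_gb > self.capacity_gb:
            return False
        self.objects.append(obj_id)
        self.used_gb += size_gb
        return True

def place(zones: dict, obj_id: str, size_gb: int, temperature: str) -> bool:
    """Route an object to the zone matching its access pattern."""
    return zones[temperature].append(obj_id, size_gb)

zones = {"hot": Zone("hot", 100), "cold": Zone("cold", 1000)}
place(zones, "db-index", 10, "hot")        # hot data: short-lived, often rewritten
place(zones, "archive-2020", 500, "cold")  # cold data: write-once, rarely read
```

Grouping data with similar lifetimes in the same zone is what lets the device avoid garbage-collection churn, which is where the longer life and better performance claims come from.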

What market influences have had the greatest impact on magnetic disk? What lies ahead for disk?

Disk drive shipments over the last four quarters dropped about 20 percent in volume -- 255 million units, compared to 328 million for the prior year. This decline can be attributed to flash technology eroding markets where disk was once the only choice. For instance, most laptops now utilize flash storage, and the new generation of gaming consoles is all flash-based. Despite the decline of the 2.5-inch disk category, the 3.5-inch nearline disk drive category has experienced year-on-year increases in both capacity and volume shipments. It now comprises more than 50% of all disk revenue, and is predominantly sold to large IT shops and cloud providers. Developing a singular product, with a few variations, has allowed the disk companies to focus their resources, enabling them to remain profitable even as a good portion of their legacy business erodes.

With a number of ongoing advancements and a lengthy LTO roadmap, it would seem that tape continues to show no signs of disappearing. What are the key takeaways in terms of tape innovation, and what is next for tape?

Tape is certainly here to stay. It is a perfect medium for long-term archive. And with its air gap capability, tape has undoubtedly helped thousands of companies survive ransomware attacks. The biggest organizations in the world -- including cloud providers -- are utilizing tape. In fact, we are seeing a resurgence of tape because there is no storage medium in use today in the world that has greater density and lower cost than tape, period.

While the digital tape business for backing up primary disk systems has seen year-to-year declines (as IT backup has moved to disk-based technology), the need for tape in the long-term archive market continues to grow. Tape technology is well suited for this space as it provides the benefits of low environmental footprint on both floor space and power; a high level of data integrity over a long period of time; unlimited scalability; and a much lower cost per gigabyte of storage than any other storage medium.

Linear Tape Open (LTO) technology has been, and will continue to be, the primary tape technology. The LTO consortium assures interoperability for manufacturers of both LTO tape drives and media. In 2018, the eighth generation of this technology was introduced, providing 12TB native (uncompressed) capacity per cartridge. It is expected that later in 2021 the ninth generation, LTO-9, will be introduced at 18TB (uncompressed): a 50% capacity increase over LTO-8. The consortium also publishes a robust roadmap of future products, all the way to LTO-12 at a capacity point of 144TB on a single piece of media.
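As a quick sanity check on those figures: the published roadmap of the time doubled native capacity each generation after LTO-9 (an observation about the roadmap's shape, not a consortium commitment), and projecting that doubling forward from 18TB lands exactly on the quoted 144TB endpoint:

```python
# Back-of-the-envelope check of the LTO roadmap figures quoted above,
# assuming native capacity doubles each generation after LTO-9.
capacity_tb = 18  # LTO-9 native (uncompressed) capacity
for gen in range(10, 13):
    capacity_tb *= 2
    print(f"LTO-{gen}: {capacity_tb} TB native")
# LTO-10: 36 TB, LTO-11: 72 TB, LTO-12: 144 TB -- matching the
# 144TB endpoint on the published roadmap.
```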

A historical issue with tape has been the perception that it is “hard to manage.” HSM (Hierarchical Storage Management) attempted to solve the complexity of tape by presenting a standard network file interface to the application while the HSM managed the tape system behind it. What is needed to make tape much easier to manage is an interface that accepts long retrieval times and allows an unlimited number of data entities to be retrieved at one time. A new de-facto standard interface has emerged that, when supported by tape system suppliers, would greatly expand the number of applications that could utilise tape. An S3 interface is presented to the application, and all data stored on tape is mapped as being in an offline tier. The application is shielded from the details of tape management and, at the same time, the tape system can not only manage the tapes but also provide advanced features such as multi-copy, offsite tape management and remastering -- all done transparently to the application. With a tape system that supports this interface, countless S3 applications could utilise tape without modification. A future product has already been announced with this capability, with another said to be released in 2021.
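The interview does not name the interface's exact calls, but S3 applications already cope with offline tiers via the standard restore workflow. A hedged illustration with boto3, against a hypothetical bucket and key, shows the pattern a tape-backed S3 target could plausibly present:

```python
# Illustrative only: how a standard S3 application interacts with data
# in an offline tier today (the Glacier-style restore workflow).
# A tape system presenting an S3 interface could behave the same way.
# Bucket and key names here are hypothetical.
import boto3

s3 = boto3.client("s3")

# Ask for an object in the offline tier to be brought back online.
# Long retrieval times are expected; the request is asynchronous.
s3.restore_object(
    Bucket="archive-bucket",
    Key="project/raw-data.tar",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)

# Poll until the restore completes, then read the object as usual.
head = s3.head_object(Bucket="archive-bucket", Key="project/raw-data.tar")
if head.get("Restore", "").startswith('ongoing-request="false"'):
    body = s3.get_object(Bucket="archive-bucket",
                         Key="project/raw-data.tar")["Body"]
```

Because the application only ever sees the restore/read pattern, whether the bytes come back from a cloud archive tier or from a robotic tape library is invisible to it.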

(Image: tape storage. Credit: Shutterstock / kubais)

What are the deciding factors for organizations when it comes to choosing between cloud and on-premise storage, and what predictions can you share on this topic?

Recently there has been talk, even from cloud providers, about the onset of new hybrid systems (essentially hybrid perpetual storage) that allow processing to run in the cloud, on-premise, or both, while providing for the long-term retention of the raw and refined data of that processing, independent of where the processing occurs. The two tiers of storage are defined as the Project Tier and the Perpetual Tier. Project storage will always be resident where the data is active and being processed, either in the cloud or on-premise. However, with the advent of a new generation of storage solutions, organizations will now have a choice, regardless of where the Project Tier is located, as to whether the Perpetual Tier (holding inactive data) should be located in the cloud or on-premise.

The first decision an organization needs to make when deciding on the locality of both the Project and Perpetual Tiers is where to perform the processing -- in the cloud or on-premise. Many factors need to be weighed in making this decision, such as the total cost of ownership, the versatility each option provides the organization, and the business preference toward capital or operating expenses. When analyzing the advantages and disadvantages of a cloud or on-premise Perpetual Tier solution, there are several questions organizations should ask themselves: 1) How much data will be stored? 2) How long will the data need to exist? 3) How frequently, and how much of, the data will need to be restored? 4) How quickly will data need to be restored? 5) How committed is my organization long-term to a particular cloud vendor? And 6) Do we have the required facilities and staff to maintain an on-premise solution? A rough cost model built around these questions is sketched below.
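To make those questions concrete, here is a toy cost model in which every unit price is a made-up placeholder (real quotes vary widely by vendor, tier and contract; substitute your own figures) comparing a cloud versus on-premise Perpetual Tier over a ten-year horizon:

```python
# Toy model only: all unit costs below are invented placeholders,
# not quotes from any vendor. Plug in real numbers before deciding.
def cloud_cost(tb, years, restores_tb_per_year,
               store_per_tb_month=1.0,   # placeholder $/TB-month, archival class
               restore_per_tb=90.0):     # placeholder $/TB restored and egressed
    return (tb * store_per_tb_month * 12 * years
            + restores_tb_per_year * restore_per_tb * years)

def onprem_cost(tb, years,
                capex_per_tb=10.0,       # placeholder $/TB: tape media + library share
                opex_per_year=20000.0):  # placeholder facilities/staff share per year
    return tb * capex_per_tb + opex_per_year * years

for tb in (500, 5_000, 50_000):
    restores = tb * 0.05  # assume 5% of the archive restored per year
    print(f"{tb:>6} TB: cloud ${cloud_cost(tb, 10, restores):>12,.0f}"
          f"  vs on-prem ${onprem_cost(tb, 10):>12,.0f}")
```

Even with invented prices, the structure of the model shows why questions 1 and 3 dominate: storage volume and restore frequency are the multipliers, while on-premise costs are weighted toward fixed facilities and staff.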

Once the decision has been made to process in the cloud or on-premise or some combination of the two, the next decision to make is where to locate the Perpetual Tier -- in the cloud or on-premise. Running processes in the cloud requires the project data to be in an online storage pool of the respective cloud provider.

The ideal scenario might be for customers to have the option of running the Project Tier on-premise or in the cloud, while keeping the Perpetual Tier on-premise. This would require a next-generation storage system. Consider a future on-premise storage system to which all raw data is sent instead of to the cloud; upon receiving that data, it would perform two actions. First, it would “sync” the data to the cloud, in order for cloud processing to occur on that data; second, it would make an archive copy of that data to either on-premise disk or tape. Additionally, the system could be programmed to automatically delete the data in the cloud after a pre-set period of time, or the customer could manually delete the data when processing was complete.
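A minimal sketch of that sync-and-archive flow, with invented class and method names (no specific product or API is implied):

```python
# Hypothetical sketch of the sync-and-archive flow described above.
# The Store class and all names are invented for illustration.
import time

class Store:
    """Stand-in for a cloud bucket or an on-premise disk/tape archive."""
    def __init__(self, name):
        self.name = name
        self.objects = {}  # data_id -> (payload, expiry timestamp or None)

    def put(self, data_id, payload, expire_after=None):
        expiry = time.time() + expire_after if expire_after else None
        self.objects[data_id] = (payload, expiry)

    def purge_expired(self):
        now = time.time()
        self.objects = {k: v for k, v in self.objects.items()
                        if v[1] is None or v[1] > now}

def ingest(data_id, payload, cloud, archive, cloud_retention_days=30):
    # 1) sync to the cloud so processing can run there
    cloud.put(data_id, payload, expire_after=cloud_retention_days * 86400)
    # 2) make a durable archive copy on on-premise disk or tape
    archive.put(data_id, payload)

cloud = Store("cloud project tier")
archive = Store("on-premise perpetual tier")
ingest("raw-scan-001", b"...", cloud, archive)
cloud.purge_expired()  # run periodically; drops cloud copies past retention
```

The key property is that the on-premise copy is authoritative and permanent, while the cloud copy is a disposable working set, so cloud storage charges stop accruing once processing is done.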

What can you tell us about future technologies and which will make it to maturity?

As a $50-billion-a-year market, the storage industry has always attracted, and will continue to attract, venture investment in new technologies. Many of these efforts have promised an order-of-magnitude improvement in one or more of the basic attributes of storage: cost (per capacity), low latency, high bandwidth, and longevity. To be clear, over the last 20 years only a small portion of overall venture capital investment has been dedicated to the development of low-level storage devices, with the majority going to storage systems that utilise existing storage devices as part of their solution. These developments align better with the venture capital market in that they are primarily software-based and require relatively little capital investment to reach production. They are also lower risk and have faster time-to-market, as they do not depend on scientific breakthroughs in materials, light or quantum physics phenomena.

Much of the basic research for advanced development of breakthrough storage devices is university or government funded, or is funded by the venture market as a purely proof-of-concept effort. For example, an announcement was made regarding storing data in five dimensions on a piece of glass or quartz crystal capable of holding 360TB of data, literally forever. Advanced development efforts continue in attempting to store data in holograms, a technology that has long been longer on promises than results. Another group is researching storing data in DNA and, just recently, a company received $40 million for its idea of storing data by continually bouncing it between satellites in low Earth orbit.

Developments at the quantum level include storing data by controlling the “spin” of electrons. Though these and other efforts have the potential to revolutionise data storage, it is difficult to believe that any are mature enough at this point to significantly impact the digital universe through at least 2030. Historically, many storage technologies have shown promise in the prototype phase but have been unable to make the leap to production products that meet the cost, ruggedness, performance and, most importantly, reliability of the current technologies in the marketplace. Given the advent of cloud providers, the avenue to market for some of these technologies might become easier.



from TechRadar - All the latest technology news https://ift.tt/3ewg61X