Almost half a year ago I was at Storage Field Day 7 in San Jose (CA), where we had a couple of awesome presentations by multiple companies. One of these companies was Catalogic, who presented their ECX copy data management platform. A couple of my fellow delegates have written some great content on this technology; I'll link to their posts at the end of this one and encourage you to read them. I'll also include the first presentation, given by Ed Walsh (CEO, Catalogic).
As I said, we're almost half a year on, and I was curious what has changed in that time with the companies that presented at SFD7.
Copy Data Management
So what is Copy Data Management according to Catalogic, and what challenges does it solve? If you’ve watched the above video you’ve seen that Catalogic defines three challenges: data growth, manageability and business agility.
In a world where data seems to be exploding, a mechanism to create order in this data sprawl is more than welcome, and that's where ECX comes in. By deploying an OVF (Docker-based) appliance, without any agents on your servers, you get a system that provides orchestration, automation, DR and data analytics. Using this for test/dev, better RTO/RPO, reduced CapEx/OpEx, orchestration to use the power of the cloud, and analysis of and reporting on your data is very interesting.
Hearing about all these awesome possibilities, it struck me that this was only possible with NetApp storage. I understand you need to start somewhere, but for a company in business since 1996 it must be doable to support more than just NetApp…
Fast forward 6 months
As mentioned, this was what I absorbed during the presentation at Storage Field Day 7, and I kind of lost track afterwards, mainly because of the NetApp-only thing, to be honest. I really think that ECX has a lot of potential, but it just needs to be available for all (or almost all ;-P) storage systems.
In the week before VMworld, Catalogic announced ECX 2.2, which introduced support for a new storage vendor: IBM. As of version 2.2, IBM storage customers can use ECX to do all the amazing things ECX provides. Although I would love to see more storage vendors on the list, it shows Catalogic is working hard to get more and more of them on the HCL 😀
But that's not all for the 2.2 version; the other new key features are:
Enhanced Policy-Based Copy Data Management Workflow Automation
Copy Data Management for IBM platforms
Improved Role Based Access Control (RBAC)
Expanded scalability and performance
Improved fault tolerance
I’ll be keeping a close watch on Catalogic to see what news will follow in the next couple of months.
A couple of my SFD7 friends also wrote some very interesting posts on ECX, and I'll include their posts here (just click the links):
As I mentioned in my last post, my good friend Enrico is the organizer of a great event that started in London a couple of months ago. Because this event was a big success, Enrico figured it was time to take the event on tour and proceed to Amsterdam. He asked me to help him (a little) with the organization of this event. During this amazing event, in one of the greatest capitals in the world (*cough*, *cough*), it would be great to see and meet you. And the event is free, so that can't be your excuse 😀
To give you an idea of what this event is all about, I would like to share a video of one of the presentations from the London event, so you know what content will be shared during the day:
Looking at the speakers presenting in Amsterdam, this is a great opportunity to learn a lot about what is happening in IT at the moment and what the future holds for the next couple of years… To make sure you won't be bored, Enrico made sure the speaker line-up is just amazing:
As you can see, this is a very compelling list of speakers. Most of them will have the opportunity to speak to the entire audience, while a couple will need to share the stage because of an additional track/room in the afternoon. For more information on the speakers look here. For more information on the agenda look here.
This event couldn’t be done without the help of some great sponsors. They pay for everything that is needed to make this such a great event, and for that I would like to ask you to take a look at their websites, as well as talk with them during the event.
These are all really cool companies. I, for one, am very interested to hear more about their products, and I hope you feel the same. Where do these technologies fit in my DC strategy, which problems do they solve, and in what way?
Join us in Amsterdam
If this isn’t enough for you to join us in Amsterdam, I don’t know what would. So if you’re interested in joining us for a great 1 day event (and some beers the evening before) make sure to sign up using this link:
My good friend Enrico Signoretti organizes a great event named TechUnplugged. We started a couple of months ago in London and for those who want to know what was presented during the event, here is a link to the videos and the website: Video link and Website Link (and for those interested in the presentations in PDF format click here).
But as the title of the post suggests, this post is mainly about BEER! That's because although it's great to be informed and learn from some great people in the industry, it is also great to have a more informal meeting with each other. The evening before the event there will be a #storagebeers at the Beer Temple in Amsterdam!
Last week I had a great conversation with Bipul Sinha, Mike Tornincasa and Julia Lee about their new adventure: Rubrik. The conversation focused on the new technology that Rubrik brings to an old-fashioned side of the IT infrastructure: backup (and recovery).
As not all of you might know, Bipul is a very well-known gentleman in the startup world. In his former/present life (sorry, I meant career) he was a well-known VC behind a lot of great companies. Companies like Nutanix, PernixData and others were started in the last couple of years and really changed the IT landscape. This looks like a strange move, going from VC to CEO of a startup in a segment that's not that well known for its capability to change…
But as with most startups, changing perspective and making sure a customer gets what a customer needs is not easy, and it takes a person like Bipul to guide one in the right direction. Providing a game-changing solution that helps the business move to a better-performing, easy-to-scale and easy-to-manage environment is key here. Companies are challenged with so many changes these days and so much marketing noise like Software Defined Everything, cloud, webscale IT and so on, while struggling to maintain their environments; most of them just want a way to make things better. Bipul and his team have seen (and provided) the change needed for better scale, easier management and moving to the next level of infrastructure.
Making sure your data is safe, wherever it resides, is important to every company. Most backup vendors have some kind of backup tool, but most of the time it is a solution for one silo in the backup environment: some focused on virtualization, some on tape, some on cloud, and so on. It always seems to be one of these, and they always seem to need resources (CPU/memory/storage) from your existing environment to back up your environment. Of course, there are backup solutions that bring their own hardware, but it's always for a certain use case, and making sure your data is safe in the changing IT world of today requires something new.
An administrator is a human (really ;P) just like you, and as a human being they like simplicity and efficiency just like us. When the first mobile phones came to market, most people were amazed by the possibilities that came with them. When the iPhone came and changed the way mobile apps were being used, making it that easy to install and use applications, people were even more amazed. Now, almost a decade later, we're all used to that kind of simplicity. Even the most skeptical people are slowly moving towards iPads, Microsoft Surfaces or other tablets because of their ease of use. An administrator wants the same thing: spin it up, and only add more capacity (with performance included) when the system tells him so. He wants to concentrate on giving users the experience they want, not firefighting the environment all day long just to start over again the next day.
In IT things move quickly. Ten years ago you probably had a mobile phone for one (maybe two) reasons: reaching other people, by call or text, and using it as an agenda. That was it, for most of us. Fast forward ten years and we're using our phones in a completely different way. Calling is almost gone, and if we do call we like to use things like Skype because it's free… These days we use our phones for surfing the web, social media, watching television/movies, checking the weather forecast and so on. Things change fast, not only with your phone but even more so in the datacenters around the world.
10 years ago your average datacenter would look something like this:
Server hardware for compute (and some local storage) and a SAN or NAS for your shared storage, making a couple of racks for a typical datacenter not uncommon.
Fast forward again 10 years, and a lot of companies are building their datacenter like this:
Converged, hyperconverged, webscale IT: give it a name, but what companies are looking for is high-performance, easy-to-scale and simple infrastructure. Making this change is not going to happen in a year; it's more like a decade. But with companies like Nutanix and SimpliVity, and big companies like VMware (EVO), things will change quicker. The common use cases for hyperconverged (VDI/test/dev) make it relatively easy to show where the strength of hyperconverged lies, and now that hyperconverged has proven its reliability for a couple of years, more and more companies are moving their production workloads to it too.
With hyperconverged and cloud, the way we back up is changing too. Traditional backup vendors can provide you with some solutions for this change in compute usage, but that's not always the case. That's where Rubrik comes in.
If Apple is the company that changed the mobile world, Rubrik will probably do the same for the backup market. And although the two are completely different, the former (Apple) cannot operate without the latter (backup, the space Rubrik plays in). What would you think if something went wrong at Apple and they just told you all the pictures you moved (saved) to iCloud are gone, and they can't retrieve them because they don't have a backup? All hell breaks loose, Apple would be gone in a couple of days, and people would look for solutions where their data is safe. As said, the first cannot live without the other. Which backup Apple uses is not relevant; replication is great, but if a virus were to affect data, it might just impact all systems, not only the data in the primary datacenter (if that is what Apple uses ;P). Data needs a real backup.
But with traditional backup, a lot of calculation is involved to make sure resources aren't over-utilized. Because a lot of traditional backup solutions use those resources during the quiet hours within a company, this might be no problem. But many companies use their infrastructure 24/7 these days, and with the Internet of Things, closing your company during the night is a killer. Reaching people across the globe is easy, and most companies are trying to do this or moving towards it. So it's getting more important to make sure your environment won't stall during backup hours, as well as to make sure you have a backup to fall back on in case of an emergency.
Rubrik will change the way we think about backup. The software will be intuitive, easy to use and to the point, like iOS, Android and Windows Phone. And Rubrik will be based on commodity hardware that leverages flash to meet the performance needs of today. Making this change in backup with software and hardware is something I really look forward to. There is so much more to say about Rubrik, but that will have to wait a bit, as I'm sure I'll write about them more often. I just wanted to give you a feel of what I thought when I first had contact with this company. In the next posts I'll dive into their technology and try to figure out what's really under the hood, in hardware as well as software.
I'm not the first to write about Rubrik, and I recommend you read the following posts too:
Next week I'll fly to London to attend the TechUnplugged event created by Enrico Signoretti. I really admire what Enrico is achieving, and it is awesome to see all the effort he's putting into these events to serve the community. I would highly recommend you join us next week in London if you're in the neighbourhood or have the opportunity.
Looking at the agenda, it is going to be a very enjoyable and educational day in which you'll be able to ask your questions, listen to the experts and talk with them face to face.
There is a great line-up of sponsors who'll tell their story. With companies like PernixData, Zerto, Cloudian, Load DynamiX and Zadara Storage, you know you'll join the beer party 🙂 with a head full of knowledge. And hearing from these companies alone would be awesome, but there is much more.
The last couple of weeks I've been busy with a couple of vROps designs and implementations in very different environments, and a question I get a lot is what the differences are between vCOps and vROps. First of all I must point out the naming difference: vR stands for vRealize, and Operations Manager has become part of a much larger suite. A suite that gives you the opportunity to leverage, monitor, automate and build hybrid cloud environments.
Back to the question:
The vROps architecture consists of a single virtual machine (VM) that works on a scale-out basis, which differs from earlier versions that consisted of a vApp with two VMs and were based on a scale-up architecture. You'll get a better picture by looking at figure 1 and reading the information below.
As shown in the figure above, the deployment of vROps starts with a single VM (which will become the master node) and can easily be scaled out with additional nodes (which can be data nodes or remote collectors). To provide HA, the master node can have a replica node (holding the same data as the master node) which takes over if the master node fails. See the figure below for more information.
The master node, as well as a replica node, holds the Global xDB and is responsible for collecting data from vCenter Server, other vRealize suite products and third-party data sources (metrics, topology and change events), and for storing that raw data in its scalable File System Database (FSDB).
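The master/replica behaviour described above can be captured in a tiny failover model. This is purely my own illustration of the concept (the class and node names are made up, not VMware's), not the product's actual code:

```python
class Node:
    def __init__(self, name, role):
        self.name = name
        self.role = role          # "master", "replica" or "data"
        self.healthy = True

class VROpsCluster:
    """Toy model of the vROps scale-out cluster: one master, optional replica, data nodes."""
    def __init__(self, master_name):
        self.master = Node(master_name, "master")
        self.replica = None
        self.data_nodes = []

    def add_replica(self, name):
        # The replica holds the same data as the master and can take over.
        self.replica = Node(name, "replica")

    def add_data_node(self, name):
        # Scale out by adding data nodes (or remote collectors).
        self.data_nodes.append(Node(name, "data"))

    def fail_master(self):
        self.master.healthy = False
        if self.replica is not None:
            # HA: promote the replica to master.
            self.replica.role = "master"
            self.master, self.replica = self.replica, None

cluster = VROpsCluster("vrops-01")
cluster.add_replica("vrops-02")
cluster.add_data_node("vrops-03")
cluster.fail_master()
print(cluster.master.name)  # vrops-02 has taken over as master
```

Nothing more than the description above in executable form, but it makes the master/replica relationship easy to reason about.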
I'll dive into other differences and more in-depth posts at a later stage, but for now I just wanted to get this information out 😉
During Storage Field Day 7 we had the privilege of getting a presentation from the founders of Springpath. Springpath is a start-up that came out of stealth a couple of weeks ago and is trying to solve one of the major problems in the datacenter, storage, through a software-only solution. Sure, it still needs hardware, but Springpath is one of those few companies that provide you with an excellent piece of software to put on top of the hardware you choose, although there is still an HCL of supported hardware. Please watch the Springpath HALO Architecture Deep Dive below for a deep dive into this solution (I promise it is worth your time):
In datacenters around the world, companies are struggling with data growth and its related costs. Where a lot of companies were used to buying server hardware separately from storage, the price of scaling both silos independently creates a lot of friction between the people managing those silos within the IT department. A lot of the older SANs are purely scale-up, and we all know that might be efficient enough for capacity, but problems arise when there is a need for increased storage performance.
The solution is in the software!?
For the last two years or so we've been hearing that the solution to all our datacenter problems is in the software. Software Defined Everything (which of course includes Software Defined Bacon :D) is the credo these days. Building upon this belief, Springpath chose to provide only software to their customers, who can then leverage their own hardware, either already in place or newly bought. For now (and to be honest I don't know if this will change at any given time) the HCL includes Cisco, HP, Dell and SuperMicro, which is a large piece of the datacenter pie, if you ask me…
To leverage the full potential of hardware we have always needed the versatility that software can give us, and only in the last couple of years does there finally seem to be synergy between the two. Let's be honest: a great Software Defined DataCenter can only be built with great software that leverages great hardware. Why else would HCLs still be in place for almost all software suppliers?
Back to Springpath
Springpath is the next in an ever-growing line of vendors trying to solve storage problems through software. Although not many provide a software-only solution, there are still a couple of companies offering a (kind of) similar one. Services like inline deduplication, inline compression and the option to use 7200 RPM SATA disks along with flash and DRAM are something we see more and more in the industry, so you have to bring different or better solutions to differentiate from competitors. Offering a software-only solution already sets them apart from most of the other players in this market, although Maxta does the exact same thing.
A high-level look at the DataPlatform gives you a feel for the great potential of this platform:
If you look at the whole picture, you'll see a solution that will serve legacy as well as future applications, and legacy as well as future storage protocols. Again, this is where Springpath takes a different approach to many of its competitors. Let's dive a little deeper into the HALO architecture:
All application data is striped across the servers in a server pool, not only the server where the application is located. This way applications can use all compute resources within the Springpath Platform Software (SPS). This kind of data distribution lets performance as well as capacity scale when servers are added, and removes I/O bottlenecks on a single server.
As with competitors like VSAN and Maxta, reads and writes are cached at the flash layer, giving a high performance rate. A write is acknowledged as soon as it lands on flash and is replicated to the other flash resources in the SPS cluster, to make sure written data is safe. Hot data sets are kept in cache (flash and DRAM) and only written to the capacity tier (which can be any type of disk, even 7200 RPM SATA) when they become cold.
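To make that write path concrete, here is a minimal sketch of acknowledge-on-flash plus hot/cold demotion. This is entirely my own toy model (the class, slot counts and replica count are assumptions for illustration), not Springpath code:

```python
from collections import OrderedDict

class HybridStore:
    """Toy hot/cold tiering model: acknowledge on flash, demote cold blocks to disk."""
    def __init__(self, flash_slots, replicas=2):
        self.flash_slots = flash_slots    # how many blocks fit in the flash tier
        self.replicas = replicas          # copies kept on flash across the cluster
        self.flash = OrderedDict()        # hot tier, kept in LRU order
        self.capacity = {}                # cold tier (e.g. 7200 RPM SATA)

    def write(self, key, data):
        # Ack only after the block lands on flash and is replicated.
        self.flash[key] = [data] * self.replicas
        self.flash.move_to_end(key)
        self._demote_cold()
        return "ack"

    def read(self, key):
        if key in self.flash:
            self.flash.move_to_end(key)   # keep hot data hot
            return self.flash[key][0]
        return self.capacity[key]         # cold read from the capacity tier

    def _demote_cold(self):
        # When flash is full, the least-recently-used block goes cold.
        while len(self.flash) > self.flash_slots:
            cold_key, copies = self.flash.popitem(last=False)
            self.capacity[cold_key] = copies[0]

store = HybridStore(flash_slots=2)
for k in ("a", "b", "c"):
    store.write(k, k.upper())
print(sorted(store.capacity))  # ['a'] - the coldest block was demoted
```

The real system of course spreads the flash tier across servers; the point here is just the ack-then-demote flow.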
With HALO you're able to separate performance and capacity. Being able to scale the tiers independently is a big gain that comes with these hyperconverged storage pools: it's great to be able to add capacity if you run out of space, and performance if that's the resource you're getting short on.
HALO does inline deduplication as well as inline compression. The inline compression is done in variable-sized blocks. Doing inline variable-sized block compression is one of those competitive edges Springpath has, using the sequential data layout of the HALO architecture.
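To make the "variable-sized blocks" idea concrete, here is a generic content-defined chunking sketch: a common way to produce variable-sized blocks for dedup, with each unique chunk compressed once. The rolling-hash parameters and functions are my own illustration, not Springpath's implementation:

```python
import hashlib
import zlib

def chunks(data, mask=0x3F, min_size=32):
    """Content-defined chunking: cut wherever a simple rolling hash hits a boundary."""
    start, h, out = 0, 0, []
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF   # crude rolling hash over recent bytes
        if i - start >= min_size and (h & mask) == 0:
            out.append(data[start:i + 1])    # boundary found: emit a variable-sized block
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

def dedup_store(data):
    """Store each unique variable-sized chunk once, compressed; return store + recipe."""
    store, recipe = {}, []
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:              # inline dedup: skip chunks we already hold
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    return b"".join(zlib.decompress(store[d]) for d in recipe)

data = b"hello springpath " * 200            # highly redundant data
store, recipe = dedup_store(data)
assert restore(store, recipe) == data        # lossless round trip
print(len(store), "unique chunks for", len(recipe), "chunk references")
```

Because boundaries are chosen by content rather than fixed offsets, repeated data keeps producing identical chunks even when it shifts position, which is what makes variable-sized blocks attractive for dedup.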
HALO provides many Data Services like snapshots and clones. As all of you probably know these services can be very efficient and in the HALO architecture they can grow to very large numbers. These services help companies to recover data quickly and deliver applications rapidly.
Log Structured Distributed Object
As already mentioned, the data layout within the HALO architecture packs data into smaller objects, which in turn are laid out across a pool of servers in a sequential way. This kind of layout provides better endurance at the flash layer as well as better performance throughout the system. Replication is done in the same manner to make sure data is written safely.
Where to use Springpath technology
There are a lot of ways to use this solution. But (I know, there is always a but) as this is a 1.0 solution, you may just want to wait a bit before deploying it in your production environment. That doesn't mean you can't leverage the great benefits the solution brings and spin this software up in parts of your datacenter that aren't as critical as production. Springpath sees their solution as a good fit for the following environments:
Test and Dev
Remote office/Branch office
Virtualized Enterprise Applications
Big Data analytics
I'm not sure these would all be the best fit for the software, but I can see a couple of them being a great fit for exploring the Springpath software.
Call home functions
The last thing I want to mention is the call-home function (and the Springpath support cloud leveraging it), which Springpath calls autosupport. I have a strong feeling they've looked at Nimble Storage's InfoSight, which in my opinion is a good thing. Although I hope you have the opportunity to opt out, I think this is a very strong feature, as it gives Springpath the power to proactively monitor your system and thus provide a solution for a problem you didn't even know you had, or that might occur if you didn't take action. It also gives you insight, through their big data analytics engine, into configurations, trends and best practices. This gives you a much better view of your environment, making sure it is always performing at its best and never running out of capacity.
Make sure to watch the entire #SFD7 Springpath presentation HERE, as well as read these great blogs by my fellow SFD7 delegates:
I'm very excited to tell you all that I'll be at Storage Field Day 7 in San Jose in a couple of weeks. I'm really looking forward to seeing a couple of the Storage Field Day alumni, as well as a couple of exciting new people. Let's have a look at the delegate list:
Chris M Evans is one of those Storage Field Day alumni who brings a lot of experience and knows exactly when to ask the right question. What you guys don't see is the awesome guy he is when the camera is off; he's one of those people who makes you feel good and gives you the possibility to be yourself. His website and Twitter account are always very informative, so make sure you follow them here:
Christopher Kusek will be at Storage Field Day for the first time, although it seems this cat-loving, humorous, cloud-jumping, vegan ninja has always been around in spirit, throwing his own parties at VMworld and making sure you have a good laugh whenever you're in the neighborhood… He wrote a couple of great VMware books as well as some awesome whitepapers. Make sure you visit his website and follow him on Twitter:
Dan Frith will be at Storage Field Day for the second time. I met Dan at Storage Field Day 6 and got to know him as a great guy who has the looks of Wolverine, but is way cooler. As an Aussie he needs to travel a lot of miles before he is in the Silly Valley area. I'm really looking forward to meeting him again and reading his blog posts:
Dave Henry is one of those new Storage Field Day delegates I really look forward to meeting. I met him a couple of years ago, when he was in his EMC role, during the San Francisco VMworld. Dave is really knowledgeable and I'm curious what he'll have to say during Storage Field Day:
Enrico Signoretti is known by everyone in the storage industry. If you don’t know him you should take a look at his website, twitter and other social media outlets out there, and you’ll know exactly what I mean. I’ve met Enrico during multiple Storage Field Days as well as VMworld and VMUG events and it’s always awesome to meet him. As said make sure you follow him on twitter and read his blog:
Howard Marks is also known by everybody in storage and VMware, as he's the storage veteran. He's a writer, speaker and blogger for multiple media outlets, and he's the one making sure vendors aren't talking gibberish… It's always an honor to meet storage veterans like Howard, and you should really follow his blog posts and Twitter feed:
Jon Klaus is one of the new "storage kids" on the block. During Storage Field Day 6 he was one of the new guys, and it was awesome to have him around. He's a storage guru and he has an awesome blog providing great information. He's been an EMC Elect from the start of the program and you should follow him:
Keith Townsend is probably the only Dutch storage guy born in the USA ;-P He's one of the guys coming for the second time, and he's an awesome and knowledgeable guy who always seems to have the right questions at the right time. I look forward to meeting him again and to his tweets and blogs:
Mark May is another first-time Storage Field Day delegate. I don't really know a lot about Mark, but seeing that he's an EMC Elect and an infrastructure veteran, he'll be an excellent fit in this group. Make sure you follow him on Twitter and through his blog:
Ray Lucchesi is another Storage Field Day veteran, and he's also a great blogger, podcaster and an awesome guy to spend time with. Ray and Howard are the creators of the Greybeards on Storage podcast, and Ray is the guy who always has some awesome questions to make sure he (and thereby we) knows exactly what a certain feature means. I'm always looking forward to reading Ray's blog as well as the things he mentions on Twitter:
Vipin V.K. will be at Storage Field Day for the first time. He's an EMC Elect as well as a vExpert. It's always great to meet people from other continents, and I'm really looking forward to meeting him, reading his blog posts and following his Twitter feed:
I want to thank Stephen Foskett, Tom Hollingsworth and Claire Chaplais, the organizers of this awesome event, for inviting me. They host a lot of awesome events and you can be part of them too. Make sure you follow them through the Tech Field Day website, as well as on Twitter and Facebook:
During the beta phase of vSphere 6 I was looking for a couple of workarounds for problems during the installation process in my homelab. One of those problems is that (as with vSphere 5.5) certain hardware is not supported, such as the onboard Realtek NICs on the cheaper "homelab" motherboards. During this search I came across this workaround (login needed) by Andreas Peetz, explaining a method to install the drivers onto the vSphere host(s) in your environment. Thanks Andreas, it worked well for me!!
Here is a workaround for you … I have created a package that includes the original VMware net-r8168, net-r8169, net-sky2 and net-s2io drivers under the name net51-drivers, and published it on my V-Front Online Depot.
If your host is already installed and has a direct Internet connection then you can install it from an ESXi shell by running the following commands:
esxcli software acceptance set --level=CommunitySupported
esxcli network firewall ruleset set -e true -r httpClient
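For completeness, the full sequence looks roughly like this. Note that the depot URL below is a placeholder of my own; take the exact depot address and package name from Andreas' post:

```shell
# Allow community-supported VIBs on the host
esxcli software acceptance set --level=CommunitySupported

# Open outbound HTTP so the host can reach the online depot
esxcli network firewall ruleset set -e true -r httpClient

# Install the driver package straight from the depot
# (placeholder URL -- use the real one from the V-Front Online Depot post)
esxcli software vib install -d http://example.com/depot/index.xml -n net51-drivers
```

After the install, a reboot of the host picks up the new NIC drivers.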
During Storage Field Day 6 we visited Coho Data HQ for the second time, and if you want to learn more you really should watch the videos recorded during the event. First a bit of history: Coho Data was founded by Andrew Warfield, Keir Fraser and Ramana Jonnala, and in case you didn't know, these guys are known for a small thing called XenSource (later acquired by Citrix).
Coho Data introduced their scale-out hybrid storage solution (NFS for VM workloads) during Storage Field Day 4 a year ago, and is a regular Tech Field Day sponsor, having also presented during Virtualization Field Day 3. Hybrid in the Coho Data product means a mix of PCIe flash and SATA disks. The flash devices used by Coho Data are PCIe-based (Intel 910 800 GB, to be exact, but thanks to the Coho Data architecture this can be changed easily; the Intel devices are the second kind of flash device Coho has used in the array, the first being Micron).
As you can see in the picture above, the Coho Data appliance is built as a 2U box holding two "MicroArrays" that each have 2 CPUs, 2 x 10GbE NIC ports and 2 Intel PCIe flash cards. With this configuration a 2U block provides 39 TB of capacity and around 180K IOPS (random 80/20 read/write, 4K block size). The Coho Data product offers deduplication and compression as well as replication, high availability and snapshot technology. Last but certainly not least, it comes with an OpenFlow-enabled 10GbE switch (Arista) for ease of management, scalability and the opportunity to streamline the data streams.
Diving deeper into the Coho Data DataStream architecture reveals the IO lane technology it uses: 10GbE NIC <-> CPU <-> PCIe flash. Each IO lane has its own CPU, 10GbE NIC port and 800 GB Intel PCIe flash card. With this architecture Coho Data created an easy-to-scale, high-performance storage system. By using the OpenFlow-enabled SDN switch to manage the streams within the whole DataStream environment, and giving the customer an SDS solution with the Coho Data MicroArray, this is storage at its best.
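Because every 2U block contributes a fixed amount of capacity and IOPS, scale-out sizing becomes simple multiplication. A quick sketch using the numbers above (39 TB and ~180K IOPS per 2U block; the perfectly linear scaling is an idealization, and the function name is my own):

```python
def coho_cluster(blocks, tb_per_block=39, iops_per_block=180_000):
    """Idealized linear scale-out: every 2U block adds the same capacity and IOPS."""
    return {
        "rack_units": 2 * blocks,           # each block is a 2U chassis
        "capacity_tb": tb_per_block * blocks,
        "iops": iops_per_block * blocks,
        "io_lanes": 4 * blocks,             # assuming 2 MicroArrays per chassis, 2 IO lanes each
    }

print(coho_cluster(4))  # 4 blocks -> 8U, 156 TB raw, 720000 IOPS
```

Real-world numbers will vary with workload mix and block size, but it shows why scale-out sizing conversations with this kind of architecture are so straightforward.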
I hear you think: "What about setting up and managing the Coho Data offering? It's probably extremely hard to set up and manage this system." But it isn't. You can set up the Coho Data system in about 15 minutes, and once you're done you can use the UI to manage and maintain the system easily. Just take a look at the picture below, and make sure to watch the Tech Field Day videos to see more of the UI.
What’s the future for Coho Data?
During the presentation there were a couple of questions going around in my head, but because listening to Andy present takes almost all of my brain resources, I didn't ask them then. That shouldn't be that big of a problem, so I asked the questions by mail when I was back in the Netherlands. Here are the questions and answers:
Q. You mentioned that with one PCIe flash device you were able to saturate a 10 Gig NIC. I understand the PCIe performance is more than sufficient for the Coho Data product, but are you already looking at things like Diablo's MCS? I know it's still new technology with its own pros and cons, but I thought in some cases this might be a great solution for Coho. What's your opinion?
A. The reason that I talked about NVDIMM in the second part of my presentation is that I really see RAM-speed memories starting to become more and more practical in storage systems from about 2016/2017 onwards. The data path work that we are doing is really focused towards these: PCIe flash is fast enough to saturate the 10Gb interface, but mostly with large requests on today's hardware. As we move to NVDIMM and related technologies like Diablo's stuff (which is really, really cool BTW), the biggest overheads will be the (software) data path processing to do file system layout, replication, snapshots, placement, recovery, etc.
The work that Coho is doing here, both on the host and in the network, is one of the biggest differences between us and other companies. I think it’s really going to start to show over the next couple of years.
If you look at the left picture (taken last year) and the right picture (taken during the SFD6 presentation) it seems AFA and cold data systems will be added…
Q. One of the slides showed a cluster of Coho arrays, and it was interesting to see, alongside the normal arrays, an all-HDD (archiving/object store?) array. Is this something you're looking at? And going further, are you also looking at AFAs for demanding workloads, or is that not needed at all with Coho?
A. Ah — you found the (unintentional) easter egg! I totally forgot to mention this in my presentation!
In 2015 we will roll out two new appliance versions. One will be a "hybrid flash" chassis that combines PCIe flash with SAS flash. It will be performance-focused and still have all the transparent scale-out properties of our existing boxes. It will also be able to install into an existing hybrid disk/flash based Coho install.
The second new box, which we are planning for 2H 2015, is a capacity box: a 2-server, 70-disk 4U. It will have between 250 and 500 TB raw capacity and serve as bulk storage for cold data.
For large installs, these two boxes will allow customers to scale capacity and performance completely independently of one another.
There is so much more to be told about Coho Data, but that's for another time. For now… let's have a weekend!!! Have a great one and see you again soon!!