During Storage Field Day 6 we visited Coho Data HQ for the second time, and if you want to learn more you really should watch the videos recorded during the event. First a bit of history: Coho Data was founded by Andrew Warfield, Keir Fraser and Ramana Jonnala. If you didn't know, these guys are known for a little thing called XenSource (later acquired by Citrix).
Coho Data introduced their scale-out hybrid storage solution (NFS for VM workloads) during Storage Field Day 4 a year ago, and is a regular Tech Field Day sponsor, having also presented during Virtualization Field Day 3. Hybrid in the Coho Data product means they use a mix of PCIe flash and SATA disks. As said, the flash devices used by Coho Data are PCIe based (Intel 910 800 GB to be exact, though thanks to the Coho Data architecture this can be changed easily; the Intel devices are the second kind of flash devices Coho has used in the array, the first were Micron).
As you can see in the picture above, the Coho Data appliance is built up of a 2U box holding 2 "MicroArrays" that each have 2 CPUs, 2 x 10GbE NIC ports and 2 Intel PCIe flash cards. With this configuration a 2U block provides 39TB of capacity and around 180K IOPS (random 80/20 read/write, 4K block size). The Coho Data product offers deduplication and compression as well as replication, High Availability and snapshot technology. Last but certainly not least, it comes with an OpenFlow-enabled 10GbE switch (Arista) to allow ease of management, scalability and the opportunity to streamline the data streams.
Diving deeper into the Coho Data DataStream architecture reveals the IO lane technology it uses: 10GbE NIC <-> CPU <-> PCIe flash. Each IO lane has its own CPU, 10GbE NIC port and 800 GB Intel PCIe flash card. With this architecture Coho Data created an easy to scale, high performance storage system. By using the OpenFlow-enabled SDN switch to manage the streams within the whole DataStream environment, and giving the customer an SDS solution with the Coho Data MicroArray, this is storage at its best.
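To make the scale-out claim concrete, here is a back-of-the-envelope sketch using the per-block figures above. Linear scaling across blocks is my assumption here, implied by the architecture but not a vendor guarantee:

```python
# Per-2U-block figures from the SFD6 presentation: 2 MicroArrays x 2 IO lanes,
# ~39 TB raw capacity and ~180K random 4K IOPS (80/20 read/write).
BLOCK_CAPACITY_TB = 39
BLOCK_IOPS = 180_000
LANES_PER_BLOCK = 4  # each lane: 1 CPU + 1x 10GbE port + 1x 800 GB PCIe flash

def cluster_estimate(blocks: int) -> dict:
    """Estimate aggregate figures for a cluster of 2U DataStream blocks,
    assuming performance and capacity scale linearly with block count."""
    return {
        "capacity_tb": blocks * BLOCK_CAPACITY_TB,
        "iops": blocks * BLOCK_IOPS,
        "io_lanes": blocks * LANES_PER_BLOCK,
    }

# Four blocks -> 156 TB, 720K IOPS, 16 independent IO lanes
print(cluster_estimate(4))
```

Because every lane bundles its own CPU, NIC port and flash device, adding a block adds four more independent lanes rather than contending for a shared controller, which is what makes this kind of linear estimate plausible.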
I hear you think: "What about setting up and managing the Coho Data offering? It's probably extremely hard to set up and manage this system." But it isn't. You can set up the Coho Data system in about 15 minutes, and once you're done you can use the UI to manage and maintain the system easily. Just take a look at the picture below and make sure to watch the Tech Field Day videos to see more of the UI.
What’s the future for Coho Data?
During the presentation there were a couple of questions going around in my head, but because listening to Andy presenting takes almost all of my brain resources I didn't ask them then. That shouldn't be too big of a problem, so I asked the questions via email once I was back in the Netherlands. Here are the questions and answers:
Q. You mentioned that with 1 PCIe flash device you were able to saturate a 10 Gig NIC. I understand the PCIe performance is more than sufficient for the Coho Data product, but are you already looking at things like Diablo's MCS? I know it's still new technology with its own pros and cons, but still I thought in some cases this might be a great solution for Coho. What's your opinion?
A. The reason that I talked about NVDIMM in the second part of my presentation is that I really see RAM speed memories starting to become more and more practical in storage systems from about 2016/2017 onwards. The data path work that we are doing is really focussed towards these: PCIe flash is fast enough to saturate the 10Gb interface, but mostly with large requests on today's hardware. As we move to NVDIMM and related technologies like Diablo's stuff (which is really, really cool BTW), the biggest overheads will be the (software) data path processing to do file system layout, replication, snapshots, placement, recovery, etc.
The work that Coho is doing here, both on the host and in the network, is one of the biggest differences between us and other companies. I think it’s really going to start to show over the next couple of years.
If you look at the left picture (taken last year) and the right picture (taken during the SFD6 presentation) it seems AFA and cold data systems will be added…
Q. One of the slides showed a cluster of Coho arrays, and it was interesting to see, next to the normal arrays, an all-HDD (archiving/object store?) array. Is this what you're looking at? And maybe even further: are you also looking at AFAs for demanding workloads, or is this not needed at all with Coho?
A. Ah — you found the (unintentional) easter egg! I totally forgot to mention this in my presentation!
A. In 2015 we will roll out 2 new appliance versions. One will be a "hybrid flash" chassis that combines PCIe flash with SAS flash. It will be performance focussed and still have all the transparent scale-out properties of our existing boxes. It will also be able to install into an existing hybrid disk/flash based Coho install.
The second new box, which we are planning for 2H 2015, is a capacity box: a 2-server, 70-disk 4U chassis. It will have between 250 and 500 TB raw capacity, and serve as bulk storage for cold data.
For large installs, these two boxes will allow customers to scale capacity and performance completely independently of one another.
There is so much more to be told about Coho Data, but that's for a later time. For now… let's start the weekend! Have a great one and see you again soon!
StorMagic was one of the companies I didn't know what to expect of during Storage Field Day 6. As one of many in the VSA market, I just didn't see the real value of another player on the VSA battlefield, BUT… as the title already reveals, StorMagic is one of those awesome companies creating technology for a special market. They are not aiming to become the next EMC or Nutanix; they want to help the companies in need of specialised technology. Let's dive into the technology they offer and the companies needing it.
You can watch this presentation by Hans O’Sullivan (CEO) during SFD6 for more information on StorMagic SvSAN:
Based on Linux, StorMagic SvSAN is a purpose-built VSA to serve at the edge. Because of the SvSAN architecture it can run on a hypervisor as well as on bare metal. As you can see, the product is based on iSCSI. That being said, because of the way SvSAN is built, there should be no problem offering other protocols as well whenever there is customer need. StorMagic accomplishes this by running SvSAN as a (sort of) stack in user space and leveraging the Linux asynchronous IO interface (including zero copy and direct access to storage devices), making it just as efficient as a kernel based product (I'll have to try that for myself someday soon).
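SvSAN's actual stack sits on the Linux AIO interface (io_submit/io_getevents), which plain Python has no binding for, so as an illustration only, here is a small sketch of a related ingredient of that efficiency: reading a block with O_DIRECT and a page-aligned buffer, bypassing the kernel page cache the way a userspace storage stack would. The helper name and fallback behaviour are mine, not StorMagic's:

```python
import mmap
import os
import tempfile

BLOCK = 4096  # O_DIRECT requires block-aligned offsets, lengths and buffers

def read_block(path, buf):
    """Read one block; try O_DIRECT (bypass the page cache), else buffered."""
    for flags in (os.O_RDONLY | getattr(os, "O_DIRECT", 0), os.O_RDONLY):
        try:
            fd = os.open(path, flags)
            try:
                return os.readv(fd, [buf])  # scatter-read into our buffer
            finally:
                os.close(fd)
        except OSError:
            continue  # e.g. tmpfs rejects O_DIRECT; retry buffered
    raise OSError("read failed")

# Write a block-sized test file, then read it back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * BLOCK)
    path = f.name

buf = mmap.mmap(-1, BLOCK)  # anonymous mmap gives a page-aligned buffer
n = read_block(path, buf)
os.unlink(path)
print(n)  # 4096
```

The point of the alignment dance is exactly what the SvSAN folks describe: by handing the kernel a buffer it can DMA into directly, you avoid an extra copy through the page cache, which is how a userspace data path can keep up with a kernel one.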
More information on this can be seen in this presentation by Chris Farey (CTO) during Storage Field Day 6:
So with such well thought-out technology, where does this fit into the customer's environment? You might think this is usable technology for the complete datacenter, and although it might very well be, that's not what StorMagic SvSAN is built for. The technology is developed to serve at the edge of a datacenter: using SvSAN where it makes sense to have a highly available but centrally managed solution for business critical applications. Support for VMware as well as Hyper-V, plus true hardware independence, is what gives real strength to the SvSAN offering. This is what makes SvSAN a great solution for many use cases…
StorMagic SvSAN infrastructure
Central management through vCenter or StorMagic web GUI
You can watch this video for more information on the SvSAN product by Chris Farey:
So what are the use cases for StorMagic SvSAN? First of all it is a scalable and highly available product that can be built with only 2 servers, which makes it a great solution for smaller environments and remote offices at the edge of an environment where business critical applications run. Looking at the pricing of the product and the possibilities it offers (High Availability, central management, VMware VAAI support, caching, storage pooling and VSA restore technologies) reveals a great and mature solution that can be used in many environments. According to the StorMagic website, these are the major fields where StorMagic SvSAN is used:
Retail – stock control, customer and staff management, point-of-sale
Government – diplomatic communication platforms
Defense – battlefield control systems
Manufacturing – process control
Financial Services – customer transactions
Restaurant and Hospitality – booking and kitchen ordering systems
Transportation – vehicle positioning and monitoring
Energy – remote power generation plant control
Medical – PACS
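To illustrate what the two-server high availability mentioned above buys those remote sites, here is a minimal, generic sketch of synchronous mirroring. This is my own illustration of the general idea, not StorMagic's actual replication protocol:

```python
# Generic 2-node synchronous mirroring sketch: a write is acknowledged only
# once both replicas have it, so either node alone can serve every
# acknowledged write after the other fails.

class MirroredVolume:
    def __init__(self):
        self.replicas = [dict(), dict()]  # the two nodes' local storage

    def write(self, block, data):
        for replica in self.replicas:     # synchronous: hit both nodes
            replica[block] = data
        return "ack"                      # ack only after both persisted

    def read(self, block, surviving_node=0):
        # After a node failure, the surviving replica still holds every
        # acknowledged write.
        return self.replicas[surviving_node][block]

vol = MirroredVolume()
vol.write(0, b"app data")
print(vol.read(0, surviving_node=1))  # b'app data'
```

The trade-off baked into this design is that write latency is bounded by the slower node, which is why this approach suits a two-server edge site on a local network rather than replication over a WAN.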
More on the SvSAN use cases can be viewed in this presentation by Hans O’Sullivan during SFD6:
As said before, I asked myself what to think of StorMagic SvSAN. After the presentation at SFD6 I really see where the true strength of SvSAN lies. Does that mean there is nothing to improve? Absolutely not, there is always room for improvement, but as you can see in the presentation those questions were asked, and I know the StorMagic people are more than happy to listen to your need for improvement and work on a solution as soon as there is a need. This is a company that knows it can do well in a certain field, and they keep improving to make sure they offer a product that makes sense from their customers' perspective, not because it's another cool feature on the list…
Other Delegates on StorMagic:
A couple of delegates already blogged about StorMagic that you should read also, so here they are:
Disclaimer: I was invited to this event by GestaltIT and they paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or published by anyone other than me (Arjan Timmerman).
Sometimes a product makes you feel positive the first time you hear of it. I had that feeling about Avere last year during Storage Field Day 4, when Ron Bianchini told us about the Avere FXT product and everything a company could accomplish using FXT. If you're not familiar with Avere it might be a good idea to watch the SFD4 videos, which I'll include in this post:
Hearing and watching these videos again, I had a great feeling about the Avere Systems offering and wrote a blog post on my thoughts about what I heard during that presentation. You can find my thoughts here:
During Storage Field Day 6 we had another presentation by Avere, and although the technology is still very interesting, this time around we got an announcement of the brand new virtual FXT product for AWS. It makes sense, but I would like to see more from a company like Avere Systems.
While a lot of Avere customers use FXT to accelerate their local NAS systems, the great potential of FXT is to be the edge filer for cloud object stores. And that's where, in my opinion, Avere has great potential for US-based companies, but (for now) none in countries outside the US, for legal reasons. Simply put, the FXT product only talks Amazon S3, and that's just not happening in many countries outside the US when a large part of a company's intellectual property is at stake.
The same goes for the virtual FXT product, which is promising, but also only available on AWS. This is a problem for many of the companies I consult for. They are thinking of cloud, but when it comes to the Patriot Act and making sure their data stays within the borders of their HQ's country, AWS is just not happening.
So what is this virtual FXT all about?
Avere Systems virtual FXT is a way to accelerate reads/writes in your AWS compute infrastructure. Simply put, it's the FXT software for the cloud. Awesome when you look purely at the technology, but as already said, it's AWS only.
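In principle, what an edge filer like virtual FXT does is serve hot data from a fast local tier and fall through to the object store only on a miss. Here is a hypothetical read-through cache sketch of that idea; the class, names and sizes are my illustration, not Avere's implementation:

```python
from collections import OrderedDict

class ReadThroughCache:
    """Serve hot objects locally; fetch from the backend only on a miss."""

    def __init__(self, backend_fetch, capacity=1024):
        self._fetch = backend_fetch   # in the real product: an S3 GET
        self._cache = OrderedDict()   # LRU order: most recent at the end
        self._capacity = capacity
        self.hits = self.misses = 0

    def get(self, key):
        if key in self._cache:
            self.hits += 1
            self._cache.move_to_end(key)  # refresh LRU position
            return self._cache[key]
        self.misses += 1
        value = self._fetch(key)          # slow path: round trip to the cloud
        self._cache[key] = value
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)  # evict least recently used
        return value

# Toy backend standing in for an object store
store = {"/projects/a.dat": b"payload"}
cache = ReadThroughCache(store.__getitem__)
cache.get("/projects/a.dat")  # miss: fetched from the backend
cache.get("/projects/a.dat")  # hit: served locally
print(cache.hits, cache.misses)  # 1 1
```

The value proposition follows directly from this structure: once the working set is cached at the edge, compute instances see local-NAS latencies while the object store only absorbs the misses.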
Some great information on the virtual Avere FXT product can be found on juku.it:
I really think there is a lot of potential for Avere Systems and the products they make. They should however be more hypervisor/cloud agnostic and make sure a customer has a choice. Making this available for VMware, Microsoft, KVM and other cloud solutions, and also opening the door to local in-country cloud providers, would be a big gain for Avere if you ask me.
A couple of delegates also shared their thoughts on the Avere Systems presentation; you can find them here: