StarWind Software: SFD15 preview

It is time to start blogging again, and there couldn't possibly be a better way than by starting with my invite to Storage Field Day 15 in Silicon Valley. Truth be told, the lack of blogging was mainly due to building our own house, and we're still very busy building our dream house here in the Netherlands, but no more excuses: let's start with the previews for next week.

Second timer: StarWind

Last year at Storage Field Day 12 we had the first presentation from StarWind at a Tech Field Day, and I was really impressed by the technology they offer and, more importantly, by the level of detail they put into their presentation and the knowledge they showed while delivering it. Most VMware techies will know StarWind for the iSCSI target technology they offered, which was used in many homelab environments; I know I've used it on multiple occasions in my lab at least. But they are moving on, and during the SFD12 presentation we got more information on their HCI solution, AcloudA, Veeam VTL and Cloud Replication, and StarWind Scale-Out and Log-Structured File System.

There are a couple of great resources on the products they’ve talked about last time that I’ll provide a link to here:

Dan Frith – There’s A Whole Lot More To StarWind Than Free Stuff
Stephen Foskett – The Year of Cloud Extension
Rich Stroffolino – Starwind gives you a gateway to the Cloud
Adam Bergh – Storage Field Day 12 Day 1 Recap and Day 2 Preview

The videos from SFD12

When companies have already presented at a Tech Field Day, I try to watch all their videos before we go into a new one, although with some companies this really seems impossible because of the number of times they have presented. With StarWind this is (for now) still easy, which is why I'll also include the SFD12 Vimeo videos here:

  • StarWind Simple, Flexible, Scalable Storage
  • StarWind Fault-Tolerant Storage Demo
  • StarWind Scale Out and Log Structured File System
  • StarWind and AcloudA: Stairway to Cloud
  • StarWind and Veeam VTL and Cloud Replication

I’m really looking forward to meeting the Kolomyeytsev brothers again, and I hope you will follow us during the livestream at the Tech Field Day site:

http://www.techfieldday.com/event/sfd15

See you soon for the next presenter.

Disclaimer: I was invited by Tech Field Day to attend SFD15 and they paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or edited by anyone other than me.

Datera: Elastic Data Fabric

During Storage Field Day 10 we had a very interesting presentation by Datera on their Elastic Data Fabric. They say it's the first storage solution for enterprises as well as service provider clouds that is designed with DevOps-style operations in mind. The Elastic Data Fabric provides scale-out storage software capable of turning standard commodity hardware into a RESTful API-driven, policy-based storage fabric for enterprise environments.

The Datera EDF solution gives your environment the flexibility of hyperscale environments through a clever and innovative software solution. In a fast-changing application landscape, being fast, flexible, agile and software defined is key. Cloud native is the new kid on the block, and more and more enterprises are adopting this kind of application development.

Datera seems to be able to provide enterprises as well as cloud providers with the storage that is needed to build these applications. The way Datera accomplishes this can be described in four main pillars:

Intent defined

What is intent defined? I had a bit of a struggle with that question myself, so let's just stick to the explanation Datera provides:
Intent defined is an orchestrated play between storage and application. An application developer knows what he or she wants from storage and can express those requirements to the storage application programming interface. This is DevOps at its best: when storage is scriptable from a developer perspective and manageable from a storage administrator perspective, you know you've hit the jackpot.
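
To make this a bit more concrete, here is a minimal sketch of what posting an application "intent" to a policy-based storage API could look like. The endpoint, field names and policy attributes are hypothetical and only illustrate the concept; this is not Datera's actual API.

```python
# Hypothetical intent-defined provisioning call -- illustrative only,
# not Datera's actual API or schema.
import json
import urllib.request

# The developer describes *what* the application needs,
# not *how* the storage should be built.
app_intent = {
    "name": "orders-db",
    "capacity_gb": 500,
    "replicas": 3,          # desired redundancy
    "min_iops": 5000,       # performance floor
    "max_latency_ms": 5,    # latency ceiling
}

req = urllib.request.Request(
    "https://storage.example.local/api/v1/app_instances",  # hypothetical endpoint
    data=json.dumps(app_intent).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The storage fabric is expected to translate this intent into volumes,
# placement and policies on its own.
with urllib.request.urlopen(req) as resp:
    print("Storage fabric accepted the intent:", resp.status)
```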


API first approach


I've already mentioned the API a couple of times, but it is one of the key features of the Datera EDF, and therefore very important. Datera aims to set a new standard with the elegance and simplicity of their API. They intend to make the API as easy to use as possible, to make sure it is actually used and not forsaken because it is too hard to understand.
The API is well thought out, and an extremely hard piece to get right when creating something as complex as a storage platform for the customers Datera is aiming for. The API-first approach, and the way Datera went about developing the API, seems to be a rarely seen piece of art in this space.


Hyper-composable

Many things need to come together when creating something like a storage platform. One of them is that the companies buying your solution want the opportunity to mix and match: they want to buy what they need now, and if they need more (capacity, performance or both) they want to add it just as easily. With Datera you can mix and match different kinds of nodes without impacting the overall performance of the solution, making it one of those rare solutions that is truly hyper-composable.


Multi-Tenant

This is where a lot of software solutions say they are, but… when you start using their products you find out the hard way that the definition of multi-tenant is used in many ways, and true multi-tenancy is hard to get.
Is this different with Datera? They say it is, but to be honest I'm not really sure of it. I'll try to figure this out and reach out to the Datera people. And although they do not have a lot of official customers yet, a couple of them are well known for their multi-tenant environments, so my best guess is that the multi-tenancy part is OK with Datera; if not, I'll let you know.


Conclusion

I was very impressed with the information provided by Datera during Storage Field Day 10. Due to a ton of work waiting for me after SFD10 and TFD11 I didn't really have time to do a deep dive into the technology, but that is where my fellow SFD10 delegates are of great value to the community, so here are their blog posts:

The Cool Thing About Datera Is Intent by Dan Frith
Defining Software-defined Storage definition. Definitely. – Juku.it by Enrico Signoretti
Torus – Because We Need Another Distributed Storage Software Solution by Chris Evans
Storage Field Day 10 Preview: Datera by Chris Evans
Storage Field Day 10 Next Week by Rick Schlander

And as always, the Tech Field Day team provides us with an awesome site full of information on Datera here.

And just to make sure you have a direct option to watch the videos of the SFD10 presentations, here they are:

1. Datera Introduction with Marc Fleischmann

2. Datera Docker and Swarm Demo with Bill Borsari

3. Datera Architecture Deep Dive

4. Datera OpenStack Demo with Bill Borsari

And make sure to visit the Datera website as well:

http://datera.io/

If you are interested in these kinds of topics, please join us for the TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16. It is a one-day event focused on cloud computing and IT infrastructure with an innovative formula: it combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!

VMware VSAN 6.2, what’s new?

This is a cross post from my Metis IT blogpost, which you can find here.

VMware VSAN 6.2

On February 10 VMware announced Virtual SAN version 6.2. A lot of Metis IT customers are asking about the Software Defined Data Center (SDDC) and how products like VSAN fit into this new paradigm. Let's investigate what VMware VSAN is, what the value of using it would be, and what the new features in version 6.2 are.

VSAN and Software Defined Storage

In the data storage world, we all know that the growth of data is explosive (to say the least). In the last decade the biggest challenge for most companies was that people just kept making copies of their data and the data of their co-workers. Today we not only have this problem, but storage also has to provide the performance needed for data-analytics and more.

First the key components of Software Defined Storage:

  • Abstraction: Abstracting the hardware from the software provides greater flexibility and scalability
  • Aggregation: In the end it shouldn’t matter what storage solution you use, but it should be managed through only one interface
  • Provisioning: the possibility to provision storage in the most effective and efficient way
  • Orchestration: Make use of all of the storage platforms in your environment through orchestration (vVols, VSAN)

vsan01

VSAN and Hyper-Converged Infrastructure

So what about Hyper-Converged Infrastructure (HCI)? Hyper-Converged systems allow the integrated resources (Compute, Network and Storage) to be managed as one entity through a common interface. With Hyper-converged systems the infrastructure can be expanded by adding nodes.

VSAN is Hyper-converged in a pure form. You don’t have to buy a complete stack, and you’re not bound to certain hardware configurations from certain vendors. Of course, there is the need for a VSAN HCL to make sure you reach the full potential of VSAN.

VMware VSAN 6.2: new features

With the 6.2 version of VSAN, VMware introduced a couple of really nice and awesome features, some of which are only available on the All-Flash VSAN clusters:

  • Data Efficiency (Deduplication and Compression / All-Flash only)
  • RAID-5/RAID-6 – Erasure Coding (All-Flash only)
  • Quality of Service (QoS Hybrid and All-Flash)
  • Software Checksum (Hybrid and All-Flash)
  • IPv6 (Hybrid and All-Flash)
  • Performance Monitoring Service (Hybrid and All-Flash)

Data Efficiency

Dedupe and compression happen during de-staging from the caching tier to the capacity tier. You enable "space efficiency" at the cluster level and deduplication happens on a per-disk-group basis; larger disk groups will result in a higher deduplication ratio. After the blocks are deduplicated, they are compressed. Compression alone is a significant saving, but combined with deduplication the results can be up to a 7x space reduction, of course fully dependent on the workload and the type of VMs.
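
As a back-of-the-envelope illustration (the ratios below are made-up examples, not VMware-published figures), the effective capacity of a disk group is simply the raw capacity multiplied by the combined deduplication and compression ratio:

```python
# Illustrative space-savings calculation for dedupe + compression.
# The ratios are example values, not measured or published numbers.

raw_capacity_tb = 10.0     # usable capacity of a disk group
dedupe_ratio = 3.5         # e.g. 3.5:1 deduplication
compression_ratio = 2.0    # e.g. 2:1 compression after dedupe

effective_capacity_tb = raw_capacity_tb * dedupe_ratio * compression_ratio
print(f"Effective capacity: {effective_capacity_tb:.0f} TB "
      f"({dedupe_ratio * compression_ratio:.1f}x space reduction)")
# -> Effective capacity: 70 TB (7.0x space reduction)
```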

Erasure Coding

New is RAID 5 and RAID 6 support over the network, also known as erasure coding. In this case, RAID-5 requires 4 hosts at a minimum as it uses a 3+1 logic. With 4 hosts, 1 can fail without data loss. This results in a significant reduction of required disk capacity compared to RAID 1. Normally a 20GB disk would require 40GB of disk capacity with FTT=1, but in the case of RAID-5 over the network, the requirement is only ~27GB. RAID 6 is an option if FTT=2 is desired.
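
The ~27GB figure follows directly from the 3+1 layout: three data components plus one parity component gives a 4/3 overhead factor instead of the 2x of a RAID-1 mirror. A quick sketch of the arithmetic (the 4+2 layout used for RAID-6 below is my assumption, as the post only mentions RAID-6 as the FTT=2 option):

```python
# Capacity overhead comparison for a 20 GB VMDK.
# RAID-6 as 4 data + 2 parity is an assumption for illustration.

vmdk_gb = 20

raid1_ftt1 = vmdk_gb * 2                 # full mirror: 40 GB
raid5_ftt1 = vmdk_gb * (3 + 1) / 3       # 3 data + 1 parity: ~26.7 GB
raid6_ftt2 = vmdk_gb * (4 + 2) / 4       # 4 data + 2 parity: 30 GB

print(f"RAID-1 (FTT=1): {raid1_ftt1:.1f} GB")
print(f"RAID-5 (FTT=1): {raid5_ftt1:.1f} GB")
print(f"RAID-6 (FTT=2): {raid6_ftt2:.1f} GB")
```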

Quality of Service

This enables per VMDK IOPS Limits. They can be deployed by Storage Policy-Based Management (SPBM), tying them to existing policy frameworks. Service providers can use this to create differentiated service offerings using the same cluster/pool of storage. Customers wanting to mix diverse workloads will be interested in being able to keep workloads from impacting each other.
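
Conceptually, you can think of such a policy as a named rule set with an IOPS ceiling that gets attached to a VMDK. The sketch below is purely illustrative and does not use the actual SPBM object model or API:

```python
# Purely illustrative model of per-VMDK IOPS limits via storage policies;
# not the actual SPBM API, just the idea of differentiated service tiers.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    iops_limit: int          # per-VMDK IOPS ceiling

# A service provider could define tiers on a single VSAN cluster:
tiers = {
    "bronze": StoragePolicy("bronze-tier", iops_limit=500),
    "silver": StoragePolicy("silver-tier", iops_limit=2000),
    "gold":   StoragePolicy("gold-tier",   iops_limit=10000),
}

def assign_policy(vmdk: str, tier: str) -> None:
    """Attach the tier's policy to a VMDK (here we only print the intent)."""
    policy = tiers[tier]
    print(f"{vmdk}: applying policy '{policy.name}' "
          f"(IOPS limit {policy.iops_limit})")

assign_policy("webserver01.vmdk", "silver")
```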

Software Checksum

Software Checksum enables customers to detect corruption that could be caused by faulty hardware or software components, including memory, drives, etc., during read or write operations. In the case of drives, there are two basic kinds of corruption. The first is "latent sector errors", which are typically the result of a physical disk drive malfunction. The other type is silent corruption, which can happen without warning (typically called silent data corruption). Undetected or completely silent errors could lead to lost or inaccurate data and significant downtime, and there is no effective means of detecting these errors without end-to-end integrity checking.
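
To illustrate the general idea of end-to-end integrity checking (a generic example of the concept, not how VSAN implements it internally): a checksum is computed and stored alongside each block at write time and verified again on every read, so corruption anywhere in between is detected instead of silently returned to the application.

```python
# Generic illustration of end-to-end block checksumming -- concept only,
# not VSAN's actual implementation.
import zlib

def write_block(data: bytes) -> tuple[bytes, int]:
    """Store the block together with a checksum computed at write time."""
    return data, zlib.crc32(data)

def read_block(data: bytes, stored_checksum: int) -> bytes:
    """Verify the checksum on read; fail loudly on silent corruption."""
    if zlib.crc32(data) != stored_checksum:
        raise IOError("Checksum mismatch: silent data corruption detected")
    return data

block, checksum = write_block(b"important payload")
read_block(block, checksum)                 # clean read, passes verification

corrupted = b"important pay1oad"            # a single flipped character
try:
    read_block(corrupted, checksum)
except IOError as err:
    print("Read failed:", err)
```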

IPv6

Virtual SAN now supports IPv4-only, IPv6-only, and IPv4/IPv6 dual-stack configurations. This addresses requirements for customers moving to IPv6 and additionally supports mixed mode for migrations.

Performance Monitoring Service

The Performance Monitoring Service allows customers to monitor existing workloads from vCenter. Customers needing access to tactical performance information will not need to go to vRO. The performance monitor includes macro-level views (cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per-disk-group stats) without needing to leave vCenter. It allows aggregation of stats across the cluster into a "quick view" to see what load and latency look like, and that information can be shared externally with 3rd-party monitoring solutions by API. The Performance Monitoring Service runs on a distributed database that is stored directly on Virtual SAN.

Conclusion

VMware is making it clear that the old way of doing storage is obsolete. A company needs the agility, efficiency and scalability provided by the best of all worlds. VSAN is one of those solutions, and although it has a short history, it has grown up pretty fast. For more information make sure to read the following blogs, and if you're looking for an SDDC/SDS/HCI consultant to help you solve your challenges, make sure to look at Metis IT.

Blogs on VMware VSAN:
http://www.vmware.com/products/virtual-san/
http://www.yellow-bricks.com/virtual-san/
http://www.punchingclouds.com/
http://cormachogan.com/vsan/

VMware to present on VSAN at Storage Field Day 9

I’m really excited to see the VMware VSAN team during Storage Field Day 9, where they will probably dive deep into the new features of VSAN 6.2. It will be an open discussion, and I'm certain that the delegates will have some awesome questions. I would also advise you to watch our earlier visit to the VMware VSAN team in Palo Alto about a year ago, at Storage Field Day 7 (Link).