by Yannick Arens | May 1, 2017 | Elastifile
On Tuesday, April 18, 2017, Patrick van Helden, Director of Solution Architecture at Elastifile, visited Metis IT to talk about Elastifile. We had the chance to try a real-life deployment of the Elastifile software.
Elastifile is a relatively new name in the storage arena. The company came out of stealth this month and presented its Elastifile Cloud File System. Elastifile was founded in 2013 in Israel by three founders with a strong background in the virtualization and storage industry. Across three funding rounds the company raised $58 million, of which $15 million in the last round came directly from Cisco. Other investors in Elastifile include leading flash storage vendors and enterprise cloud vendors.
What is Elastifile?
The founders' goal is a storage platform that can run any application, in any environment, at any location, where any location really means any location: cloud or on-premises. The product is developed to run with the same characteristics in all of these environments. To that end, Elastifile wrote a POSIX-compliant filesystem from scratch that supports file, block, and object-oriented workloads and is optimized for flash devices. You can store your documents, user shares, and VMware VMDK files, but also use it for big data applications, all on the same Elastifile Cloud File System.
But how does this differ from, for example, a NetApp system? NetApp can provide the same capabilities and has been doing so for years. The first way Elastifile's approach differs from NetApp's is how the product is written: it is designed for high performance and low latency. Elastifile only supports flash devices, and the software knows how to handle the different types of flash to get the best performance and extend device lifetime. Furthermore, Elastifile is linearly scalable and can be combined with compute (hyperconverged solutions).
Another difference is that the Elastifile Cloud File System can run inside a (public) cloud environment and connect it to your own on-premises environment. The problem with (public) cloud environments is that they do not give you the same predictable performance as your on-premises environment. The Elastifile Cloud File System has a dynamic data path to handle noisy and fluctuating environments like the cloud. Thanks to this dynamic path, Elastifile can deliver high performance and, most importantly, low latency in cloud-like environments.

Deployment models
Elastifile’s Cloud File System can be deployed in three different models:
- HCI
- Dedicated Storage mode
- In-Cloud
The first deployment model is HCI, where the Elastifile software runs on top of a hypervisor. At the moment Elastifile supports only VMware; additional hypervisors will be added in future releases. You can compare this deployment with many other HCI vendors, but connecting and combining the HCI deployment model with one of the other deployment options gives you more flexibility and capabilities. Most other HCI vendors only support a small set of certified hardware configurations, whereas Elastifile supports a broad range of hardware configurations.
The second and, in my opinion, most interesting deployment model is dedicated storage mode. In this model the Elastifile software is installed directly on servers with flash devices, which together form the Elastifile distributed storage. With this deployment model it is possible to connect hypervisors directly to these storage nodes using NFS (and in the future SMB3), but also to connect bare-metal servers running Linux, Oracle, or even container-based workloads to the same storage pool.
As discussed earlier, the last deployment model is the in-cloud deployment. Elastifile can run in-cloud at one of the big public cloud providers, but it is not limited to public clouds: Elastifile can also run in other clouds, as long as they deliver flash-based storage as infrastructure. Elastifile then uses that storage to build its distributed low-latency cloud file system.
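As a rough illustration of that last point, the sketch below shows how a bare-metal Linux client might mount such an NFS export and write to it. The server address, export path, and mount point are assumptions for illustration only, not Elastifile specifics.

```python
import subprocess
from pathlib import Path

# Assumed values for illustration only; replace with your own environment.
NFS_EXPORT = "10.0.0.10:/elastifile/root"   # hypothetical storage node and export
MOUNT_POINT = Path("/mnt/elastifile")

def mount_export() -> None:
    """Mount the NFS export on a bare-metal Linux client (requires root)."""
    MOUNT_POINT.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "vers=3", NFS_EXPORT, str(MOUNT_POINT)],
        check=True,
    )

def smoke_test() -> None:
    """Write and read back a small file to confirm the mount works."""
    probe = MOUNT_POINT / "probe.txt"
    probe.write_text("hello from a bare-metal client\n")
    print(probe.read_text())

if __name__ == "__main__":
    mount_export()
    smoke_test()
```

The same mount would work from a hypervisor host or a container host, which is what makes the shared storage pool interesting.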
When you combine these three models, you get a cloud-ready file system with high performance, low latency, and a lot of flexibility and possible use cases.

Use-cases
HCI file services
A great use case for the Elastifile Cloud File System is decoupling the operating system and application from the application's actual data in an HCI deployment. You can mount the Elastifile Cloud File System directly inside a VM and bypass the hypervisor's storage layer. And because the Elastifile Cloud File System is a POSIX filesystem, it can store millions of files in deep directory structures.
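As a small illustration of the "millions of files in deep directory structures" point, the sketch below builds a deep directory tree on a mounted path and counts the files back. The path and tree dimensions are illustrative assumptions; scale them up carefully on a real mount.

```python
import os
from pathlib import Path

# Illustrative path and dimensions; not Elastifile-specific values.
MOUNT = Path("/mnt/elastifile/deep-tree-demo")
DEPTH = 6           # nesting levels
FANOUT = 4          # subdirectories per level
FILES_PER_DIR = 10  # small files per leaf directory

def build(level: int, base: Path) -> None:
    """Recursively create a deep tree of directories and small files."""
    base.mkdir(parents=True, exist_ok=True)
    if level == DEPTH:
        for i in range(FILES_PER_DIR):
            (base / f"file_{i}.txt").write_text("x")
        return
    for i in range(FANOUT):
        build(level + 1, base / f"dir_{i}")

build(0, MOUNT)
total = sum(len(files) for _root, _dirs, files in os.walk(MOUNT))
print(f"created {total} files under {MOUNT}")
```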
Linearly scalable in cloud-like environments
A second use case for the Elastifile Cloud File System is that any Elastifile deployment delivers predictable low-latency performance. When you expand the cluster, each new node adds the same performance as any other node, and because every node is also a storage controller, adding storage means adding controllers as well. This results in a linearly scalable solution, even in cloud-like environments.
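A back-of-the-envelope way to read that claim: if each node contributes roughly the same IOPS and throughput, aggregate performance grows proportionally with the node count. The per-node figures below are purely illustrative assumptions, not Elastifile benchmark numbers.

```python
# Purely illustrative per-node figures; not Elastifile benchmark numbers.
IOPS_PER_NODE = 50_000          # assumed IOPS a single node contributes
THROUGHPUT_PER_NODE_MBS = 800   # assumed MB/s a single node contributes

def estimate_cluster(nodes: int) -> dict:
    """Linear-scaling estimate: totals grow proportionally with node count."""
    return {
        "nodes": nodes,
        "iops": nodes * IOPS_PER_NODE,
        "throughput_mbs": nodes * THROUGHPUT_PER_NODE_MBS,
    }

for n in (4, 8, 16):
    print(estimate_cluster(n))
```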
Flash tiering
The last use case for Elastifile is that it can automatically move files on the filesystem to another tier of flash storage. This could be a cheaper or lower-performing type of flash, for example consumer-grade SSDs. Movement is based on policies. The Elastifile software can further offload cold data to a cheaper type of storage, such as S3. This can be cloud-based S3 storage, but also on-premises S3 storage.
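To make the policy idea concrete, here is a minimal sketch of what an age-based cold-data policy could look like: files not accessed for a configurable number of days are copied to an S3 bucket. This illustrates the general concept only, not Elastifile's actual tiering mechanism; the bucket name, path, and threshold are assumptions.

```python
import os
import time
from pathlib import Path

import boto3  # pip install boto3

# Illustrative policy parameters; not Elastifile's actual configuration.
COLD_AFTER_DAYS = 30
SOURCE_DIR = Path("/mnt/elastifile/projects")
BUCKET = "example-cold-tier"  # hypothetical S3 bucket (cloud or on-premises endpoint)

s3 = boto3.client("s3")

def is_cold(path: Path, now: float) -> bool:
    """A file is 'cold' if it has not been accessed within the policy window."""
    age_days = (now - path.stat().st_atime) / 86400
    return age_days > COLD_AFTER_DAYS

def offload_cold_files() -> None:
    """Copy cold files to the S3 cold tier (the hot copy is left in place here)."""
    now = time.time()
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = Path(root) / name
            if is_cold(path, now):
                key = str(path.relative_to(SOURCE_DIR))
                s3.upload_file(str(path), BUCKET, key)
                print(f"offloaded {path} -> s3://{BUCKET}/{key}")

if __name__ == "__main__":
    offload_cold_files()
```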
The future
What the future will look like is always difficult to say, but from everything I have tried so far, this is a very promising first version of the Elastifile Cross-Cloud Data Fabric. In the session with Patrick, I deployed the software myself, and Patrick showed us the performance of the deployed nodes without any problems. The ideas behind the product are great, and the roadmap contains the most important capabilities needed to make it a truly mature storage product.
by Yannick Arens | May 25, 2016 | TechUnplugged
Two weeks ago I attended TechUnplugged in London. For those who don't know what TechUnplugged is: it is a full-day conference focused on cloud computing and IT infrastructure. The conference brings influencers, vendors, and end users together to create interaction between them. If you want more information, look at techunplugged.io.
All speakers had a 25-minute slot to tell their story. At first I thought that was very little time, but after a day of presentations I think it is sufficient to do the job. If the subject of a presentation is not in your area of interest, it only costs you 25 minutes of your time. The influencer presentations are interspersed with the vendor ones, so you get varied subjects and presentation styles.
The majority of the presentations were storage oriented, addressing subjects such as the history of storage, winners and losers in storage solutions, multiple vendors, and secondary storage. Besides the storage presentations there was a session about OpenStack, clouds, and containers, and a presentation about the Software-Defined Data Center from my colleague Arjan Timmerman, complete with stroopwafels and chocolate. I gave, or at least intended to give live, a technical overview of vRealize Automation, but because of the bad Wi-Fi connection it ended up being just a recording.
The last part was an 'Ask Me Anything' panel consisting of influencers and vendors. Everybody could ask any question and got answers from different perspectives. It seemed like a nice concept, but it is always difficult to create this kind of interaction. After the 'Ask Me Anything' panel it was time for the social part of the conference (beer, wine, and networking).
I look back at a well-organized event with a broad palette of interesting subjects and people involved. I think the combination of vendors, influencers, and presentations is the perfect formula for staying up to date and involved in the latest developments. Some new products were introduced to me, and I gained new insights into the fast-changing world of cloud, storage, and SDDC. I sincerely hope to meet you all at the next TechUnplugged in Amsterdam!

by Yannick Arens | Apr 5, 2016 | SimpliVity
This is a cross post from my Metis IT blogpost, which you can find here.
Today, April 5, 2016, SimpliVity announced new capabilities of the OmniStack Data Virtualization Platform. The announcement consists of three subjects:
- OmniStack 3.5
- OmniView
- Hyper-V
OmniStack 3.5
This new version is the first major update of the year, and I hope more updates will follow. The previous major release, version 3.0, came early in the second half of 2015. SimpliVity says this new version delivers capabilities optimized for large, mission-critical, and global enterprise deployments. Besides improvements to the code, this release adds three main new capabilities to the OmniStack Data Virtualization Platform.
Stretched Clusters
The first improvement in the OmniStack software is the ability to create multi-node stretched clusters. In the current versions it is only possible to create a stretched cluster with a total of two nodes divided over two sites. This limit has now been raised, and larger stretched clusters are supported by default. With a stretched cluster it is possible to achieve an RPO of zero and an RTO of seconds.

Intelligent Workload Optimizer
The second new capability is the Intelligent Workload Optimizer. SimpliVity uses a multi-dimensional approach to balance the workload across the platform, based on CPU, memory, I/O performance, and data locality. This results in fewer data migrations and better virtual machine performance.
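To give a feel for what multi-dimensional balancing means, here is a minimal sketch of a weighted placement score across the four dimensions mentioned above. The weights and scoring are illustrative assumptions only, not SimpliVity's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float        # fraction of CPU headroom (0..1)
    mem_free: float        # fraction of memory headroom (0..1)
    io_headroom: float     # fraction of I/O headroom (0..1)
    holds_vm_data: bool    # does this node already hold the VM's data?

# Illustrative weights; SimpliVity's real optimizer is not public.
WEIGHTS = {"cpu": 0.25, "mem": 0.25, "io": 0.25, "locality": 0.25}

def placement_score(node: Node) -> float:
    """Higher score = better candidate to run the VM (favors data locality)."""
    return (
        WEIGHTS["cpu"] * node.cpu_free
        + WEIGHTS["mem"] * node.mem_free
        + WEIGHTS["io"] * node.io_headroom
        + WEIGHTS["locality"] * (1.0 if node.holds_vm_data else 0.0)
    )

nodes = [
    Node("node-1", cpu_free=0.6, mem_free=0.5, io_headroom=0.7, holds_vm_data=True),
    Node("node-2", cpu_free=0.8, mem_free=0.9, io_headroom=0.6, holds_vm_data=False),
]
best = max(nodes, key=placement_score)
print(f"place VM on {best.name} (score {placement_score(best):.2f})")
```

Favoring the node that already holds the VM's data is what would reduce data migrations in a scheme like this.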

REST API
The last new capability in the OmniStack software is the REST API. In version 3.5 it becomes possible to use the REST API to manage the SimpliVity Data Virtualization Platform. Integration with VMware vRealize Automation was already possible, but a REST API makes it much easier to integrate with third-party management portals and applications.
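As a sketch of what such integration could look like, the snippet below authenticates against a REST endpoint and lists virtual machines using Python's requests library. The host, URL paths, response fields, and credential handling are hypothetical placeholders, not the documented SimpliVity API; consult the official API reference for the real calls.

```python
import requests

# Hypothetical endpoint and credentials; replace with values from the official API docs.
BASE_URL = "https://ovc.example.local/api"
USERNAME = "admin"
PASSWORD = "secret"

session = requests.Session()
session.verify = False  # only acceptable for lab setups with self-signed certificates

def login() -> None:
    """Obtain a token and attach it to subsequent requests (illustrative flow)."""
    resp = session.post(f"{BASE_URL}/login", json={"username": USERNAME, "password": PASSWORD})
    resp.raise_for_status()
    session.headers["Authorization"] = f"Bearer {resp.json()['token']}"

def list_virtual_machines() -> list:
    """Return the virtual machines known to the platform (illustrative endpoint)."""
    resp = session.get(f"{BASE_URL}/virtual_machines")
    resp.raise_for_status()
    return resp.json().get("virtual_machines", [])

if __name__ == "__main__":
    login()
    for vm in list_virtual_machines():
        print(vm.get("name"))
```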

OmniView
The OmniView Predictive Insight tool is the second part of the announcement. OmniView is a web-based tool that provides custom visualizations of an entire SimpliVity deployment. It offers predictive analytics and trends for a SimpliVity environment and helps plan future growth. The tool can also help investigate and troubleshoot issues within the environment. OmniView will be available to Mission-Critical-level support customers and approved partners.

Hyper-V
The last part of the announcement is support for Hyper-V. The OmniStack Data Virtualization Platform will be extended to this hypervisor to give customers more choice. SimpliVity will support mixed and dedicated Hyper-V environments with the release of Windows Server 2016; planning and timing of availability are aligned with Microsoft's Windows Server 2016 release.
Conclusion
The announcement is a great step in the right direction and, I think, just in time. For me the most important part is version 3.5, and more specifically the support for stretched clusters. Stretched cluster support is a requirement in more and more large European organizations nowadays, and SimpliVity will now be able to support this. The REST API will also help integrate SimpliVity into a customer's existing ecosystem.
The OmniView Predictive Insight tool will give customers insight into their SimpliVity environment and provide predictive analytics and forecasts. In the current 3.0 version it is only possible to get some statistics about the storage, but now you will have a self-learning system that customers can use to improve their environment.
The Hyper-V support announcement is also a long-awaited one. Now we only have to wait until Microsoft releases Windows Server 2016 to use this feature.