On Tuesday 18 April 2017, Patrick van Helden, Director of Solution Architecture at Elastifile, visited Metis IT to talk about Elastifile. We had the chance to try a real-life deployment of the Elastifile software.
Elastifile is a relatively new name in the storage market. The company came out of stealth this month and presented its Elastifile Cloud File System. It was founded in 2013 in Israel by three founders with strong backgrounds in the virtualization and storage industry. Over three funding rounds the company raised $58 million; in the last round, $15 million came directly from Cisco. Other investors in Elastifile are leading flash storage vendors and enterprise cloud vendors.
What is Elastifile?
The goal of the founders is a storage platform that is able to run any application, in any environment, at any location. And any location really means any location: cloud or on-premises. The product is developed to run with the same characteristics in all of these environments. To achieve this, Elastifile wrote a POSIX-compliant filesystem from scratch that supports file, block, and object-oriented workloads and is optimized for flash devices. You can store your documents, user shares, and VMware VMDK files, but also use it for big data applications, all on the same Elastifile Cloud File System.
But how does this differ from, for example, a NetApp system? A NetApp system can offer the same capabilities and has been doing so for years. The first way Elastifile's approach differs from NetApp's is how the product is written: for high performance and low latency. Elastifile supports only flash devices, and the software knows how to handle the different types of flash to get the best performance and extend their lifetime. Furthermore, Elastifile is linearly scalable and can be combined with compute (hyperconverged solutions).
Another difference is that the Elastifile Cloud File System can run inside a (public) cloud environment and connect it to your own on-premises environment. The problem with (public) cloud environments is that they do not give you the same predictable performance as your on-premises environment. The Elastifile Cloud File System has a dynamic data path to handle noisy and fluctuating environments like the cloud. Thanks to this dynamic path, Elastifile can run with high performance and, most importantly, low latency in cloud-like environments.
Elastifile’s Cloud File System can be deployed in three different deployment models:
- Hyperconverged (HCI) mode
- Dedicated storage mode
- In-cloud mode
The first deployment model is HCI, where the Elastifile software runs on top of a hypervisor. Currently Elastifile supports only VMware; additional hypervisors will be added in future releases. You can compare this deployment with many other HCI vendors, but connecting and combining the HCI deployment model with one of the other deployment options gives you more flexibility and capabilities. Most other HCI vendors support only a small set of certified hardware configurations, whereas Elastifile supports a broad range of hardware configurations.
The second, and in my opinion the most interesting, deployment model is dedicated storage mode. In this model the Elastifile software is installed directly on servers with flash devices; together these servers form the Elastifile distributed storage. With this deployment model it is possible to connect hypervisors directly to the storage nodes using NFS (and in the future SMB3), but also to connect bare-metal servers running Linux, Oracle, or even container-based workloads to the same storage pool.
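Connecting such a bare-metal Linux client is ordinary NFS plumbing. As a minimal sketch (the server name, export path, and mount options below are hypothetical examples, not Elastifile defaults), an /etc/fstab entry could look like this:

```
# /etc/fstab — mount a (hypothetical) Elastifile NFS export on a Linux client
elastifile-vip.example.com:/data  /mnt/elastifile  nfs  rw,hard,vers=3  0  0
```

After a `mount -a`, the export shows up under /mnt/elastifile like any other filesystem, which is exactly what makes the shared pool usable for Linux, Oracle, and container workloads alike.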
As discussed earlier, the last deployment model is the in-cloud deployment. Elastifile can run in one of the big public clouds, but is not limited to them: it can also run in other clouds, as long as the cloud delivers flash-based storage as infrastructure. Elastifile uses that storage to build its distributed, low-latency cloud file system.
When combining these three models, you get a cloud-ready file system with high performance, low latency, and a lot of flexibility and possible use cases.
HCI file services
A great use case for the Elastifile Cloud File System is that, in an HCI deployment, you can decouple the operating system and application from the application's actual data. You can mount the Elastifile Cloud File System directly inside a VM, bypassing the hypervisor's storage layer. And because the Elastifile Cloud File System is a POSIX filesystem, it can store millions of files in deep directory structures.
Linearly scalable in cloud-like environments
A second use case for the Elastifile Cloud File System is that any deployment of Elastifile delivers predictable, low-latency performance. When expanding the cluster, each node adds the same performance as any other node. When adding additional storage, you are also adding additional storage controllers to the cluster. This results in a linearly scalable solution, even in cloud-like environments.
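The linear-scaling claim boils down to simple arithmetic: because every node brings both capacity and a controller, both numbers grow together. A toy model in Python (the per-node figures are made up for illustration, not Elastifile specs):

```python
# Hypothetical per-node specs; every node adds a storage controller,
# so capacity and performance scale together rather than capacity alone.
PER_NODE = {"capacity_tb": 10, "iops": 100_000}

def cluster_totals(nodes: int) -> dict:
    """Aggregate capacity and IOPS for a cluster of identical nodes."""
    return {key: value * nodes for key, value in PER_NODE.items()}

print(cluster_totals(4))  # {'capacity_tb': 40, 'iops': 400000}
```

Contrast this with a dual-controller array, where adding shelves grows capacity while the controller pair stays a fixed bottleneck.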
The last use case is that Elastifile can automatically move files on the filesystem to another tier of flash storage. This could be a cheaper or lower-performing type of flash, for example consumer-grade SSDs. Movement is based on policies. The Elastifile software can further offload cold data to a cheaper type of storage, such as S3; this can be cloud-based S3 storage, but also on-premises S3 storage.
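Elastifile has not published its policy format, but the general idea of policy-based tiering can be sketched in a few lines of Python. Note that the tier names and age thresholds below are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical tiering policy: where a file should live, based on how
# long ago it was last accessed. Thresholds and names are illustrative.
POLICY = [
    (timedelta(days=7),  "performance-flash"),  # hot data on fast flash
    (timedelta(days=90), "capacity-flash"),     # warm data, e.g. consumer SSDs
]
COLD_TIER = "s3-object-store"                   # cold data offloaded to S3

def pick_tier(last_access: datetime, now: datetime) -> str:
    """Return the target tier for a file, given its last access time."""
    age = now - last_access
    for max_age, tier in POLICY:
        if age <= max_age:
            return tier
    return COLD_TIER

now = datetime(2017, 4, 18)
print(pick_tier(now - timedelta(days=1), now))    # performance-flash
print(pick_tier(now - timedelta(days=200), now))  # s3-object-store
```

A real implementation would run such a policy evaluation continuously in the background and migrate data transparently, without the application noticing.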
It is always difficult to say how the future will look, but from everything I have tried so far, this is a very promising first version of the Elastifile Cross-Cloud Data Fabric. In the session with Patrick, I deployed the software myself, and Patrick showed us the performance of the deployed nodes without any problems. The ideas around the product are great, and the roadmap contains the most important capabilities needed to make it a truly mature storage product.
This post is cross-posted on the Metis IT website: find it here
Last week I attended VMworld 2016 in Las Vegas. VMware CEO Pat Gelsinger opened VMworld 2016 with the statement that "a new era of cloud freedom and control is here." During the presentation, Pat introduced VMware Cloud Foundation and the Cross-Cloud Architecture. According to VMware, this will be game-changing and "will enable customers to run, manage, connect, and secure applications across clouds and devices in a common operating environment".
So let's jump into what Cloud Foundation is. VMware explains Cloud Foundation as the glue between vSphere, Virtual SAN, and NSX, enabling companies to create a unified Software-Defined Data Center (SDDC) platform. In other words: it is a native stack that delivers enterprise-ready cloud infrastructure for the private and public cloud.
VMware Cloud Foundation is VMware's unified SDDC platform for the hybrid cloud, based on its compute, storage, and network virtualization. It delivers a natively integrated software stack that can be used on-premises for private cloud deployments or run as a service from the public cloud, with consistent and simple operations.
The core components are VMware vSphere, Virtual SAN, and NSX. Another component of VMware Cloud Foundation is VMware SDDC Manager, which automates the entire system lifecycle and simplifies software operations. In addition, it can be further integrated with the VMware vRealize Suite, VMware Horizon, and VMware Integrated OpenStack (about which another announcement was made during VMworld).
VMware's launch partner is IBM, which will provide coverage around the world when it comes to datacenter locations. VMware will also enable select VMware vCloud Air Network (vCAN) partners in the near future to offer consumption of the full SDDC stack through a subscription model. These partners will deliver SDDC infrastructure in the public cloud by leveraging Cloud Foundation.
The road ahead for VMware can be nothing other than the path of Cloud Foundation and the Cross-Cloud Architecture. We will have to see whether the chosen path is enough to get VMware back on track. In my opinion, VMware is late to the party, and it should have maintained its good relationships with partners in the first place. But truth be told, VMware's unified SDDC platform seems a very solid foundation for companies to build next-generation datacenters on. With some changes in course and solid product development, VMware might be able to keep its stake in the datacenter.
Sources on VMware Cloud Foundation:
If you are interested in these kinds of topics, join us at the next TECHunplugged conference in Amsterdam on 6/10/16 or in Chicago on 27/10/16. TECHunplugged is a one-day event focused on cloud computing and IT infrastructure, with a formula that combines a group of independent, insightful bloggers with disruptive technology vendors and end users who manage next-generation technology environments. Join us!
During Storage Field Day 10 we had a very interesting presentation by Datera on their Elastic Data Fabric. They say it's the first storage solution for enterprises as well as service-provider clouds that is designed with DevOps-style operations in mind. The Elastic Data Fabric provides scale-out storage software capable of turning standard commodity hardware into a RESTful-API-driven, policy-based storage fabric for enterprise environments.
The Datera EDF solution gives your environment the flexibility of hyperscale environments through a clever and innovative software solution. In a fast-changing application landscape, being fast, flexible, agile, and software-defined is key. Cloud-native is the new kid on the block, and more and more enterprises are adopting this kind of application development.
Datera seems able to provide enterprises as well as cloud providers with the storage needed to build these applications. The way Datera accomplishes this can be defined in four main solutions:
What is intent defined? I struggled a bit with that question myself, so let's just stick to the explanation Datera provides:
Intent defined is an orchestrated play between storage and application. An application developer knows what he wants from storage, and can define this through the storage application programming interface. This is DevOps at its best: when storage is scriptable from a developer perspective and manageable from a storage administrator perspective, you know you hit the jackpot.
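Datera's actual API schema wasn't covered in detail, but the idea of declaring intent rather than configuration can be illustrated with a small hypothetical example (every field name below is made up, not Datera's schema):

```python
# Hypothetical intent declaration: the developer states *what* the
# application needs; the storage fabric decides *how* to provide it.
app_intent = {
    "app": "orders-db",
    "volumes": [{"name": "data", "size_gb": 500, "replicas": 3}],
    "performance": {"max_latency_ms": 1, "min_iops": 50_000},
    "placement": "all-flash",
}

def validate_intent(intent: dict) -> bool:
    """Minimal sanity check a storage API might run on a submitted intent."""
    required = {"app", "volumes", "performance"}
    return required.issubset(intent) and all(
        vol["size_gb"] > 0 for vol in intent["volumes"]
    )

print(validate_intent(app_intent))  # True
```

The key point is that nothing in the declaration names a LUN, a RAID level, or a physical node: placement and provisioning are left to the fabric.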
I have already mentioned the API a couple of times, but it is one of the key features of the Datera EDF, and therefore very important. Datera aims to set a new standard with the elegance and simplicity of its API. They intend to make the API as easy to use as possible, to make sure it is actually used and not forsaken because it is too hard to understand.
A well-thought-out API is an extremely hard piece to get right when creating something as difficult as a storage platform for the customers Datera is aiming for. The API-first approach Datera took in developing it is a seldom-seen piece of art in this space.
Things always need to come together when creating something like a storage platform. One of those things is that the companies buying your solution want the opportunity to mix and match: they want to buy what they need now, and if they need more (capacity, performance, or both), they want to add it just as easily. With Datera you can mix and match different kinds of nodes without impacting the overall performance of the solution, making it one of those rare solutions that is truly hyper-composable.
This is where a lot of software solutions say they are, but… when you start using their products, you find out the hard way that the definition of multi-tenant is used in many ways, and true multi-tenancy is hard to get.
Is this different with Datera? They say it is, but to be honest, I'm not really sure. I'll try to figure it out and reach out to the Datera people. And although they do not have a lot of official customers, a couple of them are well known for their multi-tenant environments, so my best guess is that the multi-tenancy part is OK with Datera; if not, I'll let you know.
I was very impressed with the information provided by Datera during Storage Field Day 10. Due to a ton of work waiting for me after SFD10 and TFD11, I didn't really have time to do a deep dive into the technology, but that is where my fellow SFD10 delegates are of big value to the community, so here are their blog posts:
The Cool Thing About Datera Is Intent by Dan Frith
Defining Software-defined Storage definition. Definitely. – Juku.it by Enrico Signoretti
Torus – Because We Need Another Distributed Storage Software Solution by Chris Evans
Storage Field Day 10 Preview: Datera by Chris Evans
Storage Field Day 10 Next Week by Rick Schlander
And as always, the Tech Field Day team provides us with an awesome site full of information on Datera here.
And just to make sure you have a direct option to watch the videos of the SFD10 presentations, here they are:
1. Datera Introduction with Marc Fleischmann
2. Datera Docker and Swarm Demo with Bill Borsari
3. Datera Architecture Deep Dive
4. Datera OpenStack Demo with Bill Borsari
And make sure to visit the Datera website as well:
If you are interested in these kinds of topics, please join us for the TECHunplugged conference in Amsterdam on 6/10/16 or in Chicago on 27/10/16. This is a one-day event focused on cloud computing and IT infrastructure, with an innovative formula: it combines a group of independent, insightful, and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!
During Tech Field Day 11 we had presentations from a lot of awesome companies. Some of them I knew, but others were new to me, even though some of them have existed for multiple years. The first of these "older" companies was Netwrix.
When writing a couple of VMware designs in which compliance was a big deal, I learned that a good auditing tool is a must-have, as the auditors will not approve anything if you don't provide them with the right answers and the tooling needed to be compliant. A tool like Netwrix can help a lot with this.
So during Tech Field Day 11 I was pleased to see Netwrix do a great job of explaining where they came from and what they do. A couple of points from this first presentation:
• The company was founded in 2006 (that's right, the company celebrates its 10th anniversary this year);
• The founders are Michael Fimin and Alex Vovk, who both worked at Quest Software before starting Netwrix;
• The company has no venture funding;
• The company has over 200 employees across the globe, and;
• They have over 7,000 customers worldwide.
But it might be better if you just watch part 1 of the presentation first:
Netwrix Auditor Platform capabilities
The Netwrix Auditor platform can help you audit and monitor multiple systems and applications; the following are supported by default:
- Microsoft Active Directory
- Microsoft Exchange Server
- Microsoft Office 365
- Microsoft SharePoint
- Microsoft SQL Server
- VMware vSphere
- Windows File Server
- Windows Server
Some of these are on-premises only, but a couple of them are also hybrid-cloud capable, meaning you can audit your applications both on- and off-premises. Through the use of RESTful APIs, both inbound and outbound, you can leverage even more, but that is for a later blog post. :D
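I'll save the specifics of Netwrix's APIs for that later post, but as a generic illustration of how an outbound RESTful audit feed is typically consumed, here is a hedged Python sketch. The URL, endpoint path, and credentials are placeholders I made up, not Netwrix's actual API:

```python
import base64
import urllib.request

def build_audit_request(base_url: str, user: str, password: str):
    """Build an authenticated GET request for a (hypothetical) audit-events feed."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{base_url}/api/v1/audit/events",  # placeholder endpoint, not Netwrix's
        headers={
            "Authorization": f"Basic {token}",
            "Accept": "application/json",
        },
    )

req = build_audit_request("https://auditor.example.com", "svc_audit", "secret")
print(req.full_url)              # https://auditor.example.com/api/v1/audit/events
print(req.get_header("Accept"))  # application/json
```

In a real integration you would hand `req` to `urllib.request.urlopen()` (or use a proper HTTP client) and feed the returned JSON events into your SIEM or ticketing system.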
Other TFD11 delegates on Netwrix
As always, a couple of my fellow TFD11 delegates also wrote articles on Netwrix. Here are the articles already published (I'll try to keep this list updated, but I can't promise anything :D):
A small section on Netwrix can be found in the write-up Tech Field Day Goes To 11
And last but not least, Mark May (@) wrote a piece right after the presentation (show-off ;-P) called: Breaking down silos between security and operations
And as always, all Netwrix information and videos are available at the Tech Field Day site: Tech Field Day Netwrix
As already mentioned, I'll try to keep this post updated if people write more on Netwrix, and I will also try to do parts two and three on Netwrix, but first I want to write a couple of posts on the other companies presenting at TFD11.
This week I'm at the SDDC consulting training at the VMware EMEA HQ in Staines. It is a really full program, with presentations and labs covering the VMware SDDC portfolio. Products covered in the training are:
- vRealize Automation
- vRealize Orchestrator
- VMware NSX
- VMware SRM
But the most important focus this week is the integration between all VMware products and third-party products like Infoblox and ServiceNow.
We started yesterday with the installation of a distributed vRealize Automation 6 environment. After clicking through 281 pages of instructions, the installation was finished. Some people in the class had problems with the lab base environment because of time-out errors. The reason was a slow network connection; not just slow, but really, really slow…
The lab environment consists of virtualized ESXi hosts and uses NSX for the networking part. In NSX there is a bug (or should I say undocumented feature ;-)) that causes lots of packet drops when combining virtualized ESXi hosts with NSX. The workaround is to create DRS rules that keep some of the VMs (the ones you are working on) together on one virtualized ESXi host, so all network traffic stays local. I think you may experience the same slow connection in the VMware Hands-On Labs, because the setup is probably the same.
Today, when booting up my lab again, the Infrastructure tab had a strange name: email@example.com instead of just Infrastructure. All underlying tabs had the same problem. If you know where to click, everything still works, but it doesn't feel right.
The solution to this problem is to reboot some nodes of the vRA installation. But wait, which of the 10 servers need a reboot? The answer: nearly all of them. The boot order for the complete stack is:
- Microsoft SQL Database server
- Identity appliance
- vRealize appliance 1
- vRealize appliance 2
- IaaS web server 1 & 2 (vRealize web portal and ModelManagerData services)
- Application server 1 (primary IaaS Manager and DEM Orchestrator services)
- Application server 2 (secondary IaaS Manager and DEM Orchestrator services)
- Proxy server 1 & 2 (DEM Worker and Proxy Agent services)
Rebooting from step 3 onward resolves this issue. First shut down all services in reverse order, and when you are at vRealize appliance 1, reboot it. Wait until the VAMI shows up in the console, and only then (not earlier!) start the next one on the list. If the server is a Windows server, give it some extra time to boot all its services.
When everything is restarted, you will see the normal names and tabs again.
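That procedure is easy to get wrong by hand. As an illustrative sketch (the `restart` and `wait_until_up` callables are placeholders for whatever your environment uses, e.g. PowerCLI or SSH; the node names are my own shorthand), the ordered restart could be scripted like this:

```python
# Boot order from step 3 onward, as described above.
BOOT_ORDER = [
    "vrealize-appliance-1",
    "vrealize-appliance-2",
    "iaas-web-1", "iaas-web-2",
    "app-server-1",        # primary IaaS Manager + DEM Orchestrator
    "app-server-2",        # secondary IaaS Manager + DEM Orchestrator
    "proxy-1", "proxy-2",  # DEM Workers + Proxy Agents
]

def restart_stack(restart, wait_until_up, order=BOOT_ORDER):
    """Stop nodes in reverse order, then start each one and wait before the next."""
    for node in reversed(order):
        restart(node, action="stop")
    started = []
    for node in order:          # start strictly in order
        restart(node, action="start")
        wait_until_up(node)     # e.g. poll the VAMI, or Windows services
        started.append(node)
    return started

# Dry run with stub callables, just to show the resulting start order:
log = []
restart_stack(lambda n, action: log.append((action, n)), lambda n: None)
```

The important part is `wait_until_up`: starting the next node before the previous one is fully up is exactly the mistake the manual procedure warns against.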