On Tuesday, April 18, 2017, Patrick van Helden, Director of Solution Architecture at Elastifile, was at Metis IT to tell us about Elastifile. We had the chance to try a real-life deployment of the Elastifile software.
Elastifile is a relatively new name in the storage arena. The company came out of stealth this month and presented its Elastifile Cloud File System. It was founded in 2013 in Israel by three founders with a strong background in the virtualization and storage industry. Across three funding rounds the company raised $58 million, of which $15 million came directly from Cisco in the last round. Other investors in Elastifile include leading flash storage vendors and enterprise cloud vendors.
What is Elastifile?
The founders' goal is a storage platform that can run any application, in any environment, at any location, where any location really means any location: cloud or on-premises. The product is designed to run with the same characteristics in all of these environments. To that end, Elastifile wrote a POSIX-compliant file system from scratch that supports file, block, and object-oriented workloads and is optimized for flash devices. You can store your documents, user shares, and VMware VMDK files, but also use it for big data applications, all on the same Elastifile Cloud File System.
But how is this different from, say, a NetApp system? NetApp can provide the same capabilities and has been doing so for years. The first way Elastifile's approach differs from NetApp's is how the product is written: for high performance and low latency. Elastifile only supports flash devices, and the software knows how to handle the different types of flash devices to get the best performance out of them and extend their lifetime. Furthermore, Elastifile is linearly scalable and can be combined with compute (hyperconverged solutions).
Another difference is that the Elastifile Cloud File System can run inside a (public) cloud environment and connect it to your own on-premises environment. The problem with (public) cloud environments is that they do not give you the same predictable performance as your on-premises environment. The Elastifile Cloud File System has a dynamic data path to handle noisy and fluctuating environments like the cloud. Thanks to this dynamic path, Elastifile can run with high performance and, most importantly, low latency in cloud-like environments.
Elastifile’s Cloud File System can be deployed in three different deployment models:
- Hyperconverged (HCI)
- Dedicated Storage mode
- In-Cloud
The first deployment model is HCI, where the Elastifile software runs on top of a hypervisor. Today Elastifile supports only VMware; additional hypervisors will be added in future releases. You can compare this deployment with many other HCI vendors, but connecting and combining the HCI deployment model with one of the other deployment options gives you more flexibility and capabilities. Most other HCI vendors support only a small set of certified hardware configurations, whereas Elastifile supports a broad range of hardware configurations.
The second, and in my opinion most interesting, deployment model is dedicated storage mode. In this model the Elastifile software is installed directly on servers with flash devices; together they form the Elastifile distributed storage. With this deployment model it is possible to connect hypervisors directly to these storage nodes using NFS (and, in the future, SMB3), but also to connect bare-metal servers running Linux, Oracle, or even container-based workloads to the same storage pool.
The last deployment model, already touched on above, is the In-Cloud deployment. Elastifile can run in one of the big public clouds, but it is not limited to them: it can run in any cloud, as long as that cloud delivers flash-based storage as infrastructure. Elastifile uses that storage to build its distributed low-latency cloud file system.
Combining these three models gives you a cloud-ready file system with high performance, low latency, and a lot of flexibility and possible use cases.
HCI file services
A great use case for the Elastifile Cloud File System is that, in an HCI deployment, you can decouple the operating system and application from the application's actual data. You can use the Elastifile Cloud File System to mount a VM directly to the storage and bypass the hypervisor. And because the Elastifile Cloud File System is a POSIX file system, it can store millions of files in deep directory structures.
Linearly scalable in cloud-like environments
A second use case for the Elastifile Cloud File System is that any Elastifile deployment delivers predictable, low-latency performance. When expanding the cluster, each node adds the same performance as any other node: when you add storage, you also add storage controllers. This results in a linearly scalable solution, even in cloud-like environments.
The last use case is that Elastifile can automatically move files on the file system to another tier of flash storage, based on policies. This could be a cheaper or lower-performing type of flash, for example consumer-grade SSDs. The Elastifile software can also offload cold data to a cheaper type of storage, like S3, which can be cloud-based or on-premises.
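Elastifile hasn't published the internals of its policy engine, so purely as an illustration of how such policy-based tiering decisions work, here is a minimal Python sketch; the tier names and age thresholds are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds and tier names, purely to illustrate the idea of
# policy-based tiering; this is not Elastifile's actual policy engine.
HOT_WINDOW = timedelta(days=7)    # recently accessed files stay on fast flash
WARM_WINDOW = timedelta(days=30)  # older files move to cheaper flash

def choose_tier(last_access: datetime) -> str:
    """Pick a storage tier for a file based on its last access time."""
    age = datetime.utcnow() - last_access
    if age <= HOT_WINDOW:
        return "enterprise-flash"
    if age <= WARM_WINDOW:
        return "consumer-ssd"
    return "s3-object-store"  # cold data, offloaded to cloud or on-prem S3

# Example: a file untouched for 45 days lands on the S3 tier.
print(choose_tier(datetime.utcnow() - timedelta(days=45)))
```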
How the future will look is always difficult to say, but from everything I have tried so far, this is a very promising first version of the Elastifile Cross-Cloud Data Fabric. In the session with Patrick, I deployed the software myself, and Patrick showed us the performance of the deployed nodes without any problems. The ideas behind the product are great, and the roadmap contains the most important capabilities needed to make it a truly mature storage product.
This post is cross-posted on the Metis IT website: find it here
Last week I attended VMworld 2016 in Las Vegas. The VMware CEO, Pat Gelsinger, opened VMworld 2016 with the statement that “a new era of cloud freedom and control is here.” During the presentation Pat introduced VMware Cloud Foundation and the Cross-Cloud Architecture. According to VMware, this will be game-changing and “will enable customers to run, manage, connect, and secure applications across clouds and devices in a common operating environment”.
So let’s jump into what Cloud Foundation is. VMware explains Cloud Foundation as the glue between vSphere, Virtual SAN, and NSX, enabling companies to create a unified Software-Defined Data Center platform. In other words: it is a native stack that delivers enterprise-ready cloud infrastructure for the private and public cloud.
VMware Cloud Foundation is VMware’s unified SDDC platform for the hybrid cloud. Based on VMware’s compute, storage, and network virtualization, it delivers a natively integrated software stack that can be used on-premises for private cloud deployments or run as a service from the public cloud, with consistent and simple operations.
The core components are VMware vSphere, Virtual SAN, and NSX. Another component of VMware Cloud Foundation is VMware SDDC Manager, which automates the entire system lifecycle and simplifies software operations. In addition, it can be further integrated with the VMware vRealize Suite, VMware Horizon, and VMware Integrated OpenStack (about which another announcement was made during VMworld).
IBM is VMware’s launch partner and will provide worldwide coverage when it comes to datacenter locations. The offering will be extended to select VMware vCloud Air Network (vCAN) partners in the near future, enabling consumption of the full SDDC stack through a subscription model. These partners will deliver SDDC infrastructure in the public cloud by leveraging Cloud Foundation.
The road ahead for VMware can be nothing other than the path of Cloud Foundation and the Cross-Cloud Architecture. We will have to see whether the chosen path is enough to get VMware back on track. In my opinion VMware is late to the party, and it should have maintained its good relationships with partners in the first place. But truth be told, it seems a very solid foundation for companies to build their next-generation datacenters on, with VMware’s unified SDDC platform. With some changes in course and solid product development, VMware might be able to keep its stake in the datacenter.
Sources on VMware Cloud Foundation:
VMware Cloud foundation website
IBM Cloud and VMware
If you are interested in these kinds of topics, join us at the next TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16. TECHunplugged is a one-day event focused on cloud computing and IT infrastructure, with a formula that combines a group of independent, insightful bloggers with disruptive technology vendors and end users who manage next-generation technology environments. Join us!
During Storage Field Day 10 we had a very interesting presentation by Datera on their Elastic Data Fabric. They say it’s the first storage solution for enterprises as well as service provider clouds that is designed with DevOps-style operations in mind. The Elastic Data Fabric provides scale-out storage software capable of turning standard commodity hardware into a RESTful API-driven, policy-based storage fabric for enterprise environments.
The Datera EDF solution gives your environment the flexibility of hyperscale environments through a clever and innovative software solution. In a fast-changing application landscape, being fast, flexible, agile, and software-defined is key. Cloud-native is the new kid on the block, and more and more enterprises are adopting this kind of application development.
Datera seems to be able to provide enterprises as well as cloud providers the storage that is needed to build these applications. The way Datera accomplishes this can be defined in four main solutions:
- Intent defined
- API first
- Hyper-composable
- Multi-tenant
What is intent defined? I had a bit of a struggle with that question myself, so let’s just stick to the explanation Datera provides:
Intent defined means an orchestrated play between storage and application. An application developer knows what he wants from storage and can define those requirements through the storage application programming interface. This is DevOps at its best: when storage is scriptable from a developer’s perspective and manageable from a storage administrator’s perspective, you know you’ve hit the jackpot.
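To make that concrete, here is a minimal sketch of what declaring intent through a storage REST API could look like. The endpoint, port, and payload are hypothetical illustrations of the concept, not Datera's documented schema; consult their actual API documentation for the real thing.

```python
import requests

# Hypothetical endpoint and payload, for illustration only.
DATERA_API = "https://datera.example.com:7717/v2/app_instances"

intent = {
    "name": "orders-db",
    "storage_instances": [{
        "name": "data",
        "volumes": [{
            "name": "vol-0",
            "size": 100,            # GB the application asks for
            "replica_count": 3,     # desired resilience, not a device list
            "performance_policy": {"total_iops_max": 20000},
        }],
    }],
}

# The developer states *what* is needed; the fabric decides *where* it lands.
resp = requests.post(DATERA_API, json=intent,
                     auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())
```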
API first approach
I’ve already mentioned the API a couple of times, but it is one of the key features of the Datera EDF, and therefore very important. Datera aims to set a new standard with the elegance and simplicity of its API. They intend to make the API as easy to use as possible, to make sure it is actually used rather than forsaken because it is too hard to understand.
An API is a well-thought-out and extremely hard piece to get right when creating something as complex as a storage platform for the customers Datera is aiming at. The API-first approach Datera took in developing its API is a rarely seen piece of art in this space.
Many things need to come together when creating something like a storage platform. One of them is that the companies buying your solution want the ability to mix and match: they want to buy what they need now, and if they need more (capacity, performance, or both) they want to add it just as easily. With Datera you can mix and match different kinds of nodes without impacting the overall performance of the solution, making it one of those rare solutions that is truly hyper-composable.
This is where a lot of software solutions claim to be, but… when you start using their products, you find out the hard way that the definition of multi-tenant is used in many ways, and true multi-tenancy is hard to get.
Is this different with Datera? They say it is, but to be honest I’m not really sure of it. I’ll try to figure this out and reach out to the Datera people. And although they do not have a lot of official customers, a couple of them are well known for their multi-tenant environments, so my best guess is that the multi-tenancy part is OK with Datera; if not, I’ll let you know.
I was very impressed with the information provided by Datera during Storage Field Day 10. Due to a ton of work after coming back from SFD10 and TFD11, I didn’t really have time to do a deep dive into the technology, but that is where my fellow SFD10 delegates are of big value to the community, so here are their blog posts:
The Cool Thing About Datera Is Intent by Dan Frith
Defining Software-defined Storage definition. Definitely. – Juku.it by Enrico Signoretti
Torus – Because We Need Another Distributed Storage Software Solution by Chris Evans
Storage Field Day 10 Preview: Datera by Chris Evans
Storage Field Day 10 Next Week by Rick Schlander
And as always, the Tech Field Day team provides us with an awesome site full of information on Datera here
And just to make sure you have a direct option to watch the videos of the SFD10 presentations, here they are:
1. Datera Introduction with Marc Fleischmann
2. Datera Docker and Swarm Demo with Bill Borsari
3. Datera Architecture Deep Dive
4. Datera OpenStack Demo with Bill Borsari
And make sure to visit the Datera website as well:
If you are interested in these kinds of topics, please join us for the TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16. This is a one-day event focused on cloud computing and IT infrastructure with an innovative formula: it combines a group of independent, insightful, and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!
During Tech Field Day 11 we had presentations from a lot of awesome companies. Some of them I knew, but others were new to me, even though some of them have existed for multiple years. The first of these “older” companies was Netwrix.
When writing a couple of VMware designs in which compliance was a big deal, I learned that a good auditing tool is a must-have, as auditors will not approve anything if you don’t provide them with the right answers and the tooling needed to be compliant. A tool like Netwrix can help a lot with this.
So during Tech Field Day 11 I was pleased to see Netwrix do a great job of explaining where they came from and what they do. A couple of points from this first presentation:
• The company was founded in 2006 (that’s right, it celebrates its 10th anniversary this year);
• The founders are Michael Fimin and Alex Vovk, who both worked at Quest Software before starting Netwrix;
• The company has no venture funding;
• The company has over 200 employees across the globe; and
• They have over 7,000 customers worldwide.
But it might be better if you just watch part 1 of the presentation first:
Who is Netwrix? from Stephen Foskett on Vimeo.
Netwrix Auditor Platform capabilities
The Netwrix Auditor platform can help you audit and monitor multiple systems and applications; the following are supported by default:
- Microsoft Active Directory
- Microsoft Exchange Server
- Microsoft Office 365
- Microsoft SharePoint
- Microsoft SQL Server
- VMware vSphere
- Windows File Server
- Windows Server
Some of these are on-premises only, but a couple of them are also hybrid cloud capable, meaning you can audit your applications both on- and off-premises. Through the use of RESTful APIs, both inbound and outbound, you can leverage even more, but that is for a later blog post :D.
Other TFD11 delegates on Netwrix
As always, a couple of my fellow TFD11 delegates also wrote articles on Netwrix. Here are the articles already out in the open (I’ll try to keep this list updated, but I can’t promise anything :D):
Julian Wood (@julian_wood) wrote a great preview, the Tech Field Day 11 Preview: Netwrix
Alastair Cooke (@DemitasseNZ) also wrote an introduction: TFD11 introduction: Netwrix
A small section on Netwrix can be found in the write-up Tech Field Day Goes To 11
And last but not least, Mark May () wrote a piece right after the presentation (showoff ;-P) called: Breaking down silos between security and operations
And as always, all Netwrix information and videos are available at the Tech Field Day site: Tech Field Day Netwrix
As already mentioned, I’ll try to keep this post updated as people write more on Netwrix, and I will also try to do a part two and three on Netwrix, but first I want to write a couple of posts on other companies presenting at TFD11.
This week I’m at the SDDC consulting training at the VMware EMEA HQ in Staines. There is a really full program with presentations and labs about the VMware SDDC portfolio. Products that will be covered in the training are:
- vRealize Automation
- vRealize Orchestrator
- VMware NSX
- VMware SRM
But the most important focus this week is the integration between all VMware products and third-party products like Infoblox and ServiceNow.
We started yesterday with the installation of a distributed vRealize Automation 6 environment. After clicking through 281 pages of instructions, the installation was finished. Some people in the class had problems with the lab base environment because of timeout errors. The reason was a slow network connection; not just slow, but really, really slow…
The lab environment consists of virtualized ESXi hosts and uses NSX for the networking part. In NSX there is some bug (or should I say undocumented feature ;-)) that causes lots of packet drops when using virtualized ESXi hosts together with NSX. The workaround is to create DRS rules that keep some of the VMs (the ones you are working on) together on a virtualized ESXi host, so all network traffic stays local. I think you may experience the same slow connection in the VMware Hands-On Labs, because the setup is probably the same.
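For those who want to script that workaround, a rough pyVmomi sketch of such a "keep together" DRS affinity rule could look like this; the vCenter address, cluster name, VM names, and credentials are placeholders for your own lab.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; adjust for your own environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Look up a managed object (cluster, VM, ...) by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_obj(vim.ClusterComputeResource, "Lab-Cluster")
vms = [find_obj(vim.VirtualMachine, n) for n in ("vra-app-01", "vra-web-01")]

# A DRS affinity ("keep together") rule for the lab VMs.
rule = vim.cluster.AffinityRuleSpec(name="keep-lab-vms-together",
                                    enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# ... wait for the task; afterwards DRS keeps the VMs on the same nested host.
Disconnect(si)
```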
Today, when booting up my lab again, I had the issue that the Infrastructure tab had a strange name. The name was changed to firstname.lastname@example.org instead of just Infrastructure, and all underlying tabs had the same problem. If you know where to click everything still works, but it doesn’t feel right.
The solution to this problem is to reboot some nodes of the vRA installation. But wait, which of the 10 servers needs a reboot? The answer is nearly all of them. The boot order for the complete stack is:
- Microsoft SQL Database server
- Identity appliance
- vRealize appliance 1
- vRealize appliance 2
- IAAS webserver 1 & 2 (vRealize webportal and ModelManagerData services)
- Application server 1 (primary IAAS Manager and DEM Orchestrator services)
- Application server 2 (secondary IAAS Manager and DEM Orchestrator services)
- Proxy server 1 & 2 (DEM worker and Proxy Agent server services)
Rebooting from step 3 onward resolves this issue. First shut down all services in reverse order, and when you get to vRealize appliance 1, just reboot that one. Wait until the VAMI shows up in the console and then (and not earlier!) start the next one on the list. If the server is a Windows server, give it some extra time to boot all its services.
Once everything is restarted, you will see the normal names and tabs again.
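As a rough sketch, the ordered restart itself can also be scripted with pyVmomi, reusing the connection and the find_obj() helper from the snippet in the previous post; the VM names below are placeholders following the boot order above.

```python
import time

# Placeholder VM names following the boot order above; reuses `vim`, `si`,
# and find_obj() from the earlier DRS-rule snippet.
BOOT_ORDER = [
    "sql-01", "identity-appliance", "vra-appliance-1", "vra-appliance-2",
    "iaas-web-1", "iaas-web-2", "iaas-mgr-1", "iaas-mgr-2",
    "iaas-dem-1", "iaas-dem-2",
]

for name in BOOT_ORDER:
    vm = find_obj(vim.VirtualMachine, name)
    task = vm.PowerOnVM_Task()
    while task.info.state not in ("success", "error"):
        time.sleep(5)                 # wait for the power-on task itself
    # Crude stand-in for "wait until the VAMI shows up": in practice, poll
    # https://<appliance>:5480 (or the Windows services) before moving on.
    time.sleep(120)
```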
In half an hour we’ll go live with Storage Field Day 10. I do encourage you to watch live at the Tech Field Day site, but for those that really want to follow the livestream on vdicloud.nl, here you go:
The agenda will be as follows:
If you have a question you would like to be asked during the presentations, please do so on Twitter using the hashtag #SFD10.
See you online!
Two weeks ago I attended TechUnplugged in London. For those who don’t know what TechUnplugged is: it is a full-day conference focused on cloud computing and IT infrastructure. The conference brings influencers, vendors, and end users together, making interaction between all these people possible. If you want more information, look at techunplugged.io.
All speakers had a slot of 25 minutes to tell their story. At first I thought that was very little time to tell a story, but after a day of presentations I think it’s sufficient to do the job. If the subject of a presentation is not in your area of interest, it only costs you 25 minutes of your time. The influencer presentations are interspersed with the vendor presentations, so you get varied subjects and presentation styles.
The majority of the presentations were storage oriented. These presentations addressed subjects like the history of storage, winners and losers in storage solutions, multiple vendors, and secondary storage. Besides the storage presentations there was a session about OpenStack, clouds, and containers, and a presentation about the Software-Defined Data Center by my colleague Arjan Timmerman, complete with stroopwafels and chocolate. I gave a technical overview of vRealize Automation; the intention was to present it live, but because of the bad Wi-Fi connection it ended up being just a recording.
The last part was an ‘Ask Me Anything’ panel consisting of influencers and vendors. Everybody could ask questions and got answers from different perspectives. It seemed a nice concept, but it’s always difficult to create this kind of interaction. After the ‘Ask Me Anything’ panel it was time for the social part of the conference (beer, wine, and networking).
I’m looking back at a well-organized event with a broad palette of interesting subjects and people involved. I think the combination of vendors, influencers, and the presentations is a perfect formula for staying up to date and involved in the latest developments. Some new products were introduced to me, and I got new insights into the fast-changing world of cloud, storage, and SDDC. I sincerely hope to meet you all at the next TechUnplugged in Amsterdam!
The last couple of weeks I’ve been doing research on VMware vCloud Air for one of our customers. The customer is looking at the vCloud Air solution for large pieces of their current infrastructure, and the big driver for this investigation is that at first glance the prices for VMware vCloud Air look cheaper (way cheaper, actually) than building and migrating their own datacenters. To get a clearer view, let’s dive into vCloud Air to see whether this is true and what vCloud Air actually is.
VMware vCloud Air
First we’ll investigate what VMware is providing through their vCloud Air offering. To make sure we got everything right we’ll see what the VMware website tells us about their vCloud Air offering:
As you can see, there are multiple ways the business could use the VMware vCloud Air solution; as shown in the picture above, there are six different offerings:
- Disaster Recovery
- Virtual Private Cloud
- Virtual Private Cloud OnDemand
- vCloud Government Service
- Object Storage
- Dedicated Cloud
For this part we’ll take a look at the Dedicated Cloud offering, diving into the benefits and strengths of the solution.
Welcome to the buzzword bingo?
There are so many buzzwords going around in this area that it is hard to keep track of what is what! What is the difference between private cloud, Infrastructure as a Service, dedicated cloud, and on-premises infrastructure? What does it mean for your company, and where does it fit in your IT environment?
For me, Invisible IT is the buzzword that captures what most companies (that I do business with) are really looking for: providing IT resources instantly when the business needs them, and thereby being a business enabler, is what IT should be all about.
In the last couple of decades it often happened that when the business needed a new application to help it grow, it could take months before the application could be used. With the birth and adoption of virtualization, most IT departments managed to cut this down to about a week. But that time is spent just implementing the virtual servers needed for the application (with the right network and storage resources); after that, the application itself still needs to be implemented and tested.
Welcome to the Cloud Era
With the introduction of cloud, a lot of people were sceptical. But after a couple of years people use it all the time and have gotten used to the benefits of cloud computing. One of the biggest advantages of cloud computing is how fast you can buy resources. Go to AWS, Azure, Google, or whatever cloud provider you want, and with a credit card and a few clicks your VM is running in minutes…
This is where most IT departments lost the battle (or so they think…). If an in-house department still needs to wait weeks or even months before it can really start developing, implementing, and using an application, it will quickly turn to the public cloud. They normally won’t think about the business impact of such a move, but on the other hand the project can deliver much quicker, and that’s all that counts to them.
As Dilbert explained in the comic above, there is a way for IT to use on-premises resources as well as the public cloud and become the business enabler IT needs to be.
Virtualization vs. Hybrid Cloud
It seems such a long time ago that virtualization needed to prove its place in the datacenter. A lot of companies looked at virtualization products and didn’t consider them production-ready, but after testing them in their test environments and seeing the benefits, almost all of those companies started using virtualization in their production environments as well.
The same seems to be happening with the hybrid cloud, though its adoption is going much faster. The way companies start using a hybrid cloud solution is often driven by the fact that certain workloads already started their development in the public cloud, and the company would like to embrace the possibilities the cloud provides. The hybrid cloud is the combination of private (which could also be a traditional IT environment) and public cloud(s), giving your company the best of both worlds. But to manage these clouds, you’ll need the right tools.
Cloud Management Platform
To manage your company’s hybrid cloud, you’ll need a Cloud Management Platform (CMP). As already mentioned, CMPs are management portals that offer your business the management needed to provide private and public IT services. It is important to know that although there are many CMPs, I haven’t found any (yet) that offers the complete spectrum of private and public offerings, although they all offer RESTful API support, so you could create certain things yourself (if you have the development force to do so ;)). I’ll probably dive into a couple of the CMPs at a later stage, but for now, if you want to know more about CMPs, have a look at these:
There are many more, but for now this is more than enough reading material for a couple of days 😉
VMware vRealize Suite and vCloud Air
I started this post about the VMware vCloud Air solution, but in the end I didn’t really talk about it that much. I promise I’ll go more in depth in the next part, but for now I want to focus a little more on the VMware vRealize Suite and vCloud Air products for building a VMware hybrid cloud.
For the many companies that built their virtualization environment on VMware vSphere, it is an easy step to build their hybrid IT environment on the same foundation. To do so, they can leverage the vRealize Suite to automate and orchestrate their current environment as well as the vCloud Air solutions, and beyond that other cloud solutions like AWS and Azure.
For a lot of companies this would build the environment they need to stay on the leading edge, while maintaining a solution built on the foundation they already had, keeping the knowledge they already have in-house, and giving IT the power to become a business enabler again.
When I started this post I didn’t intend it to be this long, and that’s the main reason to stop putting more information into this single post. Where I started out with an introduction to VMware vCloud Air, it became much more, but that’s what blogging is all about (IMHO :D). I’ll be back with more information on vCloud Air, the vRealize Suite, CMPs, and more… But for now, cheerio!
If you want to know more about this topic, I’ll be presenting at the next TECHunplugged conference in London on 12/5/16. It is a one-day event focused on cloud computing and IT infrastructure with an innovative formula that combines a group of independent, insightful, and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!
A couple of weeks ago I attended a seminar in which a Nutanix competitor stated that the Nutanix licensing was a really hard nut to crack. I immediately went to the Nutanix licensing site to see what was so difficult about it, but I couldn’t see why someone would find the Nutanix licensing so hard to figure out.
The Acropolis editions
So let’s see how the Nutanix Licensing is done, and start with Acropolis:
Nutanix Acropolis is a powerful scale-out data fabric for storage, compute and virtualization. Acropolis combines feature-rich software-defined storage with built-in virtualization in a turnkey hyperconverged infrastructure solution that can run any application at any scale.
We’ll get back to the Acropolis licensing details shortly 😀
And on the other hand Nutanix Prism:
Nutanix Prism gives administrators a simple and elegant way to manage virtual environments. Powered by advanced data analytics and heuristics, Prism simplifies and streamlines common workflows within a datacenter eliminating the need to have disparate management solutions.
Now that we know the different versions of Acropolis and Prism, let’s dive into the differences between the Acropolis Starter, Pro, and Ultimate editions:
The first difference between the three is on the storage side of things and is clearly stated on the Nutanix website:
So if you need more than 12 hosts in a cluster, or deduplication, compression, or erasure coding, you’ll need at least Pro; if you need some of your workloads to be pinned to flash, you’ll have to switch over to Ultimate. Easy as that on the storage side, so let’s continue.
The next one in the Nutanix list is Infrastructure Resilience:
In this case it is even easier: if you need enclosure or rack awareness, Starter is a no-no. You’ll need to decide based on other features (like storage) whether to go for Pro or Ultimate.
Next please 😀 That will be Data Protection:
So you need Cloud Connect or Time Stream? Move to Pro. If you want Multi-Site DR, Metro Availability, or Sync Replication and DR, you’ll need to go to Ultimate. If you don’t need any of these services, you’ll do just fine with Starter…
No hard nuts for me so far, but we’re not done yet, so let’s continue:
The one thing everybody seems to be talking about these days is security, and that’s also next on the Nutanix list:
Just need Client Authentication? Go for Starter. Also need Cluster Lockdown?
Cluster Shield, which allows administrators to restrict access to a Nutanix cluster in security-conscious environments, such as government facilities and healthcare provider systems. Cluster Shield disables interactive shell logins automatically.
Go to Pro and if you also need Data-at-Rest encryption please continue to Ultimate 😉
The next “hard nut” to crack would be Management & Analytics, but for me it’s another easy comparison of what’s in the licensing offer:
What is important for our Prism comparison is that every Acropolis edition already includes the Prism Starter edition. We don’t really need to look at that one then, so we’ll concentrate on Pro when we get there 😀 For Management and Analytics it is kind of easy again: if you need REST APIs, you’ll need Pro or Ultimate; otherwise you can do with Starter. But again, it depends on the other features in these licensing tiers whether you can choose Starter, Pro, or Ultimate.
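As a taste of what that REST API unlocks, here is a minimal sketch listing the VMs on a cluster through Prism. The v2.0 path follows Nutanix’s public documentation, but verify it against your AOS version; the host and credentials are placeholders.

```python
import requests

# Placeholder Prism host and credentials; verify the API path for your
# AOS version before relying on it.
PRISM = "https://prism.example.com:9440"

resp = requests.get(f"{PRISM}/PrismGateway/services/rest/v2.0/vms",
                    auth=("admin", "password"), verify=False)
resp.raise_for_status()
for vm in resp.json()["entities"]:
    print(vm["name"], vm.get("power_state"))
```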
The last one would be Virtualization, but there is a difference between the three here:
That’s all for the Acropolis side of things, and to be honest I didn’t find any hard nuts to crack. The list is very clear, and based on your business and technical requirements you should be able to choose the flavor you need.
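Just for fun, the feature-to-edition mapping from the comparison above can be captured in a few lines. The feature lists below come straight from this post, so check Nutanix’s current licensing page before basing a real decision on them.

```python
# Toy encoding of the Acropolis feature-to-edition mapping described above,
# based purely on the feature lists in this post.
PRO_FEATURES = {
    "dedup", "compression", "erasure-coding", "more-than-12-hosts",
    "enclosure-awareness", "rack-awareness", "cloud-connect", "time-stream",
    "cluster-lockdown", "rest-api",
}
ULTIMATE_FEATURES = {
    "flash-pinning", "multi-site-dr", "metro-availability",
    "sync-replication", "data-at-rest-encryption",
}

def minimum_edition(required: set[str]) -> str:
    """Return the cheapest Acropolis edition covering the required features."""
    if required & ULTIMATE_FEATURES:
        return "Ultimate"
    if required & PRO_FEATURES:
        return "Pro"
    return "Starter"

print(minimum_edition({"compression", "rest-api"}))   # Pro
print(minimum_edition({"metro-availability"}))        # Ultimate
```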
The Prism editions
So it must be on the Prism side of things then. Let’s see how difficult this Prism thingy really is.
For the people who paid attention: I already mentioned that all Acropolis editions include the Prism Starter edition, so we can concentrate on Pro.
Just to make clear what is in both of them:
It’s not that hard to make your choice, if you ask me, but let’s look at what Prism Pro offers on top of the already included Starter:
- Prism Search is an integrated, Google-like search experience that helps you query and perform actions with a single click
- Customizable Operations Dashboards are visually rich dashboards that give an actionable summary of application, virtual machine, and infrastructure state at a glance
- Capacity Behavior Analytics provides predictive analysis of capacity usage and trends based on workload behavior, enabling pay-as-you-grow scaling
- Capacity Optimization Advisor gives infrastructure optimization recommendations to improve efficiency and performance
So if you need one of these features you’ll need to buy the Prism Pro license.
It is one thing to bash your competitors, and I know they all do this, including Nutanix themselves, but if you want to say something about your competitors, please make sure you know what you’re talking about. In this case (and in my “humble” opinion) the statement about the Nutanix licensing being a hard nut to crack is really based on nothing.
The Nutanix licensing is very clear about what each license does and does not include, and it’s up to you to create clear requirements for your environment, on which you can then base your choice of the Acropolis and Prism editions you’ll need.
This is a cross post from my Metis IT blogpost, which you can find here.
Today, April 5, 2016, SimpliVity announced new capabilities of the OmniStack Data Virtualization Platform. The announcement consists of three subjects:
- OmniStack 3.5
- OmniView Predictive Insight tool
- Hyper-V support
This new version is the first major update of this year, and I hope more will follow. The previous major release, version 3.0, came in the early second half of 2015. SimpliVity says this new version will deliver new capabilities optimized for large, mission-critical, global enterprise deployments. Besides improvements to the code, this release adds three new main capabilities to the OmniStack Data Virtualization Platform.
The first improvement in the OmniStack software is the ability to create multi-node stretched clusters. In the current versions it is only possible to create a stretched cluster with a total of two nodes divided over two sites. This limit is now increased and supported by default. With a stretched cluster it is possible to achieve an RPO of zero and an RTO of seconds.
Intelligent Workload Optimizer
The second new capability is the Intelligent Workload Optimizer. SimpliVity uses a multi-dimensional approach to balance the workload across the platform, based on CPU, memory, I/O performance, and data location. This results in fewer data migrations and better virtual machine performance.
And the last new capability in the OmniStack software is the REST API. In version 3.5 it will be possible to use the REST API to manage the SimpliVity Data Virtualization Platform. It was already possible to integrate with VMware vRealize Automation, but now it will be a lot easier to integrate with third-party management portals and applications.
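As an illustration of what that opens up, here is a minimal sketch authenticating against an OmniStack Virtual Controller and listing virtual machines. The OAuth flow and paths follow the pattern of SimpliVity’s later public API documentation and may differ in the 3.5 release; the host and credentials are placeholders.

```python
import requests

# Placeholder OmniStack Virtual Controller address and credentials;
# verify the flow and paths against the API docs for your release.
OVC = "https://ovc.example.com"

# Authenticate to obtain a bearer token.
token = requests.post(f"{OVC}/api/oauth/token",
                      auth=("simplivity", ""),
                      data={"grant_type": "password",
                            "username": "administrator@vsphere.local",
                            "password": "password"},
                      verify=False).json()["access_token"]

# List the virtual machines the platform knows about.
vms = requests.get(f"{OVC}/api/virtual_machines",
                   headers={"Authorization": f"Bearer {token}"},
                   verify=False).json()
for vm in vms.get("virtual_machines", []):
    print(vm["name"], vm["state"])
```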
The OmniView Predictive Insight tool is the second part of the announcement. OmniView is a web-based tool that gives a custom visualization of an entire SimpliVity deployment. It can give predictive analytics and trends within a SimpliVity environment and helps to plan future growth. The tool can also help to investigate and troubleshoot issues within the environment. OmniView will be available to Mission-Critical-level support customers and approved partners.
The last part of the announcement is support for Hyper-V. The OmniStack Data Virtualization Platform will be extended to this platform to give customers more choice. SimpliVity will support mixed and dedicated Hyper-V environments with the release of Windows Server 2016; planning and timing of availability are aligned with Microsoft’s release of Windows Server 2016.
The announcement is a great step in the right direction, and I think it comes just in time. For me the most important part is version 3.5, and more specifically the support for stretched clusters. Stretched cluster support is a requirement nowadays in more and more large European organizations, and SimpliVity will now be able to deliver it. The REST API will also help integrate SimpliVity into a customer’s existing ecosystem.
The OmniView Predictive Insight tool will give customers insight into their SimpliVity environment and provide predictive analytics and forecasts. In the current 3.0 version it was only possible to get some statistics about the storage, but now you will have a self-learning system that customers can use to improve their environment.
The Hyper-V support announcement is also a long-awaited one. Now we only have to wait until Microsoft releases Windows Server 2016 to use this feature.