It is incredible that it has already been more than a year since my last post here at VDIcloud, and it’s even worse that I promised more content but never delivered. So let’s give it another try, and where better to start than with the Tech Field Days I’ve been to in the past. Let’s start with the last one I attended (also almost a year ago), at the NetApp Insight event in Las Vegas in 2018. As the next event will be this month, I thought it would be a good idea to look at what they showcased last year, and then compare after this year’s event to see what NetApp brings to the table.
How to give HCI power to the customer
We live in a world where data is worth more than oil, and companies are constantly looking for better and faster ways to utilize their data to the fullest. A lot of times we hear about cloud and instantly think about Microsoft Azure, Amazon Web Services and the Google Cloud Platform, but a lot of companies are still running their own on-premises infrastructure, albeit that more and more of them are trying to replicate the public cloud providers by utilizing Converged and Hyper Converged solutions. Bridging the gap between on-premises solutions and public cloud providers is something a lot of companies are struggling with, and for this they are looking to the HCI providers for help. NetApp is relatively late with their HCI solution, but that doesn’t mean they can’t provide the right solution for their customers.
NetApp’s view on HCI
During this presentation Gabriel goes deeper into what the focus is for NetApp HCI and answers some of the important questions about NetApp’s view on HCI. The presentation focused on what defines HCI, what types of HCI exist, and what the benefits are for customers. As always with the Field Day events there are some awesome delegates with great questions and remarks on what the presenter is telling, and this presentation has some great discussion on HCI, CI and what the difference really is. I really loved the conversation on the difference between CI and HCI and what that means for the customer, and I agree with Gabe that the biggest difference is that CI comes by the rack with the stack controlled by the vendor, whereas with HCI you can start small and grow big. Both have their advantages, but if convergence started with the hyperscalers, HCI is much more in line with that model than CI. NetApp offers a solution that consists of compute nodes and storage nodes that can be scaled independently of each other. This gives customers the ability to really disaggregate compute and storage, while still having the power of a single scalable and easy-to-maintain platform.
Arjan’s view
I’ve always looked at HCI solutions as a great step towards building on-premises cloud solutions (private cloud, if that is a better name for you to use). The problem I’ve always seen with these solutions is that the focus often seemed to be on bringing the vendor’s solution to the customer, and in doing so it is not always the best solution for them. Hyper Converged Infrastructure for me is much more about bridging the private and the public cloud, and the solution that NetApp provides to its customers seems to meet that standard. So calling it Hybrid Cloud Infrastructure instead of Hyper Converged Infrastructure is a great move. Sure, there is work that still needs to be done by NetApp, but who knows what they will announce in a couple of weeks. Just watch the video and read the blogs below to get the best insights on the NetApp HCI solution. And don’t forget to visit the Tech Field Day Xtra site for this event here: https://techfieldday.com/event/netappinsight18/
I just wanted to give a shout-out to my fellow TFDx delegates at this event who wrote some great insights on this presentation:
It is time to start blogging again, and there couldn’t possibly be a better way than by starting with my invite to Storage Field Day 15 in Silicon Valley. Truth be told, not blogging was mainly due to building our own house, and we’re still very busy building our dream house here in the Netherlands, but no more excuses for me, so let’s start with the previews for next week.
Second timer: StarWind
Last year at Storage Field Day 12 we had the first presentation from StarWind at a Tech Field Day, and I was really impressed by the technology they offer and, more importantly, the level of detail they put into their presentation as well as the knowledge they showed while presenting. Most VMware techies will know StarWind for the iSCSI target technology they offered, which was used in many homelab environments; I know I’ve used it on multiple occasions in my lab at least. But they are moving on, and during that presentation we also got more information on their HCI solution, AcloudA, Veeam VTL and cloud replication, and the StarWind Scale-Out and Log-Structured File System.
There are a couple of great resources on the products they’ve talked about last time that I’ll provide a link to here:
When a company has already presented at a Tech Field Day, I try to watch all their videos before we go into a new one, although with some companies that really seems impossible because of the number of times they have been at a Tech Field Day. With StarWind this is (for now) still easy, and that is why I’ll also include the SFD12 Vimeo videos here as well.
StarWind Simple, Flexible, Scalable Storage
StarWind Fault-Tolerant Storage Demo
StarWind Scale Out and Log Structured File System
StarWind and AcloudA: Stairway to Cloud
StarWind and Veeam VTL and Cloud Replication
I’m really looking forward to meeting the Kolomyeytsev brothers again, and I hope you will follow us during the livestream at the Tech Field Day site:
Disclaimer: I was invited by Tech Field Day to attend SFD15, and they paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or edited by anyone other than me.
During Storage Field Day 10 we had a very interesting presentation by Datera on their Elastic Data Fabric. They say it is the first storage solution for enterprises as well as service provider clouds that is designed with DevOps-style operations in mind. The Elastic Data Fabric provides scale-out storage software capable of turning standard commodity hardware into a RESTful API-driven, policy-based storage fabric for enterprise environments.
The Datera EDF solution gives your environment the flexibility of hyperscale environments through a clever and innovative software solution. In a fast-changing application landscape, being fast, flexible, agile and software defined is key. Cloud native is the new kid on the block, and more and more enterprises are adopting this kind of application development.
Datera seems to be able to provide enterprises as well as cloud providers with the storage that is needed to build these applications. The way Datera accomplishes this can be summed up in four main characteristics:
Intent defined
What is Intent defined? I struggled a bit with that question myself, so let’s just stick to the explanation Datera provides:
Intent defined is an orchestrated play between storage and application. An application developer knows what he or she wants from storage, and can define those requirements through the storage application programming interface. This is DevOps at its best: when storage is scriptable from a developer perspective and manageable from a storage administrator perspective, you know you’ve hit the jackpot.
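To make that a little more concrete, here is a minimal sketch of what such an intent could look like from the developer’s side. The field names and values are purely my own illustration, not Datera’s actual schema; the point is that the developer describes what the application needs, and the platform figures out how to deliver it.

```python
# Hypothetical intent declaration: field names and values are my own
# illustration, not Datera's actual API schema.
storage_intent = {
    "app": "orders-db",
    "volumes": [
        {
            "name": "data",
            "size_gb": 500,
            "replicas": 3,          # how many copies the fabric should keep
            "qos": {
                "iops_min": 5000,   # guaranteed performance floor
                "iops_max": 20000,  # burst ceiling
            },
            "media": "flash",       # placement hint: all-flash vs hybrid nodes
        }
    ],
}
```

The storage administrator, in turn, defines the policies (tenants, media classes, QoS limits) that intents like this are matched against, which is exactly the developer/administrator split described above.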
API first approach
I’ve already mentioned the API a couple of times, but it is one of the key features of the Datera EDF, and therefore very important. Datera aims to set a new standard with the elegance and simplicity of their API. They intend to make the API as easy to use as possible, to make sure it actually gets used and not forsaken because it is too hard to understand.
The API is well thought out, and that is extremely hard to get right when creating something as complex as a storage platform for the customers Datera is aiming for. The API-first approach, and the way Datera went about developing the API, seems to be a seldom-seen piece of art in this space.
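To illustrate what an API-first approach buys you in practice, here is a small sketch in Python that pushes an intent like the one above to a RESTful endpoint and polls until it is provisioned. The URL, paths and JSON fields are assumptions I made up for this example, not Datera’s documented API; the takeaway is that the whole storage lifecycle can live in the same scripts and pipelines that deploy the application.

```python
import time
import requests

# Hypothetical endpoint and resource paths, invented for this sketch.
BASE_URL = "https://storage.example.local/api"
session = requests.Session()
session.headers.update({"Authorization": "Bearer <api-token>"})

# A minimal intent; see the previous example for a fuller one.
intent = {"app": "orders-db", "volumes": [{"name": "data", "size_gb": 500}]}

# Create the application's storage from the intent.
resp = session.post(f"{BASE_URL}/app_instances", json=intent)
resp.raise_for_status()
app_id = resp.json()["id"]

# Poll until the fabric reports the volumes as available.
while session.get(f"{BASE_URL}/app_instances/{app_id}").json()["state"] != "available":
    time.sleep(5)

print(f"Storage for app instance {app_id} is provisioned and ready to mount")
```

From a DevOps perspective this is the whole point: provisioning, scaling and tearing down storage becomes just another step in the deployment pipeline instead of a ticket to the storage team.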
Hyper-composable
A lot of things need to come together when creating something like a storage platform. One of them is that the companies buying your solution want the opportunity to mix and match: they want to buy what they need now, and if they need more (capacity, performance or both) they want to add it just as easily. At Datera you can mix and match different kinds of nodes without impacting the overall performance of the solution, making it one of those rare solutions that is truly hyper-composable.
Multi-Tenant
This is where a lot of software solutions say they are, but… when you start using their products you find out the hard way that the definition of multi-tenant is used in many ways, and true multi-tenancy is hard to get right.
Is this different with Datera? They say it is, but to be honest I’m not really sure of it. I’ll try to figure this out and reach out to the Datera people. And although they do not have a lot of official customers yet, a couple of them are well known for their multi-tenant environments, so my best guess is that the multi-tenancy part is OK with Datera; if not, I’ll let you know.
Conclusion
I was very impressed with the information provided by Datera during Storage Field Day 10. Due to a ton of work waiting for me when I came back after SFD10 and TFD11, I didn’t really have time to do a deep dive into the technology, but that is where my fellow SFD10 delegates are of great value to the community, so here are their blog posts:
If you are interested in these kinds of topics, please join us for the TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16. It is a one-day event focused on cloud computing and IT infrastructure with an innovative formula: it combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!