This post is cross-posted on the Metis IT website: find it here
Last week I attended VMworld 2016 in Las Vegas. VMware CEO Pat Gelsinger opened VMworld 2016 with the statement that “a new era of cloud freedom and control is here.” During the presentation Pat introduced VMware Cloud Foundation and the Cross-Cloud Architecture. According to VMware, this will be game-changing and “will enable customers to run, manage, connect, and secure applications across clouds and devices in a common operating environment”.
So let’s jump into what Cloud Foundation is. VMware explains Cloud Foundation as the glue between vSphere, Virtual SAN and NSX, enabling companies to create a unified Software Defined Data Center platform. In other words: it is a native stack that delivers enterprise-ready cloud infrastructure for the private and public cloud.
VMware Cloud Foundation is their unified SDDC platform for the hybrid cloud. It is based on VMware’s compute, storage and network virtualization, and delivers a natively integrated software stack that can be used on-premises for private cloud deployment or run as a service from the public cloud, with consistent and simple operations.
The core components are VMware vSphere, Virtual SAN and NSX. Another component of VMware Cloud Foundation is VMware SDDC Manager, which automates the entire system lifecycle and simplifies software operations. In addition, it can be further integrated with VMware vRealize Suite, VMware Horizon and VMware Integrated OpenStack (about which another announcement was made during VMworld).
VMware’s launch partner is IBM, which will provide worldwide coverage when it comes to datacenter locations. In addition, select VMware vCloud Air Network (vCAN) partners will, in the near future, enable consumption of the full SDDC stack through a subscription model. These partners will deliver SDDC infrastructure in the public cloud by leveraging Cloud Foundation.
The road ahead for VMware can be none other than the path of Cloud Foundation and the Cross-Cloud Architecture. We will need to see if the path chosen is enough to get VMware back on track again. In my opinion VMware is late to the party, and they should have kept their good relationships with partners in the first place. But truth be told, it seems a very solid foundation for companies to build next-generation datacenters on, with VMware’s unified SDDC platform. With some changes in course and solid development of products, VMware might be able to keep their stake in the datacenter.
If you are interested in these kinds of topics, join us at the next TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16. TECHunplugged is a one-day event focused on cloud computing and IT infrastructure, with a formula that combines a group of independent, insightful bloggers with disruptive technology vendors and end users who manage next-generation technology environments. Join us!
During Storage Field Day 10 we had a very interesting presentation by Datera on their Elastic Data Fabric. They say it’s the first storage solution for enterprises as well as service provider clouds that is designed with DevOps-style operations in mind. The Elastic Data Fabric provides scale-out storage software capable of turning standard commodity hardware into a RESTful API-driven, policy-based storage fabric for enterprise environments.
The Datera EDF solution gives your environment the flexibility of hyperscale environments through a clever and innovative software solution. In a fast-changing application landscape, being fast, flexible, agile and software defined is key. Cloud native is the new kid on the block, and more and more enterprises are adopting this kind of application development.
Datera seems to be able to provide enterprises as well as cloud providers with the storage that is needed to build these applications. The way Datera accomplishes this can be defined in four main solutions:
What is intent defined? I struggled a bit with that question myself, so let’s just stick to the explanation Datera provides:
Intent defined is an orchestrated play between storage and application. An application developer knows what he would like from storage, and can declare this to the storage application programming interface. This is DevOps at its best. When storage is scriptable from a developer perspective and manageable from a storage administrator perspective, you know you’ve hit the jackpot.
I’ve already mentioned the API a couple of times, but it is one of the key features of the Datera EDF, and therefore very important. Datera aims to set a new standard with the elegance and simplicity of their API. They intend to make the API as easy to use as possible, to make sure it is used and not forsaken because it is too hard to understand.
A well-thought-out API is an extremely hard piece to get right when creating something as difficult as a storage platform for the customers Datera is aiming for. The API-first approach Datera took in developing it is a rarely seen piece of art in this space.
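To make the intent-defined idea concrete, here is a minimal sketch of what declaring storage intent through a REST API could look like. Note that the endpoint, field names and values below are my own illustrative assumptions, not Datera's actual API: the point is that the developer describes *what* the application needs and leaves the *how* to the fabric.

```python
import json

# Hypothetical intent payload builder; field names are illustrative,
# not taken from Datera's documentation.
def build_volume_intent(name, size_gb, replicas, max_iops):
    """Describe what the application needs from storage, leaving
    placement and media decisions to the storage fabric."""
    return {
        "name": name,
        "size": size_gb,
        "placement_policy": {"replica_count": replicas},
        "performance_policy": {"total_iops_max": max_iops},
    }

payload = build_volume_intent("ci-scratch", size_gb=100, replicas=3, max_iops=5000)
print(json.dumps(payload, indent=2))

# A developer or CI pipeline would then POST this intent to the
# storage API, along the lines of (hypothetical URL):
#   requests.post("https://storage.example/v2/app_instances",
#                 json=payload, headers={"Auth-Token": token})
```

This is exactly the "scriptable for developers, manageable for admins" split: the payload is trivially scriptable, while the policies behind `replica_count` and `total_iops_max` stay under the storage administrator's control.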
Things always need to come together when creating something like a storage platform. One of these things is that the companies buying your solution want the opportunity to mix and match. They want to buy what they need now, and if they need more (capacity, performance or both), they want to add it just as easily. With Datera you can mix and match different kinds of nodes without impacting the overall performance of the solution, making it one of those rare solutions that is truly hyper-composable.
This is where a lot of software solutions say they are, but… when you start using their products you find out the hard way that the definition of multi-tenant is used in many ways, and true multi-tenancy is hard to get.
Is this different with Datera? They say it is, but to be honest I’m not really sure of it. I’ll try to figure this out and reach out to the Datera people. And although they do not have a lot of official customers, a couple of them are well known for their multi-tenant environments, so my best guess is that the multi-tenancy part is OK with Datera; if not, I’ll let you know.
I was very impressed with the information provided by Datera during Storage Field Day 10. Due to a ton of work waiting for me after SFD10 and TFD11 I didn’t really have time to do a deep dive into the technology, but that is where my fellow SFD10 delegates are of big value to the community, so here are their blogposts:
The Cool Thing About Datera Is Intent by Dan Frith
Defining Software-defined Storage definition. Definitely. – Juku.it by Enrico Signoretti
Torus – Because We Need Another Distributed Storage Software Solution by Chris Evans
Storage Field Day 10 Preview: Datera by Chris Evans
Storage Field Day 10 Next Week by Rick Schlander
And as always, the Tech Field Day team provides us with an awesome site full of information on Datera here.
And just to make sure you have a direct option to watch the videos of the SFD10 presentations, here they are:
1. Datera Introduction with Marc Fleischmann
2. Datera Docker and Swarm Demo with Bill Borsari
3. Datera Architecture Deep Dive
4. Datera OpenStack Demo with Bill Borsari
And make sure to visit the Datera website as well:
If you are interested in these kinds of topics, please join us for the TECHunplugged conference in Amsterdam on 6/10/16 and in Chicago on 27/10/16. This is a one day event focused on cloud computing and IT infrastructure with an innovative formula, it combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!
During Tech Field Day 11 we had presentations from a lot of awesome companies. Some of them I knew, but others were new to me, even though some of them have existed for multiple years. The first of these “older” companies was Netwrix.
When writing a couple of VMware designs in which compliance was a big deal, I learned that a good auditing tool is a must-have, as the auditors will not approve anything if you didn’t provide them with the right answers and the tooling needed to be compliant. A tool like Netwrix can help a lot with this.
So during Tech Field Day 11 I was pleased to see Netwrix do a great job at explaining where they came from and what they do. A couple of points from this first presentation:
• The company was founded in 2006 (that’s right, it celebrates its 10th anniversary this year);
• The founders are Michael Fimin and Alex Vovk, who both worked at Quest Software before starting Netwrix;
• The company has no venture funding;
• The company has over 200 employees across the globe; and
• They have over 7,000 customers worldwide.
But it might be better if you just watch part 1 of the presentation first:
Netwrix Auditor Platform capabilities
The Netwrix Auditor platform can help you audit and monitor multiple systems and applications; the following are supported by default:
- Microsoft Active Directory
- Microsoft Exchange Server
- Microsoft Office 365
- Microsoft SharePoint
- Microsoft SQL Server
- VMware vSphere
- Windows File Server
- Windows Server
Some of these are on-premises only, but a couple of them are also hybrid cloud capable, meaning you can audit your applications both on- and off-premises. Through the use of RESTful APIs, both inbound and outbound, you can leverage even more, but that is for a later blogpost :D.
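As a teaser for that later blogpost, here is a minimal sketch of what querying audit data over such a RESTful API could look like. The endpoint, port and filter structure below are my own assumptions for illustration, not Netwrix's documented API; the idea is simply that an outbound integration can pull recent activity records as structured data.

```python
import json
from datetime import datetime, timedelta

# Hypothetical filter builder for an audit-records search API;
# the field names are illustrative assumptions.
def build_activity_query(data_source, hours_back=24):
    """Build a search filter for recent audit events from one data source."""
    since = datetime.utcnow() - timedelta(hours=hours_back)
    return {
        "FilterList": {
            "Filter": [
                {"DataSource": data_source},
                {"When": {"From": since.isoformat() + "Z"}},
            ]
        }
    }

query = build_activity_query("Active Directory", hours_back=8)
print(json.dumps(query, indent=2))

# An integration script would then POST this to the auditor's search
# endpoint, along the lines of (hypothetical URL):
#   requests.post("https://auditor.example/api/v1/activity_records/search",
#                 json=query, auth=(user, password))
```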
Other TFD11 delegates on Netwrix
As always, a couple of my fellow TFD11 delegates also wrote articles on Netwrix. Here are the articles already out in the open (I’ll try to keep this updated, but I can’t promise anything :D):
A small section on Netwrix can be found in the write-up Tech Field Day Goes To 11
And last but not least, Mark May (@) wrote a piece right after the presentation (showoff ;-P) called: Breaking down silos between security and operations
And as always, all Netwrix information and videos are available at the Tech Field Day site: Tech Field Day Netwrix
As already mentioned, I’ll try to keep this post updated if people write more on Netwrix, and I will also try to do a part two and three on Netwrix, but first I want to write a couple of posts on other companies presenting at TFD11.
This week I’m at the SDDC consulting training at the VMware EMEA HQ in Staines. It is a really full program with presentations and labs on the VMware SDDC portfolio. Products that will be covered in the training are:
- vRealize Automation
- vRealize Orchestrator
- VMware NSX
- VMware SRM
But the most important focus this week is the integration between all VMware products and third-party products like Infoblox and ServiceNow.
We started yesterday with the installation of a distributed vRealize Automation 6 environment. After clicking through 281 pages of instructions, the installation was finished. Some people in the class had problems with the lab base environment because of timeout errors. The reason was a slow network connection: not just slow, but really, really slow…
The lab environment consists of virtualized ESXi hosts and uses NSX for the networking part. In NSX there is a bug (or should I say an undocumented feature ;-)) that causes lots of packet drops when using virtualized ESXi hosts with NSX. The workaround is to create DRS rules that keep some of the VMs (the ones you are working on) together on a virtualized ESXi host, so all network traffic stays local. I think you may also experience the same slow connection in the VMware Hands-on Labs, because the setup is probably the same.
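The workaround above can be sketched as a small planning step: group the VMs you are actively working on, and turn each group into a "keep together" DRS affinity rule. The VM names below are made-up lab examples; actually creating the rules would be done with PowerCLI's `New-DrsRule -KeepTogether` or the vSphere API, which I leave out here.

```python
# Sketch of the nested-ESXi workaround: define keep-together groups so
# the traffic of related VMs stays on one virtualized ESXi host.
def keep_together_rules(groups):
    """Turn {rule_name: [vm, ...]} into DRS affinity rule definitions."""
    return [
        {"name": rule, "keep_together": True, "vms": sorted(vms)}
        for rule, vms in groups.items()
    ]

# Hypothetical lab VMs grouped by what you are currently working on.
lab_groups = {
    "vra-core": ["vra-app-01", "iaas-web-01", "sql-01"],
    "nsx-edge": ["nsx-edge-01", "nsx-ctrl-01"],
}
for rule in keep_together_rules(lab_groups):
    print(rule["name"], "->", ", ".join(rule["vms"]))
```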
Today, when booting up my lab again, the Infrastructure tab had a strange name: it was changed to firstname.lastname@example.org instead of just Infrastructure. All underlying tabs had the same problem. If you know where to click, everything still works, but it doesn’t feel good.
The solution to this problem is to reboot some nodes of the vRA installation. But wait, which of the 10 servers need a reboot? The answer is: nearly all of them. The boot order for the complete stack is:
- Microsoft SQL Database server
- Identity appliance
- vRealize appliance 1
- vRealize appliance 2
- IAAS webserver 1 & 2 (vRealize webportal and ModelManagerData services)
- Application server 1 (primary IAAS Manager and DEM Orchestrator Server services)
- Application server 2 (secondary IAAS Manager and DEM Orchestrator Server services)
- Proxy server 1 & 2 (DEM worker and Proxy Agent server services)
Rebooting from step 3 onward will resolve this issue. First shut down all servers in reverse order, and when you are at vRealize appliance 1, just reboot that one. Wait until the VAMI shows up in the console and then (and not earlier!) start the next one on the list. If the server is a Windows server, give it some extra time to boot up all its services.
Once everything is restarted, you will see the normal names and tabs again.
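The procedure above boils down to treating the boot order as data: shutdown is simply the reverse, and a "reboot from step 3" is just the tail of the list starting at vRealize appliance 1. A small sketch (the short VM names are my own placeholders for the servers in the list):

```python
# The vRA stack boot order from the post, as data.
BOOT_ORDER = [
    "sql-db",              # Microsoft SQL Database server
    "identity-appliance",
    "vra-appliance-1",
    "vra-appliance-2",
    "iaas-web-1", "iaas-web-2",    # web portal and ModelManagerData
    "iaas-mgr-1",          # primary IAAS Manager + DEM Orchestrator
    "iaas-mgr-2",          # secondary IAAS Manager + DEM Orchestrator
    "proxy-1", "proxy-2",  # DEM workers and proxy agents
]

def shutdown_order(boot_order):
    """Shut the stack down in exactly the reverse of the boot order."""
    return list(reversed(boot_order))

def restart_from(node, boot_order):
    """Nodes to start again, in order, when rebooting from `node` onward."""
    return boot_order[boot_order.index(node):]

# Rebooting from vRA appliance 1 onward resolves the renamed-tabs issue:
print(restart_from("vra-appliance-1", BOOT_ORDER))
# In the real lab, wait for each node (VAMI in the console, or extra time
# for Windows services) before starting the next one.
```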
In half an hour we’ll go live with Storage Field Day 10. I encourage you to watch live at the Tech Field Day site, but for those who really want to follow the livestream on vdicloud.nl, here you go:
The agenda will be as follows:
| Date | Time | Presentation |
|---|---|---|
| Wednesday, May 25 | 9:30 – 11:30 | Kaminario Presents at Storage Field Day 10 |
| Wednesday, May 25 | 12:30 – 14:30 | Primary Data Presents at Storage Field Day 10 |
| Wednesday, May 25 | 15:00 – 17:00 | Cloudian Presents at Storage Field Day 10 |
| Thursday, May 26 | 9:30 – 11:30 | Pure Storage Presents at Storage Field Day 10 |
| Thursday, May 26 | 13:00 – 15:00 | Datera Presents at Storage Field Day 10 |
| Thursday, May 26 | 16:00 – 18:00 | Tintri Presents at Storage Field Day 10 |
| Friday, May 27 | 8:00 – 10:00 | Nimble Storage Presents at Storage Field Day 10 |
| Friday, May 27 | 10:30 – 12:30 | Hedvig Presents at Storage Field Day 10 |
| Friday, May 27 | 13:30 – 15:30 | Exablox Presents at Storage Field Day 10 |
If you have a question you would like to be asked during a presentation, please send it via Twitter using the hashtag #SFD10.
See you online!