by Arjan | May 5, 2016 | VMware, vRealize
For the last couple of weeks I’ve been doing research on VMware vCloud Air for one of our customers. The customer is looking at the vCloud Air solution for large parts of their current infrastructure, and the big driver for this investigation is that at first glance the prices for VMware vCloud Air look cheaper (way cheaper, actually) than building and migrating their own datacenters. But to get a clearer view, let’s dive into vCloud Air to see if this is true and what vCloud Air actually is.
VMware vCloud Air
First we’ll investigate what VMware provides through their vCloud Air offering. To make sure we get everything right, let’s see what the VMware website tells us about it:

As you can see in the picture above, there are multiple ways a business could use the VMware vCloud Air solution, with six different offerings:
- Disaster Recovery
- Virtual Private Cloud
- Virtual Private Cloud OnDemand
- vCloud Government Service
- Object Storage
- Dedicated Cloud
For this part we’re going to take a look at the Dedicated Cloud offering, diving into the benefits and strong points of the solution.
Welcome to buzzword bingo?

There are so many buzzwords going around in this area that it is hard to keep track of what is what! What is the difference between private cloud, Infrastructure as a Service, Dedicated Cloud and on-premises infrastructure? What does it mean for your company and where does it fit in your IT environment?
Invisible IT?
For me, Invisible IT is the buzzword that captures what most companies (I do business with) are really looking for: providing IT resources instantly when the business needs them, and thereby being a business enabler, is what IT should be all about.
In the last couple of decades it often happened that when the business needed a new application to help it grow, it could take months before the application could be used. With the birth and adoption of virtualization, most IT departments managed to cut this down to about a week. But that time is spent just implementing the virtual servers needed for the application (with the right network and storage resources), and afterwards the application itself still needs to be implemented and tested.
Welcome to the Cloud Era

With the introduction of cloud a lot of people were sceptical. But after a couple of years people use it all the time, and have got used to the benefits of cloud computing. One of the biggest advantages of cloud computing is how fast you can buy resources: go to AWS, Azure, Google or whatever cloud provider you want, and with a credit card and a few clicks your VM is running in minutes…
This is where most IT departments lost the battle (or so they think…). If an in-house department still needs to wait weeks or even months before they can really start developing, implementing and using an application, they tend to move to, and use, the public cloud quickly. They normally won’t think of the business impact of such a move, but on the other hand the project can deliver much quicker, and that’s all that counts to them.
As Dilbert explains in the comic above, there is a way for IT to use on-premises resources as well as the public cloud to be the business enabler IT needs to be.
Virtualization vs. Hybrid Cloud
It seems such a long time ago that virtualization needed to prove its place in the datacenter. A lot of companies looked at virtualization products and didn’t consider them production ready, but after testing them in their test environments and seeing the benefits, almost all of those companies started using virtualization in their production environments as well.
The same seems to be happening with hybrid cloud, but adoption is going much faster. Companies often start using a hybrid cloud solution because certain workloads already began their development in the public cloud, and the company would like to embed the possibilities the cloud provides. The hybrid cloud is the combination of private (which could also be a traditional IT environment) and public cloud(s), which provides your company the best of both worlds. But to manage these clouds, you’ll need the right tools.

Cloud Management Platform
To manage your company’s hybrid cloud, you’ll need a Cloud Management Platform. As already mentioned, CMPs are management portals that offer your business the management needed to provide private and public IT services. It is important to know that although there are many CMPs, I haven’t found any (yet) that offers the complete spectrum of private and public offerings, although they all offer RESTful API support, so you could create certain things yourself (if you have the development force to do so ;)). I’ll probably dive into a couple of the CMPs at a later stage, but for now, if you want to know more about CMPs, look at these:
There are many more, but for now this is more than enough reading material for a couple of days 😉
VMware vRealize suite and vCloud Air
I started this post about the VMware vCloud Air solution, but in the end I didn’t really talk about it that much. I promise I’ll go more in depth in the next part, but for now I want to focus a little more on the VMware vRealize Suite and the vCloud Air products for building a VMware hybrid cloud.
For the many companies that built their virtualization environment on the VMware vSphere product, it is an easy step to want to build their hybrid IT environment on this foundation. To do so, they can leverage the vRealize Suite to automate and orchestrate their current environment as well as the vCloud Air solutions, and furthermore other cloud solutions like AWS, Azure and others.

For a lot of companies this builds the environment they need to be on the leading edge, while still maintaining a solution built on the foundation they already had, keeping the knowledge they already have in house, and giving IT the power to become a business enabler again.
Conclusion
When I started this post I didn’t intend it to be this long, and that’s the main reason to stop putting more information into this single post. Where I started out with an introduction to VMware vCloud Air, it became much more, but that’s what blogging is all about (IMHO :D). I’ll be back with more information on vCloud Air, the vRealize Suite, CMPs, and more… But for now, cheerio!
If you want to know more about this topic, I’ll be presenting at the next TECHunplugged conference in London on 12/5/16: a one-day event focused on cloud computing and IT infrastructure, with an innovative formula that combines a group of independent, insightful and well-recognized bloggers with disruptive technology vendors and end users who manage rich technology environments. Join us!
by Arjan | May 3, 2016 | Nutanix
A couple of weeks ago I attended a seminar in which a Nutanix competitor stated that Nutanix licensing was a really hard nut to crack. I immediately went to the Nutanix licensing site to see what was so difficult about it, but couldn’t see why someone would find the Nutanix licensing so hard to figure out.
The Acropolis editions
So let’s see how Nutanix licensing is done, starting with Acropolis:
Nutanix Acropolis is a powerful scale-out data fabric for storage, compute and virtualization. Acropolis combines feature-rich software-defined storage with built-in virtualization in a turnkey hyperconverged infrastructure solution that can run any application at any scale.

We’ll get back to the Acropolis licensing details shortly 😀
And on the other hand Nutanix Prism:
Nutanix Prism gives administrators a simple and elegant way to manage virtual environments. Powered by advanced data analytics and heuristics, Prism simplifies and streamlines common workflows within a datacenter eliminating the need to have disparate management solutions.
Now that we know the different versions of Acropolis and Prism, let’s dive into the differences between the Acropolis Starter, Pro and Ultimate editions:
The first difference between the three is on the storage side of things, and it is clearly stated on the Nutanix website:

So if you need more than 12 hosts in a cluster, or Deduplication, Compression or Erasure Coding, you’ll need at least Pro; if you need some of your workloads pinned to flash, you’ll have to switch over to Ultimate. Easy as that on the storage side, let’s continue.
The next one in the Nutanix list is Infrastructure Resilience:

In this case it is even easier: if you need enclosure or rack awareness, Starter is a no-no. You’ll need to decide based on other features (like storage) whether you need to go to Pro or Ultimate.
Next please 😀 That will be Data Protection:

So you need Cloud Connect or Time Stream? Move to Pro. If you want Multi-Site DR, Metro Availability or Synchronous Replication and DR, you’ll need to go to Ultimate. If you don’t need any of these services, you’ll do just fine with Starter…
No hard nuts for me so far, but we’re not there yet, so let’s continue:
The one thing everybody seems to be talking about these days is security, and that’s also next on the Nutanix list:

Just need Client Authentication? Go for Starter. Also need Cluster Lockdown?:
Cluster Shield, which allows administrators to restrict access to a Nutanix cluster in security-conscious environments, such as government facilities and healthcare provider systems. Cluster Shield disables interactive shell logins automatically.
Then go to Pro, and if you also need Data-at-Rest Encryption, please continue to Ultimate 😉
The next “hard nut” to crack would be Management & Analytics, but for me it’s another easy comparison of what is in the licensing offer:

What is important for our Prism comparison is that every Acropolis edition already includes the Prism Starter edition. We don’t really need to look at that one then, so we’ll concentrate on Pro when we get there 😀 For Management and Analytics it is kind of easy again: if you need REST APIs you’ll need Pro or Ultimate, otherwise you could do with Starter. But again, it depends on the other features in these licensing deals whether you can choose Starter, Pro or Ultimate.
The last one would be Virtualization, but there is no real difference between the three on this:

That’s all for the Acropolis side of things, and to be honest I didn’t find any hard nuts to crack. The list is very clear, and based on your business and technical requirements you should be able to choose the flavor you need.
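The edition choice described above really boils down to a feature-to-minimum-edition lookup. As a rough sketch of that decision logic (the feature keys are my own informal shorthand summarizing this post, not official Nutanix terms):

```shell
# Minimum Acropolis edition per feature, as summarized in this post.
# Feature keys are informal shorthand, not official Nutanix names.
min_edition() {
  case "$1" in
    client-auth|basic-dr) echo "Starter" ;;
    dedup|compression|erasure-coding|rack-awareness|cloud-connect|cluster-lockdown|rest-api) echo "Pro" ;;
    flash-pinning|metro-availability|multi-site-dr|data-at-rest-encryption) echo "Ultimate" ;;
    *) echo "unknown" ;;
  esac
}

min_edition rest-api       # prints: Pro
min_edition flash-pinning  # prints: Ultimate
```

The highest edition any required feature maps to is the one you need to buy.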
The Prism editions
So it must be on the Prism side of things then. Let’s see how difficult this Prism thing really is.
For the people who paid attention: I already mentioned that all Acropolis editions include the Prism Starter edition, so we can concentrate on Pro.
Just to make clear what is in both of them:

It’s not that hard to make your choice if you ask me, but let’s explain what Prism Pro offers beyond the already included Starter:
- Prism Search is an integrated Google-like search experience that helps you query and perform actions with a single click
- Customizable Operations Dashboards are visually rich dashboards that give an actionable summary of application, virtual machine and infrastructure state at a glance
- Capacity Behavior Analytics is predictive analysis of capacity usage and trends, based on workload behavior, enabling pay-as-you-grow scaling
- Capacity Optimization Advisor provides infrastructure optimization recommendations to improve efficiency and performance
So if you need one of these features you’ll need to buy the Prism Pro license.
Conclusion
It is one thing to bash your competitors, and I know they all do this, including Nutanix themselves, but if you want to say something about your competitors, please make sure you know what you’re talking about. In this case (and in my “humble” opinion) the statement that Nutanix licensing is a hard nut to crack is based on nothing.
The Nutanix licensing is very clear about what each license does and does not include, and it’s up to you to create clear requirements for your environment, on which you can then base your choice of the Acropolis and Prism edition you’ll need.
by Yannick Arens | Apr 5, 2016 | SimpliVity
This is a cross post from my Metis IT blogpost, which you can find here.
Today, April 5, 2016, SimpliVity announced new capabilities of the OmniStack Data Virtualization Platform. The announcement consists of three subjects:
- OmniStack 3.5
- OmniView
- Hyper-V
OmniStack 3.5
This new version is the first major update of this year, and I hope more will follow. The latest major release, version 3.0, came in the early second half of 2015. SimpliVity says this new version delivers new capabilities optimized for large, mission-critical and global enterprise deployments. Besides improvements to the code, this release adds three new main capabilities to the OmniStack Data Virtualization Platform.
Stretched Clusters
The first improvement in the OmniStack software is the ability to create multi-node stretched clusters. In the current versions it is only possible to create a stretched cluster with a total of two nodes divided over two sites. This limit has now been increased and is supported by default. With a stretched cluster it is possible to achieve an RPO of zero and an RTO of seconds.

Intelligent Workload Optimizer
The second new capability is the Intelligent Workload Optimizer. SimpliVity uses a multi-dimensional approach to balance workloads over the platform, based on CPU, memory, I/O performance and data location. This results in fewer data migrations and better virtual machine performance.

REST API
And the last new capability in the OmniStack software is the REST API. In version 3.5 it is possible to use the REST API to manage the SimpliVity Data Virtualization Platform. It was already possible to integrate with VMware vRealize Automation, but now it will be a lot easier to integrate with third-party management portals and applications.

OmniView
The OmniView Predictive Insight tool is the second part of the announcement. OmniView is a web-based tool that gives a custom visualization of an entire SimpliVity deployment. It provides predictive analytics and trends within a SimpliVity environment and helps to plan future growth. The tool can also help to investigate and troubleshoot issues within the environment. OmniView will be available to Mission-Critical-level support customers and approved partners.

Hyper-V
The last part of the announcement is support for Hyper-V. The OmniStack Data Virtualization Platform will be extended to this platform to give customers more choice. SimpliVity will support mixed and dedicated Hyper-V environments with the release of Windows Server 2016. Planning and timing of the availability are aligned with the release of Microsoft Windows Server 2016.
Conclusion
The announcement is a great step in the right direction, and I think it comes just in time. For me the most important part is version 3.5, and more specifically the support for stretched clusters. In more and more large European organizations stretched cluster support is a requirement nowadays, and SimpliVity will now be able to support this. The REST API will also help to integrate SimpliVity into a customer’s existing ecosystem.
The OmniView Predictive Insight tool will give customers insight into their SimpliVity environment and provide predictive analytics and forecasts. In the current 3.0 version it was only possible to get some statistics about the storage, but now there is a self-learning system which customers can use to improve their environment.
The Hyper-V support announcement is also a long-awaited one. Now we only have to wait till Microsoft releases Windows Server 2016 to use this feature.
by Yannick Arens | Mar 30, 2016 | NLVMUG, VMware
This is a cross post from my Metis IT blogpost, which you can find here.
This year, the annual NLVMUG UserCon was on March 17, 2016 in the city of Den Bosch. Last year was my first time at the NLVMUG, and this year I was one of the speakers. Together with my colleague Ronald van Vugt, I presented “De kracht van de blueprint”, translated to English as “The power of the blueprint”. Our presentation was scheduled at 11:30, right after the first coffee break.
The day started with a keynote presentation by Kit Colbert from VMware about Cloud-Native Apps. His presentation began with an example of John Deere, the tractor company, which formerly sold only tractors but now also collects and analyzes data from all their equipment. With this data analytics they can advise farmers on how to optimize their equipment and land. Companies like John Deere need a completely different kind of application, architecture, and way of developing and maintaining applications. In his presentation he showed how VMware can support these new apps and how the VMware platform can support this. For these new apps VMware has developed the vSphere Integrated Containers architecture and the VMware Photon platform.
After the keynote it was time for us to do some last preparations for the presentation. We checked the VPN connection for the live demo, all demo steps and the presentation script. In the coffee break just before our presentation we had enough time to set up our equipment and test the microphone. Then it was time for the presentation!
The main subject of our presentation was vRealize Automation and the way you can automate your application environment. In the first part of the presentation we introduced the product and its functionality. After the background information, it was time to start our live demo, in which we showed how you can automate the deployment of a two-tier WordPress application with vRA and VMware NSX. Live on stage we composed the application environment, with all network services, relations and policies. After the demo there was some time for questions. If you are interested in our presentation and demo, you can download the presentation, including screenshots of the demo steps, here.
In the afternoon there was a second keynote by Jay Marshall from Google about the Google Cloud Platform. He showed how Google has grown from search engine to a big player in the cloud market. He also showed the partnership between VMware and Google to create a hybrid cloud. After this keynote I attended some other presentations about vSAN, vRealize Automation and vRealize Orchestrator. After the last presentation it was time for the reception and the prize drawing of the sponsors. After the prize drawing the day was over.
I look back at a great event and an awesome new presentation experience. It was fun to be on stage to share our knowledge at the biggest VMUG in the world. I want to thank the NLVMUG organization for all their hard work, and I hope to meet you next year.
Attachment: NLVMUG 2016 handouts PDF
by Yannick Arens | Mar 29, 2016 | Cisco
This is a cross post from my Metis IT blogpost, which you can find here.
After teasing the market with a photo containing three servers, the word Hyper and some blank puzzle pieces, Cisco announced their own Hyper-converged Solution: Cisco HyperFlex. This solution is an extension of Cisco’s Unified Computing System (UCS). Until now the UCS platform portfolio did not contain a native Cisco storage solution. Finally Cisco entered the highly competitive Hyper-converged Infrastructure (HCI) market with HyperFlex.
The Cisco HyperFlex solution combines compute, storage and network in one appliance. Cisco says the solution is unique in three ways: flexible scaling, continuous data optimization and an integrated network. All other HCI vendors do hyper-converged with compute, storage and networking, but none of them have a completely integrated network solution. As you would expect from a former networking-only company, Cisco integrated the network as well.
The platform is built on existing UCS components and a new storage component. The servers used in the solution are based on the existing Cisco UCS product line. Networking is based on the Cisco UCS Fabric interconnects. The new storage component in Cisco’s platform is called the Cisco HyperFlex HX Data Platform, which is based on Springpath technology.
Springpath HALO and Cisco HyperFlex HX Data Platform
Springpath was founded in 2012, and Cisco co-invested in the start-up. Springpath has developed its own data platform using the HALO (Hardware Agnostic Log-structured Object) architecture. The HALO architecture offers a flexible platform with data distribution, caching, persistence and optimization. Cisco has re-branded this as the Cisco HyperFlex HX Data Platform.
All data on the Cisco HX Data Platform is distributed over the cluster. Data optimization takes place using inline de-duplication and compression. Cisco indicates most customers should achieve a 20-30% capacity reduction with de-duplication and another 30-50% with compression, without any performance impact.
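To put those percentages in perspective, here is a rough back-of-the-envelope calculation using midpoint values (25% de-duplication and 40% compression savings); the actual ratios depend entirely on the workload:

```shell
# Compounded savings: the second reduction applies to the data that
# remains after the first, so the percentages do not simply add up.
raw_gb=1000
after_dedup=$(( raw_gb * 75 / 100 ))          # 25% dedup savings -> 750 GB
after_compress=$(( after_dedup * 60 / 100 ))  # 40% compression savings -> 450 GB
echo "${raw_gb} GB raw -> ${after_compress} GB stored"
```

So with midpoint figures, roughly 1 TB of raw data would end up as about 450 GB on disk.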

VMware and Cisco HyperFlex
Initially, the HyperFlex solution will only be available with the VMware hypervisor, using NFS as the storage protocol. A Data Platform Controller is used to communicate with the physical hardware on the HyperFlex platform. This Data Platform Controller requires a dedicated number of processor cores and a dedicated amount of memory. The controller integrates the HX Data Platform using two preinstalled VMware ESXi vSphere Installation Bundles (VIBs): IO Visor and VAAI. IO Visor provides an NFS mount point and VAAI offloads file system operations.

Management
The HyperFlex storage is managed with a vCenter plug-in. There are currently no details available about the layout and functionality of this plug-in. We expect the plug-in will be the same as Springpath’s, with Cisco branding.
The physical server and network is managed like any other Cisco UCS server. Each server will be connected to the Fabric Interconnect and managed from the UCS manager interface.
Cisco HyperFlex range
The HyperFlex platform is available in three different models: a 1U and a 2U rack-based server, and a combination of rack servers with blade servers. The first model is for a small footprint, the 2U model is for maximum capacity, and the last option is for maximum capacity and high compute.
All configurations must be ordered with a minimum of four servers. As far as we know at this stage, the maximum number of servers in a HyperFlex cluster is eight. Each server will be delivered with VMware pre-installed.
The hardware configuration of the HyperFlex nodes is not fixed. You can choose your type of processor, amount of memory and the amount of disks. On the Cisco Build & Price website all available configuration options can be found. You can always scale your cluster by adding storage and/or compute nodes.

Licensing
Cisco has an interesting licensing model for the HyperFlex HX Data Platform: it is licensed on a per-year basis. In the configuration tool, a server is configured by default with a license for one year. This licensing model deviates from other HCI vendors, who base their license model on raw or used TBs, or use a perpetual license.
Conclusion
Cisco is a new and interesting player in the rapidly growing Hyper-converged market. The technology used provides some nice features, capabilities and an interesting licensing model. Time will tell if the product will be successful and what the roadmap will bring for the future. But at first sight it looks like a good alternative for the leading Hyper-converged solutions.
by Yannick Arens | Mar 29, 2016 | VMware, vRealize, vRealize Automation
This is a cross post from my Metis IT blogpost, which you can find here.
Last week VMware released a new version of vRealize Automation (vRA), version 7.0.1. In this version most of the 7.0.0 bugs and issues are resolved; in the release notes you can find the list of all resolved issues. In this blog I will guide you through the upgrade process.
It is possible to upgrade to this new version from any supported vRealize Automation 6.2.x version and the latest 7.0 version. In this blog I will focus on an upgrade from version 7.0.0 to 7.0.1. If you still use an earlier version of vRA, you first have to upgrade to version 6.2.x. The environment we will upgrade is a minimal deployment based on version 7.0.0.
The following steps are required for a successful upgrade of vRealize Automation.
- Backup your current installation
- Shut down vRealize Automation Windows services on your IAAS server
- Configure hardware resources
- Download and install upgrades to the vRA appliance
- Download and install updates for IAAS
- Post Upgrade Tasks
Backup your current installation
Before you start the upgrade it is important to backup some components of the existing installation. If something goes wrong you can always go back to the current version.
Configuration file backup
First start with a backup of the vRA configuration files. These can be backed up with the following steps:
- Login with ssh on the vRA appliance
- Make a copy of the following directories:
- /etc/vcac/
- /etc/vco/
- /etc/apache2/
- /etc/rabbitmq/
First create a backup directory:
mkdir /etc/backupconf
Now copy all directories to this folder:
cp -R /etc/vcac/ /etc/backupconf/
Perform these steps for each folder.
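The copy steps above can be sketched as a single loop. Run it as root on the appliance; the guard skips directories that don’t exist, and the `BACKUP_DIR` variable defaults to a temporary path here so the sketch is safe to try anywhere (the post itself uses /etc/backupconf):

```shell
# Back up the vRA configuration directories listed above in one go.
BACKUP_DIR="${BACKUP_DIR:-/tmp/backupconf}"   # the post uses /etc/backupconf
mkdir -p "$BACKUP_DIR"
for d in /etc/vcac /etc/vco /etc/apache2 /etc/rabbitmq; do
  # copy only the directories that exist on this machine
  if [ -d "$d" ]; then
    cp -R "$d" "$BACKUP_DIR/"
  fi
done
echo "configuration backed up to $BACKUP_DIR"
```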
Database backup
Make a SQL backup of the vRA IAAS database. For the integrated Postgres database it is enough to snapshot the complete vRA appliance.
- Login to the database server
- Open the MSSQL Management Console and login
- Right-click the vRA database in the left pane, choose Tasks and then Back Up…
- Choose the location for the backup and click on OK.
- Wait for the completion of the backup.
Screenshots of the tenant configuration and users
If something goes wrong with the upgrade, this configuration information might be changed. To be safe, it is recommended to capture it.
- Login as administrator to the vRA interface
- Make a Screenshot of your tenants

- And the Local Users of the tenant

- And the Administrators

Backup any files you have customized
The vRA upgrade may delete or modify customized files. If you want to keep these files, please back them up. In our environment we don’t use any customized files.
Create snapshot of the IAAS server
Taking a snapshot of the IAAS server is the last step of the backup process.
- Shut down the IAAS server and the vRA appliance in the correct order.
- Login to vCenter
- First select the IAAS VM and select Shut Down Guest. When the shutdown is complete, select the vRA appliance and choose Shut Down Guest again.
- Right-click on the IAAS VM and select Snapshots and Take Snapshot. Fill in the name of the snapshot and click on OK.
- Power On the IAAS VM
Disable the IAAS services
- Log in on the IAAS server, open services.msc and stop the following services:
- All VMware vCloud Automation agents
- All VMware DEM workers
- All DEM orchestrators
- VMware vCloud Automation Center Service

Configure hardware resources of the vRA appliance
For the upgrade it is necessary to extend the existing disks of the vRA appliance. But before doing this, create a copy of the existing vRA appliance.
- Right-click on the vRA appliance, select Clone and Clone to Virtual Machine
- Give the VM a unique name and select the resources for the new VM and click on Finish.
- Wait for completion.
- Right-click on the original VM and select Edit Settings.
- Extend the first disk (1) to 50GB and click OK.

- Create a snapshot of the VM. Select the VM, click on Snapshots and click Take Snapshot.
- Wait for the snapshot.
- Power on the vRA VM.
- Wait for the machine to start
- SSH to the vRA VM and log in as root
- Execute the following commands to stop all vRA services:
service vcac-server stop
service vco-server stop
service vpostgres stop
- Extend the Linux file system with the following commands:
Unmount swap table:
swapoff -a
Delete the existing partitions and create a 44GB root and a 6GB swap partition. This command and the next one return an error about the kernel still using the old partition table at this point; after the reboot at step 13 all changes will be active:
(echo d; echo 2; echo d; echo 1; echo n; echo p; echo ; echo ; echo '+44G'; echo n; echo p; echo ; echo ; echo ; echo w; echo p; echo q) | fdisk /dev/sda
Change the swap partition type:
(echo t; echo 2; echo 82; echo w; echo p; echo q) | fdisk /dev/sda
Set disk 1 bootable:
(echo a; echo 1; echo w; echo p; echo q) | fdisk /dev/sda
Register partition changes and format the new swap partition:
partprobe
mkswap /dev/sda2
Mount the swap partition:
swapon -a
- Reboot the vRA appliance
- When the appliance has started again, log in with SSH and resize the file system:
resize2fs /dev/sda1
- Check the resize with the command df -h
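As an aside, the `(echo …; echo …) | fdisk` construction used above simply feeds fdisk’s interactive prompts from a subshell, one `echo` per prompt. The same scripted-answers pattern can be tried safely with any interactive reader; here `read` stands in for fdisk as a harmless illustration:

```shell
# Each echo answers one interactive prompt; 'read' consumes them one
# by one, just as fdisk consumes its command letters.
(echo n; echo p; echo '+44G'; echo w) | while read -r answer; do
  echo "prompt answered with: $answer"
done
```

This is why an empty `echo ;` in the fdisk pipeline accepts a prompt’s default value: it sends a bare newline.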
Install the vRA update
- Log in on the management interface: https://vRAhostname:5480
- Click on the Services tab and check the services. All services should be registered except the iaas-service.
If everything is checked, click on the Update tab. If not all services are running and you are using a proxy server, check this VMware article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144067
- Click on Check Updates. The new update will be displayed.

- Click now on Install update and Click OK.
- To follow the installation you can check the following log files: /opt/vmware/var/log/vami/updatecli.log
/opt/vmware/var/log/vami/vami.log
/var/log/vmware/horizon/horizon.log
The most useful information can be found in the vami.log and updatecli.log. In these log files you can see the download progress and information about the upgrade status.
Use tail -f /opt/vmware/var/log/vami/* to show all of these log files
- Wait until the update is finished.

- When the upgrade is finished, reboot the appliance: click on the System tab and click on Reboot.
Upgrading the IAAS server components
The next step in this process is to upgrade the IAAS components. The IAAS installer will also upgrade the MSSQL database; in earlier upgrade processes the database had to be upgraded separately. To start the IAAS upgrade, follow these steps:
- Open your favorite web browser and go to: https://vRAhostname:5480/installer
- Click the IAAS installer and save the prompted file. (Do not change the filename!)
- Open the installer and follow the wizard.
- Accept the license agreement and click on Next.
- Provide the Appliance Login Information. Click on Next.

- Choose Upgrade. Click on Next.
- Provide the correct Service Account for the component services and the authentication information of the SQL server. Click on Next.
- Accept the certificate of the SSO Default Tenant and provide the SSO Administrator Credentials. Click on Next.
- Click now on Upgrade to start the upgrade.

- Click on Next and finish to complete the IAAS upgrade.
Post upgrade tasks
After the IAAS upgrade, first check the correct operation of the vRA appliance. Click on the Infrastructure tab and click on Endpoints. Verify the endpoint overview is correct. Next, try to request a blueprint and check that everything finishes successfully.
If everything is correct, the last step is to upgrade the vRA agents in the OS templates. The new agents also contain some bug fixes. In our environment we use CentOS and Windows operating systems. We will first upgrade the CentOS agent, followed by the Windows agent.
CentOS agent
- Convert the CentOS template to a VM and boot the VM.
- Download the prepare_vra_template.sh script from the following location: https://vRAhostname.local:5480/service/software/download/prepare_vra_template.sh
- Allow execution of the script with:
chmod +x prepare_vra_template.sh
- Execute the script: ./prepare_vra_template.sh.
- Follow the wizard and provide the correct information. I chose vSphere, no certificate check, and installing Java.
- Wait for completion and shutdown the VM.
- Convert the VM back to a template.
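The CentOS steps above can be sketched as a short shell session (a sketch only; the appliance hostname is the placeholder from this environment, and `-k` matches the "no certificate check" choice in the wizard):

```shell
# Download the agent preparation script from the vRA appliance
# (hostname is a placeholder; -k skips certificate validation)
curl -k -O https://vRAhostname.local:5480/service/software/download/prepare_vra_template.sh

# Allow execution of the script and run it; it asks interactively
# for endpoint type (vSphere), certificate handling and Java install
chmod +x prepare_vra_template.sh
./prepare_vra_template.sh

# When the script completes, shut the VM down before converting
# it back to a template
shutdown -h now
```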
Windows Agent
For the upgrade of the Windows agent we will use the script made by Gary Coburn. He developed a script that installs all the needed components and the vRA agent on Windows. Thanks to my colleague Ronald van Vugt for modifying this script to account for the newer Java version: the original script is based on vRA version 7.0.0, which included jre-1.8.0_66, while the Java version included in 7.0.1 is newer, so a modification to the script is required.
- Download the original script from here or here, open the script, and search for the following line:
$url="https://" + $vRAurl + ":5480/service/software/download/jre-1.8.0_66-win64.zip"
- This line must be edited to:
$url="https://" + $vRAurl + ":5480/service/software/download/jre-1.8.0_72-win64.zip"
- Once the script is edited, run it with the following parameters:
./prepare_vra_template.ps1 vra-hostname iaas-hostname PasswordofDarwinUser
- The script will sometimes ask for confirmation.
- Wait till the installation is complete.
- Shutdown the VM and convert it again to a template.
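If you prefer not to edit the script by hand, the one-line JRE change described above can also be applied with a quick sed substitution (a sketch; the filename is assumed to match the script name used earlier):

```shell
# Patch Gary Coburn's script in place: vRA 7.0.0 bundled
# jre-1.8.0_66, while vRA 7.0.1 ships jre-1.8.0_72
sed -i 's/jre-1.8.0_66-win64.zip/jre-1.8.0_72-win64.zip/' prepare_vra_template.ps1

# Verify the substitution took effect
grep 'jre-1.8.0_72-win64.zip' prepare_vra_template.ps1
```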
Verify the installation
Now request some of your blueprints to verify the correct operation of the vRA appliance, IAAS server and the guest agents. If everything is OK, then it is time to delete the snapshots of the vRA appliance and IAAS server.
- Select the VM, choose Snapshots and then Manage Snapshots.
- Delete the snapshot you made before the installation.
- Do this for both VMs.
Conclusion
Before executing this upgrade in a production environment, it is recommended to plan the upgrade and verify that all dependencies will still work afterwards. Also plan enough time for the upgrade, so you have time to check and verify the installation.
by Arjan | Feb 25, 2016 | Storage Field Day 7, Storage Field Day 9, Storage Field Days, VMware, VSAN
This is a cross post from my Metis IT blogpost, which you can find here.
VMware VSAN 6.2
On February 10 VMware announced Virtual SAN version 6.2. A lot of Metis IT customers are asking about the Software Defined Data Center (SDDC) and how products like VSAN fit into this new paradigm. Let’s investigate what VMware VSAN is and what the value would be in using it, as well as what the new features are in version 6.2.
VSAN and Software Defined Storage
In the data storage world, we all know that the growth of data is explosive (to say the least). In the last decade the biggest challenge for most companies was that people just kept making copies of their data and the data of their co-workers. Today we not only have this problem, but storage also has to provide the performance needed for data-analytics and more.
First the key components of Software Defined Storage:
- Abstraction: Abstracting the hardware from the software provides greater flexibility and scalability
- Aggregation: In the end it shouldn’t matter what storage solution you use, but it should be managed through only one interface
- Provisioning: the possibility to provision storage in the most effective and efficient way
- Orchestration: Make use of all of the storage platforms in your environment by orchestration (vVOLS, VSAN)

VSAN and Hyper-Converged Infrastructure
So what about Hyper-Converged Infrastructure (HCI)? Hyper-Converged systems allow the integrated resources (Compute, Network and Storage) to be managed as one entity through a common interface. With Hyper-converged systems the infrastructure can be expanded by adding nodes.
VSAN is Hyper-converged in a pure form. You don’t have to buy a complete stack, and you’re not bound to certain hardware configurations from certain vendors. Of course, there is the need for a VSAN HCL to make sure you reach the full potential of VSAN.
VMware VSAN 6.2. new features
With the 6.2 version of VSAN, VMware introduced a couple of really nice and awesome features, some of which are only available on the All-Flash VSAN clusters:
- Data Efficiency (Deduplication and Compression / All-Flash only)
- RAID-5/RAID-6 – Erasure Coding (All-Flash only)
- Quality of Service (QoS Hybrid and All-Flash)
- Software Checksum (Hybrid and All-Flash)
- IPV6 (Hybrid and All-Flash)
- Performance Monitoring Service (Hybrid and All-Flash)
Data Efficiency
Dedupe and compression happen during de-staging from the caching tier to the capacity tier. You enable "space efficiency" at the cluster level and deduplication happens on a per disk group basis; larger disk groups will result in a higher deduplication ratio. After the blocks are deduplicated, they are compressed. Compression alone is a significant saving, but combined with deduplication the results achieved can be up to 7x space reduction, of course fully dependent on the workload and type of VMs.
Erasure Coding
New is RAID 5 and RAID 6 support over the network, also known as erasure coding. In this case, RAID-5 requires 4 hosts at a minimum as it uses a 3+1 logic. With 4 hosts, 1 can fail without data loss. This results in a significant reduction of required disk capacity compared to RAID 1. Normally a 20GB disk would require 40GB of disk capacity with FTT=1, but in the case of RAID-5 over the network, the requirement is only ~27GB. RAID 6 is an option if FTT=2 is desired.
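The capacity math behind the 20 GB example above can be checked quickly (a sketch; FTT=1, RAID-5 using the 3+1 layout described in the text):

```shell
# RAID-1 mirroring with FTT=1 stores two full copies
echo "RAID-1: $((20 * 2)) GB"   # 40 GB

# RAID-5 erasure coding (3+1): 3 data + 1 parity stripe,
# so the overhead factor is 4/3 and 20 GB consumes ~26.7 GB
awk 'BEGIN { printf "RAID-5: %.1f GB\n", 20 * 4 / 3 }'
```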
Quality of Service
This enables per VMDK IOPS Limits. They can be deployed by Storage Policy-Based Management (SPBM), tying them to existing policy frameworks. Service providers can use this to create differentiated service offerings using the same cluster/pool of storage. Customers wanting to mix diverse workloads will be interested in being able to keep workloads from impacting each other.
Software Checksum
Software Checksum enables customers to detect corruptions that could be caused by faulty hardware or software components, including memory, drives, etc., during read or write operations. In the case of drives, there are two basic kinds of corruption. The first is "latent sector errors", which are typically the result of a physical disk drive malfunction. The other type is silent data corruption, which can happen without warning. Undetected or completely silent errors could lead to lost or inaccurate data and significant downtime, and there is no effective means of detecting these errors without end-to-end integrity checking.
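The idea behind end-to-end integrity checking can be illustrated with a plain checksum tool (this is only a conceptual sketch using sha256sum, not how VSAN implements its checksums internally):

```shell
# Write a block of data and record its checksum at write time
printf 'important data' > block.dat
sha256sum block.dat > block.sha256

# A later read verifies the stored checksum; an unchanged block passes
sha256sum -c block.sha256

# Simulate silent corruption by flipping one byte on "disk"...
printf 'importent data' > block.dat

# ...and verification now fails, so the corruption is no longer silent
sha256sum -c block.sha256 || echo "corruption detected"
```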
IPV6
Virtual SAN can now support IPv4-only, IPv6-only, and also IPv4/IPv6-both enabled. This addresses requirements for customers moving to IPv6 and, additionally, supports mixed mode for migrations.
Performance Monitoring Service
Performance Monitoring Service allows customers to monitor existing workloads from vCenter. Customers needing access to tactical performance information no longer need to go to vRO. The performance monitor includes macro level views (cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per disk group stats) without needing to leave vCenter. It also allows aggregation of stats across the cluster into a "quick view" to see what load and latency look like, as well as sharing that information externally with 3rd party monitoring solutions via API. The Performance Monitoring Service runs on a distributed database that is stored directly on Virtual SAN.
Conclusion
VMware is making clear that the old way of doing storage is obsolete. A company needs the agility, efficiency and scalability provided by the best of all worlds. VSAN is one of these solutions, and although it has a short history, it has grown up pretty fast. For more information make sure to read the following blogs, and if you’re looking for an SDDC/SDS/HCI consultant to help you solve your challenges, make sure to look at Metis IT.
Blogs on VMware VSAN:
http://www.vmware.com/products/virtual-san/
http://www.yellow-bricks.com/virtual-san/
http://www.punchingclouds.com/
http://cormachogan.com/vsan/
VMware to present on VSAN at Storage Field Day 9
I’m really excited to see the VMware VSAN team during Storage Field Day 9, where they will probably dive deep into the new features of VSAN 6.2. It will be an open discussion, and I’m certain that the delegates will have some awesome questions. I would also advise you to watch our earlier visit to the VMware VSAN team in Palo Alto about a year ago, at Storage Field Day 7 (Link).
by Arjan | Dec 7, 2015 | TechUnplugged
The last couple of months I’ve been kinda quiet, right… It’s not that I didn’t have anything to tell, but I’ve been busy with a couple of awesome projects:
- TechUnplugged (Austin 2016)
- NLVMUG (UserCon 2016)
- Projects: VUMC, UMCU and Abu Dhabi
Let’s dive (a little) deeper into all of them, starting with the TechUnplugged conference in Austin. I’ll write about the other two later this week.
TechUnplugged
TechUnplugged is a new kind of conference, initiated by my Italian friend Enrico Signoretti. The conference is based on interaction during the entire event, giving the audience the opportunity to really gain the knowledge they need.
A good example of the awesome content we get during the event is this presentation by Nigel Poulton during the first TechUnplugged in London:
After the first two events in 2015, the first in London and the second in Amsterdam, we decided to cross the ocean and try the same recipe in the US. So on February 2nd we’ll kick off TechUnplugged US with a great line-up of influencers as well as sponsors.
First the Influencer list:
Second the Sponsor list (not yet complete):

@DDN_limitless

@CloudianStorage

@HedvigInc

@Pernixdata

@ZertoCorp
We’ve seen that the conference is growing in Europe, and we hope to achieve the same in the US, so make sure to join us if you’re in, or near, Austin on the second of February 2016.
The agenda of the Austin event will be available here. The conference is free for end users, but seating is limited. Make sure to reserve your seat now, sign up here!
by Arjan | Oct 27, 2015 | Veeam, VeeamOn

So after a long day of travel on Sunday (Amsterdam-Detroit and Detroit-Las Vegas) I arrived in Las Vegas late in the evening and fell into a deep sleep as soon as I hit the rather large bed in the Aria Resort and Casino, where #VeeamOn2015 is held. I really like the venue in the sense that it is an awesome resort where everything you need is in the same building(s), and if you need something it’s just a short 5 minute (or a little longer) walk. A big surprise was waiting for me when I entered the room and found a great gift (see picture). I actually had a discussion on the plane with a Dutch guy (Ikea film crew) about these awesome headphones, so a big thanks Veeam!
Monday at VeeamOn2015
So on Monday, after a good night’s sleep, I went to the conference location to pick up my pass as well as a backpack, and to meet with a couple of guys. As a foreigner with a big jetlag I decided to really take it easy this (Partner)day, and give myself the time to adjust.
I did go to the grand opening of the Expo Lounge, to meet with peers and enjoy some great food and drinks. It is always great to meet with people like Vladan Seget (blog: ESX Virtualization), Andrea Mauro (blog: vInfrastructure) and Joep Piscaer (blog: VirtualLifeStyle). Those three guys are all Veeam Vanguards, and if you don’t know what that is, my suggestion is that you start reading more about this program here.

#VeeamON2015
Evening in Vegas
In the evening I decided to make the best of my time here in Sin City and took a walk along the Strip to see the fountains and take pictures of all the crazy stuff in this town, before heading to my room to do a Skype call with the home front and get a (not so) good night’s sleep.

Day 1 of VeeamOn2015 has already started and I’ll be writing another blogpost on that asap!
by Arjan | Sep 9, 2015 | Storage Field Day 7
All Flash Arrays (AFAs) have been hot for a couple of years now, and for a good reason! During Storage Field Day 1 we had three AFA vendors presenting: Kaminario, NimbusData and PureStorage. Although they have different go-to-market strategies, as well as different technology strategies, all three are still standing (although one of them seems to be struggling…).
At Storage Field Day 7 we had the privilege to get another Kaminario presentation and in this post I would like to take some time to see what Kaminario offers, and what new features they presented the last couple of months.
The K2 All-Flash Array
For my readers who don’t know anything about who Kaminario is and what Kaminario does, here is the first part of their presentation during SFD7 (done by their CEO Dani Golan):
There are a couple of features provided by Kaminario that I find interesting (based on what was included 6 months ago):
– Choice of FC or ISCSI
– VMware integration (VAAI, vvols (not yet))
– Non-disruptive upgrades
– Great GUI
– Inline deduplication and compression
– Scale Up and Out
– K-Raid protection
– Industry standard SSD warranty (7 years now)
But there are/were still a couple of things missing. To put this in perspective, it might be even better to go back a couple of years and see what the Kaminario solution looked like back then. A great post on the Kaminario solution back in 2012 is this one by Hans De Leenheer:
Kaminario – a Solid State startup worth following
As you can see, there is so much innovation done by Kaminario, and in the last 6 months a lot more has been done.
What’s new in Kaminario K2 v5.5?
In the last couple of weeks Kaminario released the 5.5 version of their K2 product. In this release a couple of new (awesome) features were introduced that we’ll investigate a little deeper:
- Use of 3D TLC NAND
- Replication (asynchronous)
- Perpetual Array (Mix and match SSD/Controller)
Let’s start with the use of 3D TLC NAND. In earlier versions of their products Kaminario always used MLC NAND, and a customer could choose between 400 and 800 GB MLC SSDs. Knowing Kaminario can scale up and out, that would mean it could hold around 154 TB of flash (with dedupe and compression this would go up to around 720+ TB according to Kaminario documents). With the new 3D flash technology the drive sizes changed to 480 and 960 GB MLC and a 1.92 TB TLC SSD, which doubles the capacity:
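As a rough sanity check on those figures (a sketch based only on the numbers quoted above):

```shell
# 154 TB raw versus the 720+ TB effective capacity Kaminario quotes
# implies roughly a 4.7x data-reduction assumption
awk 'BEGIN { printf "implied reduction: %.1fx\n", 720 / 154 }'

# Doubling the drive sizes (400/800 GB becomes 480/960 GB MLC plus
# a 1.92 TB TLC option) roughly doubles the maximum raw capacity
awk 'BEGIN { printf "new raw max: ~%d TB\n", 154 * 2 }'
```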

The next new feature is replication. Although the documentation on replication found on the Kaminario site goes back to 2014, it is still mentioned in the what’s new in v5.5 documents. Something that is new with replication is the fact that Kaminario now integrates with VMware SRM to meet customer needs. This is great news for customers already using SRM or thinking about using it. The way Kaminario does replication is based on their snapshots (application consistent).

Last but not least is Perpetual Array, which gives a customer the possibility to mix and match SSDs as well as controllers. This feature gives the customer the freedom to start building their storage system and continue growing even if Kaminario changes controller hardware or SSD technology.
Final thoughts
Looking at what has changed at Kaminario in the last couple of months (and the last couple of years, for that matter), I’m certain we’ll see a lot of great innovation from Kaminario in their upcoming releases. 3D NAND will get Kaminario to a much bigger scale (ever heard of Samsung showing a 16 TB 3D TLC SSD?), and with their scale-up and scale-out technology Kaminario has the right solution for each and every business. What I think would be a great idea for Kaminario is more visibility outside the US: when my customers start talking about AFAs I notice they almost never talk about Kaminario, mainly because they just don’t know about them, and there is no local sales team to tell them about the Kaminario offering. That’s just too bad, as I still think Kaminario is a very cool AFA vendor. It was also great to see them as a sponsor at TechUnplugged Amsterdam, which is a start :D.
Disclaimer: I was invited by Tech Field Day to attend SFD7 and they paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or edited by any person other than me.