NLVMUG 2016 impression

This is a cross post from my Metis IT blogpost, which you can find here.

This year, the annual NLVMUG UserCon took place on March 17, 2016 in Den Bosch. Last year was my first time at the NLVMUG, and this year I was one of the speakers. Together with my colleague Ronald van Vugt I presented "De kracht van de blueprint", which translates to "The power of the blueprint". Our presentation was scheduled at 11.30, right after the first coffee break.

The day started with a keynote presentation by Kit Colbert from VMware about Cloud-Native Apps. His presentation began with the example of John Deere, the tractor company, which formerly sold only tractors but now also collects and analyzes data from all of its equipment. With this data analytics they can advise farmers on how to optimize their equipment and land. Companies like John Deere need a completely different kind of application, a different architecture, and a different way of developing and maintaining applications. In his presentation he showed how VMware can support these new apps and how the VMware platform facilitates them. For these new apps VMware has developed the vSphere Integrated Containers architecture and the VMware Photon platform.

After the keynote it was time for us to do some last preparations for the presentation. We checked the VPN connection for the live demo, all demo steps and the presentation script. In the coffee break, just before our presentation we had enough time to setup our equipment and test the microphone. Then it was time for the presentation!
The main subject of our presentation was vRealize Automation and the way you can automate your application environment. In the first part of the presentation we introduced the product and its functionality. After the background information it was time to start our live demo. In the demo we showed how you can automate the deployment of a two-tier WordPress application with vRA and VMware NSX. Live on stage we composed the application environment, with all network services, relations and policies. After the demo there was some time for questions. If you are interested in our presentation and demo, you can download the presentation including screenshots of the demo steps here.

In the afternoon there was a second keynote by Jay Marshall from Google about the Google Cloud Platform. He showed how Google has grown from a search engine into a big player in the cloud market. He also showed the partnership between VMware and Google to create a hybrid cloud. After this keynote I attended some other presentations about vSAN, vRealize Automation and vRealize Orchestrator. After the last presentation it was time for the reception and the prize drawing of the sponsors. After the prize drawing the day was over.

I look back at a great event and an awesome new presentation experience. It was fun to be on stage and share our knowledge at the biggest VMUG in the world. I want to thank the NLVMUG organization for all their hard work and I hope to meet you again next year.

Attachment: NLVMUG 2016 handouts PDF


Upgrading vRealize Automation 7 to 7.0.1

This is a cross post from my Metis IT blogpost, which you can find here.

Last week VMware released a new version of vRealize Automation (vRA), version 7.0.1. In this version most of the version 7.0.0 bugs and issues are resolved. In the release notes you can find the list of all resolved issues. In this blog I will guide you through the upgrade process.

It is possible to upgrade to this new version from any supported vRealize Automation 6.2.x version and from the latest 7.0 version. In this blog I will focus on an upgrade from version 7.0.0 to version 7.0.1. If you still use an earlier version of vRA, you first have to upgrade to version 6.2.x. The environment we will upgrade is a minimum deployment based on version 7.0.0.

The following steps are required for a successful upgrade of vRealize Automation.

  1. Backup your current installation
  2. Shut down vRealize Automation Windows services on your IAAS server
  3. Configure hardware resources
  4. Download and install upgrades to the vRA appliance
  5. Download and install updates for IAAS
  6. Post Upgrade Tasks

Backup your current installation

Before you start the upgrade, it is important to back up some components of the existing installation. If something goes wrong you can always go back to the current version.

Configuration file backup

First start with a backup of the vRA configuration files. These files can be backed up with the following steps:

  1. Log in with SSH on the vRA appliance
  2. Make a copy of the following directories:
    • /etc/vcac/
    • /etc/vco/
    • /etc/apache2/
    • /etc/rabbitmq/

First create a backup directory.

mkdir /etc/backupconf

Now copy all directories to this folder:

cp -R /etc/vcac/ /etc/backupconf/

Perform these steps for each folder.
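If you prefer a single command, here is a minimal sketch that copies all four directories in one go (assuming the /etc/backupconf directory created above already exists):

for d in vcac vco apache2 rabbitmq; do cp -R /etc/$d /etc/backupconf/; done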

Database backup

Make a SQL backup of the vRA IAAS database. For the embedded Postgres database it is enough to snapshot the complete vRA appliance.

  1. Log in to the database server
  2. Open the MSSQL Management Console and log in
  3. Right-click the vRA database, choose Tasks and then Back Up…
  4. Choose the location for the backup and click OK.
  5. Wait for the backup to complete.

 

Screenshots of the tenant configuration and users

If something goes wrong with the upgrade, it is possible that this configuration information gets changed. To be safe, it is recommended to capture this information.

  1. Log in as administrator to the vRA interface
  2. Take a screenshot of your tenants
  3. And of the Local Users of the tenant
  4. And of the Administrators

Backup any files you have customized

The vRA upgrade may delete or modify customized files. If you want to keep these files, back them up first. In our environment we don’t use any customized files.

Create snapshot of the IAAS server

Taking a snapshot of the IAAS server is the last step of the backup preparation.

  1. Shut down the IAAS server and the vRA appliance in the correct order.
    1. Log in to vCenter
    2. First select the IAAS VM and select Shut Down Guest. When the shutdown is complete, select the vRA appliance and again choose Shut Down Guest.
  2. Right-click the IAAS VM, select Snapshots and Take Snapshot. Fill in a name for the snapshot and click OK.
  3. Power on the IAAS VM

Disable the IAAS services

  1. Log in on the IAAS server, open the Services console (services.msc) and stop the following services:
    1. All VMware vCloud Automation agents
    2. All VMware DEM workers
    3. All DEM orchestrators
    4. VMware vCloud Automation Center Service


Configure hardware resources of the vRA appliance

For the upgrade it is necessary to extend the existing disks of the vRA appliance. But before we do this, create a copy of the existing vRA appliance.

  1. Right-click the vRA appliance, select Clone and Clone to Virtual Machine
  2. Give the VM a unique name, select the resources for the new VM and click Finish.
  3. Wait for completion.
  4. Right-click the original VM and select Edit Settings.
  5. Extend the first disk (Hard disk 1) to 50GB and click OK.
  6. Create a snapshot of the VM. Select the VM, click Snapshots and click Take Snapshot.
  7. Wait for the snapshot.
  8. Power on the vRA VM.
  9. Wait for the machine to start.
  10. SSH to the vRA VM and log in as root.
  11. Execute the following commands to stop all vRA services:

service vcac-server stop

service vco-server stop

service vpostgres stop
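Before repartitioning, you can quickly verify that the services really stopped (just a sanity check, using the same init scripts):

service vcac-server status
service vco-server status
service vpostgres status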

  12. Extend the Linux file system with the following commands:

Disable swap:
swapoff -a

Delete the existing partitions and create a 44GB root and a 6GB swap partition. This command and the next commands return an error because the kernel still uses the old partition table; after the reboot in step 13 all changes will become active:
(echo d; echo 2; echo d; echo 1; echo n; echo p; echo ; echo ; echo '+44G'; echo n; echo p; echo ; echo ; echo ; echo w; echo p; echo q) | fdisk /dev/sda

Change the swap partition type:

(echo t; echo 2; echo 82; echo w; echo p; echo q) | fdisk /dev/sda

Set partition 1 as bootable:

(echo a; echo 1; echo w; echo p; echo q) | fdisk /dev/sda

 Register partition changes and format the new swap partition:

partprobe

mkswap /dev/sda2

Enable the swap partition again:

swapon -a

  13. Reboot the vRA appliance.
  14. When the appliance has started again, log in with SSH and resize the file system:

resize2fs /dev/sda1

  15. Check the resize with the command df -h
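To confirm that the new swap partition is active as well, a quick check (just a sketch):

swapon -s
free -m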

Install the vRA update

  1. Log in to the management interface: https://vRAhostname:5480
  2. Click on the Services tab and check the services. All services should be registered except the iaas-service.

If everything checks out, click on the Update tab. If not all services are running and you are using a proxy server, check this VMware article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144067

  3. Click on Check Updates. The new update will be displayed.
  4. Now click on Install Updates and click OK.
  5. To follow the installation you can check the following log files:

/opt/vmware/var/log/vami/updatecli.log

/opt/vmware/var/log/vami/vami.log

/var/log/vmware/horizon/horizon.log

The most useful information can be found in the vami.log and updatecli.log. In these log files you can see the download progress and information about the upgrade status.

Use tail -f /opt/vmware/var/log/vami/* to show all log files.
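To scan the same logs for problems afterwards, a simple grep will do (just a sketch):

grep -i error /opt/vmware/var/log/vami/updatecli.log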

  6. Wait until the update is finished.
  7. When the upgrade is finished, reboot the appliance. Click on the System tab and click Reboot.

Upgrading the IAAS server components

The next step in this process is to upgrade the IAAS components. The IAAS installer will also upgrade the MSSQL database; in earlier upgrade processes the database had to be upgraded separately. To start the IAAS upgrade, follow these steps:

  1. Open your favorite web browser and go to: https://vRAhostname:5480/installer
  2. Click the IAAS installer link and save the prompted file. (Do not change the filename!)
  3. Open the installer and follow the wizard.
  4. Accept the license agreement and click on Next.
  5. Provide the Appliance Login Information. Click on Next.


  6. Choose Upgrade. Click on Next.
  7. Provide the correct Service Account for the component services and the authentication information of the SQL server. Click on Next.
  8. Accept the certificate of the SSO Default Tenant and provide the SSO Administrator credentials. Click on Next.
  9. Now click on Upgrade to start the upgrade.
  10. Click on Next and Finish to complete the IAAS upgrade.

Post upgrade tasks

After the IAAS upgrade, first check the correct operation of the vRA appliance. Click on the Infrastructure tab and click on Endpoints. Verify that the endpoint overview is correct. Next, try to request a blueprint and check that everything finishes successfully.

If everything is correct, the last step is the upgrade of the vRA guest agents in the OS templates. The new agents also contain some bug fixes. In our environment we use CentOS and Windows operating systems. We will first upgrade the CentOS agent, followed by the Windows agent.

CentOS agent

  1. Convert the CentOS template to a VM and boot the VM.
  2. Download the prepare_vra_template.sh script from the following location: https://vRAhostname.local:5480/service/software/download/prepare_vra_template.sh (a download sketch follows after this list)
  3. Allow execution of the script with:

chmod +x prepare_vra_template.sh

  4. Execute the script: ./prepare_vra_template.sh
  5. Follow the wizard and provide the correct information. I chose vSphere, no certificate check, and to install Java.
  6. Wait for completion and shut down the VM.
  7. Convert the VM back to a template.
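For step 2, you can pull the script straight from the appliance on the CentOS VM itself, for example with curl (a sketch; the -k flag skips certificate validation because the appliance typically uses a self-signed certificate):

curl -k -O https://vRAhostname.local:5480/service/software/download/prepare_vra_template.sh
chmod +x prepare_vra_template.sh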

Windows Agent

For the upgrade of the Windows agent we will use the script made by Gary Coburn. He developed a script that installs all the needed components and the vRA agent on Windows. Thanks to my colleague Ronald van Vugt for modifying this script for the newer Java version: the original script is based on vRA version 7.0.0, which included jre-1.8.0_66, while version 7.0.1 ships a newer Java version, so a modification to the script is required.

  1. Download the original script from here or here. Open the script and search for the following line:
    $url="https://" + $vRAurl + ":5480/service/software/download/jre-1.8.0_66-win64.zip"
  2. This line must be changed to:
    $url="https://" + $vRAurl + ":5480/service/software/download/jre-1.8.0_72-win64.zip"
  3. Once the script is edited, run it with the following parameters:

./prepare_vra_template.ps1 vra-hostname iaas-hostname PasswordOfDarwinUser

  4. The script will sometimes ask for confirmation.
  5. Wait till the installation is complete.
  6. Shut down the VM and convert it back to a template.

Verify the installation

Now request some of your blueprints to verify the correct operation of the vRA appliance, the IAAS server and the guest agents. If everything is OK, it is time to delete the snapshots of the vRA appliance and the IAAS server.

  1. Select the VM, choose Snapshots and Manage Snapshots.
  2. Delete the snapshot you made before the installation.
  3. Do this for both VMs.

Conclusion

Before executing this upgrade in a production environment, it is recommended to plan the upgrade and verify that all dependencies will still work after the upgrade. Also plan enough time for this upgrade, so you have time to check and verify the installation.


VMware VSAN 6.2, what’s new?

This is a cross post from my Metis IT blogpost, which you can find here.

VMware VSAN 6.2

On February 10 VMware announced Virtual SAN version 6.2. A lot of Metis IT customers are asking about the Software Defined Data Center (SDDC) and how products like VSAN fit into this new paradigm. Let’s investigate what VMware VSAN is, what the value of using it would be, and what the new features in version 6.2 are.

VSAN and Software Defined Storage

In the data storage world, we all know that the growth of data is explosive (to say the least). In the last decade the biggest challenge for most companies was that people just kept making copies of their data and the data of their co-workers. Today we not only have this problem, but storage also has to provide the performance needed for data-analytics and more.

First the key components of Software Defined Storage:

  • Abstraction: Abstracting the hardware from the software provides greater flexibility and scalability
  • Aggregation: In the end it shouldn’t matter what storage solution you use, but it should be managed through only one interface
  • Provisioning: The possibility to provision storage in the most effective and efficient way
  • Orchestration: Make use of all the storage platforms in your environment through orchestration (vVols, VSAN)


VSAN and Hyper-Converged Infrastructure

So what about Hyper-Converged Infrastructure (HCI)? Hyper-Converged systems allow the integrated resources (Compute, Network and Storage) to be managed as one entity through a common interface. With Hyper-converged systems the infrastructure can be expanded by adding nodes.

VSAN is Hyper-converged in a pure form. You don’t have to buy a complete stack, and you’re not bound to certain hardware configurations from certain vendors. Of course, there is the need for a VSAN HCL to make sure you reach the full potential of VSAN.

VMware VSAN 6.2 new features

With the 6.2 version of VSAN, VMware introduced a couple of really nice and awesome features, some of which are only available on All-Flash VSAN clusters:

  • Data Efficiency (Deduplication and Compression / All-Flash only)
  • RAID-5/RAID-6 – Erasure Coding (All-Flash only)
  • Quality of Service (QoS Hybrid and All-Flash)
  • Software Checksum (Hybrid and All-Flash)
  • IPV6 (Hybrid and All-Flash)
  • Performance Monitoring Service (Hybrid and All-Flash)

Data Efficiency

Deduplication and compression happen during de-staging from the caching tier to the capacity tier. You enable “space efficiency” at the cluster level, and deduplication happens on a per-disk-group basis. Larger disk groups will result in a higher deduplication ratio. After the blocks are deduplicated, they are compressed. Compression alone is already a significant saving, but combined with deduplication the results can be up to 7x space reduction, of course fully dependent on the workload and type of VMs.

Erasure Coding

New is RAID 5 and RAID 6 support over the network, also known as erasure coding. RAID-5 requires a minimum of 4 hosts as it uses a 3+1 logic. With 4 hosts, 1 can fail without data loss. This results in a significant reduction of required disk capacity compared to RAID 1: normally a 20GB disk would require 40GB of capacity with FTT=1, but in the case of RAID-5 over the network, the requirement is only ~27GB. RAID 6 is an option if FTT=2 is desired.
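A quick sanity check on those numbers: with a 3+1 stripe, every three data blocks get one parity block, so the raw capacity needed is 4/3 times the usable size. For a 20GB disk that is 20 × 4/3 ≈ 26.7GB, versus 20 × 2 = 40GB for a RAID-1 mirror with FTT=1.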

Quality of Service

This enables per-VMDK IOPS limits. They can be deployed through Storage Policy-Based Management (SPBM), tying them to existing policy frameworks. Service providers can use this to create differentiated service offerings using the same cluster/pool of storage. Customers wanting to mix diverse workloads will be interested in being able to keep workloads from impacting each other.

Software Checksum

Software Checksum enables customers to detect corruption caused by faulty hardware or software components, including memory, drives, etc., during read or write operations. In the case of drives, there are two basic kinds of corruption. The first is “latent sector errors”, which are typically the result of a physical disk drive malfunction. The other type is silent corruption, which can happen without warning (typically called silent data corruption). Undetected or completely silent errors could lead to lost or inaccurate data and significant downtime. There is no effective means of detecting these errors without end-to-end integrity checking.

IPV6

Virtual SAN now supports IPv4-only, IPv6-only, and mixed IPv4/IPv6 configurations. This addresses requirements for customers moving to IPv6 and additionally supports mixed mode for migrations.

Performance Monitoring Service

The Performance Monitoring Service allows customers to monitor existing workloads from vCenter. Customers needing access to tactical performance information no longer need to go to vRealize Operations. The performance monitor includes macro-level views (cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per-disk-group stats) without needing to leave vCenter. It allows aggregation of stats across the cluster into a “quick view” to see what load and latency look like, and that information can be shared externally with 3rd party monitoring solutions via an API. The Performance Monitoring Service runs on a distributed database that is stored directly on Virtual SAN.

Conclusion

VMware is making it clear that the old way of doing storage is obsolete. A company needs the agility, efficiency and scalability that is provided by the best of all worlds. VSAN is one of these, and although it has a short history, it has grown up pretty fast. For more information make sure to read the following blogs, and if you’re looking for an SDDC/SDS/HCI consultant to help you solve your challenges, make sure to look at Metis IT.

Blogs on VMware VSAN:
http://www.vmware.com/products/virtual-san/
http://www.yellow-bricks.com/virtual-san/
http://www.punchingclouds.com/
http://cormachogan.com/vsan/

VMware to present on VSAN at Storage Field Day 9

I’m really excited to see the VMware VSAN team during Storage Field Day 9, where they will probably dive deep into the new features of VSAN 6.2. It will be an open discussion, where I’m certain that the delegates will have some awesome questions. I would also advise you to watch our earlier visit to the VMware VSAN team in Palo Alto about a year ago, at Storage Field Day 7 (link).


VMware vR Ops 6.x changes

The last couple of weeks I’ve been busy with a couple of vR Ops designs and implementations in very different environments, and a question I get a lot is what the differences are between vCOPS and vR Ops. First of all I must point at the naming difference, where vR stands for vRealize and Operations Manager has become part of a much larger suite: a suite that gives you the opportunity to leverage, monitor, automate and build hybrid cloud environments.

Back to the question:

The vR Ops architecture consists of a single virtual machine (VM) that works on a scale-out basis, which differs from earlier versions that consisted of a vApp with two VMs and were based on a scale-up architecture. You’ll get a better picture by looking at figure 1 and reading the information below.

[Figure 1: vR Ops scale-out architecture]

As shown in the figure above, the deployment of vR Ops starts with a single VM (which will become the master node) and can easily be scaled out with additional nodes (which can be data nodes or remote collectors). To provide HA, the master node can have a replica node (holding the same data as the master node) which will take over if the master node fails.

The master node, as well as the replica node, holds the Global xDB and is responsible for collecting data from vCenter Server, other vR Ops suite products and 3rd party data sources (metrics, topology and change events), and for storing that raw data in its scalable File System Database (FSDB).

I’ll dive into other differences in more in-depth posts at a later stage, but for now I just wanted to get this information out 😉


Realtek NIC on vSphere 6

During the beta phase of vSphere 6 I was looking for a couple of workarounds for problems during the installation process in my homelab. One of those problems is that (as with vSphere 5.5) certain unsupported hardware, such as the onboard Realtek NICs on the cheaper “homelab” motherboards, does not work out of the box. During this search I came across this workaround (login needed) by Andreas Peetz, explaining a method to install the drivers onto the vSphere host(s) in your environment. Thanks Andreas, it worked well for me!!

here is a workaround for you … I have created a package that includes the original VMware net-r8168, net-r8169, net-sky2 and net-s2io drivers and uses the name (net51-drivers), and published it on my V-Front Online Depot.

If your host is already installed and has a direct Internet connection then you can install it from an ESXi shell by running the following commands:

esxcli software acceptance set --level=CommunitySupported


esxcli network firewall ruleset set -e true -r httpClient


esxcli software vib install -n net51-drivers -d http://vibsdepot.v-front.de


As you can see I had to add --no-sig-check to install the VIB. It might be that this is not needed in your situation, though.
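For reference, the install command with that flag added looks like this:

esxcli software vib install -n net51-drivers -d http://vibsdepot.v-front.de --no-sig-check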

reboot

If you need the VIB file for injecting into an installation ISO or offline installation then you can download it from

http://vibsdepot.v-front.de/depot/vft/net51-drivers-1.0/net51-drivers-1.0.0-1vft.510.0.0.799733.x86_64.vib

(right-click the above link and save as …)

I have not yet tested this myself, and of course this is completely unsupported by VMware! Use at your own risk, but – honestly – I expect it to just work …

Andreas

As Andreas points out, this is totally unsupported by VMware! But hey, it’s your homelab, so probably most of it is unsupported ;-P
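If you want to double-check that the package landed on the host before rebooting, you can list the installed VIBs (a quick check):

esxcli software vib list | grep net51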

The other thing to notice is the awesomeness of the VMware community! Always helping one another and making sure people are able to get hands-on experience with the little resources they have at home.


PSOD vSphere 5.5 U2 HP BL460c Gen8 v2

After an upgrade of our HP BL460c Gen8 blades, we had multiple blades showing a Purple Screen of Death (PSOD).


In our environment we have multiple DCs as well as multiple versions of the HP BL series blades. Multiple administrators across multiple DCs experienced the same problem. The problem only seems to affect the latest HP BL460c Gen8 v2 series (as far as we can tell), but other vendors with Intel Haswell Xeon v2 CPUs might be affected as well (please let me know if you’re having the same problems on other hardware).

We decided to roll the affected servers back to 5.5 U1 and wait for a solution, but Frank Büchsel (@fbuechsel) of fbuechsel.eu provided me with the following workaround: when you turn off the Intel VT-d option in the BIOS, your server will start without trouble.


Thanks Frank!

VMware is working hard to process the log files and to make sure this problem is solved once and for all! I’ll keep you updated on the status. If you have a solution or questions/answers, please send us a tweet or something!

UPDATE!!

After a long and extensive investigation we’re able to say that the problem in our environment has been solved. After checking all firmware, we reached the point where the only difference between a purple screen and a stable running ESXi 5.5 U2 host was the firmware of the HP FlexFabric adapters (Emulex) in the server. When we used the 4.6 (or lower) firmware we had purple screens, but after updating to the latest 4.9 firmware the purple screens were gone and the servers ran stable.


VMware is still examining the log files, but this seems to solve our problem for now! If you experience any problems with this, please let me know…
