NLVMUG 2016 impression

This is a cross post from my Metis IT blogpost, which you can find here.

This year, the annual NLVMUG UserCon was held on March 17, 2016 in the city of Den Bosch. Last year was my first time at the NLVMUG and this year I was one of the speakers. Together with my colleague Ronald van Vugt, I presented “De kracht van de blueprint”, translated to English “The power of the blueprint”. Our presentation was scheduled at 11:30, right after the first coffee break.

The day started with a keynote presentation by Kit Colbert from VMware about Cloud-Native Apps. His presentation began with an example of John Deere, the tractor company, which formerly sold only tractors but now also collects and analyzes data from all their equipment. With these data analytics they can advise farmers on how to optimize their equipment and land. Companies like John Deere need a completely different kind of application and architecture, and a different way of developing and maintaining applications. In his presentation he showed how the VMware platform can support these new apps. For them VMware has developed the vSphere Integrated Containers architecture and the VMware Photon platform.

After the keynote it was time for us to do some last preparations for the presentation. We checked the VPN connection for the live demo, all demo steps and the presentation script. In the coffee break, just before our presentation we had enough time to setup our equipment and test the microphone. Then it was time for the presentation!
The main subject of our presentation was vRealize Automation and the way you can automate your application environment. In the first part of the presentation we introduced the product and its functionalities. After the background information it was time to start with our live demo. In the demo we showed how you can automate the deployment of a two-tier WordPress application with vRA and VMware NSX. Live on stage we composed the application environment, with all network services, relations and policies. After the demo there was some time for questions. If you are interested in our presentation and demo, you can download the presentation including screenshots of the demo steps here.

In the afternoon there was a second keynote, by Jay Marshall from Google, about the Google Cloud Platform. He showed how Google has grown from a search engine into a big player in the cloud market. He also showed the partnership between VMware and Google to create a hybrid cloud. After this keynote I attended some other presentations about vSAN, vRealize Automation and vRealize Orchestrator. After the last presentation it was time for the reception and the prize drawing of the sponsors. After the prize drawing the day was over.

I look back on a great event and an awesome new presentation experience. It was fun to be on stage and share our knowledge at the biggest VMUG in the world. I want to thank the NLVMUG organization for all their hard work and I hope to meet you next year.

Attachment: NLVMUG 2016 handouts PDF

Cisco HyperFlex: A new HCI solution

This is a cross post from my Metis IT blogpost, which you can find here.

After teasing the market with a photo containing three servers, the word Hyper and some blank puzzle pieces, Cisco announced their own Hyper-converged Solution: Cisco HyperFlex. This solution is an extension of Cisco’s Unified Computing System (UCS). Until now the UCS platform portfolio did not contain a native Cisco storage solution. Finally Cisco entered the highly competitive Hyper-converged Infrastructure (HCI) market with HyperFlex.

The Cisco HyperFlex solution combines compute, storage and the network in one appliance. Cisco says the solution is unique in three ways: flexible scaling, continuous data optimization and an integrated network. All other HCI vendors do Hyper-converged with compute, storage and networking, but none of them have a completely integrated network solution. As you would expect from a formerly networking-only company, Cisco also integrated the network.

The platform is built on existing UCS components and a new storage component. The servers used in the solution are based on the existing Cisco UCS product line. Networking is based on the Cisco UCS Fabric interconnects. The new storage component in Cisco’s platform is called the Cisco HyperFlex HX Data Platform, which is based on Springpath technology.

Springpath HALO and Cisco HyperFlex HX Data Platform

Springpath was founded in 2012 and Cisco co-invested in the start-up. Springpath has developed its own data platform using the HALO (Hardware Agnostic Log-structured Object) architecture. The HALO architecture offers a flexible platform with data distribution, caching, persistence and optimization. Cisco has re-branded this to the Cisco HyperFlex HX Data Platform.

All data on the Cisco HX Data platform is distributed over the cluster. Data optimization takes place by using inline de-duplication and compression. Cisco indicates most customers should reach 20-30% capacity reduction with de-duplication and another 30-50% with compression without any performance impact.
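
To put those percentages into perspective with a purely illustrative calculation: 10 TB of raw data reduced by 25% through de-duplication leaves 7.5 TB, and a further 40% compression on top of that brings it down to 4.5 TB, an overall reduction of roughly 55%. The actual ratios will of course depend entirely on the workload.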


VMware and Cisco HyperFlex

At first the HyperFlex solution will only be available with the VMware hypervisor, using NFS as the storage protocol. The HyperFlex platform uses a Data Platform Controller for communication with the physical hardware. This Data Platform Controller requires a dedicated number of processor cores and a dedicated amount of memory. The controller integrates the HX Data Platform with ESXi through two preinstalled vSphere Installation Bundles (VIBs): IO Visor and VAAI. IO Visor provides an NFS mount point and VAAI offloads file system operations.


Management

The HyperFlex storage is managed with a vCenter plug-in. There are currently no details available about the layout and functionality of this plug-in. We expect the plug-in will be the same as Springpath's, with Cisco branding.

The physical servers and the network are managed like any other Cisco UCS server. Each server will be connected to the Fabric Interconnects and managed from the UCS Manager interface.

Cisco HyperFlex range

The HyperFlex platform is available in three different models: a 1U rack server, a 2U rack server, and a combination of rack servers with blade servers. The 1U model is for a small footprint, the 2U model is for maximum capacity, and the combined option is for maximum capacity and high compute.

All configurations must be ordered with a minimum of four servers. As far as we know at this stage the maximum number of servers in a HyperFlex cluster is eight. Each server will be delivered with VMware pre-installed.

The hardware configuration of the HyperFlex nodes is not fixed. You can choose your type of processor, the amount of memory and the number of disks. All available configuration options can be found on the Cisco Build & Price website. You can always scale your cluster by adding storage and/or compute nodes.


Licensing

Cisco has an interesting licensing model for the HyperFlex HX Data Platform. The HX Data Platform is licensed on a per-year basis; in the configuration tool a server is configured with a one-year license by default. This licensing model deviates from other HCI vendors, who base their license model on raw or used TBs, or use a perpetual license.

Conclusion

Cisco is a new and interesting player in the rapidly growing Hyper-converged market. The technology used provides some nice features and capabilities, and an interesting licensing model. Time will tell if the product will be successful and what the roadmap will bring for the future. But at first sight it looks like a good alternative to the leading Hyper-converged solutions.

Upgrading vRealize Automation 7 to 7.0.1

This is a cross post from my Metis IT blogpost, which you can find here.

Last week VMware released a new version of vRealize Automation (vRA), version 7.0.1. In this version most of the version 7.0.0 bugs and issues are resolved. In the release notes you can find the list of all resolved issues. In this blog I will guide you through the upgrade process.

It is possible to upgrade to this new version from any supported vRealize Automation 6.2.x version and from the latest 7.0 version. In this blog I will focus on an upgrade from version 7.0.0 to version 7.0.1. If you still use an earlier version of vRA, you first have to upgrade to version 6.2.x. The environment we will upgrade is a minimal deployment based on version 7.0.0.

The following steps are required for a successful upgrade of vRealize Automation.

  1. Backup your current installation
  2. Shut down vRealize Automation Windows services on your IAAS server
  3. Configure hardware resources
  4. Download and install upgrades to the vRA appliance
  5. Download and install updates for IAAS
  6. Post Upgrade Tasks

Backup your current installation

Before you start the upgrade it is important to back up some components of the existing installation. If something goes wrong you can always go back to the current version.

Configuration file backup

First start with a backup of the vRA configuration files. These files can be backed up with the following steps:

  1. Log in with SSH to the vRA appliance
  2. Make a copy of the following directories:
    • /etc/vcac/
    • /etc/vco/
    • /etc/apache2/
    • /etc/rabbitmq/

First create a backup directory:

mkdir /etc/backupconf

Now copy all directories to this folder:

cp -R /etc/vcac/ /etc/backupconf/

Perform these steps for each folder.
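
If you prefer, the copies can also be made in one go with a small shell loop; this is just a sketch that assumes the /etc/backupconf directory created above:

for dir in vcac vco apache2 rabbitmq; do cp -R /etc/$dir/ /etc/backupconf/; done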

Database backup

Make an SQL backup of the vRA IAAS database (a command-line alternative is sketched after the steps below). For the integrated Postgres database it is enough to snapshot the complete vRA appliance.

  1. Log in to the database server
  2. Open SQL Server Management Studio and log in
  3. Right-click the vRA database, choose Tasks and then Back Up…
  4. Choose the location for the backup and click on OK.
  5. Wait for the completion of the backup.
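
If you prefer the command line over the Management Studio GUI, the same backup can be made with sqlcmd. This is only a sketch: the database name (vRA) and the backup path are assumptions, so adjust them to your environment:

sqlcmd -S localhost -E -Q "BACKUP DATABASE [vRA] TO DISK='C:\Backup\vRA-pre-7.0.1.bak'"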

 

Screenshots of the tenant configuration and users

If something goes wrong with the upgrade, it is possible that this configuration information gets changed. To be safe, it is recommended to capture this information.

  1. Login as administrator to the vRA interface
  2. Make a screenshot of your tenants
  3. And of the local users of the tenant
  4. And of the administrators

Backup any files you have customized

The vRA upgrade may delete or modify customized files. If you want to keep these files, please back them up. In our environment we don't use any customized files.

Create snapshot of the IAAS server

Taking a snapshot of the IAAS server is the last backup step before starting the upgrade.

  1. Shut down the IAAS server and the vRA appliance in the correct order.
    1. Log in to vCenter
    2. First select the IAAS VM and select Shut Down Guest. When the shutdown is complete, select the vRA appliance and choose Shut Down Guest as well.
  2. Right-click on the IAAS VM and select Snapshots and Take Snapshot. Fill in the name of the snapshot and click on OK.
  3. Power On the IAAS VM

Disable the IAAS services

  1. Log in on the IAAS server, open services.msc and stop the following services:
    1. All VMware vCloud Automation agents
    2. All VMware DEM workers
    3. All DEM orchestrators
    4. VMware vCloud Automation Center Service


Configure hardware resources of the vRA appliance

For the upgrade it is necessary to extend the existing disks of the vRA appliance. But before we do this, create a copy of the existing vRA appliance.

  1. Right-click on the vRA appliance, select Clone and Clone to Virtual Machine
  2. Give the VM a unique name and select the resources for the new VM and click on Finish.
  3. Wait for completion.
  4. Right-click on the original VM and select Edit Settings.
  5. Extend the first disk (Hard disk 1) to 50GB and click OK.
  6. Create a snapshot of the VM. Select the VM, click on Snapshots and click Take Snapshot.
  7. Wait for the snapshot.
  8. Power on the vRA VM.
  9. Wait for the machine to start
  10. SSH to the vRA VM and log in as root
  11. Execute the following commands to stop all vRA services:

service vcac-server stop

service vco-server stop

service vpostgres stop

  12. Extend the Linux file system with the following commands:

Unmount the swap partition:
swapoff -a

Delete the existing partitions and create a 44GB root and a 6GB swap partition. This command and the next commands return an error because the kernel still uses the old partition table at this point; after the reboot at step 13 all changes will be active:
(echo d; echo 2; echo d; echo 1; echo n; echo p; echo ; echo ; echo '+44G'; echo n; echo p; echo ; echo ; echo ; echo w; echo p; echo q) | fdisk /dev/sda

Change the swap partition type:

(echo t; echo 2; echo 82; echo w; echo p; echo q) | fdisk /dev/sda

Make partition 1 bootable:

(echo a; echo 1; echo w; echo p; echo q) | fdisk /dev/sda

Register the partition changes and format the new swap partition:

partprobe

mkswap /dev/sda2

Enable the swap partition:

swapon -a

  13. Reboot the vRA appliance.
  14. When the appliance has started again, log in with SSH and resize the root file system:

resize2fs /dev/sda1

  15. Check the resize with the command df -h
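
Besides df -h you can also verify that the swap partition is active again. A quick check (assuming the partition layout created above):

df -h /
swapon -s
free -m

The root file system should now report roughly 44GB and the swap size roughly 6GB.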

Install the vRA update

  1. Log in to the management interface: https://vRAhostname:5480
  2. Click on the Services tab and check the services. All services should be registered except the iaas-service.

If everything is checked, click on the Update tab. If not all services are running and you are using a proxy server, check this VMware article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144067

  3. Click on Check Updates. The new update will be displayed.
  4. Now click on Install Update and click OK.
  5. To follow the installation you can check the following log files:

/opt/vmware/var/log/vami/updatecli.log

/opt/vmware/var/log/vami/vami.log

/var/log/vmware/horizon/horizon.log

The most useful information can be found in the vami.log and updatecli.log. In these log files you can see the download progress and information about the upgrade status.

Use tail -f /opt/vmware/var/log/vami/* to follow the VAMI log files.

  6. Wait until the update is finished.
  7. When the upgrade is finished, reboot the appliance. Click on the System tab and click on Reboot.

Upgrading the IAAS server components

The next step in this process is to upgrade the IAAS components. The IAAS installer will also upgrade the MSSQL database; in earlier upgrades it was necessary to upgrade the database separately. To start the IAAS upgrade, follow these steps:

  1. Open your favorite web browser and go to: https://vRAhostname:5480/installer
  2. Click the IAAS installer and save the prompted file. (Do not change the filename!)
  3. Open the installer and follow the wizard.
  4. Accept the license agreement and click on Next.
  5. Provide the Appliance Login Information. Click on Next.


  6. Choose Upgrade. Click on Next.
  7. Provide the correct service account for the component services and the authentication information of the SQL server. Click on Next.
  8. Accept the certificate of the SSO Default Tenant and provide the SSO administrator credentials. Click on Next.
  9. Now click on Upgrade to start the upgrade.
  10. Click on Next and Finish to complete the IAAS upgrade.

Post upgrade tasks

After the IAAS upgrade, first check the correct operation of the vRA appliance. Click on the Infrastructure tab and then on Endpoints. Verify that the endpoint overview is correct. Next, try to request a blueprint and check that everything finishes successfully.

If everything is correct, the last step is the upgrade of the vRA agents in the OS templates. The new agents also contain some bug fixes. In our environment we use CentOS and Windows operating systems. We will first upgrade the CentOS agent, followed by the Windows agent.

CentOS agent

  1. Convert the CentOS template to a VM and boot the VM.
  2. Download the prepare_vra_template.sh script from the following location: https://vRAhostname.local:5480/service/software/download/prepare_vra_template.sh
  3. Allow execution of the script with:

chmod +x prepare_vra_template.sh

  4. Execute the script: ./prepare_vra_template.sh
  5. Follow the wizard and provide the correct information. I chose vSphere, no certificate check and the option to install Java.
  6. Wait for completion and shut down the VM.
  7. Convert the VM back to a template.
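
For reference, the download and execution steps above come down to something like the following on the CentOS VM. This is a sketch: it assumes the appliance answers on https://vRAhostname.local:5480 and that you are willing to skip certificate validation with --no-check-certificate:

wget --no-check-certificate https://vRAhostname.local:5480/service/software/download/prepare_vra_template.sh
chmod +x prepare_vra_template.sh
./prepare_vra_template.sh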

Windows Agent

For the upgrade of the Windows agent we will use the script made by Gary Coburn. He developed a script that installs all the needed components and the vRA agent on Windows. Thanks to my colleague Ronald van Vugt for the modification of this script for the newer Java version. The original script is based on vRA version 7.0.0, which included JRE 1.8.0_66. The Java version included in version 7.0.1 is newer, so a modification to the script is required.

  1. Download the original script from here or here. Open the script and search for the following line:
    $url="https://" + $vRAurl + ":5480/service/software/download/jre-1.8.0_66-win64.zip"
  2. This line must be edited to:
    $url="https://" + $vRAurl + ":5480/service/software/download/jre-1.8.0_72-win64.zip"
  3. When the script is edited, run it with the following parameters:

./prepare_vra_template.ps1 vra-hostname iaas-hostname PasswordOfDarwinUser

  4. The script will sometimes ask for confirmation.
  5. Wait until the installation is complete.
  6. Shut down the VM and convert it back to a template.

Verify the installation

Now request some of your blueprints to verify the correct operation of the vRA appliance, IAAS server and the guest agents. If everything is OK, then it is time to delete the snapshots of the vRA appliance and IAAS server.

  1. Select the VM, choose Snapshots and then Manage Snapshots.
  2. Delete the snapshot you made before the installation.
  3. Do this for both VMs.

Conclusion

Before executing this upgrade in a production environment it is recommended to plan the upgrade and verify that all dependencies will work after the upgrade. Also plan enough time for this upgrade, so you have the time to check and verify the installation.

VMware VSAN 6.2, what’s new?

This is a cross post from my Metis IT blogpost, which you can find here.

VMware VSAN 6.2

On February 10 VMware announced Virtual SAN version 6.2. A lot of Metis IT customers are asking about the Software Defined Data Center (SDDC) and how products like VSAN fit into this new paradigm. Let's investigate what VMware VSAN is, what the value of using it would be, and what the new features in version 6.2 are.

VSAN and Software Defined Storage

In the data storage world, we all know that the growth of data is explosive (to say the least). In the last decade the biggest challenge for most companies was that people just kept making copies of their data and the data of their co-workers. Today we not only have this problem, but storage also has to provide the performance needed for data-analytics and more.

First the key components of Software Defined Storage:

  • Abstraction: Abstracting the hardware from the software provides greater flexibility and scalability
  • Aggregation: In the end it shouldn’t matter what storage solution you use, but it should be managed through only one interface
  • Provisioning: the possibility to provision storage in the most effective and efficient way
  • Orchestration: Make use of all of the storage platforms in your environment through orchestration (vVols, VSAN)


VSAN and Hyper-Converged Infrastructure

So what about Hyper-Converged Infrastructure (HCI)? Hyper-Converged systems allow the integrated resources (Compute, Network and Storage) to be managed as one entity through a common interface. With Hyper-converged systems the infrastructure can be expanded by adding nodes.

VSAN is Hyper-converged in a pure form. You don’t have to buy a complete stack, and you’re not bound to certain hardware configurations from certain vendors. Of course, there is the need for a VSAN HCL to make sure you reach the full potential of VSAN.

VMware VSAN 6.2: new features

With the 6.2 version of VSAN, VMware introduced a couple of really nice and awesome features, some of which are only available on the All-Flash VSAN clusters:

  • Data Efficiency (Deduplication and Compression / All-Flash only)
  • RAID-5/RAID-6 – Erasure Coding (All-Flash only)
  • Quality of Service (QoS Hybrid and All-Flash)
  • Software Checksum (Hybrid and All-Flash)
  • IPV6 (Hybrid and All-Flash)
  • Performance Monitoring Service (Hybrid and All-Flash)

Data Efficiency

Dedupe and compression happen during de-staging from the caching tier to the capacity tier. You enable “space efficiency” at the cluster level and deduplication happens on a per disk group basis. Larger disk groups will result in a higher deduplication ratio. After the blocks are deduplicated, they are compressed. A significant saving already, but combined with deduplication the results achieved can be up to 7x space reduction, of course fully dependent on the workload and type of VMs.

Erasure Coding

New is RAID 5 and RAID 6 support over the network, also known as erasure coding. In this case, RAID-5 requires 4 hosts at a minimum as it uses a 3+1 logic. With 4 hosts, 1 can fail without data loss. This results in a significant reduction of required disk capacity compared to RAID 1. Normally a 20GB disk would require 40GB of disk capacity with FTT=1, but in the case of RAID-5 over the network, the requirement is only ~27GB. RAID 6 is an option if FTT=2 is desired.
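
As an illustrative calculation for a 20GB disk (ignoring metadata overhead): RAID-1 with FTT=1 stores two full copies, so 2 x 20GB = 40GB. RAID-5 erasure coding with its 3+1 layout stores the data plus one third parity, so 20GB x 4/3 ≈ 26.7GB, which is where the ~27GB figure comes from. With FTT=2, RAID-6 (a 4+2 layout) would need 20GB x 6/4 = 30GB instead of the 60GB that three-way mirroring would require.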

Quality of Service

This enables per VMDK IOPS Limits. They can be deployed by Storage Policy-Based Management (SPBM), tying them to existing policy frameworks. Service providers can use this to create differentiated service offerings using the same cluster/pool of storage. Customers wanting to mix diverse workloads will be interested in being able to keep workloads from impacting each other.

Software Checksum

Software Checksum will enable customers to detect corruptions that could be caused by faulty hardware or software components, including memory, drives, etc., during read or write operations. In the case of drives, there are two basic kinds of corruption. The first is “latent sector errors”, which are typically the result of a physical disk drive malfunction. The other type is silent data corruption, which can happen without warning. Undetected or completely silent errors could lead to lost or inaccurate data and significant downtime. There is no effective means of detecting these errors without end-to-end integrity checking.

IPV6

Virtual SAN can now support IPv4-only, IPv6-only, and also IPv4/IPv6-both enabled. This addresses requirements for customers moving to IPv6 and, additionally, supports mixed mode for migrations.

Performance Monitoring Service

Performance Monitoring Service allows customers to monitor existing workloads from vCenter. Customers needing access to tactical performance information no longer need to go to vRealize Operations. The performance monitor includes macro level views (cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per disk group stats) without needing to leave vCenter. It allows aggregation of stats across the cluster into a “quick view” to see what load and latency look like, and that information can be shared externally with 3rd party monitoring solutions via an API. The Performance Monitoring Service runs on a distributed database that is stored directly on Virtual SAN.

Conclusion

VMware is making it clear that the old way of doing storage is obsolete. A company needs the agility, efficiency and scalability that is provided by the best of all worlds. VSAN is one of these, and although it has a short history, it has grown up pretty fast. For more information make sure to read the following blogs, and if you're looking for an SDDC/SDS/HCI consultant to help you solve your challenges, make sure to look for Metis IT.

Blogs on VMware VSAN:
http://www.vmware.com/products/virtual-san/
http://www.yellow-bricks.com/virtual-san/
http://www.punchingclouds.com/
http://cormachogan.com/vsan/

VMware to present on VSAN at Storage Field Day 9

I'm really excited to see the VMware VSAN team during Storage Field Day 9, where they will probably dive deep into the new features of VSAN 6.2. It will be an open discussion, and I'm certain that the delegates will have some awesome questions. I would also advise you to watch our earlier visit to the VMware VSAN team in Palo Alto about a year ago, at Storage Field Day 7 (Link)

Long time no posts, and TechUnplugged

The last couple of months I’ve been kinda quiet, right… It’s not that I didn’t have anything to tell, but I’ve been busy with a couple of awesome projects:

  • TechUnplugged (Austin 2016)
  • NLVMUG (UserCon 2016)
  • Projects: VUMC, UMCU and Abu Dhabi

Let's dive (a little) deeper into all of them, starting with the TechUnplugged conference in Austin. I'll write about the other two later this week.

TechUnplugged

TechUnplugged is a new kind of conference, initiated by my Italian friend Enrico Signoretti. The conference is based on interaction during the entire event, giving the audience the opportunity to really gain the knowledge they need.

A good example of the awesome content we get during the event is this presentation by Nigel Poulton during the first TechUnplugged in London:

After the first two events in 2015, the first in London and the second in Amsterdam, we decided to cross the ocean and try the same recipe in the US. So on February 2nd we'll kick off TechUnplugged US with a great line-up of influencers as well as sponsors.

First the Influencer list:

 

Second the sponsor list (not complete yet):

@DDN_limitless


@CloudianStorage


@HedvigInc


@Pernixdata


@ZertoCorp

We've seen that the conference is growing in Europe and we hope to achieve the same in the US. So if you're in, or near, Austin on the second of February 2016, make sure to join us.

The agenda of the Austin event will be available here. The conference is free for end users, but seating is limited. Make sure to reserve your seat now, sign up here!

The day before #VeeamOn2015


So after a long day of travel on Sunday (Amsterdam-Detroit and Detroit-Las Vegas) I arrived in Las Vegas late in the evening and fell into a deep sleep as soon as I hit the rather large bed in the Aria Resort and Casino, where #VeeamOn2015 is held. I really like the venue in the sense that it is an awesome resort where everything you need is in the same building(s), and if you need something it's just a short 5 minute (or a little longer) walk. A big surprise was waiting for me when I entered the room and found a great gift (see picture). I actually had a discussion on the plane with a Dutch guy (IKEA film crew) about these awesome headphones. So a big thanks, Veeam!

Monday at VeeamOn2015

So on Monday, after a good night's sleep, I went to the conference location to pick up my pass as well as a backpack, and to meet with a couple of guys. As I'm a foreigner with a big jet lag, I decided to really take it easy this (partner) day and give myself the time to adjust.

I did go to the grand opening of the Expo Lounge, to meet with peers and enjoy some great food and drinks. It's always great to meet with people like Vladan Seget (blog: ESX Virtualization), Andrea Mauro (blog: vInfrastructure) and Joep Piscaer (blog: VirtualLifeStyle). Those three guys are all Veeam Vanguards, and if you don't know what that is, my suggestion is that you start reading more about this program here.

#VeeamON

#VeeamON2015

Evening in Vegas

In the evening I decided to make the best of my time here in Sin City and took a walk along the Strip to see the fountains and take pictures of all the crazy stuff in this town, before heading to my room to do a Skype call with the home front and get a (not so) good night's sleep.


Day 1 of VeeamOn2015 has already started and I'll be writing another blog post on that ASAP 😀

Kaminario 7 months after Storage Field Day 7

All Flash Arrays (AFAs) have been hot for a couple of years now, and for a good reason! During Storage Field Day 1 we had three AFA vendors presenting: Kaminario, Nimbus Data and Pure Storage. Although they have different go-to-market strategies, as well as different technology strategies, all three are still standing (although one of them seems to be struggling…)
At Storage Field Day 7 we had the privilege of getting another Kaminario presentation, and in this post I would like to take some time to see what Kaminario offers and what new features they presented over the last couple of months.

The K2 All-Flash Array

For my readers who don't know who Kaminario is and what Kaminario does, here is the first part of their presentation during SFD7 (given by their CEO Dani Golan):

There are a couple of features provided by Kaminario that I find interesting (based on what was included 6 months ago):

– Choice of FC or iSCSI
– VMware integration (VAAI, vVols (not yet))
– Non-disruptive upgrades
– Great GUI
– Inline deduplication and compression
– Scale up and out
– K-RAID protection
– Industry-standard SSD warranty (7 years now)

But there are/were still a couple of things missing. It might be even better to go back a couple of years and see what the Kaminario solution looked like back then. A great post on the Kaminario solution back in 2012 is the one by Hans De Leenheer:

Kaminario – a Solid State startup worth following

As you can see, there is so much innovation done by Kaminario, and in the last 6 months a lot more has been done.

What’s new in Kaminario K2 v5.5?

In the last couple of weeks Kaminario released the 5.5 version of their K2 product. In this release a couple of new (awesome) features were introduced that we’ll investigate a little deeper:

  • Use of 3D TLC NAND
  • Replication (asynchronous)
  • Perpetual Array (Mix and match SSD/Controller)

Let's start with the use of 3D TLC NAND. In earlier versions of their products Kaminario always used MLC NAND, and a customer could choose between 400 and 800 GB MLC SSDs. Knowing Kaminario can scale up and out, that would mean it could hold around 154 TB of flash (with dedupe and compression this would go up to around 720+ TB according to Kaminario documents). With the new 3D flash technology the size of the drives changes to 480 or 960 GB MLC and a 1.92 TB TLC SSD, which doubles the capacity:

The next new feature is replication. The documentation found on the Kaminario site about replication goes back to 2014, but it is still mentioned in the what's new in v5.5 documents. What is new with replication is the fact that Kaminario now integrates with VMware SRM to meet customer needs. This is great news for customers already using SRM or thinking about using it. The way Kaminario does replication is based on their snapshots (application consistent).


Last but not least is Perpetual Array, which gives a customer the possibility to mix and match SSDs as well as controllers. This feature gives the customer the freedom to start building their storage system and keep growing it even when Kaminario changes controller hardware or SSD technology.

Final thoughts

Looking at what changed at Kaminario over the last couple of months (and the last couple of years, for that matter), I'm certain we'll see a lot of great innovation from Kaminario in their upcoming releases. 3D NAND will get Kaminario to a much bigger scale (ever heard of Samsung showing a 16 TB 3D TLC SSD?), and with their scale-up and scale-out technology Kaminario has the right solution for each and every business. What I think would be a great idea for Kaminario is more visibility outside the US. When my customers start talking about AFAs I notice they almost never talk about Kaminario, mainly because they just don't know about them, and there is no local sales team to tell them about the Kaminario offering. That's just too bad, as I still think Kaminario is a very cool AFA vendor. It was also great to see them as a sponsor at TechUnplugged Amsterdam, which is a start :D.

Disclaimer: I was invited by TechFieldDay to attend SFD7 and they paid for travel and accommodation. I have not been compensated for my time and am not obliged to blog. Furthermore, the content is not reviewed, approved or edited by anyone other than me.

Catalogic 6 months later

Almost half a year ago I was at Storage Field Day 7 in San Jose (CA), where we had a couple of awesome presentations by multiple companies. One of these companies was Catalogic, who presented their ECX copy data management platform. A couple of my fellow delegates have written some great content on this technology; I'll include their posts at the end of this post and encourage you to read them. I'll also include the first presentation, given by Ed Walsh (CEO of Catalogic).

As said, we're almost half a year further, and I was curious what has changed in this time with the companies that presented at SFD7.

Copy Data Management

So what is Copy Data Management according to Catalogic, and what challenges does it solve? If you’ve watched the above video you’ve seen that Catalogic defines three challenges: data growth, manageability and business agility.

In a world where data seems to be exploding, it seems more than necessary to have a mechanism to create order in this data sprawl, and that's where ECX comes in.
By deploying an OVF (Docker based) and without any agents on your servers, you get a system that provides you with orchestration, automated DR and data analytics. Using this for test/dev, better RTO/RPO, reduced CAPEX/OPEX, orchestration to use the power of the cloud, and analyzing and reporting on your data is very interesting.

And hearing about all these awesome possibilities, it kind of struck me that this was only possible with NetApp storage. I understand you need to start somewhere, but for a company in business since 1996 it must be doable to support more than just NetApp…

Fast forward 6 months

As mentioned, this was what I absorbed during the presentation at Storage Field Day 7, and I kind of lost track afterwards, mainly because of the NetApp-only thing, to be honest. I really think that ECX has a lot of potential, but it just needs to be available for all (or almost all ;-P) storage systems.

In the week before VMworld, Catalogic announced ECX 2.2, which introduced support for a new storage vendor: IBM. As of version 2.2, IBM storage customers can use ECX to do the amazing things ECX provides. Although I would love to see more storage vendors on the list, it shows Catalogic is working hard to get more and more on the HCL 😀

But that's not all for the 2.2 version; the other new key features are:

  • Enhanced Policy-Based Copy Data Management Workflow Automation
  • Copy Data Management for IBM platforms
  • Improved Role Based Access Control (RBAC)
  • Expanded scalability and performance
  • Improved fault tolerance

I’ll be keeping a close watch on Catalogic to see what news will follow in the next couple of months.

Other resources

A couple of my SFD7 friends also wrote some very interesting posts on ECX and I'll include their posts here (just click the links):

 – Jon Klaus: Storage Field Day 7 – Catalogic ECX reducing copy data sprawl
 – Chris Evans: SFD7 – Catalogic Software Addresses Data Copy Management
 – Dan Frith: Storage Field Day 7 – Day 1 – Catalogic Software
 – Keith Townsend: CopyData yeah… Long live Data Virtualization

Also for all SFD7 videos visit the techfieldday website:

Catalogic Software

TechUnplugged Amsterdam

As I mentioned in my last post, my good friend Enrico is the organizer of a great event that started in London a couple of months ago. Because this event was a big success, Enrico figured it was time to take the event on tour and proceed to Amsterdam. He asked me to help him (a little) with the organization of this event. And during this amazing event, in one of the greatest capitals in the world (*cough*, *cough*), it would be great to see and meet you. And the event is free, so that can't be your excuse 😀

To give you an idea of what this event is all about and what content will be shared, I would like to add the video of one of the presentations from the London event:

The rest of the videos can be viewed here!

The speakers

Looking at the speakers presenting in Amsterdam, this is a great opportunity to learn a lot about what is happening in IT at the moment and what the future holds for the next couple of years… To make sure you won't be bored, Enrico put together an amazing speaker line-up:

Enrico Signoretti
Chris Mellor
Howard Marks
Tom Hollingsworth
Chris Evans
Nigel Poulton
Martin Glassborow
Viktor van de Berg
Arjan Timmerman

As you can see this is a very compelling list of speakers. Most of the speakers will have the opportunity to speak for the entire audience, while a couple of them will need to share the audience because of an additional track/room in the afternoon. For more information on the speakers look here. For more information on the agenda look here.

The Sponsors

This event couldn’t be done without the help of some great sponsors. They pay for everything that is needed to make this such a great event, and for that I would like to ask you to take a look at their websites, as well as talk with them during the event.

Exagrid
Hedvig
Kaminario
Stormagic
Veeam
Tintri
Nutanix
Zerto

These are all really cool companies. I for one am very interested in hearing more about their products, and I hope you feel the same: where do these technologies fit in my DC strategy, which problems do they solve, and in which way?

Join us in Amsterdam

If this isn't enough for you to join us in Amsterdam, I don't know what would be. So if you're interested in joining us for a great one-day event (and some beers the evening before), make sure to sign up using this link:

https://www.eventbrite.it/e/techunplugged-amsterdam-registration-17026517773

Oh yeah, IT’S FREE 😀

See you in Amsterdam!

Beer, Tech and did I mention beer?

My good friend Enrico Signoretti organizes a great event named TechUnplugged. We started a couple of months ago in London and for those who want to know what was presented during the event, here is a link to the videos and the website: Video link and Website Link (and for those interested in the presentations in PDF format click here).

But as the title of this post suggests, this post is mainly about BEER! That's because, although it's great to be informed and learn from some great people in the industry, it is also great to have a more informal meeting with each other. The evening before the event there will be a storagebeers at the Beer Temple in Amsterdam!

Make sure you’ll join us and RSVP here!!!!

A big thanks to Tegile, one of the sponsors of the event (which will be a day later), who will pay for the first two rounds!!!!