Using the cloud as your NAS filer will, most of the time, end up with poorly performing applications and complaining users. Avere Systems' FXT and Avere OS (AOS) could be the solution for this problem. Using the Avere edge filer cluster in front of Amazon's AWS S3 cloud storage (public cloud) or Cleversafe's object storage (S3 API) as the core filer should give you the flexibility and performance you need while still being manageable. Let's have a closer look at how this works.
First, take a look at this video of Ron Bianchini, President and CEO of Avere Systems, talking about the Avere Cloud NAS solution at SFD4.
A Brief Avere History
Avere accelerates your NAS system by putting an edge filer in front of a core filer (any NAS filer). In version 1.0, released around three years ago, Avere introduced its hardware appliance together with a software solution (AOS) running on the edge filer (containing RAM, SSD, and SAS) to accelerate operations performance, with linear performance scaling through clustering multiple edge filers. The core filers then provide capacity (SATA and NL-SAS), can be any 3rd-party NAS (EMC, Oracle, NetApp, others, or a whitebox), and are responsible for data protection as well as compliance functions.
While AOS 1.0 was a revolutionary step in NAS performance optimization, the Avere customer base wanted the ability to support multiple core filers from multiple vendors. AOS 2.0, released two years ago, gave Avere customers a heterogeneous solution through a Global Name Space (GNS): filers from multiple vendors can be added to the GNS through an NFS mountpoint.
The next thing Avere customers asked for was the ability to manage all filers through one management interface, which Avere delivered with AOS version 3.0. FlashMove is used to transparently move data between core filers, while FlashMirror gives the Avere customer the ability to replicate between multiple core filers. More information on this later.
In their latest release, AOS 4.0 (which is still in beta), Avere introduces the possibility of using an object store as a core filer. At this moment AWS S3 (S3 API), EMC Atmos (S3 API), and Cleversafe (S3 API) are supported. At a later stage Rackspace, HP, and OpenStack (Swift) will be supported as well. With AOS 4.0 FlashCloud you can dynamically add public and private cloud solutions (object storage) to the Global Name Space (GNS).
FlashMove and FlashMirror
FlashMove gives you the ability to move data without interruption. By taking advantage of Avere's Global Name Space functionality, data can be migrated between core NAS filers (or cloud/object storage) while the FXT cluster keeps serving active data to the clients. This gives you the possibility to move your data from the filers to the cloud without downtime.
FlashMirror makes sure data is mirrored between a primary and a secondary core NAS filer, keeping both in sync by sending updates to both core NAS filers in parallel. FlashMirror offloads the replication-processing load from the storage and supports clustering to scale replication performance. FlashMirror is easy to install in existing environments and is a storage-side replication solution that works with all NAS vendors, which makes it one of a kind.
The A3 architecture was built by the creators of a wide variety of file systems, such as the Andrew File System, the Spinnaker file system, NetApp Data ONTAP, and others. The A3 architecture is built on three innovations:
Tiered File System
The Tiered File System (TFS) is responsible for placing active data on the RAM, SSD, and HDD media of the FXT cluster, and for placing inactive data on the core filer(s). Allocation algorithms monitor the data so it can be managed and placed in the right way to increase performance. TFS provides NFSv3 and SMB to the client/server side, and NFSv3 to the core filer side.
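Conceptually, this kind of tiered placement is a promotion/demotion policy driven by access frequency: hot data drifts toward faster media, inactive data drifts back toward the core filer. A minimal sketch of the idea (my own illustration with made-up thresholds, not Avere's actual algorithm):

```python
from collections import defaultdict

# Illustrative tiering policy, fastest to slowest media.
TIERS = ["RAM", "SSD", "HDD", "core"]

class TieredPlacement:
    def __init__(self):
        self.tier_of = {}             # block id -> current tier
        self.hits = defaultdict(int)  # block id -> access count

    def write(self, block):
        # New active data lands on the edge cluster's fastest tier.
        self.tier_of[block] = "RAM"

    def read(self, block):
        self.hits[block] += 1
        tier = self.tier_of.setdefault(block, "core")
        # Promote one tier after every few accesses (threshold is invented).
        if self.hits[block] % 3 == 0 and tier != "RAM":
            self.tier_of[block] = TIERS[TIERS.index(tier) - 1]
        return self.tier_of[block]

    def age(self, block):
        # Inactive data is demoted toward the core filer.
        tier = self.tier_of.get(block, "core")
        if tier != "core":
            self.tier_of[block] = TIERS[TIERS.index(tier) + 1]

p = TieredPlacement()
p.write("a")           # active data starts on the edge
p.age("a"); p.age("a")
print(p.tier_of["a"])  # demoted to HDD after going cold
for _ in range(3):
    tier = p.read("a")
print(tier)            # promoted back to SSD after repeated reads
```

The real TFS of course does far more (it also hands out NFSv3/SMB to clients), but the promote-on-heat, demote-on-cold loop is the core idea.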
The Avere FXT cluster provides great performance scaling and high availability. A cluster can consist of up to 50 FXT appliances, which can deliver millions of operations/sec and tens of gigabytes/sec of throughput. Active data is balanced across the FXT cluster to avoid hot spots and to keep resources shared and scalable. With the high-availability (N+1) functionality, data remains available even in the event of an outage.
Virtualization and Visualization
Using Avere's Global Name Space (GNS), the FXT appliance provides file system virtualization. This enables a customer to manage multiple vendors' NAS systems through a single pane of glass and to present all data to the clients through a single mountpoint. Visibility is provided through statistics in a graphical interface, giving the customer the needed insight into the operations of the NAS environment, on both the client/server side and the storage side. The environment can also be monitored by 3rd-party products through the SNMP and XML-RPC interfaces.
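That XML-RPC interface means third-party monitoring can be scripted straight from Python's standard library. A sketch that only constructs the client, without sending anything; the management URL and the method name are placeholders of mine, not Avere's documented API:

```python
import xmlrpc.client

# Placeholder management address; ServerProxy sends nothing until a
# method is actually called on it.
MGMT_URL = "https://avere-mgmt.example.com/RPC2"

proxy = xmlrpc.client.ServerProxy(MGMT_URL, allow_none=True)
# Against a real cluster you would now invoke documented methods, e.g.:
#   proxy.cluster.get_stats()   # hypothetical method name
print(type(proxy).__name__)     # ServerProxy
```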
Avere your Cloud
Avere gives its customers the ability to keep the "hot" data on the local FXT cluster, close to the user, while the FXT cluster "talks" over WAN links to the core filer(s) in a private and/or public cloud. When "cold" data is accessed a couple of times, it is promoted to the FXT cluster so users don't struggle with performance. This results in much better application performance and data consistency.
By giving users a single pane of glass to manage storage servers across multiple datacenters and storage providers, Avere FXT clusters are a very efficient way to make sure your filer is manageable and meets the performance needs of your company. Leveraging the cloud and keeping your multi-vendor NAS infrastructure manageable is one thing; getting blazing fast performance with it is just awesome.
Avere at SFD4.
As the video above is only one of the three videos of Avere Systems at Storage Field Day 4, here are the other two:
Ron Bianchini’s Overview of Avere Systems
Avere Systems Use Cases
Other websites to keep in mind (click to open):
In about half an hour we'll start with Storage Field Day 4 (CloudByte will kick off the event). I'll put up a live feed on this site, and hope you'll join the conversation. When you have a question about the company presenting, send us (the delegates) a tweet that includes the hashtag #SFD4, and one of us will ask the question for you.
Here’s the link where you can follow us:
Looking forward to your questions, so please join the conversation. If you want to pick the presentations you're really interested in, use this calendar:
GestaltIT SFD4 Calendar
While visiting VMworld 2013 in San Francisco I attended some of the Tech Field Day roundtables. One of the presenting companies was Infinio. I had already seen their presentation during Tech Field Day 9 (watch the videos here).
In the world of accelerating storage performance in a virtual environment, most vendors use flash for acceleration, while Infinio chose to accelerate through a different path. As said, most modern-day storage performance accelerators use flash to accelerate reads (and sometimes writes, like PernixData, FlashSoft, and Proximal Data); Infinio uses RAM (and CPU) to accelerate performance. Here is a video on how this works:
Infinio works on NAS only at the moment, but this might change in the future. As the video shows, the solution works through a VM that allocates 8 GB of memory (and 2 vCPUs) from the ESXi host to create a deduplicated memory cache pool. The installation process for Infinio is very easy:
No reboots required for boosting your storage performance, and no flash device needed. It sounds awesome, and it is awesome. So surf to the Infinio website and make sure you download and install the software (you can try it for 30 days, and if it doesn't meet your requirements, you can uninstall the product just as easily as it was installed). After you have convinced yourself of the Infinio product, you can buy the software for $499 per socket (which will turn out to be $998 per ESXi host in most environments).
One other video you should see is the Tech Field Day Roundtable at VMworld 2013 in San Francisco:
All information in this blogpost was collected from:
During VMworld, and in the months prior to the event, the buzzword seems to be Software Defined (Everything). And although I think highly of innovative and perfectly developed software, hardware is still the driver behind the software force. Without hardware there would be no software, and because of the innovations at the hardware level, software is able to do its awesomeness these days. So naming it software defined is a bit stupid IMHO!
As said, I think highly of great software. But when you think of it, many of the software houses that have developed software for years just used the innovation of hardware in a boring way: consuming the extra resources the hardware would (or could) offer instead of matching the hardware vendors' innovation in their own software. It made developers lazy. So a revolution in software development is needed, but giving all the credit to software isn't.
With virtualization of x86 hardware (with VMware as the main driver behind this force), the hardware vendors (like Intel and AMD) developed more and more cool features on the hardware side that can be leveraged by software. After server virtualization we started virtualizing storage, and now we are starting to virtualize the network. These are all awesome achievements, and I hope to see much more of these great technologies.
Did a server (before Software Defined) do anything without software installed on it? Or a switch or router? Have you ever seen a NAS or a SAN perform without software? So why does it now all of a sudden have to be named software defined? Hardware needs software, as software needs hardware, so let's rename it to something in which both of these awesome technologies are equally represented.
Lots of questions. Do I have the answers? Not really, but I guess most of the renaming and rebranding has to do with marketing. Renaming and creating buzzwords sells. So rebranding technologies that already existed is all about making decision makers drool and buy the new software (and hardware) products. It's all about selling, and the developers will just keep on creating the awesome software they are creating, as the hardware vendors will keep on creating incredibly cool hardware.
In my first post I talked about the Tech Field Day Roundtables, where Asigra will take the lead in the first roundtable discussion. The second roundtable (on Monday 26-08-2013) will be led by the Infinio team. To be honest, I really don't know that much about Infinio, other than the information they gave during their presentation at Tech Field Day 9. I'll put these videos up, for it is extremely valuable information if, like me, you're a bit new to Infinio.
Infinio: How it works
So, three simple steps to separate performance from capacity:
1. Download the setup file from the Infinio website
2. Discover your virtual infrastructure
3. Deploy the accelerator engines and boost the performance
Infinio doesn't use SSD (flash) to boost performance; instead, the Infinio accelerator engine deploys a virtual appliance on each host that uses 2 vCPUs, 8 GB of RAM, and 15 GB of local storage to provide a pooled, deduplicated in-memory cache to your vSphere environment.
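The key trick in such a deduplicated cache is storing each unique block only once, keyed by a content hash, so identical blocks read by different VMs share a single cache entry. A minimal sketch of that idea (my own illustration, not Infinio's implementation):

```python
import hashlib

class DedupCache:
    """Content-addressed read cache: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}  # sha256 digest -> block data (physical copies)
        self.index = {}   # (vm, lba) -> digest (logical entries)

    def put(self, vm, lba, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # dedupe: store once
        self.index[(vm, lba)] = digest

    def get(self, vm, lba):
        digest = self.index.get((vm, lba))
        return self.blocks[digest] if digest else None

cache = DedupCache()
# Two VMs cloned from the same template read the same OS block:
cache.put("vm1", 0, b"common OS block")
cache.put("vm2", 0, b"common OS block")
print(len(cache.index))   # 2 logical entries
print(len(cache.blocks))  # 1 physical copy in RAM
```

This is why a pooled, deduplicated cache is so effective for virtual environments full of near-identical guest OS images: the cache holds one copy of the common blocks for the whole pool.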
You can follow Infinio on Twitter: @InfinioSystems
I received an invitation from the TechFieldDay team to join them at their roundtables during VMworld 2013 in San Francisco. I attended these roundtables last year too, and it was awesome to do these deep dives with the presenting companies.
This year we have some vendors that I already met during previous Tech Field Days, but also some new ones.
The first roundtable will be on Asigra, a company providing a cloud-model backup and recovery suite.
As the picture above shows, Asigra is an agentless backup and recovery suite. This is important, as you want to protect your infrastructure in the easiest but most secure way. Installing an agent on every server can be time-consuming, it consumes resources (CPU, RAM) on the server, and for agents to do their work you'll probably have to open a firewall port for the communication to take place, posing a security risk. Using the Asigra agentless approach you bypass these problems.
Like most modern backup and recovery vendors, Asigra offers deduplication. With deduplication and compression, Asigra provides you as a customer, as well as the ISP, the best possible solution for cloud backup.
I'm looking forward to seeing the Asigra guys again, as we already met them during Storage Field Day 2.
More information on Asigra:
Videos of Storage Field Day 2
Today Nutanix released Nutanix Operating System (NOS) 3.5, along with a technology preview of Microsoft Hyper-V support. This release introduces some interesting new developments, like the Nutanix Elastic Deduplication Engine for RAM and flash (with dedupe for the capacity tier coming in the near future), a REST API (Prism API), the Prism UI, and a Site Recovery Adapter for VMware's SRM.
Microsoft Hyper-V support (Tech Preview)
Nutanix started with support for VMware vSphere. As NOS is hypervisor agnostic, KVM support was added after some time. Now NOS 3.5 includes Microsoft Hyper-V, using SMB3 for the shared storage access between the nodes. With the new Prism UI you can manage all hypervisors from one interface. A great thing in itself, but think about the potential of this new hypervisor support on the Nutanix platform: using VMware vCloud Automation Center, a customer could use Nutanix to create clusters that are best suited for the job.
Nutanix Elastic Deduplication Engine
In earlier releases of NOS, inline compression of data was already included. NOS 3.5 introduces the Nutanix Elastic Deduplication Engine. It provides real-time deduplication for RAM and flash, and in the near future Nutanix will include dedupe on the capacity tier (HDD) too. Nutanix looks at data that comes into the content cache (RAM and SSD) and first puts it in a single-touch pool (no dedupe takes place here). When the data is accessed for the second time, it is moved to the multi-touch pool (where dedupe does take place). When the data isn't accessed any more, it is moved to the capacity tier (HDD). Operations like indexing and virus scanning are caught in the single-touch pool, for optimal deduplication.
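The single-touch/multi-touch split can be pictured as a two-stage cache: first-time data is tracked but not deduplicated, and only a second access graduates it to the deduplicated pool. That way one-pass workloads (indexing, virus scans) never pollute the dedupe structures. A rough sketch of that flow (my own simplification, not Nutanix's actual engine):

```python
import hashlib

class TouchPools:
    def __init__(self):
        self.single = {}  # key -> raw data (seen once, no dedupe)
        self.multi = {}   # content digest -> data (deduplicated)
        self.refs = {}    # key -> digest, for keys promoted to multi-touch

    def access(self, key, data):
        if key in self.refs:       # already in the multi-touch pool
            return "multi"
        if key in self.single:     # second touch: promote and dedupe
            del self.single[key]
            digest = hashlib.sha256(data).hexdigest()
            self.multi.setdefault(digest, data)
            self.refs[key] = digest
            return "multi"
        self.single[key] = data    # first touch: single-touch pool
        return "single"

pools = TouchPools()
print(pools.access("blk1", b"payload"))  # single
print(pools.access("blk2", b"payload"))  # single (a scan touches it once)
print(pools.access("blk1", b"payload"))  # multi (second touch, deduped)
print(len(pools.multi))                  # 1
```

Note how "blk2", touched only once, stays in the single-touch pool and never costs any dedupe work, which is exactly the point of the two-pool design.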
The New Prism UI
Nutanix Prism is built from the ground up in HTML5. It is a management framework that simplifies the user experience. Within the Prism UI a user can view system and VM information in an organized manner. With a tile-based interface a customer can drill down into information on server, storage, and network operations. With the Prism API, Nutanix provides customers an easy way to build the scripts needed to maintain and monitor the Nutanix environment even further.
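Because Prism exposes a REST API, it can be scripted with any HTTP client. A minimal Python sketch that only builds the authenticated request; the hostname and the resource path here are placeholders of mine, so check the Prism API documentation for the real endpoints before pointing this at a cluster:

```python
import base64
import urllib.request

def prism_request(host, user, password, resource):
    """Build a basic-auth REST request skeleton (resource path is a placeholder)."""
    url = f"https://{host}:9440/api/{resource}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    })

req = prism_request("cluster.example.com", "admin", "secret", "vms")
# Against a real cluster you would now do: urllib.request.urlopen(req)
print(req.full_url)              # https://cluster.example.com:9440/api/vms
print(req.get_header("Accept"))  # application/json
```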
75 new and updated features in NOS 3.5
The NOS 3.5 release also includes a Site Recovery Adapter for VMware SRM, VMware VAAI support, an extension of compression to the native Nutanix Disaster Recovery feature, and so on…
For more information, visit the Nutanix website, and make sure to visit Nutanix at VMworld in San Francisco (Booth 1521).