Category: vmware

Software and hardware vendors: Hand over the keys

 

Everyone talks about the consumerization of IT and how end users are demanding enterprise support for things like iPhones, iPads and many other pieces of technology.  People want to be able to consume IT as a resource on any device or platform they have.  The same shift is now happening between enterprise hardware and software vendors and service providers.

I’ve met with quite a few of them recently and they usually fall into two camps: those who have invested in developing their own intellectual property, and those who have leveraged economies of scale and rely on vendors to supply the IP.  There are exceptions to this rule, of course.  The first camp is the one I want to focus on.

Here is what they want:

  • Align with my business and go-to-market strategy
  • Don’t dress your offering up in SP-focused marketing materials if it can’t honor the promises
  • Have hardware and software with open interfaces
  • Have well-documented interfaces
  • Be agile in the adoption of new interfaces

These conversations tend to revolve around the self-service portals many of the pure-play service providers have developed.  They don’t want a canned, out-of-the-box offering; they want to be able to provision and orchestrate the compute, network and storage layers through interfaces like SOAP and REST.  When you develop these interfaces and hand them over to developers, strange things happen.  Nick “@lynxbat” Weaver exemplifies this.  He isn’t a developer by day, but give him some APIs and he can do crazy things, like writing a vSphere plugin on a plane that enables VM teleportation with our (EMC’s) VPLEX product.
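To make the idea concrete, here is a minimal sketch of the kind of self-service call I mean, against a hypothetical REST endpoint. The URL, payload fields and token header are all invented for illustration; they are not any specific vendor’s API.

```python
# Minimal sketch of a self-service provisioning request against a
# hypothetical REST API. Endpoint, fields and auth header are invented
# for illustration only -- not any specific vendor's interface.
import json
import urllib.request

API = "https://cloud.example.com/api/v1"   # hypothetical provider endpoint
TOKEN = "replace-with-your-token"

def provision_vm(name, cpus, memory_gb, storage_gb):
    """Ask the provider to carve out compute, memory and storage for one VM."""
    payload = json.dumps({
        "name": name,
        "cpus": cpus,
        "memory_gb": memory_gb,
        "storage_gb": storage_gb,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{API}/vms",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # e.g. {"id": "...", "status": "provisioning"}

if __name__ == "__main__":
    print(provision_vm("web01", cpus=2, memory_gb=4, storage_gb=40))
```

The point isn’t the dozen lines of Python; it’s that a documented, stable interface like this is what lets a non-developer automate provisioning at all.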

Now, I’m not ignoring the need for software development lifecycle management and version control.  Those are all important.  The point is that the “neat stuff” we and our customers do can only get better if we open up with good APIs that strike a happy balance between standardization and cutting-edge agility.

Why do I beat this drum?  Because it’s a win for enterprise hardware/software vendors and our customers alike.  What is most exciting is that I’ve beaten this drum inside of EMC, not as a VP of strategic direction but as a joe-schmoe vSpecialist.  What has come out of it?  A lot of people have listened and it has become a huge priority.  I’ve said it before on Twitter, but one of the best things about working at EMC is that the organization is huge yet very flat.  The reality is that, with the help of others, I’ve been able to nudge an aircraft carrier and start to change its course.  This isn’t a post about why I love working at EMC, but I think it’s a darn compelling reason.  Our work has just begun…

Tidbits: EMC Unified Storage and Infosmack Podcast – Oracle & VMware

EMC’s Unified Storage and Tiering blogpost @ GestaltIT.com:

Over the last week or two, Devang Panchigar (@storagenerve) and I have been collaborating on a post at GestaltIT.com regarding EMC’s storage strategy.  If you have read it already, great; if you haven’t yet, there are a couple of things to keep in mind while reading.  First, Devang and I do not have any insight into EMC beyond their public statements and the ideas our peers have discussed.  Second, we’re not endorsing their potential strategy over anyone else’s, merely stating that it is a step in the right direction.

Hopefully over the next couple of months we’ll continue the posts about other vendors and next-gen storage infrastructure which should serve to enlighten people on where companies appear to be going.

So again if you haven’t read our post, take a look and give us some feedback.

Oracle & VMware and Dell/Perot:

This week I was invited yet again to participate in the StorageMonkeys Infosmack Podcast #21 by the gracious Greg Knieriemen and Marc Farley.  Steve Kenniston, Storage Technologist at EMC and blogger at BackupAndBeyond.com, was also a guest on the show.  I always enjoy talking with Steve because of his experience in the industry and how knowledgeable he is from multiple perspectives.  As always, Greg and Marc do a great job of getting good discussion going.  We discussed the Oracle and VMware spat going on as well as the Dell/Perot acquisition.  If you haven’t had a chance to go listen, go do so now!  Oracle seems insistent on locking people in and Dell is going after the services business with Perot.

Why desktop virtualization projects fail

Desktop virtualization is one of the hottest topics of interest and a major initiative at many companies. Touted benefits include lower operating costs, simpler management and desktop mobility. Below we’ll explore the barriers to wide-scale adoption of desktop virtualization and some approaches for dealing with them. It isn’t a fit for everyone in a company, but it can be for many.

Challenge #1: Assuming desktop virtualization makes sense because thin clients are cheap - Many people assume that virtualizing desktops will be dramatically cheaper because thin clients can be found for approximately $300-$400, whereas a PC can cost $500-$1,200.

Tip: Client costs are only part of the picture. Desktop virtualization can reduce capital expenditures, but do not expect that to be the case in the first year. Building the infrastructure (storage, servers, licenses, etc.) is expensive, so total first-year costs may be no lower than a traditional refresh. Think about using existing PCs as clients instead of replacing them with thin clients. Thin clients are cheaper than PCs, but the reduction in hardware costs may not be seen for a couple of years because the infrastructure has to be built first. More importantly, operational savings show up immediately, and that is where the true cost savings are found.
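To see the shape of the math, here is a back-of-the-napkin cost sketch. Every dollar figure in it (client prices, per-desktop infrastructure cost, annual support costs) is a made-up assumption you would replace with your own quotes and operational data.

```python
# Back-of-the-napkin cost model. Every dollar figure is an illustrative
# assumption -- replace with your own quotes and operational data.
PC_PRICE, PC_OPEX = 800, 600          # assumed purchase price / annual support per PC
THIN_PRICE = 350                      # assumed thin-client price
VDI_INFRA, VDI_OPEX = 1100, 350       # assumed year-1 infra cost / annual opex per virtual desktop

def pc_cost(desktops, years):
    """Traditional refresh: buy PCs, pay desktop support every year."""
    return desktops * (PC_PRICE + PC_OPEX * years)

def vdi_cost(desktops, years, reuse_existing_pcs=False):
    """Virtual desktops: infrastructure up front, lower opex afterwards."""
    client = 0 if reuse_existing_pcs else THIN_PRICE
    return desktops * (client + VDI_INFRA + VDI_OPEX * years)

for years in (1, 2, 3):
    print(f"year {years}: PCs ${pc_cost(1000, years):,}  "
          f"VDI ${vdi_cost(1000, years):,}  "
          f"VDI reusing PCs ${vdi_cost(1000, years, True):,}")
```

With these (assumed) numbers the virtual option costs about the same or more in year one and only pulls ahead as the operational savings accumulate, which is exactly the pattern to expect.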

Challenge #2: Infrastructure people not understanding the desktop people - Server operations are not the same as desktop operations. Users have different behaviors and expectations about how their desktops will function. It is easy to virtualize a Windows desktop; delivering what the user expects is not.

Tip: Understand your users and identify your use cases - Learn what apps users need to use, how they use them, where they use them and what they expect. Do your users need different apps depending on their physical location? Do they need dual monitors or multimedia acceleration? How should you deliver user profiles? Is printing going to be an issue? Spend a bit of time identifying and categorizing your use cases so you can design your solution around them.
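One lightweight way to capture that categorization is a simple use-case matrix. The segments and requirements below are hypothetical examples, not a template; build yours from interviews and desktop inventory data.

```python
# Hypothetical use-case matrix -- segments and requirements are examples
# only; populate yours from interviews and desktop inventory data.
use_cases = {
    "task workers": {"apps": ["office", "erp client"], "dual_monitor": False,
                     "multimedia": False, "roaming_profile": True, "printing": "follow-me"},
    "knowledge workers": {"apps": ["office", "browser", "crm"], "dual_monitor": True,
                          "multimedia": True, "roaming_profile": True, "printing": "local"},
    "kiosk users": {"apps": ["single line-of-business app"], "dual_monitor": False,
                    "multimedia": False, "roaming_profile": False, "printing": "none"},
}

# Quick questions the matrix should answer before any design work starts.
needs_multimedia = [name for name, reqs in use_cases.items() if reqs["multimedia"]]
print("Segments needing multimedia acceleration:", needs_multimedia)
```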

Challenge #3: Bad desktop practices follow into the virtual world - Refreshing desktops will not be any easier if you allow users to install their own applications or store data inside the desktop image. Failing to enforce good security policies (screensaver locking and passwords) may leave desktops unprotected as users move from office to kiosk.

Tip: Identify unhealthy desktop practices and change what is feasible (in phases) - Start by thinking about what makes managing desktops difficult today. If users don’t absolutely need to install their own apps, set policies that stop that behavior; storage consumption, desktop refreshes and overall manageability will all improve. If security is lax, improve it with basic measures like auto-locking displays so someone can’t hijack a desktop left logged in.
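In a real environment you would enforce auto-locking through Group Policy, but just to illustrate the setting involved, here is a sketch that applies a locking screensaver timeout for the current user on Windows using the standard Control Panel\Desktop registry values. Treat it as an illustration for a test image, not a management tool.

```python
# Illustration only -- in production, enforce this with Group Policy.
# Sets the current user's screensaver to start after 10 idle minutes and
# lock the session on resume (standard Control Panel\Desktop values).
import winreg

SETTINGS = {
    "ScreenSaveActive": "1",      # enable the screensaver
    "ScreenSaverIsSecure": "1",   # require password / lock on resume
    "ScreenSaveTimeOut": "600",   # idle seconds before it starts
}

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop",
                    0, winreg.KEY_SET_VALUE) as key:
    for name, value in SETTINGS.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)

print("Screensaver lock settings applied for the current user.")
```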

Challenge #4: Not understanding Microsoft licensing - Microsoft bars OEM licenses from being transferred, and it also requires VECD (Virtual Enterprise Centralized Desktops) licensing for all Windows desktops that are virtualized. There are also additional per-seat licenses from VMware and other desktop virtualization vendors.

Tip: Understand the licensing before starting a pilot - At the time of this writing, VECD is a device-based subscription: $110/device/year for devices not covered by SA (Software Assurance), or $23/device/year (VECD for SA) for devices that are. An example from Microsoft’s website:

For example, a company with 10 thin clients and 10 laptops (not covered under SA) accessing a VDI environment requires a total of 20 Windows VECD licenses (20 x $110/year). However, if the same company has 10 thin clients and 10 laptops covered under SA, it will require 10 VECD licenses (10 x $110/year) and 10 VECD for SA licenses (10 x $23/year).
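The arithmetic is simple enough to sanity-check in a few lines. The sketch below just reproduces the quoted list prices, which will change; confirm current pricing with Microsoft before budgeting.

```python
# VECD cost sketch using the list prices quoted above: $110/device/year for
# devices not covered under SA, $23/device/year for VECD for SA.
# Prices change -- confirm with Microsoft before budgeting.
VECD_PER_DEVICE = 110   # $/device/year, device not covered by Software Assurance
VECD_FOR_SA = 23        # $/device/year, device already covered by SA

def vecd_annual_cost(devices_without_sa, devices_with_sa):
    return devices_without_sa * VECD_PER_DEVICE + devices_with_sa * VECD_FOR_SA

# Microsoft's example: 10 thin clients + 10 laptops, none covered under SA
print(vecd_annual_cost(20, 0))    # 2200
# Same company, but the 10 laptops are covered under SA
print(vecd_annual_cost(10, 10))   # 1330
```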

Challenge #5: Poor virtual desktop performance - The two biggest challenges after the operational piece are sizing and endpoint selection. Desktops take a long time to boot and Flash video is choppy. There are limitations in a virtual environment that were nonexistent when everyone had their own PC.

Tip: Work with a partner who can help size and architect the system - This is critical because of all the variables involved. Much of the design is dictated by the answers from challenge #2. Endpoints (thin clients, PCs, web-based access) also each deliver a unique user experience. If YouTube video is important, pick an endpoint that specifically accelerates Adobe Flash. If users are far away and network latency is high, either deploy WAN accelerators from companies like Riverbed or Cisco, or use thin clients like Sun Microsystems’ Sun Rays.
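As a starting point for that conversation, here is a deliberately crude sizing sketch. The per-desktop IOPS, RAM and consolidation figures are placeholder assumptions that a partner would replace with measurements from your actual desktops.

```python
# Deliberately crude sizing sketch. The per-desktop figures are placeholder
# assumptions -- measure your real desktops before designing anything.
DESKTOPS = 500
STEADY_IOPS_PER_DESKTOP = 10    # assumed steady-state IOPS per desktop
BOOT_IOPS_PER_DESKTOP = 50      # assumed IOPS per desktop during a boot storm
RAM_GB_PER_DESKTOP = 2          # assumed allocated RAM per desktop
MEMORY_SHARING_FACTOR = 0.7     # assumed effect of page sharing / overcommit

steady_iops = DESKTOPS * STEADY_IOPS_PER_DESKTOP
boot_storm_iops = DESKTOPS * BOOT_IOPS_PER_DESKTOP
ram_needed_gb = DESKTOPS * RAM_GB_PER_DESKTOP * MEMORY_SHARING_FACTOR

print(f"Steady-state storage load: ~{steady_iops:,} IOPS")
print(f"Boot storm (everyone logs in at 8am): ~{boot_storm_iops:,} IOPS")
print(f"Host RAM to plan for: ~{ram_needed_gb:,.0f} GB")
```

The gap between the steady-state and boot-storm numbers is exactly why storage sizing, not just host sizing, makes or breaks the user experience.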

Desktop virtualization is still changing rapidly. The challenges and tips above are not an exhaustive list; they are meant to prompt some thought before jumping in, so you have a higher probability of success. Don’t take on too much at once; do things in phases. As always, feedback is welcome.

VMworld 2009 Recap – Clouds, Desktops and Mobility

VMworld 2009 was a great conference in spite of some bumps such as scheduling and lab issues. The social media aspect made the conference even better by allowing people to see what was going on where they weren’t.

The VMware datacenter probably got the most visual attention. A whole row of Cisco UCS nodes, CLARiiON and V-Max arrays, HP blades, NetApp filers and other assorted infrastructure made it feel like you were walking into a blast furnace as you came down the escalators.

VMware formally announced its vCloud initiative with over 1,000 service providers participating. vCloud Express was launched, which provides an easily accessible platform for users to get started in the cloud and pay with a credit card. AT&T, Savvis, Verizon and Terremark spoke at the press and analyst event about their enterprise offerings and how things are taking shape. It was obvious that the service providers are still figuring things out. VMware also talked about its vCloud API, which allows ISVs to develop software that hooks into clouds powered by VMware. That said, specifics on vCloud futures were thin, despite the fact that VMware is known for talking futures early.

SpringSource also demoed its software and platform. It got a lukewarm, if not cold, reception from many at VMworld, but that’s because most VMware admins aren’t the right audience. The people who understood SpringSource were excited and thought it was a good acquisition for VMware.

The desktop was once again a very big focus. Dr. Stephen Herrod previewed virtual desktop mobility by moving a VDI session from one device to another. He also showed an Android app running on a Windows Mobile phone. Wyse had a large presence on the show floor, as did many other desktop virtualization vendors. It was clear that desktop virtualization is about more than shoving a desktop into a virtual machine; it is about the operational aspects, with things like profiles and persona awareness.

You can hear more commentary about VMworld 2009 on the Infosmack podcast led by Greg Knieriemen, with Marc Farley of 3PAR, Devang Panchigar of CDS and me as guests.

Things I want out of VMworld 2009

Cloud Strategy - VMware’s cloud strategy is still maturing and growing.  We have been hearing from Maritz and others that technology is built into vSphere and other products to leverage it as a cloud platform.  I expect we’ll hear more about tangible developments with the cloud providers out there today. It will be interesting to see whether VMware continues to build itself as a cloud platform or shifts gears and starts chasing Amazon’s AWS and Microsoft’s Azure platforms.  Though VMware has invested in Terremark, without some good explanation it would be detrimental for VMware to try to be the provider.  I suspect the folks at VMware know this and have no desire to be the provider; instead, they need to seed the field.

Enhanced infrastructure awareness - VMware and its network and storage partners need to provide more visibility to each other.  Not only do people need to be able to see what is going on under the covers (storage and network) with things like AppSpeed, they also need to be able to make intelligent decisions about how to fix problems.  It should be easy for an admin to see which LUN on the storage side has too many VMs without having to interpret naa392dxxxxxxxxxxxxxxxx identifiers.  This is starting to happen but still has a ways to go.
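As a rough sketch of the kind of correlation I mean, something like the following (written against the open-source pyVmomi vSphere bindings, with placeholder host and credentials and certificate handling omitted) would list each VMFS datastore by its friendly name, its backing naa device IDs and how many VMs sit on it, so nobody has to eyeball raw naa strings:

```python
# Rough sketch using the pyVmomi vSphere SDK: show each VMFS datastore's
# friendly name, its backing naa.* device IDs, and how many VMs sit on it.
# Host/credentials are placeholders; certificate handling is omitted.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="readonly", pwd="secret")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            devices = [extent.diskName for extent in ds.info.vmfs.extent]
        else:
            devices = []
        print(f"{ds.summary.name}: {len(ds.vm)} VMs on {', '.join(devices) or 'n/a'}")
finally:
    Disconnect(si)
```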

Desktop Virtualization – The improvements from VDI (2.0) to View (3.1), and continuing into View 4.0, have been good, but there is still so much work to do.  When I meet with customers, the challenges they face aren’t just getting applications and desktops virtualized from a technical perspective.  We need more flexibility to determine not only what desktop a user receives but also what kind of desktop a physical location receives.  We need application persistence tied to a physical endpoint.  This is counterintuitive to what virtualizing desktops is all about, but it all drives back to the persona of both the person and the endpoint.  Entrigue Systems, which is being acquired by Liquidware Labs, and other ISVs are doing this, but it needs to be seamless and well supported.

If you have anything you want to know or news to share with me about some of these things, let me know.

VMware’s cloud strategy

It’s obvious that VMware and virtualization are playing a huge role in cloud computing from the Infrastructure as a Service (IaaS) perspective.  VMware’s lead cloud architect, Mike Dipetrillo, was gracious enough to provide some great insight into VMware’s strategy.

IaaS is where most managed service providers are focusing their cloud offerings today.  We discussed how tricky it is for a service provider to develop a self-service infrastructure-as-a-service offering.  The folks who have done infrastructure understand the glue that is needed for provisioning and for giving users control over their own environments.  Giving users the ability to turn the knobs that control things takes a lot of work.  Today that is VMware Lab Manager under the covers with some special glue for provisioning.  Lab Manager has some challenges because it wasn’t designed for a multi-tenant environment; over the next year you’ll start to see products come out that address this for service providers.  VMware is also heavily focused on delivering more APIs that allow companies like RightScale to hook into VMware to provision and manage virtual machines.  VMware categorizes all of its cloud computing initiatives under the vCloud umbrella, which will include all of its cloud-focused products and APIs.  The roadmap has developed rapidly over the last year or two.

Helping small, medium and large businesses build out their internal clouds has been a big focus as well.  It needs to be easy for people to move things between the internal and external cloud.  One of the questions I get asked the most is “How can I move my applications out into the cloud?”  At the moment, it is a lot cheaper and easier to virtualize your existing software stack than to rewrite things to fit on exotic platform-as-a-service software.

Another thing we discussed is how enterprise companies don’t really like “elastic” or “usage-based” billing models.  They actually prefer allocation-based models, where they are billed in a consistent fashion.  I had never given this a lot of thought, but it makes sense: a lot of companies do business the way they do because it works well for them.
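To see why, compare the two models on paper. The rates and usage pattern below are invented purely to show the shape of each bill.

```python
# Invented rates and usage, purely to show the shape of each bill.
ALLOCATION_RATE = 0.06   # $/GB of RAM reserved per hour (assumed)
USAGE_RATE = 0.12        # $/GB of RAM actually consumed per hour (assumed)

def allocation_bill(reserved_gb, hours):
    """Flat and predictable: pay for what was reserved, used or not."""
    return reserved_gb * ALLOCATION_RATE * hours

def usage_bill(hourly_usage_gb):
    """Elastic: pay for what was consumed each hour -- often cheaper,
    but the total moves around and is harder to budget for."""
    return sum(gb * USAGE_RATE for gb in hourly_usage_gb)

hours = 720                            # roughly one month
spiky_month = [8] * 600 + [64] * 120   # mostly idle, with one busy week

print("allocation-based:", allocation_bill(64, hours))   # same every month
print("usage-based:     ", usage_bill(spiky_month))      # varies month to month
```

Even when the usage-based number comes out lower, the allocation-based bill is the one a finance team can forecast, which is exactly the preference Mike described.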

Today VMware has over 500 service providers who either have a cloud offering today or are in the process of getting one off the ground.  The team at VMware is small compared to the rest of the organization, but things have been progressing fast.

We discussed competition briefly, and both agreed that things are changing rapidly.  It was obvious that VMware is aggressively ramping up its vCloud offering and the internal structure to go along with it.  The benefit they have as a company is that they’re able to leverage so much of their existing products and IP.

Many thanks to Mike for sparing some time to discuss VMware’s vCloud initiative.