
Tech Field Day Boston – Being on the vendor side

I got the opportunity to present on VCE for Cisco to the Tech Field Day delegates a couple of weeks ago.  It was eye-opening to be on the other side of the fence.  Many thanks to Rob Callory and Stuart Miniman from EMC and Omar Sultan from Cisco for organizing things for both companies.  Also, none of this would’ve been possible without Stephen Foskett, who is the father of the mixed-vendor Tech Field Day events.  It was awesome meeting some new faces like Scott D. Lowe (TechRepublic & SearchCIO), David Davis (TrainSignal), Simon Long (The SLOG and fellow GestaltIT contributor), Matt Simmons (The Standalone Sysadmin) and Gabrie van Zanten (Gabe’s Virtual World).  It was also great seeing the usual suspects (Devang, Bas, Simon Seagrave, Claire, Greg Knieriemen, Robin, Carlo, John, Greg Ferro and Ed).

Update: I don’t know how it happened but I left the venerable Jason Boche (Boche.net)  out of the original version of this.  He’s always been one of the usual suspects although this was his first Tech Field Day.

In the past as a delegate, I was focused on ingesting and analyzing information from the vendors.  As a presenter, your responsibility is to *effectively communicate* a position to a small group.  You might be thinking, “So what, I do that every day.”  The difference is that every delegate comes with their own leanings, and they all interact with each other.  We’d like to think that people check their religion at the door, but that simply isn’t the case.  If you’re presenting something controversial, new or unclear, you must be prepared.  I didn’t want people to treat me differently from other vendors just because I had been a delegate before, and to their credit, they didn’t.

The discussion was lively, with a lot of very insightful questions.  The delegates this time around were more diverse than before, with backgrounds ranging from small companies to large enterprises and disciplines across the infrastructure spectrum.  As everyone participated, it was clear that they were trying to relate the message to what they do every day.  After I was done presenting, I found out that Simon Seagrave had been live-streaming the event and Twitter was lit up like a Christmas tree with people going back and forth.

Some suggestions for current and future attendees:

  1. Try as hard as you can to keep an open mind
  2. Be critical
  3. Don’t beat the proverbial dead horse
  4. Give vendors constructive criticism; it’s better for everyone

Some suggestions for current and future sponsors:

  1. Bring your fireproof suit; sometimes the discussions get quite warm
  2. Take the criticism and do something constructive with it
  3. Keep the pace moving, and be prepared to present in half or a quarter of the time you planned for.  The schedule is tight and it’s common for things to get pushed back quite a bit.

One universal truth I discovered is this: “Just because you work for a vendor doesn’t mean you’re biased, and just because you don’t work for a vendor doesn’t mean you’re unbiased.”

Some EMC folks asked me whether we should do these events in the future, and I’d say there is no doubt we get a ton of value from them.  The only thing I’d like to see done differently is to have some more flexibility built into the schedule so vendors get all the time they paid for.  Towards the end of the last session I think everyone was getting worn down, and I give tons of credit to Stephen and the attendees for sticking with it.

Why Proof of Concept projects fail

This may seem obvious to some readers but I haven’t seen a good list of considerations to help ensure a successful PoC project.  Here are some training wheels to make sure you don’t crash and burn.

  1. Lack of requirements – All key stakeholders involved should sign off on a detailed requirements document.  It doesn’t have to be in blood; an email response with a “yes” will suffice unless there are contractual obligations.  I hear, “We just want to see if it will work” all the time.  When you’re doing a PoC, be as specific as possible in defining “it.”  Unless a solution is completely unbaked, think about how you would envision it working in your environment.  Talk to people and ask them how it works in their environment as you come up with requirements.  Be as transparent as possible with the vendor so there is no hidden agenda or confusion.
  2. Lack of a leader – Designate a lead.  I’d be rich if I had a nickel for every PoC that failed because of a lack of a leader.  You need someone to keep track of the requirements, vendor involvement and testing.  PoCs easily get lost in the fray because there are no obvious penalties for a customer who doesn’t see one through.  Conditional POs instead of freebie PoCs are becoming more common.
  3. Lack of experience with the product – Let the vendor show you how a product was meant to be used.  If you’ve never touched a product before, why would you want to run a PoC all by yourself?  Seriously, your parents had to teach you how to velcro your shoes.  That leads me to the next point, which usually comes up right after someone says, “We couldn’t get it to work this way so we tried X, Y and Z.”
  4. No documentation – Document your setup and any changes as they’re made.  There are a ton of variables in your environment; document them.  I can’t stress this enough.  First make sure you deployed the product according to best practices.  If you need exceptions, run them by someone who knows what they’re doing and note them.
  5. Not asking for help – Call for help and allow time for it.  Yes, you might anger someone, but it’s worth calling in for help before declaring a project a failure.  I can’t promise that a white knight will come in and save the day as the deadline for your PoC approaches, but call for help anyway.

When I come in to help out with PoCs that are in trouble at the 11th hour, two things are usually immediately apparent.  First, there was no leader, or everyone went off in their own direction without accountability.  Second, there was a lack of familiarity with the product.  This list isn’t for my benefit; it’s all to help others have successful PoCs.  If you have suggestions, send them in!

The Five Rules of Tech Field Day Club

The GestaltIT Tech Field Day event just wrapped up and it was a very interesting event. Stephen Foskett and Claire Chaplais did a phenomenal job of keeping the wheels on the bus. I realized that the attendees were just as critical to making the event a success as the vendors, if not more so. I learned so much from others who either knew more or had different perspectives. The genesis of this list comes from the question I asked myself and other attendees constantly, which was, “What can we do to get deeper than a standard technical presentation or trade show booth demo?”

1. Ask yourself what you want out of it – Remember, some of your attendees have never heard of you but many know some of your pitch already. Figure out what you want to get out of the event ahead of time and ask yourself if attendees will walk away talking about your presentations the way you wanted them to.

2. Cover the basics and then get into the weeds – We love the weeds. Some of us do anyway. It shows us you know what you’re talking about. It separates you from your competition. Tell us your strengths and weaknesses. We are more effective when we are armed with more information.

3. Bring your best people – You want to bring your best and brightest because there will be people (like me) who will grind into the details. 3Par and Ocarina brought their rockstars and it was apparent to each and every attendee. They knew their stuff and didn’t push questions aside.

4. Think and re-think your demo or hands-on labs – Some of the ones we experienced were great but others weren’t effective. Demos and labs that cover the basics *aren’t* always the best. People who are following the event will say, “I could’ve done that. Show me something new and different.” Remember, some of us love the CLI and others couldn’t care less. Make sure your activity will keep people engaged. Data Robotics did this very well, but a big reason is because their technology is *different*. They understood how to deliver an experience much like Steve Jobs and Apple do. Their CEO even did a whiteboard of their technology and he got into the weeds.

5. There is never enough time – Almost all the vendors ran a bit over schedule. Don’t try to cram in more than will fit, or get a bigger timeslot. Many vendors had this happen but kudos to them for rolling right through.

Remember that you will get both good and bad feedback, but being in tune with your audience is what matters. The rules above are not a guaranteed recipe for success but they’ll give you a good start. They are universal and apply whenever you are pitching anything, not just during a Tech Field Day event. Stephen will be posting the videos of the sessions; watch them and learn from what worked and what didn’t.

Data Dedupe comes to ZFS

It’s official… Data deduplication has been added to ZFS (read the link if you’re new to data deduplication). Hats off to Jeff Bonwick and Bill Moore who did a ton of the work, in addition to Mark Maybee, Matt Ahrens, Adam Leventhal, George Wilson and the entire ZFS team.  The implementation is a synchronous, block-level one which deduplicates data immediately as it is written.  This is analogous to how Data Domain does it in their dedupe appliances.

What’s interesting about this is that dedupe will now be available for *free* unless Oracle does something stupid.  Sun’s implementation is complementary to the already-existing filesystem compression.  I’m not sure how much of an issue this is yet, but the current iteration cannot take advantage of SHA256 acceleration in the SPARC Niagara 2 CPUs; eventually we should see hardware acceleration implemented.

When will it be available? It should be available in the OpenSolaris dev branches in the next couple of weeks, as code was just committed to be part of snv_128.  General availability in Solaris 10 will take a bit longer, until the next update ships.

For OpenSolaris, you change your repository and switch to the development branches – should be available to public in about 3-3.5 weeks time.  Plenty of instructions on how to do this on the net and in this list.  — James Lever on the zfs-discuss mailing list

How do I use it? If you haven’t built an OpenSolaris box before, you should start by looking at this great blog post here.  I wouldn’t get things rolling until dedupe is in the public release tree.

Ah, finally, the part you’ve really been waiting for.

If you have a storage pool named ‘tank’ and you want to use dedup, just type this:

zfs set dedup=on tank

That’s it.

Like all zfs properties, the ‘dedup’ property follows the usual rules for ZFS dataset property inheritance. Thus, even though deduplication has pool-wide scope, you can opt in or opt out on a per-dataset basis.

— Jeff Bonwick http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup#comments
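
A quick sketch of what that per-dataset control looks like (the child dataset name here is hypothetical):

zfs set dedup=off tank/scratch   # opt one dataset back out while the rest of 'tank' stays deduped
zfs get -r dedup tank            # show how the property was inherited across the pool's datasets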

What does this mean to me? Depends.  If you like to tinker, you can build your own NAS or iSCSI server with dedupe *and* compression turned on.  Modern CPUs keep increasing in speed and can handle this.  This is huge.  Now, should you abandon considering commercial dedupe appliances that are shipping today?  Not if you want a solution for production, as this won’t be officially supported until it’s rolled into the next Solaris update.  For commercial dedupe technology vendors, this is another mark on the scorecard for the commoditization of dedupe.
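
For the tinkerers, here’s a minimal sketch of that kind of setup (device and dataset names are made up; adjust for your own hardware):

zpool create tank mirror c0t1d0 c0t2d0   # hypothetical disks for a mirrored pool
zfs set compression=on tank              # the existing compression feature
zfs set dedup=on tank                    # the new dedup feature on top of it
zfs create -V 100G tank/iscsivol         # a zvol you could then export over iSCSI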

What things do I need to be aware of? The bugs need to be worked out of this early on, so apply standard caution.  READ JEFF’S BLOG POST FIRST!!! There is a verification feature; use it if you’re either worried about your data or are using fletcher4 as a hashing algorithm to speed up dedupe performance (zfs set dedup=verify tank or zfs set dedup=fletcher4,verify tank).
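
Spelled out on their own lines, plus a check on the results (I’m assuming the ratio is exposed as a pool property called dedupratio; verify against the docs for your build):

zfs set dedup=verify tank             # sha256 checksums plus a byte-for-byte verify on every match
zfs set dedup=fletcher4,verify tank   # weaker but faster hash, so verification is mandatory
zpool get dedupratio tank             # see how much duplicate data is being collapsed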

How do I stay up to date on ZFS in general? Subscribe to the zfs-discuss mailing list (also available in forum format).  It can be high volume but it is worth it if you want to stay on top of all things ZFS.

http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html

Can SSDs + SATA replace FC/SAS, and when?

Simon Seagrave (http://www.techhead.co.uk/) asked, “How long do you think it’ll be before SSD will overtake SCSI as primary tier 1 SAN storage? Giving a new SSD and SATA tiered mix.”

Short answer: Yes, it will be SSDs + SAS and within 2 years.

The real question is when 15K RPM high-speed drives will be replaced with SSDs and 7.2K RPM high-capacity drives.  SAS will probably end up replacing both FC and SATA in the majority of mid-range storage, but the jury is still out on whether this will happen in high-end arrays.

What are we talking about here? From an array design perspective, frequently accessed blocks of data should be served from RAM and SSD.  SSDs have a much faster response time (microseconds vs. milliseconds) than traditional hard disk drives, which enables this possibility.  The Sun Unified Storage platform was one of the first to do this all in one array with its Hybrid Storage Pool design.  There are also some new appliances coming out like the XcelaSAN from Dataram.  NetApp offers read acceleration through their PAM (Performance Acceleration Module) cards.  EMC will also start the transition at the LUN level with their implementation of FAST, as described by Devang Panchigar.  This list is not meant to be comprehensive as I’m sure I have left out vendors and their roadmaps.

When will this happen? I expect the majority of storage vendors to implement this type of solution at the block level within the next 2 years, based on current development cycles.  It will take some longer than others because of their architectures, and it will be a key differentiating feature between vendors.  Beyond basic tiering between SSDs and high-capacity disks, we should see more advanced algorithms for deciding what data to move and where to move it.  I’ve followed the journey of Sun’s ZFS on the zfs-discuss mailing list over the last year and have noticed that peculiar performance behaviors (like write pulsing) have required fine-tuning.

Tidbits: EMC Unified Storage and Infosmack Podcast – Oracle & VMware

EMC’s Unified Storage and Tiering blogpost @ GestaltIT.com:

Over the last week or two Devang Panchigar (@storagenerve) and I have been collaborating on a post at GestaltIT.com regarding EMC’s storage strategy.  If you have read it already, great, but if you haven’t yet there are a couple of things to remember while reading.  First, Devang and I do not have any insight into EMC beyond their public statements and ideas our peers have discussed.  Second, we’re not endorsing their potential strategy over anyone else’s, but merely stating that it is a step in the right direction.

Hopefully over the next couple of months we’ll continue the posts about other vendors and next-gen storage infrastructure which should serve to enlighten people on where companies appear to be going.

So again if you haven’t read our post, take a look and give us some feedback.

Oracle & VMware and Dell/Perot:

This week I was invited yet again to participate in the StorageMonkeys Infosmack Podcast #21 by the gracious Greg Knieriemen and Marc Farley.  Steve Kenniston, Storage Technologist at EMC and blogger at BackupAndBeyond.com, was also a guest on the show.  I always enjoy talking with Steve because of his experience in the industry and how knowledgeable he is from multiple perspectives.  As always, Greg and Marc do a great job of getting good discussion going.  We discussed the Oracle and VMware spat going on as well as the Dell/Perot acquisition.  If you haven’t had a chance to go listen, go do so now!  Oracle seems insistent on locking people in and Dell is going after the services business with Perot.

Why desktop virtualization projects fail

Desktop virtualization is one of the hottest topics of interest and a major initiative at many companies. Touted benefits include lower operating costs, simpler management and desktop mobility. Below we’ll explore the barriers to wide-scale adoption of desktop virtualization and some approaches for dealing with them. It’s not a fit for everyone in a company, but it can be for many.

Challenge #1: Assuming desktop virtualization makes sense because thin clients are cheap – Many people assume that virtualizing desktops is going to be dramatically cheaper because thin clients can be found for approximately $300-400 whereas a PC can cost $500-$1200.

Tip: Client costs are only part of the picture. Desktop virtualization can reduce capital expenditures, but do not expect that to be the case in the first year. Building the infrastructure is expensive (storage, servers, licenses, etc.) and the total may come out about the same in the first year. Think about using existing PCs as clients instead of replacing them with thin clients. Thin clients are cheaper than PCs, but the reduction in hardware costs may not be seen for a couple of years because the infrastructure has to be built first. More importantly, the impact on operational expenses will be seen immediately, and that is where the true cost savings can be found.

Challenge #2: Infrastructure people not understanding the desktop people – Server ops are not the same as desktop ops. Users have different behaviors and expectations about how their desktops will function. It is easy to virtualize a Windows desktop, but delivering what the user expects is not easy.

Tip: Understand your users and identify your use cases – Learn what apps users need to use, how they use them, where they use them and what they expect. Do your users need different apps depending on their physical location? Do they need dual monitors or multimedia acceleration? How should you deliver user profiles? Is printing going to be an issue? Spend a bit of time identifying and categorizing your use cases so you can design your solution around them.

Challenge #3: Bad desktop practices follow into the virtual world – Refreshing desktops will not be any easier if you allow users to install their own applications or store data with a desktop image. Not ensuring good security policies (screensaver locking and passwords) may leave desktops unprotected as users go from office to kiosk.

Tip: Identify unhealthy desktop practices and change what is feasible (in phases) – Start thinking about what makes managing desktops difficult today. If users don’t absolutely need to install their own apps, set policies that stop that behavior. Storage space, desktop refreshes and manageability will all be improved. If security is lax, improve it by doing basic things like auto-locking displays so someone can’t hijack a desktop left logged in.

Challenge #4: Not understanding Microsoft licensing – Microsoft bars OEM licenses from being transferred, and they also require VECD (Virtual Enterprise Centralized Desktop) licensing for all Windows desktops that are virtualized. There are additional per-seat licenses from VMware and other desktop virtualization vendors.

Tip: Understand the licensing before starting a pilot – At the time of this writing, VECD is a device-based subscription: $23/device/year for devices covered under SA (Software Assurance), or $110/device/year for devices that aren’t. An example from Microsoft’s website:

For example, a company with 10 thin clients and 10 laptops (not covered under SA) accessing a VDI environment requires a total of 20 Windows VECD licenses (20 x $110/year). However, if the same company has 10 thin clients and 10 laptops covered under SA, it will require 10 VECD licenses (10 x $110/year) and 10 VECD for SA licenses (10 x $23/year).
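
Working that example out at the list prices above: 20 x $110 = $2,200 per year for the first company, versus (10 x $110) + (10 x $23) = $1,330 per year for the company with SA coverage on its laptops.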

Challenge #5: Poor virtual desktop performance – The two biggest challenges after the ops piece are sizing and endpoint selection. The desktops take a long time to boot up and Flash video is choppy. There are new limitations in a virtual environment that were nonexistent when everyone had their own PC.

Tip: Work with a partner who can help size and architect a system – This is critical because of all the variables involved. Design is dictated by many of the answers to challenge #2. Also, endpoints (thin clients, PCs, web-based access) are all unique in the user experience they deliver. If YouTube video is important, get an endpoint that specifically accelerates Adobe Flash. If users are far away and network latency is high, either deploy WAN accelerators from companies like Riverbed or Cisco, or use thin clients like Sun Rays from Sun Microsystems.

Desktop virtualization is still rapidly changing. The challenges and tips above are not an exhaustive list; they are meant to prompt some thought before jumping in if you want a higher probability of success. Don’t take on too much at once; do things in phases. As always, feedback is welcome.

VMworld 2009 Recap – Clouds, Desktops and Mobility

VMworld 2009 was a great conference in spite of some bumps such as scheduling and lab issues. The social media aspect made the conference even better by allowing people to see what was going on where they weren’t.

The VMware datacenter probably got the most visual attention. A whole row of Cisco UCS nodes, CLARiiON and V-Max arrays, HP blades, NetApp filers and other assorted infrastructure made it feel like you were walking into a blast furnace as you came down the escalators.

VMware formally announced their vCloud initiative with over 1000 service providers participating. vCloud Express was launched, which provides an easily accessible platform for users to get started in the cloud and pay with a credit card. AT&T, Savvis, Verizon and Terremark spoke at the press and analyst event about their enterprise offerings and how things are taking shape. It was obvious that the service providers are still figuring things out. VMware also talked about their vCloud API, which allows ISVs to develop software that hooks into the clouds powered by VMware. That said, specifics on vCloud futures were thin despite the fact that VMware is known for talking futures early.

SpringSource also demoed their software and platform. There was a lukewarm if not cold reception from many at VMworld, but that’s because most VMware admins aren’t the right audience. The people who understood SpringSource were excited and thought it was a good acquisition for VMware.

The desktop was once again a very big focus. Dr. Stephen Herrod previewed virtual desktop mobility by moving a VDI session from one device to another. He also showed an Android app running on a Windows Mobile phone. Wyse had a large presence on the show floor, as did many other desktop virtualization solutions. It was clear that desktop virtualization is about more than shoving a desktop into a virtual machine; it’s just as much about the operations aspect, with things like profiles and persona awareness.

You can hear more commentary about VMworld 2009 on the Infosmack podcast led by Greg Knieriemen and Marc Farley (3Par), with myself and Devang Panchigar of CDS as guests.

VMware’s cloud strategy

It’s obvious that VMware and virtualization are playing a huge role in cloud computing from the perspective of Infrastructure as a Service (IaaS).  VMware’s lead Cloud Architect, Mike Dipetrillo, was gracious enough to provide some great insight into VMware’s strategy.

IaaS is where most managed service providers are focusing today for their cloud offerings.  We discussed how developing a self-service IaaS offering as a service provider is tricky.  The folks who have done infrastructure understand the glue that is needed for provisioning and allowing users control over their own environment.  Giving users the ability to turn the knobs that control things takes a lot of work.  Today that is VMware Lab Manager under the covers with some special glue for provisioning.  Lab Manager has some challenges today because it wasn’t designed for a multi-tenant environment.  Over the next year you’ll start to see products come out that address this for service providers.  VMware is also heavily focused on delivering more APIs, which allow companies like RightScale to hook into VMware to provision and manage virtual machines.  VMware categorizes all of its cloud computing initiatives under the vCloud umbrella.  This will include all of their cloud-focused products and APIs.  The roadmap has developed rapidly over the last year or two.

Helping small, medium and large businesses build out their internal clouds has been a big focus as well.  It needs to be easy for people to move things between the internal and external cloud.  One of the questions I get asked the most is, “How can I move my applications out into the cloud?”  At the moment, it is a lot cheaper and easier to virtualize your existing software stack than to rewrite things to fit on exotic platform-as-a-service software.

Another thing we discussed is how enterprise companies don’t really like “elastic” or “usage-based” billing models.  They actually prefer allocation-based models where they are billed in a consistent fashion.  I’ve never given a lot of thought to this, but it makes sense.  A lot of companies do business the way they do because it works well for them.

Today VMware has over 500 service providers who are either in the process of getting a cloud offering off the ground or have one today.  The team at VMware is small when compared to the rest of the organization, but things have been progressing fast.

We discussed competition briefly, but both agreed that things are changing rapidly.  It was obvious that VMware is aggressively ramping up its vCloud offering and the internal structure to go along with it.  The benefit they have as a company is that they’re able to leverage so many of their existing products and IP.

Many thanks to Mike for sparing some time to discuss VMware’s vCloud initiative.

Storage Layout – Why care?

Why should you care about how you lay your storage out?  Maybe because it’s your job or because it’s the right thing to do.  Perhaps it’s because your application performance isn’t acceptable or your boss won’t let you buy shelves full of 15K RPM disks anymore.  It’s not uncommon for pure frustration to stream out of a CIO’s mouth regarding how expensive enterprise storage is and that they’re “sick of throwing fibre channel disks at a problem”.

Even if your array does this “automatically” or you’ve got performance to spare, here are some things to keep in mind as you scale:

1. Analytics tools are your best friend – If you have no instrumentation, you’re flying blind.  Your storage should allow you to see what’s going on underneath the covers so you can track down performance issues.  Third-party tools to do this are available, but make sure you buy the analytics tools when you purchase an array.  We want to know if latency is horrible or if IOPS are high but throughput is low.

2. Workloads on the same RAID groups should be complementary (caveat, see #3) – If you’ve got SQL and Exchange, try putting SQL log LUNs on the Exchange data LUN RAID group and Exchange log LUNs on the SQL data RAID group.  Don’t put two of the same type of workload in the same RAID group and expect harmony (see the sketch after this list).

3. Pick an array that has some sort of QoS – If you’ve got the space and want to put the video storage on the same RAID group as SQL logs, do it but make sure you can put some restrictions on video if SQL should get better performance.

4. Monitor performance periodically and move LUNs to different tiers – If you’re using a ton of the expensive fibre channel disk space for an app that doesn’t need the performance, move it to more dense fibre channel or SATA disks.
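
As a rough sketch of the cross-pairing from point #2 (the groupings are just illustrative):

RAID group A: Exchange data LUNs + SQL log LUNs
RAID group B: SQL data LUNs + Exchange log LUNs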

If you have a finite budget and need to be more mindful of storage costs, this will all start to mean something.  If you’re lost and don’t know how to begin monitoring then ask a storage systems engineer for help or call your SAN vendor’s support line.