Category: storage

Software and hardware vendors: Hand over the keys

(Image via Brenda Clarke on Flickr)

Everyone talks about the consumerization of IT and how end users are demanding enterprise support for things like iPhones, iPads and plenty of other technology.  People want to be able to consume IT as a resource on any device or platform they have.  The same dynamic is playing out between enterprise hardware and software vendors and service providers.

I’ve met with quite a few of them recently and they usually fall into two camps: those who have invested in developing their own intellectual property, and those who leverage economies of scale and rely on vendors to supply the IP.  There are exceptions to this rule, of course.  The first camp is what I want to focus on.

Here is what they want:

  • Align with my business and go-to-market strategy
  • Don’t wrap your offering in SP-focused marketing materials if it can’t honor the promises
  • Have hardware and software with open interfaces
  • Have well-documented interfaces
  • Be agile in the adoption of new interfaces

These conversations tend to revolve around the self-service portals many of the pure-play service providers have developed.  They don’t want a canned, out-of-the-box offering; they want to be able to provision and orchestrate the compute, network and storage layers through interfaces like SOAP and REST.  When you develop these interfaces and hand them over to the developers, strange things happen.  Nick “@lynxbat” Weaver exemplifies this.  He isn’t a developer by day, but give him some APIs and he can do crazy things, like writing a vSphere plugin on a plane that allows VM teleportation with our (EMC’s) VPLEX product.
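To make that concrete, a hypothetical REST call to carve out a LUN could be as simple as the sketch below.  The endpoint, credentials and payload fields are made up for illustration; every vendor’s API will differ, which is exactly why open, well-documented interfaces matter.

curl -u admin:password -X POST https://array.example.com/api/luns \
  -H "Content-Type: application/json" \
  -d '{"name": "sql01_data", "pool": "pool0", "size_gb": 500}'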

Now I’m not ignoring the need for software development lifecycle management and version control.  Those are all important.  The thing is that the “neat stuff” we and our customers do can only get better if we open up with good APIs that strike a happy balance between standardization and cutting-edge agility.

Why do I beat this drum?  Because it’s a win for enterprise hardware/software vendors and our customers.  What is most exciting is that I’ve beaten this drum inside of EMC, not as a VP of strategic direction but as a joe-schmoe vSpecialist.  What has come out of it?  A lot of people have listened and it is a huge priority.  I’ve said it before on Twitter, but one of the best things about working at EMC is that the organization is huge but very flat.  The reality is that, with the help of others, I’ve been able to nudge an aircraft carrier and start to change its course.  This isn’t a post about why I love working at EMC, but I think it’s a darn compelling reason.  Our work has just begun…

Data Dedupe comes to ZFS

It’s official… Data deduplication has been added to ZFS (read the link if you’re new to data deduplication).  Hats off to Jeff Bonwick and Bill Moore, who did a ton of the work, in addition to Mark Maybee, Matt Ahrens, Adam Leventhal, George Wilson and the entire ZFS team.  The implementation is a synchronous block-level one which deduplicates data immediately as it is written.  This is analogous to how Data Domain does it in its dedupe appliances.

What’s interesting is that dedupe will now be available for *free*, unless Oracle does something stupid.  Sun’s implementation is complementary to the already-existing filesystem compression.  I’m not sure how much of an issue this is yet, but the current iteration cannot take advantage of SHA256 acceleration in the SPARC Niagara 2 CPUs; eventually we should see hardware acceleration implemented.
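As a rough sketch of what the tinkering looks like (assuming a test pool named ‘tank’ on an snv_128 or later build), you can turn on both features and then check how well each is doing:

# enable compression and dedup pool-wide (datasets inherit both)
zfs set compression=on tank
zfs set dedup=on tank
# check the resulting ratios
zfs get compressratio tank
zpool get dedupratio tank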

When will it be available? It should be available in the OpenSolaris dev branches in the next couple of weeks, as the code was just committed to be part of snv_128.  General availability in Solaris 10 will take a bit longer, until the next update ships.

For OpenSolaris, you change your repository and switch to the development branches – should be available to public in about 3-3.5 weeks time.  Plenty of instructions on how to do this on the net and in this list.  — James Lever on the zfs-discuss mailing list
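For reference, the usual way to switch to the dev branches at the time of writing looks roughly like this (assuming the default opensolaris.org publisher; these are the standard IPS steps, so double-check them against the instructions James mentions before running them):

pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pfexec pkg image-update
# then reboot into the new boot environment once the update finishes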

How do I use it? If you haven’t built an OpenSolaris box before, you should start by looking at this great blog post here.  I wouldn’t get things rolling until dedupe is in the public release tree.

Ah, finally, the part you’ve really been waiting for.

If you have a storage pool named ‘tank’ and you want to use dedup, just type this:

zfs set dedup=on tank

That’s it.

Like all zfs properties, the ‘dedup’ property follows the usual rules for ZFS dataset property inheritance. Thus, even though deduplication has pool-wide scope, you can opt in or opt out on a per-dataset basis.

– Jeff Bonwick http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup#comments
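In practice, that means you could leave dedup on for the whole pool but turn it off for a dataset where it doesn’t pay off (the dataset name below is just an example), then confirm what each dataset ends up with:

zfs set dedup=off tank/scratch
zfs get dedup tank tank/scratch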

What does this mean to me? Depends.  For people who like to tinker, you can build your own NAS or iSCSI server with dedupe *and* compression turned on.  Modern CPUs keep increasing in speed and can handle this.  This is huge.  Now, should you abandon considering commercial dedupe appliances that are shipping today?  Not if you want a solution for production as this won’t be officially supported until it’s rolled into the next Solaris update.  For commercial dedupe technology vendors, this is another mark on the scorecard for the commoditization of dedupe.

What things do I need to be aware of? The bugs need to be worked out of this early on, so apply standard caution.  READ JEFF’S BLOG POST FIRST!!! There is a verification feature; use it if you’re worried about your data or if you’re using fletcher4 as the hashing algorithm to speed up dedupe performance (commands below).
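The two variants come straight from Jeff’s post: the first keeps the default SHA256 checksum but verifies every duplicate byte-for-byte before discarding it, and the second swaps in the faster (but collision-prone) fletcher4 hash, which is why verification is mandatory there.

zfs set dedup=verify tank
zfs set dedup=fletcher4,verify tank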

How do I stay up to date on ZFS in general? Subscribe to the zfs-discuss mailing list (also available in forum format).  It can be high volume, but it is worth it if you want to stay on top of all things ZFS.

http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html

Tidbits: EMC Unified Storage and Infosmack Podcast – Oracle & VMware

EMC’s Unified Storage and Tiering blogpost @ GestaltIT.com:

Over the last week or two, Devang Panchigar (@storagenerve) and I have been collaborating on a post at GestaltIT.com regarding EMC’s storage strategy.  If you have read it already, great, but if you haven’t yet, there are a couple of things to remember while reading.  First, Devang and I do not have any insight into EMC beyond their public statements and the ideas our peers have discussed.  Second, we’re not endorsing their potential strategy over anyone else’s, but merely stating that it is a step in the right direction.

Hopefully over the next couple of months we’ll continue the posts about other vendors and next-gen storage infrastructure, which should serve to enlighten people on where companies appear to be going.

So again if you haven’t read our post, take a look and give us some feedback.

Oracle & VMware and Dell/Perot:

This week I was invited yet again to participate in the StorageMonkeys Infosmack Podcast #21 by the gracious Greg Knieriemen and Marc Farley.  Steve Kenniston, Storage Technologist at EMC and blogger at BackupAndBeyond.com, was also a guest on the show.  I always enjoy talking with Steve because of his experience in the industry and how knowledgeable he is from multiple perspectives.  As always, Greg and Marc do a great job of getting good discussion going.  We discussed the Oracle and VMware spat as well as the Dell/Perot acquisition.  If you haven’t had a chance to listen yet, go do so now!  Oracle seems insistent on locking people in, and Dell is going after the services business with Perot.

V-Max, Benchmarks and Social Media – EMC World Day 1 Recap

We’re off to a pretty good start here at EMC World.  I’ve gotten to meet up with many other Twitter folks at the ZDNet Blogger’s Lounge organized by @lendevanna.  Last year social media was just a small lunch tweetup, but this year we have the lounge and a lot more networking going on.  There have been so many good conversations, and social media is creeping into EMC more every day thanks to the hard work of @lendevanna, @stu, @davidkspencer, @gminks, @davegraham and many others.

I’ve spent quite a bit of time talking with my fellow GestaltIT.com bloggers (@storagenerve and @chrismevans) and others including the folks above and @basraayman, @storagezilla, @mike_fishman, @storageanarchy and many others.

Joe Tucci’s keynote was more or less the same as it has been in years past, but he spent a lot of time talking about cloud computing.  Their cloud view was the same as what VMware is pitching: lots of talk about private and public clouds, with federation between the two for traditional IT applications.

V-Max:

I attended a couple of V-Max sessions with @storagenerve on architecture and Enginuity.  The architecture really is built to scale, but I’m not sure who will be scaling beyond 8 or 16 engines.  What will probably be more common is V-Max engines able to federate data between systems instead of one large global system.  Federation will probably be a big focus for EMC because most customers aren’t running the same monolithic array for 5-10 years; they usually roll them over after 3-4 for technology and financial reasons.

We also saw a lot of numbers on IOPS and performance that I had never seen before, for both DMX and V-Max.  I’ve always had the perception that EMC doesn’t publish many numbers, if any, but either that’s changing in the name of openness, or V-Max simply has good numbers and there isn’t much ambiguity about what is faster than what.  The numbers we saw were more about architectural limits than benchmarking.

PowerPath/PowerPath VE:

PowerPath is getting some licensing changes: there will be an option to use a license server so licenses can be managed much more easily.  EMC did say that PowerPath VE for VMware will be released on May 21st.  As some admins may already know, multipathing for storage in VMware is manual and difficult today.  VMware vSphere Enterprise Plus will be required if you want to use PowerPath VE.  It will do multipathing across VMs, load balancing and EMC array optimization.

Storage Layout – Why care?

Why should you care about how you lay your storage out?  Maybe because it’s your job or because it’s the right thing to do.  Perhaps it’s because your application performance isn’t acceptable or your boss won’t let you buy shelves full of 15K RPM disks anymore.  It’s not uncommon to hear pure frustration from a CIO about how expensive enterprise storage is and that they’re “sick of throwing Fibre Channel disks at a problem”.

Even if your array does this “automatically” or you’ve got performance to spare, here are some things to keep in mind as you scale:

1. Analytics tools are your best friend – If you have no instrumentation, you’re flying blind.  Your storage should allow you to see what’s going on under the covers so you can track down performance issues.  Third-party tools to do this are available, but make sure you buy the analytics tools when you purchase an array.  We want to know if latency is horrible or if IOPS are high but throughput is low.

2. Workloads on the same RAID groups should be complementary (caveat: see #3) – If you’ve got SQL and Exchange, try putting SQL log LUNs on the Exchange data LUN RAID group and Exchange log LUNs on the SQL data RAID group.  Don’t put two of the same type of workload in the same RAID group and expect harmony.

3. Pick an array that has some sort of QoS – If you’ve got the space and want to put the video storage on the same RAID group as SQL logs, do it but make sure you can put some restrictions on video if SQL should get better performance.

4. Monitor performance periodically and move LUNs to different tiers – If you’re using a ton of the expensive fibre channel disk space for an app that doesn’t need the performance, move it to more dense fibre channel or SATA disks.

If you have a finite budget and need to be more mindful of storage costs, this will all start to mean something.  If you’re lost and don’t know how to begin monitoring then ask a storage systems engineer for help or call your SAN vendor’s support line.

sync or async replication?

The question of whether to do synchronous or asynchronous replication between storage arrays does not come up often, but I suspect it will as more and more people expand their business continuity infrastructure.  It’s an important question because it can have a serious impact on the production environment.

With EMC’s MirrorView/S (sync) there is a distance limitation of roughly 60km to 200km depending on what fibre optics you are using (shortwave/longwave vs. DWDM).  MirrorView/A (async) is more widely used over an IP WAN connection but can be used over fibre as well.

MirrorView/S -

Pros:

  • Synchronous – Exact copy of data on production
  • Little to no data lost

Cons:

  • Distance limited (60km using shortwave GBICs, longwave GBICs or optical extenders; 200km using dense wave division multiplexers)
  • WAN link more expensive (fibre vs. copper/IP) unless Fibre Channel over IP converters are used, and those are still a little expensive

MirrorView/A -

Pros:

  • Cheaper WAN link between sites (IP usually)
  • Writes to prod don’t have to wait on the mirror site to complete
  • Not distance limited like sync replication

Cons:

  • Data can be lost, depending on the update interval between the prod and DR sites

What you need to know -

Array-based mirroring is a great way to protect multiple hosts in an environment instead of buying per-server or per-application replication.  As I’ve discussed before, the biggest drawback is that it provides a restartable copy, which isn’t the same as an active-active cluster or application transaction-level replication (e.g. Oracle Data Guard, Exchange CCR, MySQL master/slave replication).  Pay attention to LAN/WAN line quality; poor comm lines can cause insanely painful headaches (troubleshooting, added latency, etc.).  Get line tests done to determine available bandwidth, line quality and latency.

Comments welcome.

Data growth

EMC worked with IDC to make a Worldwide Information Growth Ticker as seen here:

One thing I’ve noticed in all this talk about explosive information growth is that most vendors are sticking to a strategy of how to store it and manage it.  A lot of these vendors make a lot of money storing content, but I’m beginning to wonder how good being a bunch of digital “pack rats” truly is.  Even if we build systems to manage the information, how much value can we extract from the digital “junk” we keep?  It’s not the responsibility of companies to figure out the value of the information for us, but it would be nice to know, alongside the ticker, how much that information truly costs.  I think as the information grows, we’ll start to see people come to terms with how they manage that information and what they decide to consume or store.

Here’s an example: my digital camera (Fuji FinePix S5 Pro) shoots 25MB raw files, and I choose to shoot raw because it’s a “digital negative”.  Compared to Canon and Nikon, Fuji’s raw format is horribly inefficient.  On some days I can run through an 8GB flash card, which gets me roughly 260 pictures.  That works out to about 84 days’ worth of pictures, assuming I fill up a card each time.  That’s a lot of pictures to shoot, but even if I shoot half as much, I could end up filling a 750GB SATA drive in a couple of years given how often I take pictures.  That is a ton of data to create, manage and protect.  Pile on all the MP3s and movies people download and it’s even easier to see how people fill up 500GB hard drives in a year’s time.  Maybe I’m an extreme case, but the point is that even if the average user creates data at an eighth of my rate, it isn’t cheap.  Most consumers aren’t used to buying new drives every couple of years, let alone figuring out how to protect that kind of data.

I don’t think storage technology is keeping up with the rate at which we generate data, at least from a cost perspective.  Part of my reasoning is that people now place much more value on their data than they used to.  How will the average Joe handle this cost and growth?