Who is this “max” bloke anyway? (maxCachePlus that is)

Someone in Adaptec marketing has a son called “Max” … that’s my theory anyway … because just about everything we do these days has the letters “max” in it.

The latest incarnation is maxCachePlus. This is an evolution of our previous maxCache product – maxCachePlus (mcp for short) is a tiering technology as well as a caching technology like our previous versions.

You’ve all seen and heard of Seagate’s hybrid drives – a spinning drive with a chunk of flash storage built into the drive – the theory being that what needs to be “fast” gets put on the flash and what is not so important stays on the spinning media. This is great for laptops and single-drive systems, but it doesn’t work in RAID or large configurations because … (basically, too many cooks spoil the broth – the controller can’t tell where the data resides within the individual drives).

mcp does the same thing on a much larger scale.

In the past, maxCache (caching only) copied the hot data onto SSDs acting as fast storage – the data still remained on the hard drives, with a copy of the “hot” data on the SSDs. This works pretty well, but you don’t get to see, or use, the capacity of the SSDs … it’s hidden from you. Add to that the fact that it only works on drives attached to the controller, and there are “some” limitations.

mcp can use any storage in your system (with the usual provisos) … and can combine all that storage into a single volume, of which you get to see and use the entire capacity.

Now the simple, easy way to think about this is that you take 8 x 4TB HDDs and put them in a RAID 5 – not all that fast and certainly no good for random data, but big (e.g. 28TB). To this you might add 4 x 250GB SSDs in a RAID 10 (good for random data) – that’s 500GB of storage that you’d like to see and make use of. With mcp you combine all that into one volume – giving a total of 28.5TB.
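
If you want to sanity-check that arithmetic, the sums are simple enough. A quick Python scribble (plain arithmetic, no vendor tooling assumed):

```python
def raid5_capacity(drives: int, size_tb: float) -> float:
    """RAID 5 spends one drive's worth of capacity on parity."""
    return (drives - 1) * size_tb

def raid10_capacity(drives: int, size_tb: float) -> float:
    """RAID 10 mirrors everything, halving the raw capacity."""
    return drives * size_tb / 2

hdd_tier = raid5_capacity(8, 4.0)    # 8 x 4TB in RAID 5   -> 28.0 TB
ssd_tier = raid10_capacity(4, 0.25)  # 4 x 250GB in RAID 10 -> 0.5 TB
print(f"Combined mcp volume: {hdd_tier + ssd_tier:.1f} TB")  # 28.5 TB
```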

When you store data on that volume, the controller and associated software look at the blocks of data, learn the usage patterns and move the “hot” blocks onto the SSDs. So what you see is a file or files on your drive like any other drive in your server, but some of that data lives on SSD and some lives on HDD (even within the one file). This means that random data (reads and writes) gets placed on the storage that suits its requirements, while “cold” or streaming data remains on the HDDs. All this without you having to do anything – the controller and software work it out for you.
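
For the curious, here’s a toy sketch of the general idea – the names and the promotion threshold are invented for illustration, not lifted from Adaptec firmware: count accesses per block, promote hot blocks to the SSD tier, and evict the coldest resident block when the SSD tier fills up.

```python
from collections import Counter

class TieringSketch:
    """Toy hot-block tiering model -- hypothetical, not the real mcp logic."""

    def __init__(self, ssd_blocks: int, promote_after: int = 100):
        self.heat = Counter()         # access count per logical block
        self.on_ssd = set()           # blocks currently resident on the SSD tier
        self.ssd_blocks = ssd_blocks  # SSD tier capacity, in blocks
        self.promote_after = promote_after

    def access(self, block: int) -> str:
        self.heat[block] += 1
        if block in self.on_ssd:
            return "served from SSD"
        if self.heat[block] >= self.promote_after:
            self._promote(block)
        return "served from HDD"

    def _promote(self, block: int) -> None:
        if len(self.on_ssd) >= self.ssd_blocks:
            # SSD tier full: evict the coldest resident block to make room.
            coldest = min(self.on_ssd, key=lambda b: self.heat[b])
            self.on_ssd.discard(coldest)
        self.on_ssd.add(block)
```

Real implementations work on larger extents and decay the heat counts over time, but the promote/evict loop is the essence of it.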

Now take that a step further. You’ve spent a lot of money on a flash controller drive – it’s very, very fast and very, very expensive, but limited in size. You put some data on that, and you put some data on your RAID array … but you have to manage what goes where, and keep managing it as your data needs change – painful.

As I said before, mcp can use any storage in your system, so you can create a single volume that is a combination of the fast, expensive flash drive you purchased and a bunch of HDDs connected to our RAID controller. Imagine that … a disk that is big and fast and cheap (the HDD component at least) … you get to see all the capacity of the storage, and the controller/software works out what data should live on the flash drive and what should live on the HDDs.

Pretty cool. No management required, and you get the flexibility of purchasing different storage media to save money – it’s all handled by the controller/software. This makes a lot of sense in a lot of environments. Many general servers need fast database storage alongside a bunch of Word documents and videos – it’s too expensive to put it all on SSD, and HDDs just don’t meet the performance requirements of the database – but mcp will work out what goes where and place the data accordingly.

At the other end of the spectrum, I can think of many cloud providers selling platform as a service (PaaS) – instead of having your server in your office, you use a server provided by a cloud provider somewhere in the stratosphere. The problem for that provider is that he doesn’t know what your data requirements are … so he sells you what he thinks will do the job, or charges you an arm and a leg for fast storage you don’t need.

If he used mcp, then the controller/software would work out the user requirements and place the user’s hot data on flash/ssd, and leave the rest of the not-so-hot data on the hdd – theoretically reducing both his and your costs in implementing storage platforms for the cloud.

That makes a bit of sense, doesn’t it? Talk to your Adaptec rep about mcp – it’s worth getting to know.

Ciao
Neil

Am I a luddite? (or … do I purchase on price or features)

Yes I have an iPhone and an iMac, and yes I use a laptop (constantly), but I’m only just getting around to moving up in the world to an iPad. I recently decided that I would have use for one of these new-fangled contraptions, so went to work on my research as to which model was the right one for me.

While I’m not about to do any free promotional work for Apple (they can afford to pay me if they want me to promote) … my research paralleled quite a few phone calls from customers talking about RAID cards, and I saw some synergies I thought a little interesting.

With the boss in mind (wife, not employer), I immediately started looking at the lowest price unit. Thinking I had to justify my expenditure and that the lower the price the easier the justification, I started reading websites, reviews and technical specifications like any good technician would.

However … as soon as I started reading I realized that the base model didn’t give me the oomph I was looking for, or the fancy features, or the warm and fuzzy feeling that my purchase would do the job and not leave me with any apprehensions or misgivings that I should have purchased something different (or better). Yes, I went and did the look-and-feel in some shops, but I have little time for salespeople who try to tell me that this is better than that, etc. Instead, I consulted my kids (now adults), whom I regard as seasoned, experienced and knowledgeable advocates of this sort of product – content experts, so to speak.

Note that I didn’t look at other vendors’ tablet products – it was an iPad or nothing as far as my mindset was concerned. End result – an iPad Air is on order – heading my way from central China … and I’m waiting with bated breath.

Now let’s transfer that purchase process to my customers and RAID cards … here’s the typical process (in this scenario I’m pretending to be just one type of my many customers):

1. Look at the pricelist and start at the bottom (cheapest first)
2. Look at the features and see what that cheap model can do for me
3. Look at my customer requirements and see how I can work around the limitations of the cheapest model and still keep the customer happy
4. Maybe read some technical specs … but generally I don’t understand too much about that RAID mumbo-jumbo
5. Maybe talk to a salesperson from my reseller or distributor, but generally I think he’s only going to try and sell me a more expensive model so he can make more commission

End result … I’ll buy the cheapest I think I can get away with, then turn my attention to the other components I need to build the server.

Now I’m not sitting here promoting that you should go out and purchase the most expensive product we make in the blind ignorance that it will ‘do the job you need’. Instead, I’m trying to point out that even with all the technical reading in the world, you still may not consider all the aspects of a product purchase … and that you don’t lose anything by talking to a product expert.

I do this a lot (be the product expert, that is). Lots of my customers, even the ones who use our products a lot (in fact these guys ring me more than others), ring me to discuss the right RAID card for a certain application. Along with the card discussion I generally have drive discussions at the same time – looking at the overall picture. Note that SSDs are prominent in these discussions these days – the price seems to have dropped to a point where people are more than willing to put them in their systems.

So I end up with discussions like:

1. “I am looking at purchasing a 6405E (entry card) and 4 x 1TB SSDs to put in a RAID 10” … I can see a lot of problems with this, so I talk to the customer about IOP performance, initially recommending that if they are going to put together a fast database system on RAID 10 they need a 7 Series for IOP performance to match the SSDs – i.e. a 71605E will be fine because they don’t need cache protection (with SSDs we turn off cache). However, when I ask my usual question of “what are you going to be doing with this system – what sort of data are you dealing with?” I find out that we are talking about saving PDFs from a high-speed scanner. Hmmm … end result is a 6405 controller with 4 x enterprise SATA drives in a RAID 5 … a much lower overall cost to the customer, much more capacity and a much more sensible solution to the problem at hand.

2. “I’m looking to put 6 x 15K SAS drives in RAID 10 with 2 x SSDs as caching for the array – do I need to purchase an AFM700 with the 7805Q?” … answer: “No, Q cards come with the AFM included”. Again the discussion comes around to what the customer is doing – a high-speed but relatively small capacity VMware server running a few VMs … storage will be elsewhere. A pretty good build that will do the job more than adequately at a good price point.

Question 1 was from a customer who doesn’t do much RAID, while Question 2 was from a customer who uses hundreds of our cards … but both of those customers needed some additional support to help determine the correct purchase. The lesson in all this is that before you go and purchase your latest-and-greatest or cheapest-possible RAID card, talk to the vendor to see if they can give you recommendations to help you get the right product at the right price.

Interestingly, neither of these customers looked anywhere other than Adaptec, just as I didn’t look anywhere other than Apple – brand loyalty drives that (as I said, I have an iPhone, and I also have an iMac on the desk). But why shouldn’t they take advantage of the accessibility of the vendor – if the support is there: use it!

Now where’s that tracking number for the iPad – I’m in a hurry to get my hands on this thing :-)

Ciao
Neil

A trip down memory (storage) lane …

For some reason I was looking at Ebay (don’t tell the boss) at the number of old Adaptec products to be had for the intrepid nostalgia buff.

SCSI HBAs, SCSI RAID, network cards, SATA RAID cards, zero-channel RAID cards, JBODs, fibre JBOD power supplies, NAS products, iSCSI initiator cards, FireWire, USB (1 and 2), USB-to-SCSI, SCSI software … the list goes on (and on).

It’s a bit scary to think I’ve been involved with all of these technologies at one stage or another – I think I’ve forgotten more Adaptec products than I currently know. However, it’s an eye-opener to realise that a lot of this stuff is still required out there (judging by the number of bids on things) and that some of these products are over 10 years old and still working well (according to the sales pitch on most of the products listed).

I’m not sure that the rest of the industry can still boast drives or motherboards that support such old products (SCSI and PCI64 respectively). But it’s interesting to see that while we charge ahead into the brave new world of PCIe connectivity directly to devices (as the datacenters of the world are pushing for), there is still a tail to the story, and many people are still out there using old products in innovative ways.

So what’s your story? What are you using that’s older than it should be and still working fine? I’d love to hear those stories.

Ciao
Neil

The SAS (r)evolution …

When someone within the company started discussing the evolution of SAS a couple of weeks ago, I thought I’d misheard them … was that “revolution” or “evolution”? Turns out it’s been a bit of both. In the early days it was a revolution, but now that it is a well-matured technology we are in the “evolution” section of its lifecycle.

At the core of all of this technology is the venerable SCSI command set … which turned out to be just about the most long-standing and solid technology developed within the storage industry. Developed over 30 years ago, it is still running strongly today across many and varied delivery mechanisms.

So since we are talking about SAS, and not the SCSI command set (which is used in a lot of places in storage today), let’s look at yesterday, today and tomorrow to see where we’ve been, where we are, and where we’re going.

When serial first came along, my first thought was … “thank goodness for simpler cabling”. It didn’t really turn out that way: yes, we didn’t need termination like on the old parallel bus, but we ended up with a truckload of new cables – probably even more than we had in the old SCSI days. However, 1.5Gb down the pipe sounded pretty good, and indeed early performance was great right out of the box.

Adaptec made a decision back in those days to build both SAS and SATA controllers, but we quickly worked out that we could do it all with one controller (SAS), because SATA is a subset of the SAS protocol – we only needed to make one controller to talk to all types of disk drive at the same time. While SAS jumped straight into the performance end of town, early SATA implementations were pretty dodgy and somewhat slow – and really held things back until SATA II and SATA III came along.

So 1.5Gb … wow, that’s pretty fast. In fact it’s faster than most spinning disks can go even today. Hmmm, don’t think we’ll need an upgrade for a while. No, as usual we had to have an upgrade, and so we got 3Gb. Well, that had to be it, surely … spinning media will never catch up with this (and indeed it hasn’t today), but wait, what are those funny little things called SSDs? OK, let’s go to 6Gb … the old “double it and they will come” principle. Great, now we’re cooking. Overkill for spinning drives, but what the heck, we’re keeping up with the SSDs … or so we thought. OK, so your SSD can do more than 600MB/sec … then we’ll go to 12Gb and gloat about our performance (for a while).
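
As an aside, here’s where those per-generation MB/sec figures come from: SAS uses 8b/10b encoding, so every data byte costs 10 bits on the wire. A quick scribble:

```python
# Rough per-lane payload rate for each SAS generation.
# 8b/10b encoding means 10 line bits carry 8 data bits (1 byte).
for gbps in (1.5, 3, 6, 12):
    mb_per_sec = gbps * 1000 / 10  # Gb/s -> MB/s under 8b/10b
    print(f"{gbps:>4} Gb/s SAS ~= {mb_per_sec:.0f} MB/s per lane")
```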

And so here we are – 12Gb SAS. Interestingly we haven’t had the complications or growing pains of the old parallel SCSI days – shorter cables, different connectors (oh yes, that’s right, we did do different connectors for 12Gb) – generally the evolution has seen less pain, and therefore quicker uptake, than the previous parallel regime. That said, 12Gb is very new and we don’t really know its uptake yet. In fact the 12Gb standard has brought some very nice negotiation processes with it for device handling, so in that respect it’s a step forward in both functionality and speed over 6Gb SAS.

But is it enough?

For the moment it will have to be … and for a couple of reasons. The PCI Express bus is capable of somewhere in the vicinity of a theoretical 8000MB/sec, while our 24-port 6Gb SAS chips can hit a theoretical 9600MB/sec. Note the theory side of things, because the fastest I’ve ever achieved out of our controllers is 6,600MB/sec (wow!). And there is no real advance beyond PCIe3 on the horizon, so we’re not going to get any faster there.

There are also some new kids on the block – SCSI Express and NVM Express, which drive the storage device across a PCIe bus rather than a SAS bus – and this “may” be the way forward instead of ever-increasing SAS speeds (the jury is still out on exactly what the future holds in the interface market).

The bit I really find ironic is that no-one I’ve heard of is really asking for more MB/sec than they can get today with a bunch of 6Gb SSDs. It’s all about IOPs and latency. If we are talking 4KB blocks, it takes an awful lot of those to saturate a bus capable of handling 6000MB/sec … it will generally be the processor that floods and bottlenecks before the bus does in this scenario. Latency is proving a problem child for the SAS world, and low latency is one of the claims to fame of the SCSI Express/NVM Express camp … since they are PCIe on both sides of the controller, they claim lower latency – and that is very, very appealing to datacenters today – it’s all about latency and IOPs.
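
To put a number on “an awful lot”: the back-of-the-envelope sum for saturating a 6000MB/sec pipe with 4KB transfers looks like this.

```python
# How many 4KB IOPs does it take to fill a 6000 MB/sec bus?
bus_mb_per_sec = 6000
block_kb = 4

iops_to_saturate = bus_mb_per_sec * 1024 / block_kb
print(f"{iops_to_saturate:,.0f} IOPs")  # ~1,536,000 IOPs
```

Over 1.5 million IOPs – no wonder the processor gives out first.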

So, on this “evolutionary path” of SAS, have we made it to the end? I might have been thinking so until I received a phone call this morning from a bloke looking for a SCSI card … and I was reminded that SCSI/SAS in all its formats and variations over the years has outlived more than one computer tech :-)

Don’t know about you, but I’m looking forward to learning a new interface, a new technology and a new way of putting together yet another complex solution for a customer!

Ciao
Neil

I want a rubber/lego/meccano hard disk …

Stop sniggering. This is not about being able to drop a disk from a great height and have it survive … it’s about me making my own disk. Now that sounds as dumb as anything I’ve put in a blog before, but let’s look at it. What exactly do I want from the disk in my server?

  1. It has to be big – as big as I want to make it (512TB should be enough)
  2. It has to be redundant – and I want lots of options on how redundant I make it (because I’m a gambler at heart like most IT pros and want to be able to determine just how close I sail to disaster :-) )
  3. It has to be fast (SSD or faster is what I mean) – but I want to be able to determine how much of it is fast and how much is “other” (other being big, fat, cheap, SATA)
  4. It has to be cheap (SATA) … this works with points 2 and 3 above – the price will come down with less redundancy but will go up with more speed … but I want to be able to configure this disk the way I want it with any combination of the above
  5. It has to manage itself … because like all IT pros I like playing with things for a while, but then get bored of such menial tasks pretty quickly
  6. So it has to look and feel like one large hard drive with all of the above characteristics

So let’s summarise what I’m after here …

“a large, redundant, flexible, configurable, fast, cheap, self-managing disk”

Hmmm …

Not asking much, am I? Seagate make “hybrid” hard drives … spinning drives with some NAND flash built in … and they use some fancy algorithms to work out what should go where. The only problem is that they are a bit limited in size – I want a 20TB disk (or larger), and it will be a while before I see something that large in a single disk.

Enter maxCache Plus. This clever technology lets me take any storage in the server and combine it into a single self-managing disk. While that sounds groovy, let’s look at a more practical solution.

You need big capacity, so you purchase 16 large enterprise SATA hard drives (4TB each). However, there is going to be data on this disk that needs to be fast (random, database-type material), as well as large amounts of nothingness (i.e. it’s a VMware machine). So instead of wasting ports on your RAID controller, you purchase a flash drive (a NAND flash drive such as Fusion IO) to accelerate certain data.

Grab yourself an 81605ZQ, plug all the drives in (it doesn’t matter if they are 6Gb or 12Gb drives), make a RAID 5 with a hot spare and you’re ready to go. The only problem is speed – it’s not fast enough. So stuff the RAID 5 into a “pool” in maxCache (it will be Tier 1 – the slower of the two storage pools). Then grab your Fusion IO or other flash drive and stuff it into Tier 0 (the faster of the two pools).

Now grab the storage from both pools (Tier 0 and Tier 1) and combine it into a single disk (Virtual Volume) that lets you see the capacity of both storage devices, moulds them into one disk and manages the data positioning for you.
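
If it helps to picture the arrangement, here’s a small data-model sketch – purely illustrative Python, not the maxCache Plus API, and the flash capacity is an invented example number:

```python
from dataclasses import dataclass

@dataclass
class Pool:
    tier: int          # 0 = fast (flash), 1 = slow (HDD RAID)
    device: str
    capacity_tb: float

@dataclass
class VirtualVolume:
    name: str
    pools: list

    @property
    def capacity_tb(self) -> float:
        # The OS sees the combined capacity of every pool in the volume.
        return sum(p.capacity_tb for p in self.pools)

flash = Pool(tier=0, device="Fusion IO card", capacity_tb=1.2)  # invented size
raid5 = Pool(tier=1, device="16 x 4TB SATA, RAID 5 + hot spare",
             capacity_tb=(15 - 1) * 4.0)                        # 56 TB usable

vol = VirtualVolume("big-fast-disk", [flash, raid5])
print(f"{vol.name}: {vol.capacity_tb:.1f} TB visible to the OS")
```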

So maxCache, in this environment, will move blocks of data around within the virtual volume, repositioning hot data onto the Fusion IO portion of the storage (Tier 0), and moving the not-so-hot stuff onto the RAID 5 array. And what management do you need to do? Sit back, grab a cold one from the fridge (after 5 of course) and put your feet up – there is no user management of the data volume required.

Sounds a bit too good to be true … a disk that is made up of a Fusion IO-like card and a large RAID array of whatever redundancy level I want, made out of whatever storage I have in my system (not just what’s attached to my RAID card), all managing itself and constantly optimizing the customer data onto the best storage medium for the data type.

What I didn’t mention is that I can actually chop up the flash drive and RAID array into different pieces so I can make multiple disks – maybe even one with a lot of flash and a reasonable amount of SATA, and one with very little flash acceleration and mostly SATA – either way, the choice is mine. In other words, it is “flexible” (to a crazy degree).

So I end up with a “large, redundant, flexible, configurable, fast, cheap, self-managing disk” (or disks). But wait … I don’t want to buy a Fusion IO card … I have a 71605E (entry card) sitting on the desk, and good fast SSDs are cheap as chips these days. No worries – plug the SSDs into the 7 Series, stuff it in next to the 8 Series, and make a RAID 10 of the SSDs (this doesn’t need cache on the controller, or ZMCP cache protection, on the 7 Series). Then you can connect up to 16 SSDs and use that as Tier 0 storage instead of a flash drive.

So what limits the configuration possibilities? The grey matter between your ears – pretty much your imagination. In other words this really is a flexible and highly configurable technology.

Ciao
Neil

Did you know (2) …

It’s happened again … the brain stopped working. So instead of writing something new I’m going to tell you about another little quirk of our controllers … just in case you need a bit more technical information. In reality I’m knee-deep in the middle of redeveloping all my technical training slides and the brain needs a rest …

Boot order. In the BIOS of Series 7 and newer controllers there is an option to “Select Boot Device”. Now that sounds pretty simple, and surely I can’t be blogging about that for very long? Well, there is a bit more to this than meets the eye.

1. You can have multiple arrays on a controller

2. You can have raw devices (pass through) – disks that are shown straight to the OS

3. You can boot from any of the arrays, or any of the raw devices

Hmmm. The age-old way of changing the boot order was to go into the RAID array configuration utility (card BIOS) and use Control+B to set the boot order. The array at the top of the list is the one that the card will boot from. Unless, of course, there is a raw device (uninitialised drive) being presented to the OS … and then, of course, only if that drive has been put in the boot order.

But wait – you can’t set raw devices using Control+B, as they don’t appear in the list. That is where the “Select Boot Device” option comes into its own. It allows you to choose raw devices (uninitialised disks) from the list and add one as the boot device. Now what happens to your RAID array if you do that? Simply put, the raw disk will precede the RAID array in the boot order.

A little trick here: since the raw disk has no metadata, we can’t store the fact that it’s a boot device on the disk itself … that information has to be stored in the controller. If that device fails, the boot order reverts to the first array in the list. So we have two ways of setting the boot device, because each one applies to a different type of disk (raw disks and RAID arrays).
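
My reading of that fallback behaviour, expressed as a toy function (invented names, not firmware source):

```python
def pick_boot_device(raw_boot_serial, present_raw_disks, arrays):
    """Return the device the controller will try to boot from."""
    # The raw-device choice lives in the controller, since a raw disk
    # has no metadata of its own.
    if raw_boot_serial and raw_boot_serial in present_raw_disks:
        return f"raw disk {raw_boot_serial}"
    # Raw device missing (or never set): fall back to the boot order,
    # i.e. the array at the top of the list.
    return f"array {arrays[0]}" if arrays else None

print(pick_boot_device("WD-123", {"WD-123", "WD-456"}, ["Boot-R1", "Data-R5"]))
# -> raw disk WD-123
print(pick_boot_device("WD-123", {"WD-456"}, ["Boot-R1", "Data-R5"]))
# -> array Boot-R1 (the raw boot disk has failed)
```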

Of course, all of this needs to be handled by your motherboard BIOS … and you need to set the correct device in the motherboard for your system to boot from (but motherboards tend to handle that automatically these days).

I just wouldn’t want you getting confused :-)

Ciao
Neil

Sometimes I surprise even myself …

I wrote the following back in early 2009. I’ve never really thought too much about how right or wrong I was … that’s for the critics to work out. However, read the following in italics, then read on about our latest incarnation of hybridness (is that a word?).

“… (lots of blah which I’ll save you reading) …

What storage technologies will survive into the future and what won’t?

The simple answer to that question which is an extremely safe bet is … they all will.

Will SSD kill off SAS? Did SAS kill off Fibre? Will SATA kill off SSD due to price? No. None of those things will happen for a few simple reasons. Those reasons don’t happen to have much to do with technology … they are more to do with the people behind the technology.

Fibre will survive. SSD will grow in popularity and eat into the 3.5″ SATA market. SATA will move to the 2.5″ market and increase in size to the point where we have insanely huge drives at ridiculously cheap prices. SAS will remain due to the fact that people just loved SCSI and can’t bear to give it up, so they’ll accept SAS as its heathen reincarnation and live with that because “it just works”.

All of our current technologies will be with us for the foreseeable future in the same way your favourite football team will continue to survive. People follow a technology like they follow a football team. Good, bad, indifferent, financially stuffed, incredibly successful … none of that matters. It works for me so I’ll stick with it.

The answer, of course, is that the future will see a combination of the more successful technologies in some sort of hybrid form. The car manufacturers are doing it now and we are all coming around. Pure electric cars suck. Pure petrol cars suck (petrol, that is). Combine the two and you have a winner for everyone. So it is with storage technology.

Pure SATA is too slow and too unreliable. Pure Fibre and pure SAS are too expensive. Pure SSD is nuts (price and capacity). However … combine those technologies into a hybrid creature that uses the best features of each one and you have a winner. So keep an eye out for products that use a little of everything … balancing your performance, capacity and growth factors while keeping your balance sheet in a colour other than red.

Hybrid technology. That’s what will survive. Keep an eye out at a store near you.”

Now either I was ahead of my time, or I’m even more cynical than I had originally thought. Adaptec has just released maxCache Plus with its soon-to-be-available Series 8 RAID controllers. maxCache Plus is a tiering technology – you define your fastest down to your slowest storage, and the controller/software intelligently stores important data on your fast storage and your unused or unimportant data on the slow stuff.

So now all those different kinds of storage you have in the one box can be made to work together to give you the storage performance you want, without throwing out all of our favourite old technologies.

maxCache Plus – find out more on the Adaptec website.

Ciao
Neil

Did you know (1) ? …

Having a bit of writer’s block this morning (“write blog article” appeared in the Outlook calendar but the brain is not responding in kind), so I thought I’d take the easy way out and give a quick technical update on our products with some obscure, bizarre or just not-widely-known bit of information.

“Auto Rebuild” is an option in the BIOS of our cards, and most people have no idea how it works. It could have been called “the option that helps the forgetful system admin” or “even if you don’t bother, we’ll safeguard your data for you if we can” … but I don’t think either of those descriptions would fit in the BIOS screen. “Auto Rebuild” is so last century (but that’s OK, because it’s been around for probably that long).

So what does it do? When enabled (which it is by default), if the card finds an array that is degraded it will first look for a hot spare. If one is not found, it will look for any unused devices (drives that are not part of an array). If a suitable unused disk is found (the right size), the card will build that disk back into the array.
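
That decision path is simple enough to sketch (illustrative Python with invented names, not the actual firmware):

```python
def choose_rebuild_target(degraded_array, hot_spares, unused_disks):
    """Pick a disk to rebuild a degraded array onto, or None."""
    # 1. A hot spare always wins.
    if hot_spares:
        return hot_spares[0]
    # 2. Otherwise, take any unused (non-array) disk that is big enough.
    needed_gb = degraded_array["member_size_gb"]
    for disk in unused_disks:
        if disk["size_gb"] >= needed_gb:
            return disk
    return None  # stay degraded and wait for an admin

array = {"name": "R5-data", "member_size_gb": 1000}
print(choose_rebuild_target(array, [], [{"id": "slot4", "size_gb": 1000}]))
# -> {'id': 'slot4', 'size_gb': 1000}
```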

Why did we need this? We have lots of hot spare options – why bother with an auto feature as well? My theory (and I’ll probably never know the full exact reason) is that system builders often ship a new system with a hot spare in place, but when a drive fails the user/customer will replace the drive without knowing how to – or that they should, or need to – make that new drive a hot spare. So in my experience the “new drive” that replaced the old failed drive is often just sitting there as a raw device … and when the next drive fails, it does nothing, because it’s not a “hot spare”.

Well, in the case of our cards set to their factory defaults … the new drive will kick in to replace a failed drive (and rebuild the array), because of “auto rebuild” – or “even if you don’t bother, we’ll safeguard your data for you if we can” (as I like to call it).

Did you know that?

Quick update. I might have generalised a bit too much on this one. Note that this works with a SES-2 backplane (hot-swap backplane) when you change the drive in the same slot. Note also that on a Series 7 controller it works if you put the new drive into the same slot as the old drive; if you put the new drive into a different slot you’ll have to make it a hot spare, because an S7 will grab that drive and show it to the OS as a pass-through device. I was trying to keep this simple, but probably went a bit far. Oh, the complexities we weave :-)

Ciao
Neil

But I told you … I don’t want 16 ports! (are you deaf?) …

So a customer rings up looking for a fast card for RAID 1 and RAID 10 – he’s going to use SSDs to make his ultimate home workstation/video server/graphics-CAD machine etc. etc. (standard home user). He’s going full SSD no matter what anyone tells him, so it’s “performance, performance, performance” all the way.

So … what card do I need? I’ve used up all my available funds on drives, but I want the fastest possible RAID card to connect these SSDs.

71605E

“Now listen mate. I just told you I only have 4 drives … I’m not forking out for a 16-port controller!” “Typical sales guy … doesn’t listen.” “I said I had 4 drives, I said I wanted RAID 10 and you’re trying to flog me a 16-port controller!”

And so the conversation goes. It’s not until you get someone to look at the pricelist and scroll all the way through to the 71605E that you hear them go “oh!” on the other end of the phone. This is a pretty common scenario in my neck of the woods. The customer has 2 or 4 drives, so is looking for a 2 or 4-port controller … they certainly don’t go looking for a 16-port controller.

So where did this all go wrong? In a way, it was the generosity of the product marketing team that started this (when they read that, they’ll think I’ve started being nice to them). The 71605E has less RAM onboard (256MB), which is fine because it’s only doing RAID 0, 1, 10, 1E and Hybrid … so it doesn’t need a lot of RAM. Add the fact that when connecting SSDs we recommend turning off the cache anyway – so it’s pretty pointless putting a lot of the stuff on there – and it lets us get the price down.

However … the chip is the same as on all the other 7 Series controllers (24-port native ROC), so why fit only 4 or 8 plastic connectors onto which to connect cables? 16 fit, so why not just leave them there? Makes sense to me … whether I need them or not, it does the job. In reality it’s a really sensible, good-value card that fits the bill for a lot of people … if they knew about it.

But they don’t … because they are not looking for it.

They are looking for a 4 or 8-port controller because that is how many drives they have, and they think that anything with a “16” on it will be crazy expensive, so they don’t start at that end of the cattle-dog (catalog) … and hence never know about this card. So take a look at the “entry level” 7 Series controller … even though it has 16 ports, it may in fact be just the low-cost, high-performance, entry-level controller you are looking for.

Maybe this is the card they should be using instead of the 6805 as mentioned in one of my previous posts? Now even I’m getting confused :-)

Ciao
Neil

Getting the balance right …

Yin and yang … the Chinese had it just about right when they coined that phrase (and yes, I used Google to check that it is in fact Chinese – though I’m sure some smart soul will tell me otherwise) …

I’ve had several customers recently who have been reading about hybrid RAID. This is where you can mix an SSD and an HDD on an Adaptec Series 6, 7 or 8 and make a mirror out of those two drives. While this sounds crazy, it is pretty simple. Writes go to both drives (nicely buffered by the controller cache, so you get good speed), but all reads are directed to the SSD – giving lightning read speed.
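
The routing is simple enough to express as a toy model (illustration only, nothing like real controller firmware):

```python
class HybridMirror:
    """Toy hybrid RAID 1: writes mirrored, reads served from the SSD."""

    def __init__(self):
        self.ssd = {}  # stand-ins for the two physical drives
        self.hdd = {}

    def write(self, block: int, data: bytes) -> None:
        # Writes go to both members to keep the mirror consistent.
        self.ssd[block] = data
        self.hdd[block] = data

    def read(self, block: int) -> bytes:
        # All reads are directed at the SSD member for low latency.
        return self.ssd[block]

m = HybridMirror()
m.write(0, b"hello")
print(m.read(0))  # served from the SSD copy
```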

Sounds good … and people use the old story of “don’t need RAID 5, so let’s go for an entry-level card”. In this case, the 6405E. So far, so good. However … the 6405E is PCIe2 x1 – which limits the throughput to 500MB/sec. That’s probably not going to be too much of a problem on a RAID 1 – one SSD will go close to saturating that, but it will be close.

However … make a RAID 10 with 2 x SSDs and 2 x HDDs, and you are starting to stretch the friendship a bit. Reading a relatively large file off that RAID should in theory saturate the PCIe2 x1 bus, making the card the bottleneck. So in this case you need to go to the 6405 controller, not the “E” – it has a PCIe2 x8 connector and can easily handle the throughput of the SSDs.
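
The back-of-the-envelope maths, if you want to check your own build (the per-SSD read figure is an assumption – plug in your drive’s datasheet number):

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding,
# i.e. roughly 500 MB/s of payload per lane.
pcie2_lane_mb = 5000 * 8 / 10 / 8  # GT/s -> MB/s

ssd_read_mb = 450  # assumed sequential read per SSD

for ssds, lanes in ((1, 1), (2, 1), (2, 8)):  # RAID 1 / RAID 10 on x1, RAID 10 on x8
    demand = ssds * ssd_read_mb
    supply = lanes * pcie2_lane_mb
    verdict = "bottleneck" if demand > supply else "fits"
    print(f"{ssds} SSD(s) on PCIe2 x{lanes}: {demand} vs {supply:.0f} MB/s -> {verdict}")
```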

So yes, you only need an entry-level card for a RAID 1 or RAID 10, but if you are doing a hybrid RAID then you probably need to consider the theoretical speed of the SSDs you are using and make sure there is no bottleneck in the way of their performance.

Ciao
Neil
