Did you know (2) …

It’s happened again … the brain stopped working. So instead of writing something new I’m going to tell you about another little quirk of our controllers … just in case you need a bit more technical information. In reality I’m knee-deep in the middle of redeveloping all my technical training slides and the brain needs a rest …

Boot order. In the BIOS of a Series 7 and newer controller there is an option to “Select Boot Device”. Now that sounds pretty simple and surely I can’t be blogging about that for very long? Well, there is a bit more to this than meets the eye.

1. You can have multiple arrays on a controller

2. You can have raw devices (pass-through) – disks that are presented straight to the OS

3. You can boot from any of the arrays, or any of the raw devices

Hmmm. The age-old way of changing the boot order was to go into the RAID array configuration utility (card BIOS) and use Ctrl+B to set the boot order. The array at the top of the list is the one the card will boot from. Unless, of course, there is a raw device (uninitialised drive) being presented to the OS … and even then only if that drive has been put into the boot order.

But wait – you can’t set raw devices using Ctrl+B as they don’t appear in the list. That is where the “Select Boot Device” option comes into its own. It allows you to choose a raw device (uninitialised disk) from the list and make it the boot device. Now what happens to your RAID array if you do that? Simply put, the raw disk will precede the RAID array in the boot order.

A little trick here: since the raw disk has no metadata, we can’t store the fact that it’s a boot device on the disk itself … that information has to be stored in the controller. If that device fails, the boot order will revert to the first array in the list. So we have two ways of setting the boot device, because each method applies to a different type of device (raw disks versus RAID arrays).
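
If it helps to picture it, here’s a minimal sketch of that fallback behaviour (illustrative Python only – made-up names, and obviously nothing like the actual controller firmware):

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    present: bool = True

def pick_boot_device(nominated_raw_disk, arrays):
    """Return the device the controller will try to boot from."""
    # The raw-disk nomination lives in the controller (no metadata on the disk),
    # so it only holds while that disk is actually present and healthy.
    if nominated_raw_disk is not None and nominated_raw_disk.present:
        return nominated_raw_disk        # raw disk precedes the arrays
    if arrays:
        return arrays[0]                 # fall back to the top array in the Ctrl+B list
    return None                          # nothing bootable on this controller

# Example: raw disk nominated, then it fails -> boot reverts to the first array.
raw = Device("pass-through SSD")
arrays = [Device("Array A (RAID 1)"), Device("Array B (RAID 5)")]
print(pick_boot_device(raw, arrays).name)   # pass-through SSD
raw.present = False
print(pick_boot_device(raw, arrays).name)   # Array A (RAID 1)
```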

Of course, all of this needs to be handled by your motherboard BIOS … and you need to set the correct device in the motherboard for your system to boot from (though motherboards tend to handle that automatically these days).

I just wouldn’t want you getting confused :-)

Ciao
Neil


Sometimes I surprise even myself …

I wrote the following back in early 2009. I’ve never really thought too much about how right or wrong I am … that’s for the critics to work out. However, read the following in italics, then read on about our latest incarnation of hybridness (is that a word?)

“… (lots of blah which I’ll save you reading) …

What storage technologies will survive into the future and what won’t?

The simple answer to that question which is an extremely safe bet is … they all will.

Will SSD kill off SAS? Did SAS kill off Fibre? Will SATA kill off SSD due to price? No. None of those things will happen for a few simple reasons. Those reasons don’t happen to have much to do with technology … they are more to do with the people behind the technology.

Fibre will survive. SSD will grow in popularity and eat into the 3.5″ SATA market. SATA will move to the 2.5″ market and increase in size to the point where we have insanely huge drives at ridiculously cheap prices. SAS will remain due to the fact that people just loved SCSI and can’t bear to give it up, so they’ll accept SAS as its heathen reincarnation and live with that because “it just works”.

All of our current technologies will be with us for the foreseeable future in the same way your favourite football team will continue to survive. People follow a technology like they follow a football team. Good, bad, indifferent, financially stuffed, incredibly successful … none of that matters. It works for me so I’ll stick with it.

The answer, of course, is that the future will see a combination of the more successful technologies in some sort of hybrid form. The car manufacturers are doing it now and we are all agreeing. Pure electric cars suck. Pure petrol cars suck (petrol that is). Combine the two and you have a winner for everyone. So it is with storage technology.

Pure SATA is too slow and too unreliable. Pure Fibre and pure SAS are too expensive. Pure SSD is nuts (price and capacity). However … combine those technologies into a hybrid creature that uses the best features of each one and you have a winner. So keep an eye out for products that use a little of everything … balancing your performance, capacity and growth factors while keeping your balance sheet in a colour other than red.

Hybrid technology. That’s what will survive. Keep an eye out at a store near you.”

Now either I was ahead of my time, or I’m even more cynical than I had originally thought. Adaptec has just released maxCache Plus with its soon-to-be-available Series 8 RAID controllers. maxCache Plus is a tiering technology – you rank your storage from fastest to slowest, and the controller/software intelligently stores important data on your fast storage and your unused or unimportant data on the slow stuff.
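
To illustrate the idea – and only the idea, this is not how maxCache Plus itself decides placement, and the tier names and thresholds below are invented – a tiering engine conceptually does something like this:

```python
# Conceptual sketch of tiering: hot data lands on fast storage, cold data on slow.
TIERS = ["SSD (fastest)", "SAS 10K", "SATA (slowest)"]   # ranked fastest -> slowest

def place(access_count: int) -> str:
    """Pick a tier for a chunk of data based on how often it gets touched."""
    if access_count > 1000:
        return TIERS[0]
    if access_count > 100:
        return TIERS[1]
    return TIERS[2]

for hits in (5000, 250, 3):
    print(f"{hits:>5} accesses -> {place(hits)}")
```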

So now all those different kinds of storage you have in the one box can be made to work together to give you the storage performance you want without throwing out all of our favourite old technologies.

maxCache Plus – find out more on the Adaptec website.

Ciao
Neil


Did you know (1) ? …

Having a bit of writer’s block this morning (“write blog article” appeared in the Outlook calendar but the brain is not responding in kind), so I thought I’d take the easy way out and give a quick technical update on our products with some obscure, bizarre or just not-widely-known bit of information.

“Auto Rebuild” is an option in the BIOS of our cards, and most people have no idea how it works. It could have been called “the option that helps the forgetful system admin” or “even if you don’t bother we’ll safeguard your data for you if we can” … but I don’t think either of those descriptions would fit on the BIOS screen. “Auto Rebuild” is so last century (but that’s OK because it’s been around for probably that long).

So what does it do? When enabled (which it is by default), if the card finds an array that is degraded it will first look for a hot spare. If one is not found, it will then look for any unused devices (drives that are not part of an array). If a suitable unused disk is found (right size), the card will build that disk back into the array.
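
In rough terms – a sketch of the decision only, not the actual firmware logic – it looks like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Disk:
    name: str
    size_gb: int

def pick_rebuild_target(hot_spares, unused_disks, required_gb) -> Optional[Disk]:
    """Auto Rebuild sketch: a hot spare first, then any suitable unused disk."""
    for spare in hot_spares:
        if spare.size_gb >= required_gb:
            return spare                 # a hot spare always wins
    for disk in unused_disks:            # drives that are not part of any array
        if disk.size_gb >= required_gb:
            return disk                  # Auto Rebuild presses it into service
    return None                          # nothing suitable: the array stays degraded

# Forgetful admin: the replacement drive was never made a hot spare...
print(pick_rebuild_target([], [Disk("new drive in slot 3", 1000)], 1000).name)
```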

Why did we need this? We have lots of hot spare options – why bother with an auto feature as well? My theory (and I’ll probably never know the full exact reason) is that system builders often send off a new system with a hot spare in place, but when a drive fails the user/customer replaces it without knowing how to – or that they should – make the new drive a hot spare. So in my experience the “new drive” that has replaced the old failed drive is often just sitting there as a raw device … and when the next drive fails it does nothing, because it’s not a “hot spare”.

Well in the case of our cards set to their factory defaults … the new drive will kick in and replace a failed drive (and rebuild the array), because of “auto rebuild” or “even if you don’t bother we’ll safeguard your data for you if we can” (as I like to call it).

Did you know that?

Quick update. I might have generalised a bit too much on this one. Note that this works with a SES-2 backplane (hot-swap backplane) when you replace the drive in the same slot. Note also that on a Series 7 controller it works if you put the new drive into the same slot as the old drive, but if you put the new drive into a different slot you’ll have to make it a hot spare, because a Series 7 will grab that drive and show it to the OS as a pass-through device. Was trying to keep this simple, but probably went a bit far. Oh the complexities we weave :-)

Ciao
Neil


But I told you … I don’t want 16 ports! (are you deaf?) …

So a customer rings up looking for a fast card for RAID 1 and RAID 10 – he’s going to use SSDs to build his ultimate home workstation/video server/graphics-CAD machine etc. etc. (standard home user). He’s going full SSD no matter what anyone tells him, so it’s “performance, performance, performance” all the way.

So … what card do I need? I’ve used up all my available funds on drives but I want the fastest possible RAID card to connect these SSDs.

71605E

“Now listen mate. I just told you I only have 4 drives … I’m not forking out for a 16-port controller!” “Typical sales guy … doesn’t listen.” “I said I had 4 drives, I said I wanted RAID 10 and you’re trying to flog me a 16-port controller!”

And so the conversation goes. It’s not until you get someone to look at the price list and scroll all the way through to the 71605E that you hear the “oh!” on the other end of the phone. This is a pretty common scenario in my neck of the woods. The customer has 2 or 4 drives, so is looking for a 2- or 4-port controller … they certainly don’t go looking for a 16-port controller.

So where did this all go wrong? In a way, it was the generosity of the product marketing team that started this (when they read that they’ll think I’ve started being nice to them). The 71605E has less RAM onboard (256MB), which is fine because it’s only doing RAID 0, 1, 10, 1E and Hybrid … so it doesn’t need a lot of RAM. Add the fact that when connecting SSDs we recommend turning off the cache anyway, so it’s pretty pointless putting a lot of the stuff on there … and it lets us get the price down.

However … the chip is the same as all the other 7 Series controllers (24-port native ROC), so why put on only 4 or 8 plastic connectors to plug cables into? 16 fit, so why not just leave them there? Makes sense to me … whether I need them or not, it does the job. In reality it’s a really sensible, good-value card that fits the bill for a lot of people … if only they knew about it.

But they don’t … because they are not looking for it.

They are looking for a 4- or 8-port controller because that is how many drives they have, and they think that anything with a “16” on it will be crazy expensive, so they don’t start at that end of the cattle-dog (catalog) … and hence never know about this card. So take a look at the “entry level” 7 Series controller … even though it has 16 ports it may in fact be just the low-cost, high-performance, entry-level controller you are looking for.

Maybe this is the card they should be using instead of the 6805 as mentioned in one of my previous posts? Now even I’m getting confused :-)

Ciao
Neil


Getting the balance right …

Yin and yang … the Chinese had it just about right when they coined that phrase (and yes, I used Google to check that it is in fact Chinese – though I’m sure some smart soul will tell me otherwise) …

I’ve had several customers recently who have been reading about hybrid RAID. This is where you can mix an SSD and an HDD on an Adaptec Series 6, 7 or 8 and make a mirror out of those two drives. While this sounds crazy, it is pretty simple. Writes go to both drives (nicely buffered by controller cache so you get good speed), but all reads are served from the SSD – giving lightning read speed.
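
As a toy model of that read/write policy (illustrative only, nothing like real driver code):

```python
# Hybrid RAID 1 in miniature: writes go to both members, reads come off the SSD.
class HybridMirror:
    def __init__(self):
        self.ssd = {}                    # plain dicts standing in for the two drives
        self.hdd = {}

    def write(self, lba, data):
        self.ssd[lba] = data             # every write lands on both drives
        self.hdd[lba] = data

    def read(self, lba):
        return self.ssd[lba]             # every read is served by the SSD

m = HybridMirror()
m.write(0, b"hello")
print(m.read(0))                         # straight off the (pretend) SSD
```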

Sounds good … and people use the old story of “I don’t need RAID 5 so let’s go for an entry-level card”. In this case, the 6405E. So far, so good. However … the 6405E is PCIe 2.0 x1, which limits throughput to 500MB/sec. That’s probably not going to be too much of a problem on a RAID 1 – one SSD will go close to saturating that, but only close.

However … make a RAID 10 with 2 x SSD and 2 x HDD and you are starting to stretch the friendship a bit. Reading a relatively large file off that RAID should in theory saturate the PCIe 2.0 x1 bus, making the card the bottleneck. So in this case you need to go to the 6405 controller, not the “E” – it has a PCIe 2.0 x8 connector and can easily handle the throughput of the SSDs.
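
Some back-of-the-envelope numbers show why (round figures I’m assuming for the sake of the exercise, not benchmark results):

```python
# Assume roughly 500 MB/s usable per PCIe 2.0 lane and roughly 500 MB/s
# sequential read per SATA SSD. Real-world figures will vary.
pcie2_lane_mb_s = 500
ssd_read_mb_s = 500
ssds_readable = 2      # in the RAID 10, large reads can be spread over both SSDs

demand = ssd_read_mb_s * ssds_readable
print("RAID 10 read demand:", demand, "MB/s")              # ~1000 MB/s
print("6405E (x1 slot):   ", pcie2_lane_mb_s * 1, "MB/s")  # the bottleneck
print("6405  (x8 slot):   ", pcie2_lane_mb_s * 8, "MB/s")  # plenty of headroom
```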

So yes, you only need an entry-level card for a RAID 1 or RAID 10, but if you are doing a hybrid RAID then you probably need to consider the theoretical speed of the SSDs you are using and make sure there is no bottleneck in the way of their performance.

Ciao
Neil



Are you using your storage the right way? …

Spent a short time yesterday at the VMware VMworld 2013 “Defy” convention. Now maybe that should be the “Deny” convention, but that would be cynical of me. 99.9% of the vendors at the show were showing SAN products – storage, backup, de-dupe, migration, DR … you name it, it was there – all based around SAN product. Now I’m not that dumb – hit me over the head with a SAN for long enough and I get the point: they greatly benefit the functionality of all things VMware by allowing migration etc. (just plain moving stuff around). So VMware focuses on, and promotes, SAN as its primary/preferred storage medium – makes sense to me.

However … (and yes, there is always a gotcha) …

We sell an awful lot of RAID cards to people using VMware on DAS (direct-attached storage). Now I could not find even one person who was willing to discuss direct-attached storage – it was basically a no-no discussion point since it does not fit all the functionality and marketing hype that VMware puts around its products – after all, it’s all stuck in the one box!

The reality is that no matter what a vendor thinks, and how hard they promote a specific use of a product, the customer will always come up with an innovative (I call it “left field”) way of using that product, often one that you don’t think is very smart or realistic – but you keep that opinion to yourself because, right or wrong, you want the customer to buy more product.

In RAID storage it’s akin to the customer running RAID 0 on a bunch of desktop drives all running old firmware – stability is non-existent, but the customer expects that since this is an option on the RAID card, it should work just as well as every other option in the menu. Or the customer (and yes, I ran into one of these guys the other day) wanting to build a RAID 10 out of 16 SSDs … a rather too expensive configuration for my taste, but the customer was convinced that this was the right way to go.

So what is the “right way” to use your storage? There really isn’t one, but I’d strongly urge people to talk to a vendor and discuss the pros and cons, risks and benefits, shortcomings and upsides of their configuration – it may just be that while the customer has thought of all the innovative and “left field” ways of using storage, they haven’t considered the fundamental underlying problems they may run into because of their design.

So the lesson is? Talk to your vendor and don’t worry if they laugh/choke/smirk/scoff or otherwise deride your ideas … just listen to their input and balance your enthusiasm with their usual conservatism.

Ciao
Neil


Growing the datacenter …

Adaptec by PMC has some pretty cool products that work really well in datacenters. For datacenter operators who have moved on from “just throw brand-name hardware at it” to “let’s do it a lot cheaper and build our own storage boxes” … we have the RAID cards that provide the performance and density to meet their requirements.

Let me explain the bits in quotes above. Many a wise man has studied the datacenter environment and found that the startup often goes with the brand-name server and storage provider so they can (a) focus on their admin, (b) get service contracts from major vendors and (c) boast about their hardware platforms to prospective customers. This has a lot of benefits and is generally considered the way to start your datacenter. However, it comes at a cost … a big cost … in capital outlay and ongoing service contracts.

When a datacenter starts to grow it generally finds all sorts of cost pressures mounting against the approach of providing high-end brand-name storage … and it starts looking to do things a little on the cheaper side. Enter the whitebox storage vendor/product. Nothing wrong with whitebox – Intel and Supermicro, for example, make excellent product which can sometimes be assembled at a much lower cost than the equivalent brand-name server and capacity (and these companies make some big, big bikkies doing this, so we are not talking tin-pot operations here).

So where does Adaptec by PMC fit in? Most commonly a datacenter operator is looking for large-scale storage capacity on as cheap a platform as possible. Enter the high-density RAID card, capable of connecting directly to 24 hard drives in a small environment, or a high-density rack-level environment with your head unit connected out to large numbers of densely packed JBODs. We have products that fit both of these environments, providing the capacity and performance to ensure that the datacenter bottleneck is not in the storage infrastructure.

So we find ourselves living in phase 2 of a datacenter’s life. Phase 3 of that lifecycle is where the customer starts to look at innovative solutions to improve performance, reduce latency and differentiate themselves from the crowd. PMC plays well in this space with intelligent SSD solutions and embedded ASIC solutions for these big players. Customers also look at “can we do this with software?” – where the datacenter starts to look at its application layers and moves to simplified management of hardware via its software applications – and RAID takes a back seat to the humble HBA (and yes, we have those too). There is plenty of scope for transitioning through these phases, with modern RAID cards being able to take on different modes of operation and fit across many different platform requirements.

At the top of the tree, in phase 4, is the big end of town where building blocks for the datacenter have moved from servers or racks to cubes or containers, and the scale means that the hardware is completely secondary to the application … and the hardware environment becomes one of “ship it in, run it, then ship it out if it breaks” … with little to no interaction in between. The hardware is generally the same as in phase 3, but with greater emphasis on software control and distributed storage/function.

Typically the vast majority of smaller datacenters are at phase 1 or 2 and trying to get their hardware costs under control as their capacities continue to grow. This is not a bad thing – just a phase in the overall life of the datacenter.

So where are you? (and where is your data?)

Ciao
Neil


Driving me crazy …

I’m constantly asked the question: “what drives should I use?” Well these days I, like many others, am struggling to answer that question.

I talk to drive vendors on a regular basis and they are constantly releasing new drives – but sometimes even they seem to struggle with the marketing naming conventions and the different types of drives being released into the channel. It is true that some drives are released because there is a perceived market segment, and that some drives are built for other customers (e.g. OEMs) and released into the channel because someone thinks it’s a good idea, but in the end the result is a bit of confusion on the part of the poor people trying to work out which drives to use for their day-to-day server builds.

SSD has a wide range of so-called performance stats and a wide range of prices (even from the one vendor). 5900, 7200, 10K RPM – and that’s just in SATA. Then add SAS to the mix in 7200, 10K and 15K. What about “Hybrid” drives? Oh, by the way, mix in a good dose of 2.5” vs 3.5”, some naming conventions like “NAS, Desktop, Workstation, Datacenter, Audio Video, Cloud, Enterprise, Video” and you have a wonderful mix that confuses the living daylights out of end users and system builders. Did I forget to mention 3Gb/sec, 6Gb/sec and now 12Gb/sec drives hitting the market?

Soon ordering a drive will be like waiting in line at the local café … I’ll have a “triple venti caramel macchiato with whipped skim milk and cinnamon”. Now if I hear that from the person in front of me I think “w^%%&er”! But listening to a team of system engineers working out the correct drive for a particular customer requirement doesn’t sound much different.

Try Googling “making sense of hard drives” and you won’t get much help. Try working your way through the vendor websites and you are not much better off. So how do you do it? I ring my mates in the industry and even they struggle with all the new models and naming conventions from the marketing teams … so I wonder how the rest of the industry works out what you should be using? I’d be interested to hear.

Oh, and by the way, I forgot to take into account the “thickness” of the drive (as opposed to the “thickness” of some of the promotional material :-))

Ciao
Neil


So why change the cables?

Adaptec 7 and 8 Series controllers have moved to the new mini-SAS HD (high-density, SFF-8643) connector, as opposed to the previous mini-SAS (SFF-8087) connector of the 3, 5 and 6 Series controllers.

Why?

The older 8087 has served its purpose and been a very good, robust, reliable and fairly trouble-free connector for quite a few years. So why change it? There are technical and physical reasons why we have done this … read on for the explanation …

Background point: RAID processors were, up until the Adaptec 7 Series, 8-port on chip. This means that the Series 6 controller, for instance, and all of Adaptec’s competitors, only have 8 native ports on their controller with which to connect devices. All Adaptec 7 Series controllers, however, have 24 native ports on the chip, which means we need more than 8 physical connections on the card to make use of those ports.

The mini-SAS (8087) form factor fitted fairly well on RAID cards because it connects 4 ports from the RAID processor to a single cable. Consequently 2 connectors on the card utilized all the ports on the RAID processor. This worked well for a long time, and created a simple “standard” that RAID cards used because everyone in the industry was working with 8-port chips and 2 connectors. Like any technology, if everyone does the same thing for a while then it becomes a de-facto “standard”.

Accessing more than 8 devices has always required a workaround with this style of card, as 8 devices is the physical limit. That workaround comes in the form of an expander. The expander can either be on the backplane, connecting an 8-port chip to more than 8 drives, or it can be physically mounted on the RAID card, as in the case of the 51645 3Gb/sec card.

While expanders work well in most cases, there are situations where they are undesirable – especially at the performance end of the storage spectrum, and specifically when SSDs are utilized. They inhibit performance in SSD environments – and while that was not a problem a few years ago, SSDs have invaded the storage space and are found in all manner of systems these days.

External factors …

Hard drive vendors are focused on 2.5” drives. They promote these as their performance products (even in spinning media), while they promote the 3.5” drive as the “capacity” device. With the proliferation of 2.5” drives, the 2U chassis is starting to become a dominant form factor from the chassis vendors – because it’s big enough for the computing requirements and you can fit up to 24 2.5” drives across the front of a 2U chassis.

So between the drive vendors and chassis vendors, 2U is becoming mainstream, and bringing with it the need to connect more than 8 drives in a small, confined space.

Now that can be resolved with expanders on the backplane – agreed. However, taking into account the previously-described performance issues with SSDs and expanders, and the fact that SSDs are 2.5” and tempting to put in these boxes in one way or another, it becomes highly desirable to connect all those drives without an expander in the equation. Especially when taking into consideration technologies such as SSD caching and Hybrid RAID, SSD is becoming a pervasive force in storage in one form or another … and you certainly want your storage infrastructure to make full use of that performance and not hinder it (especially because you paid so much for those drives).

So …

Put all that together:

  1. SSD becoming commonplace
  2. Chassis vendors pushing for the 2U 2.5” form factor
  3. Drive vendors pushing towards the 2.5” form factor
  4. Expander technology inhibiting SSD performance
  5. A RAID card with 24 native ports on the processor
  6. The desire to fit all of that on a low-profile product that fits in a 2U chassis

The result is the need to put more than 8 direct connections on a low-profile RAID card. The only problem is that this can’t physically be done with 8087 connectors – they are too big. So Adaptec looked to the future, and at the industry-standard connector being introduced in the very near future for the 12Gb/sec SAS standard – the mini-SAS HD 8643 connector.

The 8643 allows us to fit 4 connectors on a low-profile card (16 native port connections). If we want more (24) then we need a full-height card, because even with the compact size of the 8643 it’s just not possible to fit 24 port connections on a low-profile card. So we used the 8643 and created a range of the world’s highest-density RAID controllers in the process.
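
The arithmetic behind that is simple enough (port counts as above; both the 8087 and the 8643 carry 4 lanes per connector):

```python
LANES_PER_CONNECTOR = 4    # mini-SAS and mini-SAS HD both bundle 4 ports per connector
ROC_NATIVE_PORTS = 24      # Series 7/8 RAID-on-Chip

for connectors in (2, 4, 6):
    print(connectors, "connectors ->", connectors * LANES_PER_CONNECTOR, "direct ports")
# 2 -> 8  : the old two-8087 layout, matching an 8-port ROC
# 4 -> 16 : fits on a low-profile card with 8643 connectors
# 6 -> 24 : uses all the ROC ports, but needs a full-height card
```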

Ramifications

Technically – none. Logistically … hmmm. Yes, we had to create a new range of cables to connect the 8643 on the controller to the 8087 on the backplane etc., but we made sure we stuck to standards, and the cable vendors of the world have gone along and created plenty of alternatives to using Adaptec cables (though of course I’m always going to recommend using our cables).

Reality

System builders have always had to source cables, either from the card vendor or from third-party manufacturers. With the introduction of the 8643 connector it has become necessary for people to go through the logistics of sourcing new cables (or learning new order numbers), but we believe the benefits of the increased connector density and the ability to bypass expanders in many more configurations than before are worth the pain we have all gone through in learning these new technologies and their associated part numbers :-)

Ciao
Neil


To cloud or not to cloud? …

A long time ago I wrote a blog about how to set up a small business server for maximum efficiency across all the different components going on in that box (Windows Small Business Server). While reviewing some old documentation I came across this article and thought about updating it to include SSDs, solid-state caching and tiering technologies. However, as I pondered this question and looked a little deeper, the question changed …

It seems the real question today, with the demise of SBS, is … “Do I put my data in the cloud or do I keep it inhouse?”

Now, being a remote worker I’m totally isolated from my corporate in-house network environment, so I look at this from a specialized viewpoint, but I thought I’d cover a few of the questions and seek your input. I hear it from people a lot: “I’m not putting my data at the mercy of my internet connection.” Good point – however, a closer look tends to dispel a lot of that issue. This of course depends on your work type and environment, but there is a lot of commonality for all workers here, so please read on.

I work remotely, while family members work in offices inside corporate environments. I have all my data on my laptop (backed up, of course), while the wife has no data on her workstation – it all resides on the server. So what happens when the internet goes down? We both scream blue murder. Sure, I can type a few blogs, and probably develop a few PowerPoint presentations, but both my wife and I will surely fade away and collapse if we don’t get access to that next email we are so keenly expecting and waiting for.

On the other hand, my wife uses an accounting package that she can spend many hours in without connection to anything except her local machine and merrily work away quite successfully without internet connection.

So what needs to be local and what can afford to be remote? Simply put (IMHO), if I’m a knowledge worker requiring local data, then that data needs to be local so I can access it without the internet getting in the way of access or performance. My email, however, can live out on the internet, because it doesn’t matter whether it’s local or remote – if the link is down and I can’t receive (or my email server can’t receive), then it matters little to me what the cause is; I just won’t get email (and can’t send either).

I see this sort of mentality more and more in small business across Australia – put my email in the cloud (normally hosted Exchange) but leave me fast access to local data so I can open and close my large files quickly on my fileserver.

While this sounds very simplistic, it actually has quite an impact on the storage needs of the two locations. If the local server is just doing file serving (unless it is for millions of people), then SATA drives in RAID 6 are fine. On the other hand, if the cloud-based storage is handling large numbers of virtual Exchange installations, then it will need some serious grunt to handle that.

Now complicate the matter further and ask where the backup should be. Should that be local or across the internet or both? Likely both is the ideal answer but that again does not require much in the way of performance storage hardware.

So what impact is this having on a company like Adaptec? Simply put, we, like everyone else, are rapidly getting our heads around the datacenter – the place where all those virtualized Exchange servers in the above story live – because that’s where the pressure on storage is today; the local stuff is pretty easy in comparison. So if you are in the datacenter industry, handling other people’s data, then look at our current and upcoming products – they offer some interesting features suited to those specialized applications.

What is your opinion? Data in the cloud or local for Small Business? What makes sense to you?

Ciao
Neil
