Did you know (1)? …

Having a bit of writer's block this morning ("write blog article" appeared in my Outlook calendar but the brain is not responding in kind), so I thought I'd take the easy way out and give a quick technical update on our products with some obscure, bizarre or just not-widely-known bit of information.

“Auto Rebuild” is an option in the BIOS of our cards, and most people have no idea how it works. It could have been called “the option that helps the forgetful system admin” or “even if you don't bother, we'll safeguard your data for you if we can” … but I don't think either of those descriptions would fit in the BIOS screen. “Auto Rebuild” is so last century (but that's OK, because it's been around for probably that long).

So what does it do? When enabled (which it is by default), if the card finds an array that is degraded it will first look for a hot spare. If no hot spare is found it will look for any unused devices (drives that are not part of an array). If a suitable unused disk is found (of the right size), the card will build that disk back into the array.
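
If you like to think in code, the decision logic boils down to something like the following – a simplified Python sketch of the behaviour I've just described, not the actual firmware (the drive attributes and sizes are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Drive:
    size_gb: int
    is_hot_spare: bool = False
    in_array: bool = False

def pick_rebuild_candidate(required_gb, drives, auto_rebuild=True):
    """Return the drive the controller would rebuild onto, or None."""
    # 1. A designated hot spare of sufficient size always wins.
    for d in drives:
        if d.is_hot_spare and d.size_gb >= required_gb:
            return d
    # 2. With Auto Rebuild enabled, fall back to any unused drive of a suitable size.
    if auto_rebuild:
        for d in drives:
            if not d.in_array and not d.is_hot_spare and d.size_gb >= required_gb:
                return d
    # 3. Nothing suitable: the array stays degraded until someone intervenes.
    return None

# Forgetful-admin scenario: the replacement disk was never made a hot spare,
# but Auto Rebuild still picks it up.
drives = [Drive(1000, in_array=True), Drive(1000, in_array=True), Drive(1000)]
print(pick_rebuild_candidate(1000, drives))   # -> the unused 1000GB drive
```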

Why did we need this? We have lots of hot spare options – why bother with an auto feature as well? My theory (and I'll probably never know the full exact reason) is that system builders often send off a new system with a hot spare in place, but when a drive fails the user/customer replaces it without knowing how to – or that they should, or need to – make that new drive a hot spare. So in my experience the “new drive” that has replaced the old failed drive is often just sitting there as a raw device … and when the next drive fails it does nothing, because it's not a “hot spare”.

Well, in the case of our cards set to their factory defaults, that new drive will kick in and replace a failed drive (and rebuild the array), because of “Auto Rebuild” – or “even if you don't bother, we'll safeguard your data for you if we can”, as I like to call it.

Did you know that?

Quick update: I might have generalised a bit too much on this one. Note that this works with a SES-2 (hot-swap) backplane when you change the drive in the same slot. Note also that on a Series 7 controller it works if you put the new drive into the same slot as the old one, but if you put the new drive into a different slot you'll have to make it a hot spare yourself, because a Series 7 will grab that drive and show it to the OS as a pass-through device. I was trying to keep this simple, but probably went a bit far. Oh, the complexities we weave :-)

Ciao
Neil


But I told you … I don’t want 16 ports! (are you deaf?) …

So a customer rings up looking for a fast card for RAID 1 and RAID 10 – he's going to use SSDs to make his ultimate home workstation/video server/graphics-CAD machine etc etc (standard home user). He's going full SSD no matter what anyone tells him, so it's “performance, performance, performance” all the way.

So … what card do I need? I've used up all my available funds on drives but I want the fastest possible RAID card to connect these SSDs.

71605E

“Now listen mate. I just told you I only have 4 drives … I’m not forking out for a 16-port controller!” “Typical sales guy … doesn’t listen.” “I said I had 4 drives, I said I wanted RAID 10 and you’re trying to flog me a 16-port controller!”

And so the conversation goes. It’s not until you get someone to look at the pricelist … scroll all the way through to the 71605E that you hear them go “oh!” on the other end of the phone. This is a pretty common scenario in my neck of the woods. The customer has 2 or 4 drives, so is looking for a 2 or 4-port controller … they certainly don’t go looking for a 16-port controller.

So where did this all go wrong? In a way, it was the generosity of the product marketing team that started this (when they read that they'll think I've started being nice to them). The 71605E has less RAM onboard (256MB), which is fine because it's only doing RAID 0, 1, 10, 1E and Hybrid … so it doesn't need a lot of RAM. Combine that with the fact that when connecting SSDs we recommend turning off the cache anyway – so it's pretty pointless putting a lot of the stuff on there – and it lets us get the price down.

However … the chip is the same as all the other Series 7 controllers (24-port native ROC), so why put only 4 or 8 plastic connectors on the card to connect cables to? 16 fit, so why not just leave them there? Makes sense to me … whether I need them all or not, it does the job. In reality it's a really sensible, good-value card that fits the bill for a lot of people … if they knew about it.

But they don’t … because they are not looking for it.

They are looking for a 4 or 8-port controller because that is how many drives they have, and they think that anything with a “16” on it will be crazy expensive, so they don't start at that end of the cattle-dog (catalogue) … and hence never know about this card. So take a look at the “entry-level” Series 7 controller … even though it has 16 ports it may in fact be just the low-cost, high-performance, entry-level controller you are looking for.

Maybe this is the card they should be using instead of the 6805 as mentioned in one of my previous posts? Now even I’m getting confused :-)

Ciao
Neil


Getting the balance right …

Yin and yang … the Chinese had it just about right when they coined that phrase (and yes, I used Google to check that it is in fact Chinese – though I'm sure some smart soul will tell me otherwise) …

I've had several customers recently who have been reading about hybrid RAID. This is where you can mix an SSD and an HDD on an Adaptec Series 6, 7 or 8 and make a mirror out of those two drives. While this sounds crazy it is pretty simple: writes go to both drives (nicely buffered by controller cache, so you get good speed), but all reads are served from the SSD – giving lightning read speed.
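
If you like it spelled out, the routing rule really is that simple. Here's a minimal Python sketch of the idea (illustrative only – the real behaviour and caching live in the controller firmware):

```python
# Hybrid RAID 1 routing rule: writes are mirrored to both members,
# reads are always served from the SSD.

class HybridMirror:
    def __init__(self, ssd, hdd):
        self.ssd = ssd   # fast member (serves all reads)
        self.hdd = hdd   # spinning member (redundancy only)

    def write(self, lba, data):
        # Both members must be updated to keep the mirror consistent;
        # on the card this is buffered by the controller cache.
        self.ssd[lba] = data
        self.hdd[lba] = data

    def read(self, lba):
        # Reads never touch the HDD, so read performance is SSD-class.
        return self.ssd[lba]

mirror = HybridMirror(ssd={}, hdd={})   # dicts stand in for the two drives
mirror.write(0, b"hello")
print(mirror.read(0))                   # b'hello', served from the SSD side
```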

Sounds good … and people use the old story of “I don't need RAID 5, so let's go for an entry-level card” – in this case, the 6405E. So far, so good. However … the 6405E is PCIe 2.0 x1, which limits throughput to roughly 500MB/sec. That's probably not going to be too much of a problem on a RAID 1 – one SSD will go close to saturating that, but it will be close.

However … make a RAID 10 with 2 x SSD and 2 x HDD and you are starting to stretch the friendship a bit. Reading a relatively large file off that RAID should in theory saturate the PCIe 2.0 x1 bus, making the card the bottleneck. So in this case you need to go to the 6405 controller, not the “E”, which has a PCIe 2.0 x8 connector and can easily handle the throughput of the SSDs.
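
A quick back-of-the-envelope check makes the point – the per-lane and per-SSD figures below are round assumptions, not measured numbers:

```python
# Rough bottleneck check: does the host link or the drives limit read throughput?

PCIE2_LANE_MB_S = 500                 # approx. usable bandwidth per PCIe 2.0 lane

def read_bottleneck(ssd_count, ssd_read_mb_s, pcie_lanes):
    drives = ssd_count * ssd_read_mb_s        # RAID 1/10 reads come from the SSDs
    bus = pcie_lanes * PCIE2_LANE_MB_S        # host link bandwidth
    return min(drives, bus), ("bus" if bus < drives else "drives")

# 6405E (PCIe 2.0 x1), hybrid RAID 10 with 2 SSDs at ~450MB/s each:
print(read_bottleneck(2, 450, 1))   # (500, 'bus')    -> the card is the bottleneck
# 6405 (PCIe 2.0 x8), same array:
print(read_bottleneck(2, 450, 8))   # (900, 'drives') -> the SSDs are the limit
```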

So yes, you only need an entry-level card for a RAID 1 or RAID 10, but if you are doing hybrid RAID then you probably need to consider the theoretical speed of the SSDs you are using and make sure there is no bottleneck in the way of their performance.

Ciao
Neil

 

 


Are you using your storage the right way? …

I spent a short time yesterday at VMware's VMworld 2013 “Defy Convention” show. Now maybe that should be “Deny Convention”, but that would be cynical of me. 99.9% of the vendors at the show were showing SAN products – storage, backup, de-dupe, migration, DR … you name it, it was there – all based around SAN product. Now I'm not that dumb: if you hit me over the head with a SAN for long enough I get the point – SANs greatly benefit the functionality of all things VMware by allowing migration etc (just plain moving stuff around). So VMware focuses on, and promotes, SAN as its primary/preferred storage medium – makes sense to me.

However … (and yes, there is always a gotcha) …

We sell an awful lot of RAID cards to people using VMware on DAS (direct-attached storage). Yet I could not find even one person who was willing to discuss direct-attached storage – it was basically a no-no discussion point, since it does not fit all the functionality and marketing hype that VMware puts around its products – after all, it's all stuck in the one box!

The reality is that no matter what a vendor thinks, and how hard they promote a specific use of product, the customer will always come up with an innovative (I call it “left field”) way of using your product, often to a point that you don’t think is very smart or realistic, but you keep that opinion to yourself because right or wrong, you want the customer to buy more product.

In RAID storage it's akin to the customer running RAID 0 on a bunch of desktop drives all running old firmware – stability is non-existent, but the customer expects that since this is an option in the RAID card, it should work just as well as every other option in the menu. Or the customer (and yes, I ran into one of these guys the other day) wanting to build a RAID 10 out of 16 SSDs … a rather too expensive configuration for my taste, but the customer was convinced that this was the right way to go.

So what is the “right way” to use your storage? There really isn't one, but I'd strongly urge people to talk to a vendor and discuss the pros and cons, risks and benefits, shortcomings and upsides of their configuration – it may just be that while the customer has thought of all the innovative and “left field” ways of using storage, they haven't considered the fundamental underlying problems they may run into because of their design.

So the lesson is? Talk to your vendor and don’t worry if they laugh/choke/smirk/scoff or otherwise deride your ideas … just listen to their input and balance your enthusiasm with their usual conservatism.

Ciao
Neil


Growing the datacenter …

Adaptec by PMC has some pretty cool products that work really well in datacenters. For datacenter operators who have moved on from “just throw brand-name hardware at it” to “let's do it a lot cheaper and build our own storage boxes” … we have the RAID cards that provide the performance and density to meet their requirements.

Let me explain the bit in quotes above. Many a wise man has studied the datacenter environment and found that the startup often goes with the brand-name server and storage provider so they can (a) focus on their admin, (b) get service contracts from major vendors and (c) boast about their hardware platforms to prospective customers. This has a lot of benefits and is generally considered the way to start your datacenter. However it comes at a cost … a big cost … in capital outlay and ongoing service contracts.

When a datacenter starts to grow it generally finds all sorts of cost pressures mounting against the solution of providing high-end, brand-name storage … and it starts looking to do things a little on the cheaper side. Enter the whitebox storage vendor/product. Nothing wrong with whitebox – Intel and Supermicro, for example, make excellent product which can often be assembled at a much lower cost than the equivalent brand-name server and capacity (and these companies make some big, big bikkies doing this, so we are not talking tin-pot operations here).

So where does Adaptec by PMC fit in? Most commonly a datacenter operator is looking for large-scale storage capacity on as cheap a platform as possible. Enter the high-density RAID card, capable of connecting directly to 24 hard drives in a small environment, or sitting in a high-density rack-level environment with your head unit connected out to large numbers of densely packed JBODs. We have products that fit both of these environments, providing the capacity and performance to ensure that the datacenter bottleneck is not in the storage infrastructure.

So we find ourselves living in phase 2 of a datacenter's life. Phase 3 of that lifecycle is where the customer starts to look at innovative solutions to improve performance, reduce latency and differentiate themselves from the crowd. PMC plays well in this space with intelligent SSD solutions and embedded ASIC solutions for these big players. Customers also ask “can we do this with software?” – the datacenter starts to look at its application layers and moves to simplified management of hardware via its software applications – and RAID takes a back seat to the humble HBA (and yes, we have those too). There is plenty of scope for transitioning through these phases, with modern RAID cards able to take on different modes of operation and fit across many different platform requirements.

At the top of the tree, in phase 4, is the big end of town, where the building blocks for the datacenter have moved from servers or racks to cubes or containers, and the scale means that the hardware is completely secondary to the application … and the hardware environment becomes one of “ship it in, run it, then ship it out if it breaks” … with little to no interaction in between. The hardware is generally the same as in phase 3, but with greater emphasis on software control and distributed storage/function.

Typically the vast majority of smaller datacenters are at phase 1 or 2 and trying to get their hardware costs under control as their capacities continue to grow. This is not a bad thing – just a phase in the overall life of the datacenter.

So where are you? (and where is your data?)

Ciao
Neil

 


Driving me crazy …

I’m constantly asked the question: “what drives should I use?” Well these days I, like many others, am struggling to answer that question.

I talk to drive vendors on a regular basis and they are constantly releasing new drives – but sometimes even they seem to struggle with the marketing naming conventions and the different types of drives being released into the channel. It is true that some drives are released because there is a perceived market segment, and that some drives are built for other customers (e.g. OEMs) and released into the channel because someone thinks it's a good idea, but the end result is a bit of confusion for the poor people trying to work out which drives to use for their day-to-day server builds.

SSDs have a wide range of so-called performance stats and a wide range of prices (even from the one vendor). Then there's spinning media: 5900, 7200 and 10K RPM – and that's just in SATA. Add SAS to the mix at 7200, 10K and 15K. What about “hybrid” drives? Oh, and by the way, mix in a good dose of 2.5” vs 3.5”, some naming conventions like “NAS, Desktop, Workstation, Datacenter, Audio Video, Cloud, Enterprise, Video”, and you have a wonderful mix that confuses the living daylights out of end users and system builders. Did I forget to mention 3Gb/s, 6Gb/s and now 12Gb/s drives hitting the market?

Soon ordering a drive will be like waiting in line at the local café … “I'll have a triple venti caramel macchiato with whip, skim milk and cinnamon.” Now if I hear that in front of me I think “w^%%&er”! But listening to a team of system engineers work out the correct drive for a particular customer requirement doesn't sound too much different.

Try googling “making sense of hard drives” and you won't get much help. Try working your way through the vendor websites and you are not much better off. So how do you do it? I ring my mates in the industry and even they struggle with all the new models and naming conventions from the marketing teams … so I wonder how the rest of the industry works out what to use. I'd be interested to hear.

Oh, and by the way, I forgot to take into account the “thickness” of the drive (as opposed to the “thickness” of some of the promotional material :-))

Ciao
Neil


So why change the cables?

Adaptec Series 7 and 8 controllers have moved to the new mini-SAS HD (high-density, SFF-8643) connector, as opposed to the previous mini-SAS (SFF-8087) connector used on the Series 3, 5 and 6 controllers.

Why?

The older 8087 has served its purpose and been a very good, robust, reliable and fairly trouble-free connector for quite a few years. So why change it? There are technical and physical reasons why we have done this … read on for the explanation …

Background point: RAID processors have been, up until the Adaptec Series 7, 8-port on chip. This means that the Series 6 controller, for instance, and all of Adaptec's competitors, have only 8 native ports on the controller with which to connect devices. All of Adaptec's Series 7 controllers, however, have 24 native ports on the chip, which means we need more than 8 physical connections on the card to make use of those ports.

The mini-SAS (8087) form factor fitted fairly well on RAID cards because it connects 4 ports from the RAID processor to a single cable. Consequently 2 connectors on the card utilized all the ports on the RAID processor. This worked well for a long time, and created a simple “standard” that RAID cards used because everyone in the industry was working with 8-port chips and 2 connectors. Like any technology, if everyone does the same thing for a while then it becomes a de-facto “standard”.

Accessing more than 8 devices has always required a workaround with this style of card, as 8 devices is the physical limit. That workaround comes in the form of an expander, which can either sit on the backplane, connecting an 8-port chip to more than 8 drives, or be physically mounted on the RAID card, as in the case of the 51645 3Gb/s card.

While expanders work well in most cases, there are situations where they are undesirable – especially at the performance end of the storage spectrum, and specifically when SSDs are used. Expanders inhibit performance in SSD environments – and while that was not a problem a few years ago, the SSD has invaded the storage space and is found in all manner of systems these days.

External factors …

Hard drive vendors are focused on 2.5” drives. They promote these as their performance products (even in spinning media), while they promote the 3.5” drive as the “capacity” device. With the proliferation of 2.5” drives, the 2U chassis is starting to become a dominant form factor from the chassis vendors – because it’s big enough for the computing requirements and you can fit up to 24 2.5” drives across the front of a 2U chassis.

So between the drive vendors and chassis vendors, 2U is becoming mainstream, and bringing with it the need to connect more than 8 drives in a small, confined space.

Now that can be resolved with expanders on the backplane – agreed. However, take into account the performance issues with SSDs and expanders described above, plus the fact that SSDs are 2.5” and tempting to put in these boxes one way or another, and it becomes highly desirable to connect all those drives without an expander in the equation. Especially when you consider technologies such as SSD caching and hybrid RAID, SSD is becoming a pervasive force in storage in one form or another … and you certainly want your storage infrastructure to make full use of that performance and not hinder it (especially because you paid so much for those drives).

So …

Put all that together:

  1. SSDs becoming commonplace
  2. Chassis vendors pushing for the 2U, 2.5” form factor
  3. Drive vendors pushing towards the 2.5” form factor
  4. Expander technology inhibiting SSD performance
  5. A RAID card with 24 native ports on the processor
  6. The desire to fit all that into a low-profile product that fits a 2U chassis

The result is the need to put more than 8 direct connections on a low-profile RAID card. The only problem is that this can't physically be done with 8087 connectors – they are too big. So Adaptec looked to the future, at the industry-standard connector being introduced for the 12Gb/s SAS standard – the mini-SAS HD 8643 connector.

The 8643 allows us to fit 4 connectors on a low-profile card (16 native port connections). If we want more (24) then we need to go full-height, because even with the efficient size of the 8643 it's just not possible to fit 24 port connections on a low-profile card. So we used the 8643 and created a range of the world's highest-density RAID controllers in the process.
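
The arithmetic behind all of this is pretty simple. Here's a quick sketch – 4 lanes per connector comes from the connector specs, while the “how many connectors fit on a low-profile bracket” figures are the assumption described in this post:

```python
# How many connectors does a given number of native ports need,
# and does that fit on a low-profile card?

LANES_PER_CONNECTOR = 4                        # both mini-SAS (8087) and mini-SAS HD (8643)
LOW_PROFILE_LIMIT = {"8087": 2, "8643": 4}     # rough physical-fit assumption

def connectors_needed(native_ports):
    return -(-native_ports // LANES_PER_CONNECTOR)   # ceiling division

for ports in (8, 16, 24):
    n = connectors_needed(ports)
    fits = {conn: n <= limit for conn, limit in LOW_PROFILE_LIMIT.items()}
    print(f"{ports} ports -> {n} connectors, fits low-profile: {fits}")

# 8 ports  -> 2 connectors: fine with either connector
# 16 ports -> 4 connectors: only possible on a low-profile card with 8643
# 24 ports -> 6 connectors: full-height even with 8643
```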

Ramifications

Technically – none. Logistically … hmmm. Yes, we had to create a new range of cables to connect the 8643 on the controller to the 8087 on the backplane etc, but we made sure we stuck to standards, and the cable vendors of the world have gone along and created plenty of alternatives to using Adaptec cables (though of course I'm always going to recommend using our cables).

Reality

System builders have always had to source cables, either from the card vendor or from third-party manufacturers. With the introduction of the 8643 connector it has become necessary for people to go through the logistics of sourcing new cables (or learning new order numbers), but we believe the benefits of the increased connector density, and the ability to bypass expanders in many more configurations than previously, are worth the pain that we have all gone through in learning these new technologies and their associated part numbers :-)

Ciao
Neil

 


To cloud or not to cloud? …

A long time ago I wrote a blog about how to set up a small business server (Windows Small Business Server) for maximum efficiency across all the different components going on in that box. While reviewing some old documentation I came across the article and thought about updating it to include SSDs, solid-state caching and tiering technologies. However, as I pondered this question and looked a little deeper, the question changed …

It seems the real question today, with the demise of SBS, is … “Do I put my data in the cloud or do I keep it inhouse?”

Now, being a remote worker I'm totally isolated from my corporate inhouse network environment, so I look at this with a specialised viewpoint, but I thought I'd cover a few of the questions and seek your input. I hear from people a lot: “I'm not putting my data at the mercy of my internet connection.” Good point – however a closer look tends to dispel a lot of that issue. This of course depends on your work type and environment, but there is a lot of commonality for all workers here, so please read on.

I work remotely, while family members work in offices inside corporate environments. I have all my data on my laptop (backed up, of course), while my wife has no data on her workstation – it all resides on the server. So what happens when the internet goes down? We both scream blue murder. Sure, I can type a few blogs and probably develop a few PowerPoint presentations, but both my wife and I will surely fade away and collapse if we don't get access to that next email we are so keenly expecting and waiting for.

On the other hand, my wife uses an accounting package that she can spend many hours in without connection to anything except her local machine, and merrily work away quite successfully without an internet connection.

So what needs to be local and what can afford to be remote? Simply put (IMHO), if I'm a knowledge worker requiring local data then that data needs to be local, so I can get to it without the internet hindering either access or performance. My email, however, can live out on the internet, because it doesn't matter whether it's local or remote: if the link is down and I can't receive (or my email server can't receive), then it matters little to me what the cause is – I just won't get email (and can't send either).

I see this sort of mentality more and more in small business across Australia – put my email in the cloud (normally hosted exchange) but leave me fast access to local data so I can open and close my large files quickly on my fileserver.

While this sounds very simplistic, it actually has quite an impact on the storage needs of the two locations. If the local server is just doing fileserving (unless it is for millions of people), then SATA drives in RAID 6 are fine. On the other hand, if the cloud-based storage is handling large numbers of virtual Exchange installations, then it will need some serious grunt to handle that.
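
As an aside, sizing that local RAID 6 fileserver is simple arithmetic (the drive counts and sizes below are purely illustrative):

```python
# Usable capacity and fault tolerance of a RAID 6 set of equal-size drives.
def raid6_usable_tb(drive_count, drive_tb):
    if drive_count < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drive_count - 2) * drive_tb       # two drives' worth of space goes to parity

print(raid6_usable_tb(8, 4))   # 8 x 4TB SATA drives -> 24TB usable, any two drives can fail
```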

Now complicate the matter further and ask where the backup should be. Should that be local or across the internet or both? Likely both is the ideal answer but that again does not require much in the way of performance storage hardware.

So what impact is this having on a company like Adaptec? Simply put, we, like everyone else, are rapidly getting our heads around the datacenter – the place where all those virtualised Exchange servers live in the above story – because that's where the pressure on storage is today; the local stuff is pretty easy in comparison. So if you are in the datacenter industry, handling other people's data, then look at our current and upcoming products – they offer some interesting features suited to those specialised applications.

What is your opinion? Data in the cloud or local for Small Business? What makes sense to you?

Ciao
Neil


flexConfig … so what’s in a name?

Howdy folks,

Been a while since I put fingers to keyboard in this application because I’ve been getting my head around some fancy new changes we made to our RAID cards in our Series 7 range.

In years gone by (and for a long, long time) we hid from customers all drives that were not part of RAID arrays. Basically, if you wanted to use a disk you had to do something with it (make it part of a RAID array, or make it a volume/JBOD etc). This meant configuration requirements for those disks – not a bad thing if you are a system admin trying to keep your job, but some customers just don't want to have to touch the RAID card or its management software all the time.

Hence flexConfig.

This is a new name for a change to our RAID code, thought up by the lads in our product marketing team (I have to be nice, my Editor is one of those “lads” :-)). However it’s much more than a name change – it’s a bit of a game changer for RAID cards once you get your head around it.

We have RAID cards and we have HBA cards. RAID cards do fancy things with redundancy, performance, capacity etc, while HBA cards do little or none of those functions – they just present drives to the OS. So why would you want one or the other? That's totally up to your system and software design, and we know that lots of people want each of those card types. We also know that some (many) customers want a bit of both – i.e. the flexibility to do one or the other, or both at the same time, in the one card.

A RAID card AND an HBA? (and at the same time?) …

I’m not going to try and explain all the functionality here because that’s the job I do at the PMC University where we have revived the ACSP Guides from years gone by and tried to bring them into the 21st century.

Suffice to say:

1. There are now three modes a card can run in: RAID mode, HBA mode and Auto Volume mode
2. There are now three states a disk can exist in: RAW, READY and MEMBER
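
As a rough mental model, here's that vocabulary in sketch form – the names come straight from the two lists above, while the one-line descriptions of each mode are my own shorthand, not a firmware specification:

```python
# Minimal sketch of the flexConfig vocabulary. Simplified for illustration.
from enum import Enum

class ControllerMode(Enum):
    RAID = "RAID"                # full RAID feature set: arrays, cache, redundancy
    HBA = "HBA"                  # no RAID logic: attached disks pass straight to the OS
    AUTO_VOLUME = "Auto Volume"  # unconfigured disks are surfaced automatically as simple volumes

class DiskState(Enum):
    RAW = "RAW"          # fresh disk, no controller metadata written
    READY = "READY"      # initialised with blank metadata, eligible to join an array
    MEMBER = "MEMBER"    # already consumed into a RAID array
```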

We're working on a complete explanation to go up on our University site (http://www.adaptecuniversity.com/) … and no, that's not me sitting back looking smug on the front page.

So register yourself (free) in the Uni and take a look at the information there – especially the “flexConfig” module that will be up in a week or two.

Ciao
N


When is a RAID card not a RAID card?

Answer: when it’s an HBA … or … when it’s half a RAID card and half an HBA … hmmm.

In the storage industry there are RAID cards and HBAs (host bus adapters). A lot of people might think of HBAs as simple tools for connecting tape drives to a server, but there is a lot more to the humble HBA, and it covers a broad spectrum of the storage industry. A RAID card will take a bunch of individual physical disks, group them together into a “logical disk” (RAID array) and show that bigger, faster, redundant disk to the operating system. An HBA, on the other hand, doesn't do any of that fancy stuff – it just takes the individual drives attached to the card and shows them to the operating system. Sounds simple, doesn't it?

So why would you want to do that? Why single disks instead of RAID arrays? Well, there are a few reasons why people want HBA functionality rather than RAID functionality. An example is the ZFS file system, where the filesystem takes individual drives and builds redundant data across those drives. At the other end of town, the large datacenters don't want RAID either – they make their data redundant by load balancing across multiple disks at the application level. But what if you want both? What if you want a mirror for your operating system, then a bunch of individual drives for your fancy features? If you have to make each disk a volume or JBOD, then the data flows through the RAID function of the card, utilizes cache on the card and has to be configured – a time-consuming process with a lot of drives.

An HBA, on the other hand, doesn't have any configuration issues – you simply see the drives that are attached to the card. Adaptec's Series 7 has the ability to do both. The card thinks of drives in three different states: raw, ready and member. Raw drives are brand-new, out-of-the-box drives that have nothing (no metadata) written to them – these drives can't be used for RAID arrays. Ready drives have been initialised – a blank metadata structure has been placed at the head and foot of the drive – and are seen as ready to go into an array (and can thus only be used for that purpose). Member drives are drives that have already been consumed into RAID arrays.
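
If it helps, here's that drive life-cycle as a tiny sketch – the transition names (“initialise”, “add_to_array”) are illustrative labels, not actual management-tool commands:

```python
# Sketch of the raw -> ready -> member drive life-cycle described above.

TRANSITIONS = {
    ("RAW", "initialise"): "READY",       # metadata written to head/foot of the drive
    ("READY", "add_to_array"): "MEMBER",  # drive consumed into a RAID array
}

def next_state(state, action):
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"can't {action} a {state} drive")

print(next_state("RAW", "initialise"))       # READY
print(next_state("READY", "add_to_array"))   # MEMBER
# next_state("RAW", "add_to_array")          # ValueError: raw drives can't join arrays
```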

So if you have 8 drives connected to your card, you can have any combination of drives in and out of arrays, and those drives out of arrays can be presented automatically to the operating system (or hidden from it). It sounds complicated, but it's not really that bad. It just opens up the possibilities for system builders to tailor solutions for their customers … and that can't be a bad thing.

Ciao
Neil
