Growing the datacenter …

Posted in Advisor - Neil, Application Environments, General, Platforms, Storage Applications, Storage Interconnects & RAID, Storage Management by Neil

Adaptec by PMC has some pretty cool products that work really well in datacenters. For datacenter operators who have moved on from the “just throw brand-name hardware at it” approach to the “let’s do it a lot cheaper and build our own storage boxes” approach … we have the RAID cards that provide the performance and density to meet their requirements.

Let me explain the bit in quotes above. Many a wise man has studied the datacenter environment, and found that the startup often goes with the brand-name server and storage provider so they can (a) focus on their admin, (b) get service contracts from major vendors and (c) boast about their hardware platforms to their prospective customers. This has a lot of benefits and is generally considered the way to start your datacenter. However, it comes at a cost … a big cost … in capital outlay and ongoing service contracts.

When a datacenter starts to grow it generally finds all sorts of cost pressures mounting against the solution of providing high-end brand-name storage … and they start looking to do things a little on the cheaper side. Enter the whitebox storage vendor/product. Nothing wrong with whitebox – Intel and Supermicro for example make excellent products which can sometimes be assembled at a much lower cost than the equivalent brand-name server and capacity (and these companies make some big, big bikkies doing this so we are not talking tin-pot operations here).

So where does Adaptec by PMC fit in? Most commonly a datacenter operator is looking for large-scale capacity in their storage in as cheap a platform as possible. Enter the high-density RAID card capable of connecting directly to 24 hard drives in a small environment, or a high-density rack-level environment with your head unit connected out to large numbers of densely packed JBODs. We have products that fit in both of these environments, providing the capacity and performance to ensure that the datacenter bottleneck is not in the storage infrastructure.

So we find ourselves living in phase 2 of a datacenter’s life. Phase 3 of that lifecycle is where the customer will start to look at innovative solutions to improve performance, reduce latency and differentiate themselves from the crowd. PMC plays well in this space with intelligent SSD solutions and ASIC embedded solutions for these big players. Customers also look at “can we do this with software?” – where the datacenter starts to look at their application layers and moves to simplified management of hardware via their software applications – and RAID takes a back seat to the humble HBA (and yes, we have those too). There is plenty of scope for transitioning through these phases with modern RAID cards being able to take on different modes of operation and fit across many different platform requirements.

At the top of the tree, in phase 4, is the big end of town where building blocks for the datacenter have moved from servers or racks to cubes or containers, and the scale means that the hardware is completely secondary to the application … and the hardware environment becomes one of “ship it in, run it, then ship it out if it breaks” … with little to no interaction in between. The hardware is generally the same as in phase 3, but with greater emphasis on software control and distributed storage/function.

Typically the vast majority of smaller datacenters are at phase 1 or 2 and trying to get their hardware costs under control as their capacities continue to grow. This is not a bad thing – just a phase in the overall life of the datacenter.

So where are you? (and where is your data?)



Driving me crazy …


I’m constantly asked the question: “what drives should I use?” Well these days I, like many others, am struggling to answer that question.

I talk to drive vendors on a regular basis and they are constantly releasing new drives – but sometimes even they seem to struggle with the marketing naming conventions and the different types of drives being released into the channel. It is true that some drives are released because there is a perceived market segment, and that some drives are built for other customers (e.g. OEMs) and released into the channel because someone thinks it’s a good idea, but the end result is a bit of confusion for the poor people trying to work out which drives to use for their day-to-day server builds.

SSDs have a wide range of so-called performance stats and a wide range of prices (even from the one vendor). Spinning drives? 5900, 7200 and 10K RPM – and that’s just in SATA. Then add SAS to the mix in 7200, 10K and 15K. What about “Hybrid” drives? Oh, by the way, mix in a good dose of 2.5” vs 3.5”, some naming conventions like “NAS, Desktop, Workstation, Datacenter, Audio Video, Cloud, Enterprise, Video” and you have a wonderful mix that confuses the living daylights out of end users and system builders. Did I forget to mention 3Gb, 6Gb and now 12Gb drives hitting the market?

Soon ordering a drive will be like waiting in line at the local café … I’ll have a “triple venti caramel macchiato with whip, skim milk and cinnamon”. Now if I hear that in front of me I think “w^%%&er”! But listening to a team of system engineers work out the correct drive for a particular customer requirement doesn’t sound too much different.

Try googling “making sense of hard drives” and you won’t get much help. Try working your way through the vendor websites and you are not much better off. So how do you do it? I ring my mates in the industry and even they struggle with all the new models and naming conventions from the marketing teams … so I wonder how the wider industry works out what you should be using? I’d be interested to hear.

Oh, and by the way, I forgot to take into account the “thickness” of the drive (as opposed to the “thickness” of some of the promotional material :-) )


So why change the cables?


Adaptec 7 and 8 series controllers have moved to the new mini-SAS HD (high-density – SFF-8643) connector as opposed to the previous mini-SAS (SFF-8087) connector of the 3, 5 and 6 series controllers.


The older 8087 has served its purpose and been a very good, robust, reliable and fairly trouble-free connector for quite a few years. So why change it? There are technical and physical reasons why we have done this … read on for the explanation …

Background point: RAID processors were, and have been up until the Adaptec 7 series, 8-port on chip. This means that the series 6 controller for instance, and all of Adaptec’s competitors, only have 8 native ports on their controller to connect to devices. All Adaptec’s 7 series controllers however have 24 native ports on the chip, which means we need more than 8 physical ports on the card to make use of those ports.

The mini-SAS (8087) form factor fitted fairly well on RAID cards because it connects 4 ports from the RAID processor to a single cable. Consequently 2 connectors on the card utilized all the ports on the RAID processor. This worked well for a long time, and created a simple “standard” that RAID cards used because everyone in the industry was working with 8-port chips and 2 connectors. Like any technology, if everyone does the same thing for a while then it becomes a de-facto “standard”.

Accessing more than 8 devices has always required a workaround with this style of card, as 8 devices is the physical limit. That workaround comes in the form of an expander. That expander can either be on the backplane, connecting an 8-port chip to more than 8 drives, or it can be physically mounted on the RAID card as in the case of the 51645 3Gb/s card.

While expanders work well in most cases, there are situations where they are undesirable – especially in the performance end of the storage spectrum, and specifically when SSDs are utilized. They inhibit performance in SSD environments – and while that was not a problem a few years ago, SSDs have invaded the storage space and are found in all manner of systems these days.

External factors …

Hard drive vendors are focused on 2.5” drives. They promote these as their performance products (even in spinning media), while they promote the 3.5” drive as the “capacity” device. With the proliferation of 2.5” drives, the 2U chassis is starting to become a dominant form factor from the chassis vendors – because it’s big enough for the computing requirements and you can fit up to 24 2.5” drives across the front of a 2U chassis.

So between the drive vendors and chassis vendors, 2U is becoming mainstream, and bringing with it the need to connect more than 8 drives in a small, confined space.

Now that can be resolved with expanders on the backplane – agreed. However, take into account the previously-described performance issues with SSDs and expanders, and the fact that SSDs are 2.5” and tempting to put in these boxes in one way or another, and it becomes highly desirable to connect all those drives without an expander in the equation. Especially when you consider technologies such as SSD caching and Hybrid RAID – SSD is becoming a pervasive force in storage in one form or another … and you certainly want your storage infrastructure to make full use of that performance, not hinder it (especially because you paid so much for those drives).

So …

Put all that together:

  1. SSDs becoming commonplace
  2. Chassis vendors pushing the 2U 2.5” form factor
  3. Drive vendors pushing towards the 2.5” form factor
  4. Expander technology inhibiting SSD performance
  5. A RAID card with 24 native ports on the processor
  6. The desire to fit all that in a low-profile product for 2U chassis

The result is the need to put more than 8 direct connections on a low-profile RAID card. The only problem is that this can’t physically be done with 8087 connectors – they are too big. So Adaptec looked to the future, and looked at the industry-standard connector being introduced in the very near future for the 12Gb SAS standard – the mini-SAS HD 8643 connector.

The 8643 allows us to fit 4 connectors on a low-profile card (16 native port connections). If we want more (24) then we need high-profile because even with the efficient size of the 8643 it’s just not possible to fit 24 port connections on a low-profile card. So we used the 8643 and created a range of the world’s highest density RAID controllers in the process.
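As a back-of-the-envelope check, the connector arithmetic above can be sketched in a few lines (the port and connector counts are the ones quoted in this post):

```python
# Rough connector arithmetic behind the 8087 -> 8643 move.
# Figures are the ones quoted in the post; this is just a worked example.

PORTS_PER_CONNECTOR = 4  # both mini-SAS (SFF-8087) and mini-SAS HD (SFF-8643)

def connectors_needed(native_ports: int) -> int:
    """Connectors required to expose every native port on the RAID chip."""
    return -(-native_ports // PORTS_PER_CONNECTOR)  # ceiling division

# Classic 8-port RAID chip: 2 connectors - the old de-facto standard.
print(connectors_needed(8))

# 24-port Series 7 chip: 6 connectors - hence the smaller 8643,
# and a full-height card to fit all six.
print(connectors_needed(24))

# Low-profile limit mentioned above: 4 x 8643 connectors.
low_profile_ports = 4 * PORTS_PER_CONNECTOR
print(low_profile_ports)  # 16 direct-attached drives without an expander
```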


So what are the downsides? Technically – none. Logistically … hmmm. Yes, we had to create a new range of cables to connect the 8643 on the controller to the 8087 on the backplane etc, but we made sure we stuck to standards, and the cable vendors of the world have gone ahead and created plenty of alternatives to using Adaptec cables (though of course I’m always going to recommend using our cables).


System builders have always had to source cables – either from the card vendor or from 3rd-party manufacturers. With the introduction of the 8643 connector it has become necessary to go through the logistics of sourcing new cables (or learning new order numbers), but we believe the benefits – the increased density of connectors and the ability to bypass expanders in many more configurations than before – are worth the pain we have all gone through in learning these new technologies and their associated part numbers :-)



To cloud or not to cloud? …


A long time ago I wrote a blog about how to set up a small business server for maximum efficiency across all the different components going on in that box (Windows Small Business Server). While reviewing some old documentation I came across this article and thought about updating it to include SSDs, solid-state caching and tiering technologies. However, as I pondered this question and looked a little deeper, the question changed …

It seems the real question today, with the demise of SBS, is … “Do I put my data in the cloud or do I keep it inhouse?”

Now I’m totally isolated from my corporate in-house network environment, being a remote worker, so I look at this from a specialized viewpoint, but I thought I’d cover a few of the questions and seek your input. I hear it from people a lot … “I’m not putting my data at the mercy of my internet connection” … good point – however a closer look tends to dispel a lot of that issue. This of course depends on your work type and environment, but there is a lot of commonality for all workers here so please read on.

I work remotely, while family members work in offices inside corporate environments. I have all my data in my laptop (backed up of course), while the wife has no data in her workstation – it all resides in the server. So what happens when the internet goes down? We both scream blue murder. Sure, I can type a few blogs, and probably develop a few PowerPoint presentations, but both my wife and I will surely fade away and collapse if we don’t get access to that next email we are so keenly expecting.

On the other hand, my wife uses an accounting package that she can spend many hours in without connection to anything except her local machine and merrily work away quite successfully without internet connection.

So what needs to be local and what can afford to be remote? Simply put (imho), if I’m a knowledge worker requiring local data then that data needs to be local, so I can access it without the internet hindering either availability or performance. My email, however, can live out on the internet, because it doesn’t matter whether it’s local or remote: if the link is down and I can’t receive (or my email server can’t receive), then it matters little to me what the cause is – I just won’t get email (and can’t send either).

I see this sort of mentality more and more in small business across Australia – put my email in the cloud (normally hosted exchange) but leave me fast access to local data so I can open and close my large files quickly on my fileserver.

While this sounds very simplistic, it actually has quite an impact on the storage needs of the two locations. If the local server is just doing fileserving (unless it is for millions of people) then SATA drives in RAID 6 are fine. On the other hand, if the cloud-based storage is handling large numbers of virtual Exchange installations, then it will need some serious grunt to handle that.
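To put a number on the fileserver side, here’s a quick sketch of RAID 6 usable capacity – the drive count and size below are purely illustrative:

```python
# Why RAID 6 on plain SATA drives suits a local fileserver: usable capacity
# scales as (n - 2) drives, and any two drives can fail without data loss.

def raid6_usable_tb(drive_count: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 6 array (two drives' worth of parity)."""
    if drive_count < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drive_count - 2) * drive_tb

# e.g. 8 x 4TB SATA drives (sizes are illustrative):
print(raid6_usable_tb(8, 4.0))   # 24.0 TB usable, survives any 2 failures
```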

Now complicate the matter further and ask where the backup should be. Should that be local, across the internet, or both? Likely both is the ideal answer, but that again does not require much in the way of performance storage hardware.

So what impact is this having on a company like Adaptec? Simply put, we, like everyone else, are rapidly getting our heads around the datacenter – the place where all those virtualized Exchange servers in the above story live – because that’s where the pressure on storage is today; the local stuff is pretty easy in comparison. So if you are in the datacenter industry – handling other people’s data – then look at our current and upcoming products: they offer some interesting features suited to those specialized applications.

What is your opinion? Data in the cloud or local for Small Business? What makes sense to you?


flexConfig … so what’s in a name?


Howdy folks,

Been a while since I put fingers to keyboard in this application because I’ve been getting my head around some fancy new changes we made to our RAID cards in our Series 7 range.

In years gone by (and for a long, long time) we hid all drives from customers that were not part of RAID arrays. Basically, if you wanted to use a disk you had to do something with it (make it part of a RAID array or make it a volume/JBOD etc). This meant configuration requirements for those disks – not a bad thing if you are a system admin trying to keep your job, but some customers just don’t want to have to touch the RAID card or its management software all the time.

Hence flexConfig.

This is a new name for a change to our RAID code, thought up by the lads in our product marketing team (I have to be nice, my Editor is one of those “lads” :-) ). However it’s much more than a name change – it’s a bit of a game changer for RAID cards once you get your head around it.

We have RAID cards and we have HBA cards. RAID cards do fancy things with redundancy, performance, capacity etc, while HBA cards do little or none of that – they just present drives to the OS. So why would you want one or the other? That’s totally up to your system and software design, and we know that lots of people want each of those card types. We also know that some (many) customers want a bit of both – ie the flexibility to do one or the other, or both at the same time, in the one card.

A RAID card AND an HBA? (and at the same time?) …

I’m not going to try and explain all the functionality here because that’s the job I do at the PMC University where we have revived the ACSP Guides from years gone by and tried to bring them into the 21st century.

Suffice to say:

1. There are now three modes a card can run in: RAID mode, HBA mode, Auto Volume mode
2. There are now three modes a disk can exist in (RAW, READY and MEMBER)
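To make the combinations concrete, here’s a toy model of those modes and states. The names come from the post; the visibility logic is my own simplified assumption for illustration, not Adaptec’s actual firmware behaviour:

```python
# Illustrative sketch of the three controller modes and three disk states.
# The visibility rules below are assumed for the example, not firmware fact.

from enum import Enum

class ControllerMode(Enum):
    RAID = "RAID mode"           # full RAID feature set
    HBA = "HBA mode"             # every drive passed straight to the OS
    AUTO_VOLUME = "Auto Volume"  # unconfigured drives auto-exposed as volumes

class DiskState(Enum):
    RAW = "raw"        # no metadata written; not usable for arrays
    READY = "ready"    # blank metadata written; eligible for arrays
    MEMBER = "member"  # already consumed by a RAID array

def visible_to_os(mode: ControllerMode, state: DiskState) -> bool:
    """Simplified guess at whether the OS sees a given disk directly."""
    if mode is ControllerMode.HBA:
        return True                           # HBAs hide nothing
    if mode is ControllerMode.AUTO_VOLUME:
        return state is not DiskState.MEMBER  # members appear via their array
    return False                              # RAID mode shows logical disks only

print(visible_to_os(ControllerMode.HBA, DiskState.RAW))     # True
print(visible_to_os(ControllerMode.RAID, DiskState.READY))  # False
```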

We’re working on a complete explanation to go up on our University site … and no, that’s not me sitting back looking smug on the front page.

So register yourself (free) in the Uni and take a look at the information there – especially the “flexConfig” module that will be up in a week or two.


When is a RAID card not a RAID card?


Answer: when it’s an HBA … or … when it’s half a RAID card and half an HBA … hmmm.

In the storage industry there are RAID cards and HBAs (host bus adapters). A lot of people might think of HBAs as simple tools to connect tape drives to a server, but there is a lot more to the humble HBA, and it covers a broad spectrum of the storage industry. A RAID card will take a bunch of individual physical disks, group them together into a “logical disk” (RAID array) and show that bigger, faster, redundant disk to the operating system. An HBA on the other hand doesn’t do any of that fancy stuff – it just takes the individual drives attached to the card and shows them to the operating system – sounds simple, doesn’t it?

So why would you want to do that? Why single disks instead of RAID arrays? Well there are a few reasons why people want HBA functionality rather than RAID functionality. An example would be the ZFS file system where the filesystem will take individual drives and build redundant data across those drives. At the other end of town the large datacenters don’t want RAID either – they make their data redundant by load balancing across multiple disks at their application level. But what if you want both? What if you want a mirror for your operating system, then a bunch of individual drives for your fancy features? If you have to make each disk a volume or JBOD then the data flows through the RAID function of the card, utilizes cache on the card and has to be configured – a time-consuming process with a lot of drives.

An HBA on the other hand doesn’t have any configuration issues – you simply see the drives that are attached to the card. Adaptec’s Series 7 has the ability to do both. The card thinks of drives in three different formats … raw, ready and member drives. Raw drives are brand-new, out-of-the-box drives that have nothing (no metadata) written to them – these drives can’t be used for RAID arrays. Ready drives have been initialized – a blank metadata structure has been placed at the head and foot of the drive – and are seen as ready to go into an array (and can thus only be used for that purpose). Member drives are drives that have already been consumed into RAID arrays.
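The raw/ready/member distinction boils down to a simple rule, which can be sketched as a toy classifier – the metadata layout below is entirely hypothetical, purely to illustrate the idea:

```python
# Toy classifier for the raw/ready/member drive states described above.
# The dict-based "metadata" is made up for illustration; real controller
# metadata lives in reserved areas at the head and foot of the drive.

from typing import Optional

def classify_drive(metadata: Optional[dict]) -> str:
    """Return 'raw', 'ready' or 'member' for a drive's (hypothetical) metadata."""
    if metadata is None:
        return "raw"     # fresh out of the box, nothing written yet
    if metadata.get("array_id") is None:
        return "ready"   # blank metadata stamped, awaiting an array
    return "member"      # already consumed into a RAID array

print(classify_drive(None))                # raw
print(classify_drive({"array_id": None}))  # ready
print(classify_drive({"array_id": 7}))     # member
```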

So if you have 8 drives connected to your card, you can have any combination of drives in and out of arrays, and those drives out of arrays can be presented automatically to the operating system (or hidden from the operating system). It sounds complicated but it’s not really that bad. It just opens up the possibilities for system builders to tailor solutions for their customers … and that can’t be a bad thing.


So who needs 735MB per second (per drive)?


Howdy folks,

I just spent the last few days standing around IDF in Beijing showing off Adaptec’s latest 7 series controller. We also had on display our new range of HBAs. Yes, Adaptec is back in the HBA game … it’s been a long time since we had one, but SAS HBA is back on the menu at Adaptec and we are pretty excited about getting back into this business, especially in light of the growth of datacenters and operating systems that make good use of HBAs (as well as RAID).

That’s all great, but where does the 735MB per second per drive come from in the heading?

In the corner of the booth we had a system (lent to us by our good friends at Chenbro) that has a 12Gb SAS backplane. Installed in that were 4 Seagate (as yet unreleased, I believe) 12Gb SAS SSDs. All that was connected to a prototype 12Gb SAS RAID controller that our backroom boffins are working on (and that I was lucky enough to borrow for a few days).

A simple Iometer script running a 1MB streaming read off the 4 drives set in RAID 0 (on top of a Windows filesystem) produced 2950MB per second. Sit down and do the maths and you’ll see a speed from each drive that is way, way faster than anything we’ve ever seen (or in fact is possible) from 6Gb technology.
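The maths is simple enough to show directly (the 600MB/s ceiling is the rough practical limit of a 6Gb SAS lane after 8b/10b encoding overhead):

```python
# The arithmetic behind the headline number: aggregate streaming throughput
# divided across four drives, compared with what a 6Gb SAS lane can carry.

aggregate_mb_s = 2950   # measured 1MB streaming read, 4-drive RAID 0
drives = 4

per_drive = aggregate_mb_s / drives
print(per_drive)        # 737.5 MB/s per drive - the ~735MB/s in the title

# A 6Gb SAS lane tops out around 600MB/s after 8b/10b encoding overhead,
# so each of these 12Gb SSDs moves more data than an entire 6Gb link could.
six_gb_ceiling_mb_s = 600
print(per_drive > six_gb_ceiling_mb_s)   # True
```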

So I stood there wondering … who actually needs this sort of amazing speed? Well, the datacenter is an obvious choice. Along with the amazing MB/s speed come some pretty crazy IOPS numbers, which is really what the datacenter operators are looking for, but the streaming speed will also excite (I think) a lot of the video world.

Whichever way you look at it, these new drives, and the associated technology that goes along with them, are an exciting step forward in performance for disk-based systems. Yeah yeah yeah … I know you can get some amazing speed from your flash drive, but that’s a finite single source of storage – whereas SSD disk-based storage can grow exponentially, which simplifies management and allows for massive implementations of high-speed storage.

So I really wonder … who is looking for this stuff?


More on backplanes …


There are two different types of backplanes in this world – passive and active. Passive backplanes are really just devices that allow a connection between the drive and the card – there is no expander involved.

Active backplanes on the other hand have an expander – a chip supplied by third-party vendors to the backplane vendor. These expanders have firmware and intelligence, and allow connection of many drives from a much smaller number of ports on a RAID card.

So what is the issue here? … compatibility.

Sometimes you’ll plug a card into a backplane, plug in the drives, boot up the system and not see any drives. If this happens to you, don’t panic. Contact your nearest Adaptec Tech Support office, describe your card and backplane model and ask the Adaptec techs if they have updated firmware for the card to get around the problem – they almost always do.

Why does this happen? Well, there are standards and there are standards – and there are companies that do odd little things just outside the standard so that a competitor who adheres to the standard won’t work with their product. I’m not talking about Adaptec here – we’re the ones on the side of the standard. Sometimes you could suspect people of trying to stop our products from working with certain products because they know we stick to the standard … no wait, that would be ridiculously suspicious and paranoid of me, wouldn’t it?

So if you have an issue with a backplane, contact us and let us help.

(special award for the person who can tell me where that name comes from – in relation to my paranoidness/depression :-) )

Back to the future …


I wonder if Robert Zemeckis had any idea what a phrase that would become …

Adaptec have taken up the idea and released a new series of HBAs. So what is an HBA, I hear you ask out there in Channel storage land? A Host Bus Adapter, of course. Think back to the 90s … Adaptec made SCSI controllers – simple devices that connected hard drives to your computer so that you could do things with them. They didn’t do RAID, they just showed a disk to the computer and you did what you wanted from there.

Step forward 20 years and we’re back in the same game. In fact, it never really went away but we just didn’t bother with it. Well now we have bothered and have produced a thumping powerful HBA product range that does … well in fact it does just what the old SCSI HBA did 20 years ago – it presents a bunch of single disks to the computer, and you do what you want from there.

So who would want one of these things? Surely you want a RAID card so you can do all that wonderful technical stuff in the background and not have to bother with it at OS or application level? Well no, not really. The datacenters of this world love these things. The big players do their own redundancy and performance work at a much higher level than a RAID card – often across system or even datacenter levels – so the humble HBA suits them perfectly.

Products like ZFS don’t mind HBAs either. Take a bunch of disks and do your own storage configs – you don’t need a RAID card, you just need to connect to lots of different types of drives – something the HBA does perfectly.

So are we the only players in this game? Did we just stumble across this and think … hmm, can we make a buck out of this business? No – this is big business with big competition, but that just happens to be what we love.


So who is responsible for big data? …


All I ever hear about these days is “big data”. It’s like saying “old Neil” – pretty much a natural consequence of getting out of bed each day.

Big data is a nice term for pointing out that we have lots of data, and that we are more and more often putting it in the cloud (that invisible thing out there in la-la land). I do, and while pushing that data out there I’ve made an interesting (and somewhat obvious) observation.

Big data in my place is photography. I have a mac (great machine), which stores all my family data. These days that consists of music (not as dominant as it used to be) and photos. Once upon a time video was the big driver of space in my systems, but now it’s the still photos. How do I know all this? The mac has a fantastic component called “time machine” which is very similar in nature to, say, ShadowProtect (which many of you server lads would be familiar with) – the ability to roll back to a certain point in time etc, while keeping a copy of all your data on an external device (in my case a USB drive) in case the hard drive in the mac takes a powder.

But is that safe enough? Not for me, so I’ve purchased some space up in the cloud and copy my data to that location. Because I’m anal and somewhat organised (at least I like to think so), I’ve filed everything into year folders, then each folder has a separate folder within it for each “event” throughout that year.

I then push a year at a time up to the cloud repository, so that when I’m finished I only need to update the last year, then make a new year folder on the mac and start putting files in that. OK, some clever person is going to tell me there is some wonderful software available to do all this, but you know what? I actually like being in control of what is happening and what is going where, so like Frank said, “I’ll do it my way thank you very much”.
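For the curious, the “push one year at a time” routine can be sketched roughly like this – the paths and the rclone command are assumptions for illustration, so substitute whatever cloud tool you actually use:

```python
# Minimal sketch of uploading one year folder at a time to a cloud repository.
# PHOTO_ROOT, the remote name and the rclone command are all illustrative.

import subprocess
from pathlib import Path

PHOTO_ROOT = Path.home() / "Pictures" / "archive"   # one folder per year

def sync_command(year: int, remote: str = "cloud:photos") -> list:
    """Build the copy command for a single year folder (rclone as an example)."""
    src = PHOTO_ROOT / str(year)
    return ["rclone", "copy", str(src), f"{remote}/{year}"]

def push_year(year: int) -> None:
    """Upload just one year folder, leaving earlier years untouched."""
    src = PHOTO_ROOT / str(year)
    if not src.is_dir():
        raise FileNotFoundError(f"no folder for {year} at {src}")
    subprocess.run(sync_command(year), check=True)

print(sync_command(2012))   # ['rclone', 'copy', '.../2012', 'cloud:photos/2012']
```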

So the interesting thing has been looking at the data in each of these folders. 6-8 years ago it was grainy video from dedicated video cameras that we lugged around from BMX event to BMX event. Those days are gone. With the price reduction of quality digital SLR cameras I now find three of them lying around the house (none of which are mine, and who knows how they work out whose is whose), but these babies are the big-time generators of storage requirements for me.

With the speed of the cameras, the size of the memory cards and the resolution of the photos – it’s a perfect storm of data being dumped into the mac. Yes, it is supposedly possible to delete unwanted copies of horrible photos that will never be printed, but that never seems to happen in my house. So I have to be organised, manage the proliferation of data happening on the machine in the other corner of my office, and have a regularly checked system to keep track of, and copies of, these photos. You can bet your bottom dollar that I’ll be the one in trouble if something dies and something is lost (I am, after all, the “computer person” in the house).

All this is great business for computer companies. The average business is starting to put its compute power in the cloud, store its data in the cloud etc, but I believe the real driver of cloud data is not business, but social media and personal data. At Adaptec we are just going along for the ride – happy to sell product to whichever cloud vendor (often called a datacenter) needs more storage, so no complaints from a corporate perspective, but … and here’s what I want to know …

What is your “Big Data”? … and how are you making sure it’s safe?