Frequently Asked Questions & Definitions
Redundant Array of Independent Disks
In computing, a redundant array of independent disks (more commonly known as RAID) is a system of using multiple hard drives for sharing or replicating data among the drives. Depending on the version chosen, the benefit of RAID is one or more of increased data integrity, fault tolerance, throughput, or capacity compared to single drives. In its original implementations (in which it was an abbreviation for "Redundant Array of Inexpensive Disks"), its key advantage was the ability to combine multiple low-cost devices using older technology into an array that together offered greater capacity, reliability, and/or speed than was affordably available in single devices using the newest technology.
At the very simplest level, RAID is one of many ways to combine multiple hard drives into a single logical unit. Thus, instead of seeing several different hard drives, the operating system sees only one. RAID is typically used on server computers, and is usually implemented with identically sized disk drives. With decreases in hard drive prices and the wider availability of RAID options built into motherboard chipsets, RAID is increasingly offered as an option in higher-end end-user computers, especially computers dedicated to storage-intensive tasks, such as video and audio editing.
The original RAID specification suggested a number of prototype "RAID Levels", or combinations of disks. Each had theoretical advantages and disadvantages. Over the years, different implementations of the RAID concept have appeared. Most differ substantially from the original idealized RAID levels, but the numbered names have remained. This can be confusing, since one implementation of RAID 5, for example, can differ substantially from another. RAID 3 and RAID 4 are often confused and even used interchangeably.
The very definition of RAID has been argued over the years. The use of the term redundant leads many to split hairs over whether RAID 0 is "real" RAID. Similarly, the change from inexpensive to independent confuses many as to the intended purpose of RAID. There are even some single-disk implementations of the RAID concept. For the purpose of this article, we will say that any system which employs the basic RAID concepts to recombine physical disk space for purposes of reliability or performance is a RAID system.
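The core idea behind the parity-based RAID levels can be sketched in a few lines: a stripe of data blocks is protected by one XOR-parity block, and any single lost block can be rebuilt from the survivors. This is a minimal illustration of the concept only, not any particular controller's implementation; the disk contents and block size are invented for the example.

```python
# Minimal sketch of the parity idea behind RAID levels 3-5: a stripe of
# data blocks plus one XOR parity block lets any single lost block be
# reconstructed from the remaining blocks.

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "data disks" holding one stripe of 4-byte blocks (illustrative).
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])      # stored on a fourth disk

# If disk 1 fails, its block is the XOR of the survivors and the parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

The same property holds whichever single disk is lost, which is why one parity disk per stripe tolerates exactly one failure.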
What is a Network Attached Storage (NAS)?
Network Attached Storage
Network attached storage is simply a way of storing data so that it can be made available to clients on the network. Over the years, the storage of data has evolved through various phases. This evolution has been driven partly by the changing ways in which we use technology, partly by the exponential increase in the volume of data we need to store, and partly by new technologies that allow us to store and manage data more effectively.
In the days of mainframes, data was stored physically separate from the actual processing unit, but was still only accessible through the processing units. As PC-based servers became more commonplace, storage devices moved 'inside the box' or into external boxes connected directly to the system. Each of these approaches was valid in its time, but as our need to store increasing volumes of data, and to make that data more accessible, grew, other alternatives were needed. Enter network storage.
Network storage is a generic term used to describe network based data storage, but there are many technologies within it which all go to make the magic happen. Here is a rundown of some of the basic terminology that you might happen across when reading about network storage.
Small Computer System Interface
This acronym is pronounced "scuzzy" and stands for Small Computer System Interface. There are now three types of interfaces for hard drives, CD-ROM drives, and similar devices. One is SCSI, another is IDE (also called PATA, Parallel ATA), and the newest one, which replaced IDE, is SATA (Serial ATA). IDE and SATA are much more common and less expensive. SCSI is more expensive but also more flexible and generally faster. With a single SCSI card you can have 15 or more devices, whereas you are only allowed 4 devices with an IDE system and a single device per SATA port. The fastest hard drives (and generally CD-ROM drives too) are SCSI-based; examples are the 15,000 rpm Seagate Cheetah hard drives. The fastest IDE and SATA drives run at 7,200 rpm. To have a SCSI-based computer, you must have a SCSI card (or an onboard SCSI interface), a SCSI hard drive, and so on. SCSI is more complicated to configure and should not be taken on by amateurs. There is a variety of connections, such as 25, 50, 68, 68 LVD, 80 SCA, etc. (where the numbers indicate the connector pin counts).
What are Rack Units (RU) - 1U, 2U, 3U, 5U, ...?
1U (or 1RU) = 1.75" (inches)
A unit = 1¾ inches, indicating the amount of space taken up by a piece of electronic equipment in the mounting system described below. It is based on the height of the equipment's front panel; the width is standardized at 19 inches (482.6 mm). (Standard racks with widths of 23 inches (584 mm) and 30 inches (762 mm) are also made, but rarely if ever encountered by the consumer.) The nominal height of the panel is a multiple of 1¾″; the size of the equipment is described by the number of 1¾″ units the equipment takes up. Symbol, U, or sometimes RU. A 3U panel would be (1¾″ × 3 = ) 5¼″ high.
This mounting system originated with the telephone company, which needed a standard for housing the millions of relays at one time used in the telephone system. Its adaptability, and the high quality and ready availability of components, led to the system's use in housing electronic equipment in industry and research, and in the 1980s by manufacturers of consumer audio equipment. The latter unfortunately sometimes made nonstandard sizes.
The main parts of the rack itself are two uprights, usually steel. Down the center of each runs a series of holes tapped to accept 12-24 machine screws. A clear space 17⅜ inches wide is left between the uprights. To mount a piece of equipment, its front panel is held against the uprights and screws passed through the panel into the holes in the uprights. Very heavy equipment may need additional support, such as angle brackets mounted to the rear of the uprights, or shelf supports running to additional uprights in the rear.
The holes in the upright occur in pairs with their centers ¼ inch apart. The pairs are spaced with 1¼ inches from the center of the bottom hole in a pair to the center of the uppermost hole in the pair beneath. Some manufacturers add additional holes.
The 19″-wide front panels are generally aluminum 3⁄16″ thick or steel ⅛″ thick. The actual height of the panel is only a nominal multiple of 1¾″, because 1⁄64″ is taken off both top and bottom to provide a bit of clearance.
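The height arithmetic above can be captured in a short sketch, using exact fractions to avoid rounding: nominal height is the unit count times 1¾″, and the manufactured panel is 1⁄64″ short at both top and bottom. The function name is invented for the example.

```python
# Sketch of the rack-unit arithmetic: nominal height is n x 1 3/4 in,
# and the manufactured panel loses 1/64 in at both top and bottom.
from fractions import Fraction

RU = Fraction(7, 4)            # 1 3/4 inch per rack unit
CLEARANCE = Fraction(1, 64)    # shaved off each of top and bottom

def panel_height(n_units):
    """Return (nominal, actual) panel heights in inches for an nU panel."""
    nominal = n_units * RU
    actual = nominal - 2 * CLEARANCE
    return nominal, actual

nominal, actual = panel_height(3)
# 3U: nominal 5 1/4 in; actual is 1/32 in less, i.e. 5 7/32 in.
assert nominal == Fraction(21, 4)
assert actual == Fraction(167, 32)
```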
The sides of the panels are notched to accommodate the screws; these notches are ¼″ wide and end in a ¼″ hole whose center is 5⁄32" from the panel edge. The placement of the notches depends on the panel's height. The positions of the notches are always measured from the horizontal centerline of the panel.
1U and 2U panels have two notches on each side. In a 1U panel the centers of the notches are ⅝″ above and below the centerline, and in a 2U panel, 1½″ above and below the centerline.
The 3U, 4U, and 5U panels also have a single pair of notches on each side, but they are much farther from the top and bottom edges, about 1½". The centers of the notches are the following distances above and below the centerline: 3U, 1⅛″; 4U, 2″; 5U, 2⅞″.
The standard 6U panel has 4 notches on each side. The centers of the notches nearest the centerline are 1½ inches above and below the centerline. The centers of the other two notches are 2¼″ from the centers of the previously mentioned notches, placing their centers about 1½″ from the edge of the panel.
The 7U is 12¼″ high and has 6 notches on each end: the first 1⅛″ above and below the centerline, and then two more on 1¾″ centers. The centers of the highest and lowest notches are thus about 1½″ from the panel's edge.
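As a compact summary of the notch positions listed above, the sketch below encodes each panel size's notch-center offsets from the horizontal centerline; the values are transcribed directly from the text, and the table and function names are invented for the example.

```python
# Notch-center distances (inches) from the panel's horizontal centerline,
# as described in the text; each offset is mirrored above and below.
from fractions import Fraction as F

NOTCH_OFFSETS = {
    1: [F(5, 8)],                                        # 5/8 in
    2: [F(3, 2)],                                        # 1 1/2 in
    3: [F(9, 8)],                                        # 1 1/8 in
    4: [F(2)],                                           # 2 in
    5: [F(23, 8)],                                       # 2 7/8 in
    6: [F(3, 2), F(3, 2) + F(9, 4)],                     # inner, +2 1/4 in
    7: [F(9, 8), F(9, 8) + F(7, 4), F(9, 8) + 2 * F(7, 4)],  # 1 3/4 in steps
}

def notch_centers(n_units):
    """All notch-center heights from the centerline (negative = below)."""
    offs = NOTCH_OFFSETS[n_units]
    return sorted([-o for o in offs] + list(offs))

# A 7U panel's highest notch sits 12 1/4 / 2 - 4 5/8 = 1 1/2 in from the edge.
half_height = F(7, 4) * 7 / 2
assert half_height - max(notch_centers(7)) == F(3, 2)
```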
Secure location for web hosting servers
A data center is a facility used for housing a large amount of electronic equipment, typically computers and communications equipment. As the name implies, a data center is usually maintained by an organization for the purpose of handling the data necessary for its operations. A bank for example may have a data center, where all its customers' account information is maintained and transactions involving this data are carried out. Practically every company that is mid-sized or larger has some kind of data center with the larger companies often having dozens of data centers. Most large cities have many purpose-built data center buildings in secure locations close to telecommunications services. Most colocation centers and Internet peering points are located in these kinds of facilities.
As data is a crucial aspect of most organizational operations, organizations tend to be very protective of their data. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is reflected in its physical and logical layout.
During the dot-com boom, millions of square meters of general-purpose data center space were built in the hope of filling them with servers for web hosting and application service providers. However, this demand never materialized.
SAS - Serial Attached SCSI
Ultra320 SCSI has been released, allowing a 320 MB/sec data transfer rate. The SCSI specifications allow for a future release of Ultra640 as well, doubling the transfer rate again. Ultra640, however, will not eventuate, due to issues with "data skewing" and the release of Serial Attached SCSI.
The release of Serial Attached SCSI (SAS) overcomes the problem of "skewing" of parallel data bits on the cable by running all communications via a Serial interface. The SCSI committees are working on a future development of the Serial standard.
Serial Attached SCSI (SAS) is pin-compatible with SATA, allowing SATA drive caddies to be used with Serial SCSI; SAS controllers can also use SATA drives. Current SAS controllers operate at 3 Gb/sec to 6 Gb/sec and will scale up to 12 Gb/sec in the next few years. A SAS controller can have up to 128 devices attached. There can be multiple controllers in a SAS configuration, allowing both redundancy in case of failure and increased performance.
The main difference between SAS and SATA continues to be the higher spindle speeds, faster access times, and higher transfer rates available in SAS drives. SAS continues to be the high performance storage system, while SATA arrays provide cost effective mass storage.
Another development is the release of iSCSI, which allows the SCSI protocol to be used over an IP network. This means that iSCSI drive arrays can be consolidated at central points and accessed by servers anywhere in the room, building, country, or world. It allows a SAN equivalent without the cost of Fibre Channel networking and Fibre Channel switches. Some of the iSCSI releases actually use SATA or SAS drives in the array, with the unit providing an iSCSI interface to the network. This allows large amounts of cheap storage to be implemented anywhere on the network. It does, however, require some form of storage management system, which can add significantly to the cost.