19.4 RAID

19.4.1 Software RAID Concatenated Disk Driver (CCD) Configuration

Original work by Christopher Shumway. Revised by Jim Brown.

When choosing a mass storage solution, the most important factors to consider are speed, reliability, and cost. It is rare to have all three in balance. Normally a fast, reliable mass storage device is expensive, and to cut back on cost either speed or reliability must be sacrificed.

In designing the system described below, cost was chosen as the most important factor, followed by speed, then reliability. Data transfer speed for this system is ultimately constrained by the network. While reliability is very important, the CCD drive described below serves online data that is already fully backed up and can easily be replaced.

Defining the requirements is the first step in choosing a mass storage solution. If the requirements prefer speed or reliability over cost, the solution will differ from the system described in this section.

Installing the Hardware

In addition to the IDE system disk, three Western Digital 30GB, 5400 RPM IDE disks form the core of the CCD disk described below, providing approximately 90GB of online storage. Ideally, each IDE disk would have its own IDE controller and cable, but to minimize cost, additional IDE controllers were not used. Instead, the disks were configured with jumpers so that each IDE controller has one master and one slave.

Upon reboot, the system BIOS was configured to automatically detect the disks attached. More importantly, FreeBSD detected them on reboot:

ad0: 19574MB <WDC WD205BA> [39770/16/63] at ata0-master UDMA33
ad1: 29333MB <WDC WD307AA> [59598/16/63] at ata0-slave UDMA33
ad2: 29333MB <WDC WD307AA> [59598/16/63] at ata1-master UDMA33
ad3: 29333MB <WDC WD307AA> [59598/16/63] at ata1-slave UDMA33
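As a quick sanity check on the expected capacity of the array, the member sizes from the probe output above can be summed with plain shell arithmetic (the ad0 system disk is not part of the array):

```shell
# Three 29333 MB members are concatenated, so capacity is their sum.
member_mb=29333
members=3
total_mb=$((members * member_mb))   # 87999 MB
total_gib=$((total_mb / 1024))      # about 85 GiB; the "90GB" figure uses decimal gigabytes
echo "${total_mb} MB (~${total_gib} GiB)"
```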

Note: If FreeBSD does not detect all the disks, consult the drive documentation for proper setup and verify that the controller is supported by FreeBSD.

Setting Up the CCD

The ccd(4) driver takes several identical disks and concatenates them into one logical file system. In order to use ccd(4), its kernel module must first be loaded with kldload(8). When using a custom kernel instead, ensure that this line is compiled in:

device   ccd
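If a custom kernel is not being used, the module can instead be loaded and verified at runtime, along these lines:

```
kldload ccd      # load the ccd(4) kernel module
kldstat -m ccd   # confirm that the module is present
```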

Before configuring ccd(4), use bsdlabel(8) to label the disks:

bsdlabel -w ad1 auto
bsdlabel -w ad2 auto
bsdlabel -w ad3 auto

This example creates a bsdlabel for ad1c, ad2c and ad3c that spans the entire disk.

The next step is to change the disk label type. Use bsdlabel(8) to edit the disks:

bsdlabel -e ad1
bsdlabel -e ad2
bsdlabel -e ad3

This opens up the current disk label on each disk with the editor specified by the EDITOR environment variable, typically vi(1).

An unmodified disk label will look something like this:

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 60074784        0    unused        0     0     0   # (Cyl.    0 - 59597)

Add a new e partition for ccd(4) to use. This can usually be copied from the c partition, but the fstype must be 4.2BSD. The disk label should now look something like this:

8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 60074784        0    unused        0     0     0   # (Cyl.    0 - 59597)
  e: 60074784        0    4.2BSD        0     0     0   # (Cyl.    0 - 59597)

Building the File System

Now that all the disks are labeled, build the ccd(4) using ccdconfig(8), with options similar to the following:

ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e

The use and meaning of each argument, in order, is described below:

  1. The device to configure, in this case /dev/ccd0c. The /dev/ portion is optional.

  2. The interleave for the file system, which defines the size of a stripe in disk blocks, each normally 512 bytes. An interleave of 32 is therefore 16,384 bytes.

  3. Flags for ccdconfig(8). For example, drive mirroring can be enabled with a flag. This configuration does not provide mirroring for ccd(4), so the flags are set to 0 (zero).

  4. The final arguments are the devices to place into the array. Use the complete path name for each device.
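The interleave arithmetic above can be checked directly with plain shell arithmetic, assuming the usual 512-byte disk block:

```shell
interleave=32      # interleave in disk blocks, as passed to ccdconfig(8)
block_bytes=512    # normal disk block size
stripe_bytes=$((interleave * block_bytes))
echo "stripe size: ${stripe_bytes} bytes"   # 16384, matching the text
```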

After running ccdconfig(8), the ccd(4) is configured and a file system can be created. Refer to newfs(8) for options, or run:

newfs /dev/ccd0c

Making it All Automatic

Generally, ccd(4) should be configured to automount upon each reboot. To do this, write out the current configuration to /etc/ccd.conf using the following command:

ccdconfig -g > /etc/ccd.conf
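For the example array built above, the generated /etc/ccd.conf records the same parameters that were passed to ccdconfig(8). It should look something like the following (exact whitespace may differ):

```
# ccd   ileave  flags   component devices
ccd0    32      none    /dev/ad1e /dev/ad2e /dev/ad3e
```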

During reboot, the script /etc/rc runs ccdconfig -C if /etc/ccd.conf exists. This automatically configures the ccd(4) so it can be mounted.

Note: When booting into single user mode, the following command must be issued to configure the array before the ccd(4) can be mounted:

ccdconfig -C

To automatically mount the ccd(4), place an entry for the ccd(4) in /etc/fstab so it will be mounted at boot time:

/dev/ccd0c              /media       ufs     rw      2       2

The Vinum Volume Manager

The Vinum Volume Manager is a block device driver which implements virtual disk drives. It isolates disk hardware from the block device interface and maps data in ways which result in an increase in flexibility, performance and reliability compared to the traditional slice view of disk storage. vinum(4) implements the RAID-0, RAID-1 and RAID-5 models, both individually and in combination.

Refer to the chapter on the Vinum Volume Manager for more information about vinum(4).

19.4.2 Hardware RAID

FreeBSD also supports a variety of hardware RAID controllers. These devices control a RAID subsystem without the need for FreeBSD-specific software to manage the array.

Using an on-card BIOS, the card controls most of the disk operations. The following is a brief setup description using a Promise IDE RAID controller. When this card is installed and the system is started up, it displays a prompt requesting information. Follow the instructions to enter the card's setup screen and to combine all the attached drives. After doing so, the disks will look like a single drive to FreeBSD. Other RAID levels can be set up accordingly.

19.4.3 Rebuilding ATA RAID1 Arrays

FreeBSD supports hot-replacing a failed disk in an array.

An error indicating a failed disk will appear in /var/log/messages or in the dmesg(8) output:

ad6 on monster1 suffered a hard error.
ad6: READ command timeout tag=0 serv=0 - resetting
ad6: trying fallback to PIO mode
ata3: resetting devices .. done
ad6: hard error reading fsbn 1116119 of 0-7 (ad6 bn 1116119; cn 1107 tn 4 sn 11)\
status=59 error=40
ar0: WARNING - mirror lost

Use atacontrol(8) to check for further information:

# atacontrol list
ATA channel 0:
	Master:      no device present
	Slave:   acd0 <HL-DT-ST CD-ROM GCR-8520B/1.00> ATA/ATAPI rev 0

ATA channel 1:
	Master:      no device present
	Slave:       no device present

ATA channel 2:
	Master:  ad4 <MAXTOR 6L080J4/A93.0500> ATA/ATAPI rev 5
	Slave:       no device present

ATA channel 3:
	Master:  ad6 <MAXTOR 6L080J4/A93.0500> ATA/ATAPI rev 5
	Slave:       no device present

# atacontrol status ar0
ar0: ATA RAID1 subdisks: ad4 ad6 status: DEGRADED

  1. First, detach the ATA channel with the failed disk so that it can be safely removed:

    # atacontrol detach ata3
  2. Replace the disk.

  3. Reattach the ATA channel:

    # atacontrol attach ata3
    Master:  ad6 <MAXTOR 6L080J4/A93.0500> ATA/ATAPI rev 5
    Slave:   no device present
  4. Add the new disk to the array as a spare:

    # atacontrol addspare ar0 ad6
  5. Rebuild the array:

    # atacontrol rebuild ar0
  6. It is possible to check on the progress by issuing the following command:

    # dmesg | tail -10
    [output removed]
    ad6: removed from configuration
    ad6: deleted from ar0 disk1
    ad6: inserted into ar0 disk1 as spare
    # atacontrol status ar0
    ar0: ATA RAID1 subdisks: ad4 ad6 status: REBUILDING 0% completed
  7. Wait until this operation completes.
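For reference, the replacement procedure above can be collected into a single sketch. It assumes the device names from this example (ata3, ad6, ar0); run each command only after the previous step has succeeded, and replace the disk physically between the detach and attach steps:

```
atacontrol detach ata3       # free the channel holding the failed disk
                             # (physically replace the disk here)
atacontrol attach ata3       # bring the channel back with the new disk
atacontrol addspare ar0 ad6  # add the new disk to the array as a spare
atacontrol rebuild ar0       # start the rebuild
atacontrol status ar0        # check progress
```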