Linux: Managing Software RAID with mdadm

Create RAID 1

# Install mdadm
apt-get update && apt-get install mdadm

# Partition the disks (create one RAID partition on each)
fdisk /dev/sdb
fdisk /dev/sdc
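
# In fdisk, set the type of each RAID partition to "Linux RAID" so it is
# recognised as an array member. As a non-interactive alternative, a sketch
# with sgdisk (GPT disks and the gdisk package assumed; partition number 2
# matches the sdb2/sdc2 layout used below):
sgdisk --new=2:0:0 --typecode=2:FD00 /dev/sdb
sgdisk --new=2:0:0 --typecode=2:FD00 /dev/sdc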

# Create RAID
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
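
# Optionally watch the initial resync of the new array
watch cat /proc/mdstat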

# Create a filesystem on the RAID device
mkfs.ext3 /dev/md0

# Mount /dev/md0 (mount point /raid1, matching the fstab entry below)
mkdir /raid1
mount /dev/md0 /raid1
df -H

# Add to /etc/fstab
/dev/md0 /raid1 ext3 noatime,rw 0 0
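
So that the array is assembled automatically at boot, record it in mdadm.conf and rebuild the initramfs (Debian/Ubuntu paths assumed):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u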

Maintain RAID

Check the mdadm version and scan for arrays to assemble; the members of the already running array are reported as busy:

$ mdadm --version
mdadm - v3.3 - 3rd September 2013
$ sudo mdadm --assemble --scan -v
mdadm: looking for devices for /dev/md/0
mdadm: no RAID superblock on /dev/md/0
mdadm: no RAID superblock on /dev/sde5
mdadm: no RAID superblock on /dev/sde2
mdadm: no RAID superblock on /dev/sde1
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sdd1 is busy - skipping
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdc1 is busy - skipping
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdb1 is busy - skipping
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sda1 is busy - skipping
mdadm: no RAID superblock on /dev/sda
$ sudo mdadm -v /dev/md0
/dev/md0: 3725.78GiB raid10 4 devices, 0 spares. Use mdadm --detail for more detail.

$ sudo mdadm --detail /dev/md0
[sudo] password for fabmin: 
/dev/md0:
        Version : 1.2
  Creation Time : Thu Nov 13 15:47:46 2014
     Raid Level : raid10
     Array Size : 3906763776 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 1953381888 (1862.89 GiB 2000.26 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Dec 25 18:43:07 2014
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : homeserver:0
           UUID : d11e189b:bf60aac9:536e72e5:3c8655ab
         Events : 375

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       1       8       17        1      active sync set-B   /dev/sdb1
       2       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid10 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      3906763776 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
unused devices: <none>
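
To check the array for silent inconsistencies, a scrub can be triggered through the kernel's md sysfs interface; the check runs in the background and its progress shows up in /proc/mdstat:

echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat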

The array is registered in /etc/mdadm/mdadm.conf; the relevant entries:

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0755 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

ARRAY /dev/md0 metadata=1.2 name=homeserver:0 UUID=d11e189b:bf60aac9:536e72e5:3c8655ab

The mdadm monitor daemon watches the arrays and mails alerts to the MAILADDR configured above:

$ ps aux | grep mdadm
root      1415  0.0  0.1  13492  2028 ?        Ss   17:27   0:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog
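
Whether those alerts actually arrive can be verified with a one-off test message (mdadm generates a TestMessage event for every array it finds):

mdadm --monitor --scan --test --oneshot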

UUID of RAID

root@nas:~# blkid
/dev/sda1: LABEL="Wiederherstellung" UUID="EC9A5F809A5F466C" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="03445844-6d0c-4055-a155-6a1442f31f64"
/dev/sda2: UUID="EC60-8F16" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="10226de3-9880-442e-a9e5-d59cdb0ce76d"
/dev/sda4: UUID="C07C7CA27C7C953E" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="63bc6c88-f15f-4001-98cd-5ebb71519308"
/dev/sda5: UUID="e1d7ee86-6489-4389-a936-7552cb2292e8" TYPE="ext4" PARTUUID="9c269108-0270-43e0-b264-b29da816e33a"
/dev/sda6: UUID="47c5ef7e-a68d-4ddd-8aef-feb0a678f05b" TYPE="swap" PARTUUID="e4674559-cd6f-4d08-8bfe-2767821fe661"
/dev/sdb1: LABEL="DATA1" UUID="4C787EC15DDC54A0" TYPE="ntfs" PARTLABEL="DATA1" PARTUUID="45b920c2-1203-48e9-993d-12e70cb6029d"
/dev/sdb2: UUID="0317b14e-ea32-fa89-f352-5cce2ce8839e" UUID_SUB="e2ccf7f5-235a-88dd-e9f8-5bcda5e7983d" LABEL="nas:0" TYPE="linux_raid_member" PARTLABEL="RAID1_1" PARTUUID="170cbcce-7100-43be-8ee9-f8308e41e2cf"
/dev/sdc1: LABEL="DATA2" UUID="01A1558230E30FE7" TYPE="ntfs" PARTLABEL="DATA2" PARTUUID="c637c62a-1b13-496b-96db-bba719110c3b"
/dev/sdc2: UUID="0317b14e-ea32-fa89-f352-5cce2ce8839e" UUID_SUB="2586842b-0c7b-7a8c-1876-5291b55f0b97" LABEL="nas:0" TYPE="linux_raid_member" PARTLABEL="RAID1_2" PARTUUID="3f74418d-5a64-4647-9907-ec24ccf70517"
/dev/md0p1: LABEL="RAID1" UUID="d3150381-ba4a-4a75-8163-dbf9e0a59e33" TYPE="ext4" PARTLABEL="RAID1" PARTUUID="9a8b494a-2f69-48c1-89f8-2cbf0986a991"
/dev/sda3: PARTLABEL="Microsoft reserved partition" PARTUUID="09fa6865-ee2b-4591-ad2f-9d6d3d00b48d"
/dev/md0: PTUUID="d56f81c4-e1ca-4b60-b7df-fc31e89c34d0" PTTYPE="gpt"
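
Since /dev/mdX names can change between boots, mounting by filesystem UUID is more robust. An /etc/fstab entry for the ext4 filesystem on /dev/md0p1 shown above could look like this:

UUID=d3150381-ba4a-4a75-8163-dbf9e0a59e33 /raid1 ext4 noatime,rw 0 0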

The RAID1 array assembled from these members is currently degraded; one mirror half is missing:

$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jun 24 18:36:32 2016
     Raid Level : raid1
     Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
  Used Dev Size : 3370014720 (3213.90 GiB 3450.90 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Dec 16 22:33:59 2016
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : nas:0  (local to host nas)
           UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
         Events : 9836

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2

Re-adding disks (after they have been removed)

$ sudo mdadm --manage /dev/md/0 -a /dev/sdb2
mdadm: re-added /dev/sdb2

$ sudo mdadm --detail /dev/md0
/dev/md0:
[...]
 Rebuild Status : 42% complete
[...]
    Number   Major   Minor   RaidDevice State
       0       8       18        0      spare rebuilding   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2

The kernel log shows why the disk had dropped out: the SATA link on ata3 was slow to respond and only came up after several resets:

Mar  2 21:47:30 nas kernel: [    0.752405] ata3: SATA max UDMA/133 abar m2048@0xe1540000 port 0xe1540200 irq 25
Mar  2 21:47:30 nas kernel: [    6.115679] ata3: link is slow to respond, please be patient (ready=0)
Mar  2 21:47:30 nas kernel: [   10.763743] ata3: COMRESET failed (errno=-16)
Mar  2 21:47:30 nas kernel: [   16.123820] ata3: link is slow to respond, please be patient (ready=0)
Mar  2 21:47:30 nas kernel: [   20.771889] ata3: COMRESET failed (errno=-16)
Mar  2 21:47:30 nas kernel: [   26.131960] ata3: link is slow to respond, please be patient (ready=0)
Mar  2 21:47:30 nas kernel: [   45.004224] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Mar  2 21:47:30 nas kernel: [   45.093844] ata3.00: ATA-9: WDC WD40EFRX-68WT0N0, 82.00A82, max UDMA/133
Mar  2 21:47:30 nas kernel: [   45.093849] ata3.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Mar  2 21:47:30 nas kernel: [   45.097003] ata3.00: configured for UDMA/133

Searching the kernel ring buffer for related errors:

# dmesg | egrep 'error|fail|bug'
[    0.113512] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.117340] acpi PNP0A08:00: _OSC failed (AE_ERROR); disabling ASPM
[    0.633202] ehci-pci 0000:00:1a.0: debug port 2
[    0.647962] ehci-pci 0000:00:1d.0: debug port 2
[   10.763743] ata3: COMRESET failed (errno=-16)
[   20.771889] ata3: COMRESET failed (errno=-16)
[   53.028988] systemd[1]: Mounting Debug File System...
[   53.034543] systemd[1]: Mounted Debug File System.
[   53.125708] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro
[   53.976282] vboxdrv: module verification failed: signature and/or required key missing - tainting kernel

An overview of the physical disks:

# sudo lshw -class disk -short
H/W path         Device     Class          Description
======================================================
/0/1/0.0.0       /dev/sda   disk           256GB Samsung SSD 850
/0/2/0.0.0       /dev/sdb   disk           4TB WDC WD40EFRX-68W
/0/3/0.0.0       /dev/sdc   disk           4TB WDC WD40EFRX-68W
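
Given the COMRESET errors above, the SMART health of the affected disk is worth checking (assumes smartmontools is installed):

smartctl -a /dev/sdb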


Remove failed disk
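
The usual sequence for dropping a faulty member is to mark it failed, remove it, and optionally wipe its superblock (a sketch; device names as in the examples above):

mdadm --manage /dev/md0 --fail /dev/sdb2
mdadm --manage /dev/md0 --remove /dev/sdb2
mdadm --zero-superblock /dev/sdb2

The session below deals with the aftermath of a lost disk: the remaining member shows up in an inactive array.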

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 1
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 1

              Name : nas:0  (local to host nas)
              UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
            Events : 96209

    Number   Major   Minor   RaidDevice

       -       8       18        -        /dev/sdb2


$ sudo mdadm -E /dev/sdb2 
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
           Name : nas:0  (local to host nas)
  Creation Time : Fri Jun 24 18:36:32 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 6740029440 (3213.90 GiB 3450.90 GB)
     Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 2586842b:0c7b7a8c:18765291:b55f0b97

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Jan  2 11:28:52 2022
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : e64dc54f - correct
         Events : 96209


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

Re-assembling by hand fails because the device is already claimed by the inactive array:

$ sudo mdadm -A /dev/md0 /dev/sdb2
mdadm: /dev/sdb2 is busy - skipping


Force-starting the already assembled array brings it up, degraded but usable:

$ sudo mdadm --manage /dev/md0 --run
mdadm: started array /dev/md/0


$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jun 24 18:36:32 2016
        Raid Level : raid1
        Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
     Used Dev Size : 3370014720 (3213.90 GiB 3450.90 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Jan  2 11:28:52 2022
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
            Events : 96209

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2


Mounting produces no error, but the directory is empty:

$ sudo mount /raid1

$ ls -la
total 8
drwxr-xr-x  2 root root 4096 Aug 15  2018 .
drwxr-xr-x 25 root root 4096 Feb 23 20:18 ..
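
If the mount point stays empty, it is worth checking what (if anything) was actually mounted and how the array device is laid out:

findmnt /raid1
lsblk /dev/md0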

Stop the array and let mdadm reassemble it:

$ sudo mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0

$ sudo mdadm --assemble --scan -v

To clear the degraded state for good, shrink the array to a single active device; --force is required to reduce a RAID1 below two members:

$ sudo mdadm --grow /dev/md0 --raid-devices=1 --force
raid_disks for /dev/md0 set to 1
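
Once a replacement disk is available, the array can be grown back to a mirror (hypothetical replacement partition /dev/sdc2):

mdadm --manage /dev/md0 --add /dev/sdc2
mdadm --grow /dev/md0 --raid-devices=2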

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jun 24 18:36:32 2016
        Raid Level : raid1
        Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
     Used Dev Size : 3370014720 (3213.90 GiB 3450.90 GB)
      Raid Devices : 1
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 23 21:42:48 2022
             State : clean 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
            Events : 96224

    Number   Major   Minor   RaidDevice State
       1       8       18        0      active sync   /dev/sdb2

Mounting now works and the data is accessible again:

$ sudo mount /raid1

$ ls -la
drwxrwxrwx  13 andreas andreas     4096 Feb 24  2019 public
drwxrwxrwx  56 root    root        4096 Jun 15  2021 share
