{"id":3371,"date":"2014-12-16T19:14:02","date_gmt":"2014-12-16T19:14:02","guid":{"rendered":"http:\/\/blog.bachi.net\/?p=3371"},"modified":"2022-02-23T20:46:32","modified_gmt":"2022-02-23T20:46:32","slug":"linux-software-raid-mit-mdadm-verwalten","status":"publish","type":"post","link":"https:\/\/blog.bachi.net\/?p=3371","title":{"rendered":"Linux: Software RAID mit MDADM verwalten"},"content":{"rendered":"<p><a href=\"http:\/\/wiki.ubuntuusers.de\/Software-RAID\">Software-RAID<\/a><br \/>\n<a href=\"http:\/\/linuxwiki.de\/mdadm\">mdadm &#8211; Tipps &#038; Tricks<\/a><br \/>\n<a href=\"http:\/\/www.linuceum.com\/Server\/srvRAIDAuto.php\">Auto Mounting RAID Arrays on Linux Server Startup<\/a><br \/>\n<a href=\"http:\/\/askubuntu.com\/questions\/342768\/software-raid0-doesnt-mount-successfully-in-fstab\">Software Raid0 doesn&#8217;t mount successfully in fstab<\/a><br \/>\n<a href=\"http:\/\/de.wikipedia.org\/wiki\/Mdadm\">Wikipedia: mdadm<\/a><br \/>\n<a href=\"http:\/\/www.thomas-krenn.com\/de\/wiki\/Software_RAID_mit_MDADM_verwalten\">Software RAID mit MDADM verwalten<\/a><br \/>\n<a href=\"http:\/\/forums.whirlpool.net.au\/archive\/1056135\">mdadm mounting<\/a><br \/>\n<a href=\"http:\/\/superuser.com\/questions\/287462\/how-can-i-make-mdadm-auto-assemble-raid-after-each-boot\">How can I make mdadm auto-assemble RAID after each boot?<\/a><br \/>\n<a href=\"http:\/\/superuser.com\/questions\/117824\/how-to-get-an-inactive-raid-device-working-again\">How to get an inactive RAID device working again?<\/a><br \/>\n<a href=\"http:\/\/www.howtoforge.com\/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze\">How To Set Up Software RAID1 On A Running System &#8211; Part 1<\/a><br \/>\n<a href=\"http:\/\/www.howtoforge.com\/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze-p2\">How To Set Up Software RAID1 On A Running System &#8211; Part 2<\/a><br \/>\n<a href=\"http:\/\/ubuntuforums.org\/archive\/index.php\/t-983238.html\">RAID 5 mdadm superblock and mount on boot help!<\/a><br \/>\n<a href=\"http:\/\/blag.felixhummel.de\/admin\/raid.html\">Soft Raid 1 on Ubuntu 12.04 with GPT disks<\/a><br \/>\n<a href=\"https:\/\/wiki.archlinux.org\/index.php\/Software_RAID_and_LVM\">Software RAID and LVM<\/a><\/p>\n<p><a href=\"https:\/\/help.ubuntu.com\/community\/Fstab\">fstab<\/a><br \/>\n<a href=\"http:\/\/ubuntuforums.org\/showthread.php?t=2081255\">Best practice \/etc\/fstab mount <\/a><br \/>\n<a href=\"http:\/\/askubuntu.com\/questions\/432133\/automatic-mount-ext4-hard-disk-on-boot-problem\">Automatic mount ext4 hard disk on boot problem<\/a><\/p>\n<h3>Create RAID 1<\/h3>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\n# Install mdadm\r\napt-get update &amp;&amp; apt-get install mdadm\r\n\r\n# Partition the disk\r\nfdisk \/dev\/sdb\r\nfdisk \/dev\/sdc\r\n\r\n# Create RAID\r\nmdadm --create \/dev\/md0 --level=1 --raid-devices=2 \/dev\/sdb2 \/dev\/sdc2\r\n\r\n# Create partition on RAID\r\nmkfs.ext3 \/dev\/md0\r\n\r\n# Mount \/dev\/md0\r\nmkdir \/raid\r\nmount \/dev\/md0 \/raid\r\ndf -H\r\n\r\n# Add to \/etc\/fstab\r\n\/dev\/md0 \/raid1 ext3 noatime,rw 0 0\r\n<\/pre>\n<h3>Maintain RAID<\/h3>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\n$ mdadm --version\r\nmdadm - v3.3 - 3rd September 2013\r\n<\/pre>\n<pre class=\"brush: plain; title: Scan Software RAID; notranslate\" title=\"Scan Software RAID\">\r\n$ sudo mdadm --assemble --scan -v\r\nmdadm: looking for devices for 
### Maintain RAID

```
$ mdadm --version
mdadm - v3.3 - 3rd September 2013
```

Scan Software RAID:

```
$ sudo mdadm --assemble --scan -v
mdadm: looking for devices for /dev/md/0
mdadm: no RAID superblock on /dev/md/0
mdadm: no RAID superblock on /dev/sde5
mdadm: no RAID superblock on /dev/sde2
mdadm: no RAID superblock on /dev/sde1
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sdd1 is busy - skipping
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdc1 is busy - skipping
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdb1 is busy - skipping
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sda1 is busy - skipping
mdadm: no RAID superblock on /dev/sda
```

See Details:

```
$ sudo mdadm -v /dev/md0
/dev/md0: 3725.78GiB raid10 4 devices, 0 spares. Use mdadm --detail for more detail.

$ sudo mdadm --detail /dev/md0
[sudo] password for fabmin: 
/dev/md0:
        Version : 1.2
  Creation Time : Thu Nov 13 15:47:46 2014
     Raid Level : raid10
     Array Size : 3906763776 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 1953381888 (1862.89 GiB 2000.26 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Dec 25 18:43:07 2014
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : homeserver:0
           UUID : d11e189b:bf60aac9:536e72e5:3c8655ab
         Events : 375

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       1       8       17        1      active sync set-B   /dev/sdb1
       2       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1
```

```
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid10 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      3906763776 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
unused devices: <none>
```

/etc/mdadm/mdadm.conf:

```
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0755 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

ARRAY /dev/md0 metadata=1.2 name=homeserver:0 UUID=d11e189b:bf60aac9:536e72e5:3c8655ab
```

```
$ ps aux | grep mdadm
root      1415  0.0  0.1  13492  2028 ?        Ss   17:27   0:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog
```
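For the array to come up reliably at boot, the ARRAY line in /etc/mdadm/mdadm.conf has to match the running array, and on Debian/Ubuntu the initramfs must be refreshed so early boot sees the same configuration. A minimal sketch of keeping the two in sync:

```
# Print ARRAY lines for all currently running arrays
sudo mdadm --detail --scan

# Append them to mdadm.conf (review the file afterwards and remove duplicates)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled early during boot (Debian/Ubuntu)
sudo update-initramfs -u
```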
### UUID of RAID

```
root@nas:~# blkid
/dev/sda1: LABEL="Wiederherstellung" UUID="EC9A5F809A5F466C" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="03445844-6d0c-4055-a155-6a1442f31f64"
/dev/sda2: UUID="EC60-8F16" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="10226de3-9880-442e-a9e5-d59cdb0ce76d"
/dev/sda4: UUID="C07C7CA27C7C953E" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="63bc6c88-f15f-4001-98cd-5ebb71519308"
/dev/sda5: UUID="e1d7ee86-6489-4389-a936-7552cb2292e8" TYPE="ext4" PARTUUID="9c269108-0270-43e0-b264-b29da816e33a"
/dev/sda6: UUID="47c5ef7e-a68d-4ddd-8aef-feb0a678f05b" TYPE="swap" PARTUUID="e4674559-cd6f-4d08-8bfe-2767821fe661"
/dev/sdb1: LABEL="DATA1" UUID="4C787EC15DDC54A0" TYPE="ntfs" PARTLABEL="DATA1" PARTUUID="45b920c2-1203-48e9-993d-12e70cb6029d"
/dev/sdb2: UUID="0317b14e-ea32-fa89-f352-5cce2ce8839e" UUID_SUB="e2ccf7f5-235a-88dd-e9f8-5bcda5e7983d" LABEL="nas:0" TYPE="linux_raid_member" PARTLABEL="RAID1_1" PARTUUID="170cbcce-7100-43be-8ee9-f8308e41e2cf"
/dev/sdc1: LABEL="DATA2" UUID="01A1558230E30FE7" TYPE="ntfs" PARTLABEL="DATA2" PARTUUID="c637c62a-1b13-496b-96db-bba719110c3b"
/dev/sdc2: UUID="0317b14e-ea32-fa89-f352-5cce2ce8839e" UUID_SUB="2586842b-0c7b-7a8c-1876-5291b55f0b97" LABEL="nas:0" TYPE="linux_raid_member" PARTLABEL="RAID1_2" PARTUUID="3f74418d-5a64-4647-9907-ec24ccf70517"
/dev/md0p1: LABEL="RAID1" UUID="d3150381-ba4a-4a75-8163-dbf9e0a59e33" TYPE="ext4" PARTLABEL="RAID1" PARTUUID="9a8b494a-2f69-48c1-89f8-2cbf0986a991"
/dev/sda3: PARTLABEL="Microsoft reserved partition" PARTUUID="09fa6865-ee2b-4591-ad2f-9d6d3d00b48d"
/dev/md0: PTUUID="d56f81c4-e1ca-4b60-b7df-fc31e89c34d0" PTTYPE="gpt"
```

```
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jun 24 18:36:32 2016
     Raid Level : raid1
     Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
  Used Dev Size : 3370014720 (3213.90 GiB 3450.90 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Dec 16 22:33:59 2016
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : nas:0  (local to host nas)
           UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
         Events : 9836

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
```
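The blkid output above is dense; the RAID members are the partitions tagged `TYPE="linux_raid_member"`, and all members of one array share the same array UUID. A sketch for picking out just that information (standard lsblk column names):

```
# Tree view of block devices with filesystem type and UUID;
# RAID members show FSTYPE=linux_raid_member
lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT

# Or ask blkid only for RAID member signatures
sudo blkid -t TYPE=linux_raid_member
```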
### Re-adding disks (after they have been removed)

```
$ sudo mdadm --manage /dev/md/0 -a /dev/sdb2
mdadm: re-added /dev/sdb2

$ sudo mdadm --detail /dev/md0
/dev/md0:
[...]
 Rebuild Status : 42% complete
[...]
    Number   Major   Minor   RaidDevice State
       0       8       18        0      spare rebuilding   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
```
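While the re-added disk rebuilds, progress can be watched in /proc/mdstat, and the kernel's resync speed limits are tunable through the standard md sysctl files. A minimal sketch:

```
# Live view of rebuild progress, refreshed every 5 seconds
watch -n 5 cat /proc/mdstat

# Current resync speed limits (KiB/s per device)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Temporarily raise the minimum so the rebuild is not throttled
# (not persistent; reverts on reboot)
echo 100000 | sudo tee /proc/sys/dev/raid/speed_limit_min
```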
From /var/log/kern.log and /var/log/syslog:

```
Mar  2 21:47:30 nas kernel: [    0.752405] ata3: SATA max UDMA/133 abar m2048@0xe1540000 port 0xe1540200 irq 25
Mar  2 21:47:30 nas kernel: [    6.115679] ata3: link is slow to respond, please be patient (ready=0)
Mar  2 21:47:30 nas kernel: [   10.763743] ata3: COMRESET failed (errno=-16)
Mar  2 21:47:30 nas kernel: [   16.123820] ata3: link is slow to respond, please be patient (ready=0)
Mar  2 21:47:30 nas kernel: [   20.771889] ata3: COMRESET failed (errno=-16)
Mar  2 21:47:30 nas kernel: [   26.131960] ata3: link is slow to respond, please be patient (ready=0)
Mar  2 21:47:30 nas kernel: [   45.004224] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Mar  2 21:47:30 nas kernel: [   45.093844] ata3.00: ATA-9: WDC WD40EFRX-68WT0N0, 82.00A82, max UDMA/133
Mar  2 21:47:30 nas kernel: [   45.093849] ata3.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Mar  2 21:47:30 nas kernel: [   45.097003] ata3.00: configured for UDMA/133
```

```
# dmesg | egrep 'error|fail|bug'
[    0.113512] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.117340] acpi PNP0A08:00: _OSC failed (AE_ERROR); disabling ASPM
[    0.633202] ehci-pci 0000:00:1a.0: debug port 2
[    0.647962] ehci-pci 0000:00:1d.0: debug port 2
[   10.763743] ata3: COMRESET failed (errno=-16)
[   20.771889] ata3: COMRESET failed (errno=-16)
[   53.028988] systemd[1]: Mounting Debug File System...
[   53.034543] systemd[1]: Mounted Debug File System.
[   53.125708] EXT4-fs (sda5): re-mounted. Opts: errors=remount-ro
[   53.976282] vboxdrv: module verification failed: signature and/or required key missing - tainting kernel
```

```
# sudo lshw -class disk -short
H/W path         Device     Class          Description
======================================================
/0/1/0.0.0       /dev/sda   disk           256GB Samsung SSD 850
/0/2/0.0.0       /dev/sdb   disk           4TB WDC WD40EFRX-68W
/0/3/0.0.0       /dev/sdc   disk           4TB WDC WD40EFRX-68W
```

<https://ubuntuforums.org/showthread.php?t=1713528>
[HOWTO: Repair a broken Ext4 Superblock in Ubuntu](https://linuxexpresso.wordpress.com/2010/03/31/repair-a-broken-ext4-superblock-in-ubuntu/)
[MDADM Superblock Recovery](http://askubuntu.com/questions/69086/mdadm-superblock-recovery)
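The superblock-repair links above boil down to locating a backup superblock and pointing e2fsck at it. A hedged sketch for the ext4 filesystem on /dev/md0p1 shown in the blkid output (run only against an unmounted filesystem; `mke2fs -n` only simulates and writes nothing):

```
# Print where mkfs would place the backup superblocks; -n is a dry run
sudo mke2fs -n /dev/md0p1

# Or read the backup superblock locations from the existing filesystem
sudo dumpe2fs /dev/md0p1 | grep -i superblock

# Check the filesystem using a backup superblock (32768 is a common location)
sudo e2fsck -b 32768 /dev/md0p1
```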
### Remove failed disk

An array can end up inactive after a member disk fails. The transcript below examines the surviving member, starts the array degraded, and finally shrinks it to a single-device RAID 1 so it runs clean again. Note that the first mount comes up empty; only after stopping the array, reassembling it, and shrinking it to one device does the mount show the data.

```
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 1
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 1

              Name : nas:0  (local to host nas)
              UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
            Events : 96209

    Number   Major   Minor   RaidDevice

       -       8       18        -        /dev/sdb2


$ sudo mdadm -E /dev/sdb2 
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
           Name : nas:0  (local to host nas)
  Creation Time : Fri Jun 24 18:36:32 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 6740029440 (3213.90 GiB 3450.90 GB)
     Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 2586842b:0c7b7a8c:18765291:b55f0b97

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Jan  2 11:28:52 2022
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : e64dc54f - correct
         Events : 96209


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

$ sudo mdadm -A /dev/md0 /dev/sdb2 
mdadm: /dev/sdb2 is busy - skipping


$ sudo mdadm --manage /dev/md0 --run
mdadm: started array /dev/md/0


$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jun 24 18:36:32 2016
        Raid Level : raid1
        Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
     Used Dev Size : 3370014720 (3213.90 GiB 3450.90 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Jan  2 11:28:52 2022
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
            Events : 96209

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2


$ sudo mount /raid1

$ ls -la
total 8
drwxr-xr-x  2 root root 4096 Aug 15  2018 .
drwxr-xr-x 25 root root 4096 Feb 23 20:18 ..

$ sudo mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0

$ sudo mdadm --assemble --scan -v

$ sudo mdadm --grow /dev/md0 --raid-devices=1 --force
raid_disks for /dev/md0 set to 1

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Jun 24 18:36:32 2016
        Raid Level : raid1
        Array Size : 3370014720 (3213.90 GiB 3450.90 GB)
     Used Dev Size : 3370014720 (3213.90 GiB 3450.90 GB)
      Raid Devices : 1
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 23 21:42:48 2022
             State : clean 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : 0317b14e:ea32fa89:f3525cce:2ce8839e
            Events : 96224

    Number   Major   Minor   RaidDevice State
       1       8       18        0      active sync   /dev/sdb2

$ sudo mount /raid1

$ ls -la
drwxrwxrwx  13 andreas andreas     4096 Feb 24  2019 public
drwxrwxrwx  56 root    root        4096 Jun 15  2021 share
```
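The grow step above leaves a single-device RAID 1 running clean. Once a replacement disk is available, the inverse restores redundancy; a sketch assuming the new disk already carries a RAID partition of at least the same size at /dev/sdc2 (a hypothetical device name for the replacement):

```
# Allow two devices again
sudo mdadm --grow /dev/md0 --raid-devices=2

# Add the new member; the array rebuilds onto it
sudo mdadm --manage /dev/md0 --add /dev/sdc2

# Watch the rebuild
cat /proc/mdstat
```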