
zoobie78 03-16-2024 08:48 AM

Raid5 unable to mount array disk after replacing a faulty disk in RAID5 array
 
Hi,

I have a RAID5 array with 4 disks. This array was in a clean, degraded status, as one of the disks was faulty and had been automatically removed from the array.

I purchased a new HDD and added it through the webmin administration tool in order to replace the faulty disk sda7 with the new disk sde6.
I monitored the recovery process through cat /proc/mdstat and the whole process took over 6 hours. At the end of the process I switched off my NAS server (DIY, running under Ubuntu 22.04).
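
For reference, the command-line equivalent of what I did through webmin would have been roughly the following (a sketch; device names as described above):

Code:

sudo mdadm /dev/md127 --add /dev/sde6
cat /proc/mdstat    # watch the rebuild progress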

Since then, I have had to comment out the mount point in fstab, as it is impossible to mount the array; each time I get the following error:

Code:

manu29chatel@NAS:/home/deneufchatel_family$ sudo mount /dev/md127 /export/media3
mount: /export/media3: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
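
dmesg presumably has more detail on why the mount is rejected; I can post the output of something like this if it helps (standard kernel log, nothing specific to my setup assumed):

Code:

sudo dmesg | tail -n 20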


Here are the details of the array /dev/md127:
Code:

manu29chatel@NAS:/home/deneufchatel_family$ sudo mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sat Jan  9 18:15:50 2016
        Raid Level : raid5
        Array Size : 2196869568 (2.05 TiB 2.25 TB)
     Used Dev Size : 732289856 (698.37 GiB 749.86 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Mar 14 07:08:23 2024
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : resync

              Name : NAS:media3  (local to host NAS)
              UUID : c417fb9d:0bc3d299:e6704467:f6cfe34c
            Events : 70249

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       35        1      active sync   /dev/sdc3
       5       8       70        2      active sync   /dev/sde6
       4       8       19        3      active sync   /dev/sdb3

I have clearly been an idiot, as I didn't take a backup prior to adding the new disk to the array to replace the faulty one.

Is there a way to mount the array (/dev/md127) and to recover/access the data on it?

Thanking you in advance for your support.

Regards,

Zoobie78

lvm_ 03-17-2024 01:11 AM

md thinks the array is clean, so whatever happened to it has already happened and is of interest only from the 'learning from your own mistakes' point of view. What you have now is a filesystem that won't mount, so you can forget about it being on a RAID and deal with it as a corrupted-filesystem scenario - testdisk and such. Though I wouldn't hold out too much hope - RAID issues usually trash filesystems thoroughly and for good.
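
If you do go down that road, testdisk can be pointed at the md device directly, e.g. (assuming the testdisk package from the Ubuntu repos):

Code:

sudo apt install testdisk
sudo testdisk /dev/md127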

michaelk 03-17-2024 10:15 AM

Have you tried specifying a filesystem type with the mount command?

mount -t ext4 /dev/md127 /export/media3 (replace ext4 with the actual filesystem type if different)
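
To see what filesystem (if any) is actually detected on the device before forcing a type, something like this should show it (blkid and lsblk are standard util-linux tools):

Code:

sudo blkid /dev/md127
lsblk -f /dev/md127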

syg00 03-17-2024 10:47 PM

Quote:

Originally Posted by zoobie78 (Post 6490070)
I monitored the recovery process through cat /proc/mdstat and the whole process took over 6 hours. At the end of the process I switched off my NAS server (DIY, running under Ubuntu 22.04).

This bit worries me. How long (time-wise) after the process finished (as far as mdstat was concerned) did you physically shut the machine down? And how did you do it?
What does this return?
Code:

sudo file -s /dev/md127

zoobie78 03-20-2024 03:15 PM

Hi, sorry for the late feedback, I was on a business trip.

I have been able to go back to a previous state by removing the newly added disk (sde6) from the array with the set-faulty and remove commands.
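
Roughly, the commands were along the lines of the following (a sketch; device names as in the array detail above):

Code:

sudo mdadm /dev/md127 --fail /dev/sde6
sudo mdadm /dev/md127 --remove /dev/sde6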

I have been able to remount the array, which is now in clean, degraded status. I'm currently backing up all the data stored on the array partition.
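
(For the backup I'm using something along the lines of the following; /path/to/backup is just a placeholder for the actual destination:)

Code:

sudo rsync -aHAX --info=progress2 /export/media3/ /path/to/backup/media3/   # /path/to/backup is a placeholder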

I'm under the impression that all these issues are related to the new disk. The new disk is an 8 TB Advanced Format drive with 4096-byte logical and physical sectors, instead of the 512/512 and 512/4096 sector layouts of the other disks in the array (the sector sizes can be checked as shown after the list below).

It seems that this generates the following side effects:
  1. The array switches from a 512-byte to a 4096-byte sector base; as a result the partition is no longer readable, the OS does not detect the filesystem, and it reports an issue with the superblock.
  2. The partitions included in the array are no longer aligned.
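
For what it's worth, the logical/physical sector sizes of each disk can be checked like this (standard lsblk columns and sysfs entries; sde is the new disk):

Code:

lsblk -o NAME,LOG-SEC,PHY-SEC
cat /sys/block/sde/queue/logical_block_size /sys/block/sde/queue/physical_block_size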

Are there tutorials or documentation on:

1- How to align the individual partitions included in the array
2- How to build an array with a mixture of standard-format (512-byte sector) HDDs and Advanced Format (4K sector) HDDs?

I really believe this is related to the new disk, as I had a similar issue on another array on the same machine when replacing a partition on the sda disk with one on the sde disk.

