Debian Bug report logs - #495580
Kernel doesn't start a resync after adding a disk.
Reported by: Felix Zielcke <fzielcke@z-51.de>
Date: Mon, 18 Aug 2008 18:45:01 UTC
Severity: normal
Tags: upstream
Found in version mdadm/2.6.7-3
Done: Felix Zielcke <fzielcke@z-51.de>
Bug is archived. No further changes may be made.
Forwarded to neilb@suse.de
Report forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to Felix Zielcke <fzielcke@z-51.de>:
New Bug report received and forwarded. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #5 received at submit@bugs.debian.org:
Package: mdadm
Version: 2.6.7-3
Severity: important
Hello,
I recently played around with Linux software RAID for grub2. I had never had
that much to do with it before, so maybe I did something wrong.
But in any case it shouldn't be that easy to end up with a broken RAID 10 :)
The following is reconstructed from memory and .bash_history; I haven't just repeated it exactly that way.
I made a 4 disk RAID 10 with:
# mdadm -C -l10 -n4 /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Then I set them faulty one after another, removed them, and added them back
again.
# mdadm -f /dev/sdc1
# mdadm -r /dev/sdc1
# mdadm -a /dev/sdc1
Before I stopped it I luckily ran a -Q --detail, see below.
I can't reassemble it now:
mdadm: /dev/md0 assembled from 1 drive and 3 spares - not enough to start the
array.
Before stopping the RAID I even tried a --update=resync, but that didn't change
anything.
mdadm --grow -n4 didn't work either; unfortunately I don't have the output
anymore.
# mdadm -Q --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Mon Aug 18 18:17:19 2008
Raid Level : raid10
Array Size : 16771584 (15.99 GiB 17.17 GB)
Used Dev Size : 8385792 (8.00 GiB 8.59 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Aug 18 20:13:58 2008
State : clean, degraded
Active Devices : 1
Working Devices : 4
Failed Devices : 0
Spare Devices : 3
Layout : near=2, far=1
Chunk Size : 64K
UUID : 2b2e94e0:ec27865c:89ccbef7:ff5abfb0 (local to host fz-vm)
Events : 0.448
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 49 1 active sync /dev/sdd1
2 0 0 2 removed
3 0 0 3 removed
4 8 81 - spare /dev/sdf1
5 8 65 - spare /dev/sde1
6 8 33 - spare /dev/sdc1
-- Package-specific info:
--- mount output
/dev/sda1 on / type ext4dev (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
root@fz:/home/fz on /home/fz type fuse.sshfs (rw,nosuid,nodev,max_read=65536,allow_other)
--- mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# This file was auto-generated on Mon, 28 Jul 2008 21:44:39 +0200
# by mkconf $Id$
--- /proc/mdstat:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : inactive sdd1[1](S) sdc1[6](S) sde1[5](S) sdf1[4](S)
33543168 blocks
unused devices: <none>
--- /proc/partitions:
major minor #blocks name
8 0 8388608 sda
8 1 8385898 sda1
8 16 8388608 sdb
8 17 8385898 sdb1
8 32 8388608 sdc
8 33 8385898 sdc1
8 48 8388608 sdd
8 49 8385898 sdd1
8 64 8388608 sde
8 65 8385898 sde1
8 80 8388608 sdf
8 81 8385898 sdf1
--- initrd.img-2.6.27-rc3:
--- /proc/modules:
--- volume detail:
--- /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-2.6.27-rc3 root=/dev/sda1 ro vga=0x317
-- System Information:
Debian Release: lenny/sid
APT prefers experimental
APT policy: (500, 'experimental'), (500, 'unstable')
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.27-rc3 (SMP w/2 CPU cores)
Locale: LANG=de_DE.UTF-8, LC_CTYPE=de_DE.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Versions of packages mdadm depends on:
ii debconf 1.5.23 Debian configuration management sy
ii libc6 2.8+20080809-1 GNU C Library: Shared libraries
ii lsb-base 3.2-19 Linux Standard Base 3.2 init scrip
ii makedev 2.3.1-88 creates device files in /dev
ii udev 0.125-5 /dev/ and hotplug management daemo
Versions of packages mdadm recommends:
pn mail-transport-agent <none> (no description available)
ii module-init-tools 3.4-1 tools for managing Linux kernel mo
mdadm suggests no packages.
-- debconf information:
mdadm/initrdstart_msg_errexist:
mdadm/initrdstart_msg_intro:
* mdadm/autostart: false
* mdadm/autocheck: false
mdadm/initrdstart_msg_errblock:
mdadm/mail_to: root
mdadm/initrdstart_msg_errmd:
* mdadm/initrdstart: all
mdadm/initrdstart_msg_errconf:
mdadm/initrdstart_notinconf: false
* mdadm/start_daemon: false
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to martin f krafft <madduck@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #10 received at submit@bugs.debian.org:
[Message part 1 (text/plain, inline)]
tags 495580 moreinfo
severity 495580 normal
thanks
thus spoke Felix Zielcke <fzielcke@z-51.de> [2008.08.18.1542 -0300]:
> It's more from my memory and .bash_history now, I didn't do it now
> exactly that way again.
You need to tell me exactly what you did. What you describe is not
possible. I don't contest you are seeing a problem, but I have done
these steps hundreds of times without any problem ever.
--
.''`. martin f. krafft <madduck@debian.org>
: :' : proud Debian developer, author, administrator, and user
`. `'` http://people.debian.org/~madduck - http://debiansystem.info
`- Debian - when you have better things to do than fixing systems
[digital_signature_gpg.asc (application/pgp-signature, inline)]
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to martin f krafft <madduck@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Tags added: moreinfo
Request was from martin f krafft <madduck@debian.org>
to control@bugs.debian.org.
(Mon, 18 Aug 2008 22:57:05 GMT)
Severity set to `normal' from `important'
Request was from martin f krafft <madduck@debian.org>
to control@bugs.debian.org.
(Mon, 18 Aug 2008 22:57:05 GMT)
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to Felix Zielcke <fzielcke@z-51.de>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #24 received at 495580@bugs.debian.org:
tags 495580 - moreinfo
thanks
>
> You need to tell me exactly what you did. What you describe is not
> possible. I don't contest you are seeing a problem, but I have done
> these steps hundreds of times without any problem ever.
Ok, just did it now from scratch.
I removed the 4 virtual-disk .vmdk files from my VMware VM.
Made 4 new virtual 8 GB disks again, as usual.
/dev/sda has only one 8 GB Linux partition, nothing special.
I copied the partition table from the first disk to all of them:
fz-vm:~# sfdisk -d /dev/sda| sfdisk /dev/sdc
fz-vm:~# sfdisk -d /dev/sda| sfdisk /dev/sdd
Repeated for sde and sdf
fz-vm:~# mdadm -C -l10 -n4 /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: array /dev/md0 started.
I waited until it was fully synced:
fz-vm:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid10 sdf1[3] sde1[2] sdd1[1] sdc1[0]
16771584 blocks 64K chunks 2 near-copies [4/4] [UUUU]
fz-vm:~# mdadm -f /dev/md0 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
fz-vm:~# mdadm -r /dev/md0 /dev/sdc1
mdadm: hot removed /dev/sdc1
fz-vm:~# mdadm -a /dev/md0 /dev/sdc1
mdadm: re-added /dev/sdc1
fz-vm:~# mdadm -f /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
fz-vm:~# mdadm -r /dev/md0 /dev/sdd1
mdadm: hot removed /dev/sdd1
fz-vm:~# mdadm -a /dev/md0 /dev/sdd1
mdadm: re-added /dev/sdd1
fz-vm:~# mdadm -f /dev/md0 /dev/sde1
mdadm: set /dev/sde1 faulty in /dev/md0
fz-vm:~# mdadm -r /dev/md0 /dev/sde1
mdadm: hot removed /dev/sde1
fz-vm:~# mdadm -a /dev/md0 /dev/sde1
mdadm: re-added /dev/sde1
fz-vm:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid10 sde1[4](S) sdd1[5](S) sdc1[6](S) sdf1[3]
16771584 blocks 64K chunks 2 near-copies [4/1] [___U]
unused devices: <none>
fz-vm:~# mdadm -Q --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Tue Aug 19 08:14:03 2008
Raid Level : raid10
Array Size : 16771584 (15.99 GiB 17.17 GB)
Used Dev Size : 8385792 (8.00 GiB 8.59 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Aug 19 08:21:29 2008
State : clean, degraded
Active Devices : 1
Working Devices : 4
Failed Devices : 0
Spare Devices : 3
Layout : near=2, far=1
Chunk Size : 64K
UUID : b4b1aeb4:de3036fe:89ccbef7:ff5abfb0 (local to host fz-vm)
Events : 0.38
Number Major Minor RaidDevice State
0 0 0 0 removed
1 0 0 1 removed
2 0 0 2 removed
3 8 81 3 active sync /dev/sdf1
4 8 65 - spare /dev/sde1
5 8 49 - spare /dev/sdd1
6 8 33 - spare /dev/sdc1
Tags removed: moreinfo
Request was from Felix Zielcke <fzielcke@z-51.de>
to control@bugs.debian.org.
(Tue, 19 Aug 2008 06:30:11 GMT)
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to Felix Zielcke <fzielcke@z-51.de>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #31 received at 495580@bugs.debian.org:
[Message part 1 (text/plain, inline)]
On Monday, 18.08.2008, at 19:54 -0300, martin f krafft wrote:
> > You need to tell me exactly what you did. What you describe is not
> > possible. I don't contest you are seeing a problem, but I have done
> > these steps hundreds of times without any problem ever.
When I found the bug I didn't even know that you have a `call for testers'
on the PTS. This was totally by accident.
I don't use mdadm much, or RAID at all.
With my limited understanding of it, I just think that this is wrong:
a RAID 10 with 1 fully synced disk and 3 spares.
Where `mdadm' is concerned I am a stupid user.
I should have said in the report `Probably I did something stupid, unrealistic
and wrong' :)
My mail this morning was probably a bit too short, because I assumed the
question was `how do you get into this situation'.
On Tuesday, 19.08.2008, at 08:29 +0200, Felix Zielcke wrote:
> Ok, just did it now from scratch.
[...]
> fz-vm:~# mdadm -Q --detail /dev/md0
Attached is the full output of everything, including my little `vm' ssh alias
and logout message :)
I did it again the same way just now.
After that `mdadm -Q --detail /dev/md0':
fz-vm:~# mdadm -A --update=resync /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: device /dev/md0 already active - cannot assemble it
fz-vm:~# mdadm --grow -l10 -n4 /dev/md0
mdadm: raid10 array /dev/md0 cannot be reshaped.
fz-vm:~# mdadm -S /dev/md0
mdadm: stopped /dev/md0
fz-vm:~# mdadm -R /dev/md0
mdadm: failed to run array /dev/md0: Invalid argument
fz-vm:~# mdadm -A --update=resync /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: /dev/md0 assembled from 1 drive and 3 spares - not enough to
start the array.
So maybe I should have used `mdadm --zero-superblock' between `mdadm
-r /dev/sdc1' and `mdadm -a /dev/sdc1'?
Or maybe I should have done that `mdadm -A --update=resync' in between?
As far as I remember I did this before I reported the bug, after I removed
the first disk (sdc1).
But for a stupid user like me in this case, it looks like something should be
changed.
As you can see I just removed and added the disks and didn't check in between
whether they were added as spares.
Only at the end did I notice `Oh, I shouldn't have done this' :D
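For illustration only (not something tried in this thread), the --zero-superblock variant asked about above would look roughly like this; wiping the old superblock makes the partition look like a brand-new device rather than a returning member. Device names are just the ones used in this report.
    mdadm /dev/md0 --fail /dev/sdc1
    mdadm /dev/md0 --remove /dev/sdc1
    mdadm --zero-superblock /dev/sdc1   # erase the stale md superblock on the partition
    mdadm /dev/md0 --add /dev/sdc1      # joins as a fresh spare; on a degraded array the
                                        # kernel would normally start recovery onto it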
[mdadm.log (text/x-log, attachment)]
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to martin f krafft <madduck@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #36 received at 495580@bugs.debian.org:
[Message part 1 (text/plain, inline)]
thus spoke Felix Zielcke <fzielcke@z-51.de> [2008.08.19.0829 +0200]:
> fz-vm:~# mdadm -f /dev/md0 /dev/sdc1
> mdadm: set /dev/sdc1 faulty in /dev/md0
> fz-vm:~# mdadm -r /dev/md0 /dev/sdc1
> mdadm: hot removed /dev/sdc1
> fz-vm:~# mdadm -a /dev/md0 /dev/sdc1
> mdadm: re-added /dev/sdc1
> fz-vm:~# mdadm -f /dev/md0 /dev/sdd1
> mdadm: set /dev/sdd1 faulty in /dev/md0
> fz-vm:~# mdadm -r /dev/md0 /dev/sdd1
> mdadm: hot removed /dev/sdd1
Unless you waited for /dev/sdc1 to resync, removing /dev/sdd1 will
basically destroy the array. Not much one can do about that.
Anyway, I think that's the problem: when you (re-)add a disk to an
array, it becomes a spare until it has synchronised. If you wait for the
synchronisation between each add/remove step, you should never see
more than 1 spare.
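A minimal sketch of that procedure with the wait made explicit (device names as in this report; `mdadm --wait' simply blocks until any running resync or recovery on the array has finished):
    for part in /dev/sdc1 /dev/sdd1 /dev/sde1; do
        mdadm /dev/md0 --fail   "$part"
        mdadm /dev/md0 --remove "$part"
        mdadm /dev/md0 --add    "$part"
        mdadm --wait /dev/md0    # block until the re-added disk has finished recovery
        cat /proc/mdstat         # should show [4/4] [UUUU] again before the next step
    done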
--
.''`. martin f. krafft <madduck@debian.org>
: :' : proud Debian developer, author, administrator, and user
`. `'` http://people.debian.org/~madduck - http://debiansystem.info
`- Debian - when you have better things to do than fixing systems
"man sagt nicht 'nichts!', man sagt dafür 'jenseits' oder 'gott'."
- friedrich nietzsche
[digital_signature_gpg.asc (application/pgp-signature, inline)]
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to Felix Zielcke <fzielcke@z-51.de>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #41 received at 495580@bugs.debian.org:
Now that I think more about this, I think this is more a wishlist item than a
bug, though I don't have a better bug title in mind yet.
So please feel free to change it.
On Tuesday, 19.08.2008, at 19:21 +0200, martin f krafft wrote:
>
> Unless you waited for /dev/sdc1 to resync, removing /dev/sdd1 will
> basically destroy the array. Not much one can do about that.
I looked at the FAQ again; question 18 looks calculable.
I haven't looked at the code yet; I don't understand that much C, so it
would take me a while to get into it.
But you probably already have the numbers that `mdadm -Q --detail' prints
anyway.
If one device is removed the superblock is changed, so it could print
a warning/error that the array is now broken.
`mdadm -Q --detail' tells me `clean, degraded' with one active disk
and 3 spares.
It could say `broken'.
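A rough sketch of the kind of check meant here, using the standard md sysfs attributes (illustrative only; mdadm itself does not print `broken' like this):
    raid_disks=$(cat /sys/block/md0/md/raid_disks)   # configured member slots
    degraded=$(cat /sys/block/md0/md/degraded)       # slots currently missing a device
    if [ "$degraded" -gt 0 ]; then
        echo "md0: $degraded of $raid_disks member slots are missing" >&2
    fi
    # For a near=2 RAID 10, losing both disks of a mirror pair means data loss;
    # with 3 of 4 slots missing, as in this report, the array cannot be started.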
> Anyway, I think that's the problem: when you (re-)add a disk to an
> array, it becomes a spare until it has synchronised. If you wait for the
> synchronisation between each add/remove step, you should never see
> more than 1 spare.
Ah, I assumed that a spare is only ever used when a disk fails.
Would it be possible to add something like
`mdadm --add-this-disk-as-active-instead-of-spare', or is that actually
a good or a bad idea?
I have now added another virtual disk /dev/sdg to the VM,
recreated the RAID 10, then set sdc1 faulty, removed it and added sdg1.
There is still 1 removed slot and 2 spares and no resyncing, so I think this idea isn't that bad.
fz-vm:~# mdadm -Q --detail /dev/md0
/dev/md0:
Version : 00.90
Creation Time : Tue Aug 19 19:52:16 2008
Raid Level : raid10
Array Size : 16771584 (15.99 GiB 17.17 GB)
Used Dev Size : 8385792 (8.00 GiB 8.59 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Aug 19 20:22:52 2008
State : active, degraded
Active Devices : 3
Working Devices : 5
Failed Devices : 0
Spare Devices : 2
Layout : near=2, far=1
Chunk Size : 64K
UUID : b3d68384:06515443:89ccbef7:ff5abfb0 (local to host fz-vm)
Events : 0.21
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 49 1 active sync /dev/sdd1
2 8 65 2 active sync /dev/sde1
3 8 81 3 active sync /dev/sdf1
4 8 33 - spare /dev/sdc1
5 8 97 - spare /dev/sdg1
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to martin f krafft <madduck@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #46 received at 495580@bugs.debian.org:
[Message part 1 (text/plain, inline)]
I don't follow you, sorry. If you want, you can try to restate your
case in German.
--
.''`. martin f. krafft <madduck@debian.org>
: :' : proud Debian developer, author, administrator, and user
`. `'` http://people.debian.org/~madduck - http://debiansystem.info
`- Debian - when you have better things to do than fixing systems
echo '9,J8HD,fDGG8B@?:536FC5=8@I;C5?@H5B0D@5GBIELD54DL>@8L?:5GDEJ8LDG1' |\
sed ss,s50EBsg | tr 0-M 'p.wBt SgiIlxmLhan:o,erDsduv/cyP'
[digital_signature_gpg.asc (application/pgp-signature, inline)]
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
Acknowledgement sent to Felix Zielcke <fzielcke@z-51.de>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
Message #51 received at 495580@bugs.debian.org:
[Message part 1 (text/plain, inline)]
retitle 495580 Kernel doesn't start a resync after adding a disk.
thanks
On Tuesday, 19.08.2008, at 20:28 +0200, martin f krafft wrote:
> I don't follow you, sorry. If you want, you can try to restate your
> case in German.
Thank you very much. You have now put me on the right track.
Sorry for the confusion, I really have to work on this.
I wanted to report a bug against the 2.6.27-rc3 kernel.
The current Debian sid kernel 2.6.26-2 does sync the disks when I add
them.
I even tried the newer 2.6.27-rc3-git6, because it has a few MD
changes.
That one adds the disks only as `spare' but does not start a resync.
I don't know if and how you want to handle this.
Probably this has already been noticed upstream, but I couldn't find
that out.
So please feel free to just close it.
[debian_sid_2.6.26-2.log (text/x-log, attachment)]
[upstream_2.6.27-rc3-git6.log (text/x-log, attachment)]
Changed Bug title to `Kernel doesn't start a resync after adding a disk.' from `mdadm: 4 disk raid10 with 1 active and 3 spare possible'.
Request was from Felix Zielcke <fzielcke@z-51.de>
to control@bugs.debian.org.
(Wed, 20 Aug 2008 07:42:02 GMT)
Noted your statement that Bug has been forwarded to neilb@suse.de.
Request was from martin f krafft <madduck@debian.org>
to control@bugs.debian.org.
(Wed, 20 Aug 2008 07:51:03 GMT)
Tags added: upstream
Request was from martin f. krafft <madduck@debian.org>
to control@bugs.debian.org.
(Mon, 29 Sep 2008 11:54:06 GMT)
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
(Mon, 13 Oct 2008 04:57:05 GMT)
Acknowledgement sent to Neil Brown <neilb@suse.de>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Mon, 13 Oct 2008 04:57:05 GMT)
Message #62 received at 495580@bugs.debian.org:
On Wednesday August 20, fzielcke@z-51.de wrote:
>
> The current Debian sid kernel 2.6.26-2 does sync the disks when I add
> them.
> I even tried the newer 2.6.27-rc3-git6, because it has a few MD
> changes.
> That one adds the disks only as `spare' but does not start a resync.
I cannot reproduce this.
This is surprising though:
> fz-vm:~# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md0 : active raid10 sdc1[4](S) sdf1[3] sde1[2] sdd1[1]
> 16771584 blocks 64K chunks 2 near-copies [4/3] [_UUU]
>
> unused devices: <none>
Here there is one failed device. There is one spare but the array
isn't resyncing.
And...
> fz-vm:~# mdadm -Q --detail /dev/md0
> /dev/md0:
> Version : 00.90
> Creation Time : Wed Aug 20 09:03:27 2008
> Raid Level : raid10
> Array Size : 16771584 (15.99 GiB 17.17 GB)
> Used Dev Size : 8385792 (8.00 GiB 8.59 GB)
> Raid Devices : 4
> Total Devices : 4
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Wed Aug 20 09:07:32 2008
> State : clean, degraded
> Active Devices : 3
> Working Devices : 4
> Failed Devices : 0
> Spare Devices : 1
>
> Layout : near=2, far=1
> Chunk Size : 64K
>
> UUID : 04e72c15:980e57a0:89ccbef7:ff5abfb0 (local to host fz-vm)
> Events : 0.10
>
> Number Major Minor RaidDevice State
> 0 0 0 0 removed
> 1 8 49 1 active sync /dev/sdd1
> 2 8 65 2 active sync /dev/sde1
> 3 8 81 3 active sync /dev/sdf1
>
> 4 8 33 - spare /dev/sdc1
> fz-vm:~# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md0 : active raid10 sdd1[4](S) sdc1[5](S) sdf1[3] sde1[2]
> 16771584 blocks 64K chunks 2 near-copies [4/2] [__UU]
Here there are two failed devices. What happened?
Do the kernel logs show anything between these two "cat /proc/mdstat"s?
NeilBrown
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
(Mon, 13 Oct 2008 08:00:06 GMT)
Acknowledgement sent to Felix Zielcke <fzielcke@z-51.de>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Mon, 13 Oct 2008 08:00:06 GMT)
Message #67 received at 495580@bugs.debian.org:
[Message part 1 (text/plain, inline)]
Hello,
On Monday, 13.10.2008, at 15:50 +1100, Neil Brown wrote:
> On Wednesday August 20, fzielcke@z-51.de wrote:
> >
> > The current Debian sid kernel 2.6.26-2 does sync the disks when I add
> > them.
> > I even tried the newer 2.6.27-rc3-git6, because it has a few MD
> > changes.
> > That one adds the disks only as `spare' but does not start a resync.
>
> I cannot reproduce this.
I just tried it again now; it still happens with a vanilla 2.6.27-git2
kernel, but not with vanilla 2.6.26.
Oh, and this only happens with RAID 10, not with RAID 1.
I have attached the 2 kernel configs in case it helps.
This is with VMware Workstation 6.5 build 118166, i.e. the newest.
> Here there are two failed devices. What happened?
>
> Do the kernel logs show anything between these two "cat /proc/mdstat"s?
Attached is the syslog from creating the array until I removed the 2
disks sdc1 and sdd1 from the RAID 10.
--
Felix Zielcke
[config-2.6.26 (text/plain, attachment)]
[config-2.6.27-git2 (text/plain, attachment)]
[mdraid10 syslog.txt (text/plain, attachment)]
Information forwarded to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#495580; Package mdadm.
(Wed, 05 Nov 2008 23:30:05 GMT)
Acknowledgement sent to Graeme <graeme@sudo.ca>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Wed, 05 Nov 2008 23:30:05 GMT)
Message #72 received at 495580@bugs.debian.org:
Just a heads up: we're also seeing this downstream in Ubuntu/Intrepid on
2.6.27:
https://bugs.launchpad.net/ubuntu/+bug/285156
Reply sent to Felix Zielcke <fzielcke@z-51.de>:
You have taken responsibility.
(Sun, 25 Jan 2009 14:51:05 GMT)
Notification sent to Felix Zielcke <fzielcke@z-51.de>:
Bug acknowledged by developer.
(Sun, 25 Jan 2009 14:51:05 GMT)
Message #77 received at 495580-done@bugs.debian.org:
Closing this bug, as it has been fixed now.
--
Felix Zielcke
Bug archived.
Request was from Debbugs Internal Request <owner@bugs.debian.org>
to internal_control@bugs.debian.org.
(Mon, 23 Feb 2009 07:33:46 GMT)