Debian Bug report logs -
#518834
mdadm: I still regularly see these mismatches
Reported by: Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>
Date: Sun, 8 Mar 2009 22:21:02 UTC
Severity: wishlist
Tags: confirmed, upstream, wontfix
Merged with 405919
Found in versions mdadm/2.5.6-7, mdadm/2.6.7.2-1
Fixed in version 3.1.2-1
Done: Andreas Beckmann <anbe@debian.org>
Bug is archived. No further changes may be made.
Forwarded to neilb@suse.de
Report forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sun, 08 Mar 2009 22:21:04 GMT) (full text, mbox, link).
Acknowledgement sent
to Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>:
New Bug report received and forwarded. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sun, 08 Mar 2009 22:21:04 GMT) (full text, mbox, link).
Message #5 received at bugs@bugs.debian.org (full text, mbox, reply):
Followup-For: Bug #405919
Package: mdadm
Version: 2.6.7.2-1
I changed the cron job so that it runs every Sunday.
And I see those mismatches (128 and other powers of 2) every Sunday :(
Executing (after the cron job finished):
# echo repair > /sys/block/md1/md/sync_action
# echo check > /sys/block/md1/md/sync_action
cleans it up. But it takes several hours.
I'm doing lvm over raid1 on 2x300GB disks.
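For reference, a minimal sketch of that repair-then-verify cycle on the md1
array above (run as root); polling sync_action simply avoids issuing the
second action before the first one has finished:

  # mismatch count left behind by the last check
  cat /sys/block/md1/md/mismatch_cnt
  # rewrite the out-of-sync blocks, then wait for the repair to finish
  echo repair > /sys/block/md1/md/sync_action
  while [ "$(cat /sys/block/md1/md/sync_action)" != idle ]; do sleep 60; done
  # run a fresh check and wait again; mismatch_cnt should then read 0
  # until the next spurious mismatch appears
  echo check > /sys/block/md1/md/sync_action
  while [ "$(cat /sys/block/md1/md/sync_action)" != idle ]; do sleep 60; done
  cat /sys/block/md1/md/mismatch_cnt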
Cheers,
-- Package-specific info:
--- mount output
/dev/md0 on / type ext3 (rw,noatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/mapper/host-var on /var type ext3 (rw,noatime)
/dev/mapper/host-tmp on /tmp type ext3 (rw,noatime)
/dev/mapper/host-usr on /usr type ext3 (rw,noatime)
/dev/mapper/host-home on /home type ext3 (rw,noatime)
debugfs on /sys/kernel/debug type debugfs (rw)
--- mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=691befca:2b99a30e:4f4523f6:284bcf3e
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=806c611c:8160fc42:8e29fe5e:7c77705f
# This file was auto-generated on Thu, 05 Jun 2008 14:39:36 +0000
# by mkconf $Id$
--- /proc/mdstat:
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
486432064 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
1951744 blocks [2/2] [UU]
unused devices: <none>
--- /proc/partitions:
major minor #blocks name
8 0 488386584 sda
8 1 1951866 sda1
8 2 486432135 sda2
8 16 488386584 sdb
8 17 1951866 sdb1
8 18 486432135 sdb2
9 0 1951744 md0
9 1 486432064 md1
253 0 5242880 dm-0
253 1 5242880 dm-1
253 2 5242880 dm-2
253 3 10485760 dm-3
253 4 10485760 dm-4
253 5 10485760 dm-5
253 6 5242880 dm-6
253 7 31457280 dm-7
253 8 52428800 dm-8
253 9 104857600 dm-9
--- initrd.img-2.6.26-1-686-bigmem:
38977 blocks
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/dm-snapshot.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/raid1.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/md-mod.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/dm-log.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/raid456.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/linear.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/dm-mod.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/dm-mirror.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/multipath.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/raid0.ko
lib/modules/2.6.26-1-686-bigmem/kernel/drivers/md/raid10.ko
scripts/local-top/mdadm
sbin/mdadm
etc/mdadm
etc/mdadm/mdadm.conf
--- /proc/modules:
dm_mirror 15872 0 - Live 0xf9375000
dm_log 9220 1 dm_mirror, Live 0xf92b7000
dm_snapshot 15108 0 - Live 0xf9370000
dm_mod 46952 24 dm_mirror,dm_log,dm_snapshot, Live 0xf9330000
raid1 18784 2 - Live 0xf9305000
md_mod 67804 3 raid1, Live 0xf93d3000
--- volume detail:
--- /proc/cmdline
root=/dev/md0 ro vga=795 apm=off irqpoll
--- grub:
kernel /boot/vmlinuz-2.6.26-1-686-bigmem root=/dev/md0 ro vga=795 apm=off irqpoll
kernel /boot/vmlinuz-2.6.26-1-686-bigmem root=/dev/md0 ro vga=795 apm=off irqpoll single
kernel /boot/vmlinuz-2.6.26-1-686 root=/dev/md0 ro vga=795 apm=off
kernel /boot/vmlinuz-2.6.26-1-686 root=/dev/md0 ro vga=795 apm=off single
-- System Information:
Debian Release: 5.0
APT prefers stable
APT policy: (500, 'stable'), (99, 'unstable')
Architecture: i386 (i686)
Kernel: Linux 2.6.26-1-686-bigmem (SMP w/2 CPU cores)
Locale: LANG=C, LC_CTYPE= (charmap=ANSI_X3.4-1968)
Shell: /bin/sh linked to /bin/bash
Versions of packages mdadm depends on:
ii debconf 1.5.24 Debian configuration management sy
ii libc6 2.7-18 GNU C Library: Shared libraries
ii lsb-base 3.2-20 Linux Standard Base 3.2 init scrip
ii makedev 2.3.1-88 creates device files in /dev
ii udev 0.125-7 /dev/ and hotplug management daemo
Versions of packages mdadm recommends:
ii module-init-tools 3.4-1 tools for managing Linux kernel mo
ii sendmail-bin [mail-transport- 8.14.3-5 powerful, efficient, and scalable
mdadm suggests no packages.
-- debconf information:
mdadm/autostart: true
* mdadm/initrdstart: all
mdadm/initrdstart_notinconf: false
mdadm/initrdstart_msg_errexist:
mdadm/initrdstart_msg_intro:
mdadm/initrdstart_msg_errblock:
* mdadm/start_daemon: true
* mdadm/mail_to: root
mdadm/initrdstart_msg_errmd:
mdadm/initrdstart_msg_errconf:
* mdadm/autocheck: true
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Mon, 23 Mar 2009 09:15:05 GMT) (full text, mbox, link).
Acknowledgement sent
to martin f krafft <madduck@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Mon, 23 Mar 2009 09:15:05 GMT) (full text, mbox, link).
Message #10 received at 518834@bugs.debian.org (full text, mbox, reply):
[Message part 1 (text/plain, inline)]
tags 518834 moreinfo
forwarded 518834 neilb@suse.de
thanks
also sprach Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com> [2009.03.08.2318 +0100]:
> I changed the cron job so that it runs every Sunday.
> And I see those mismatches (128 and other powers of 2) every Sunday :(
Are you using swap on RAID? How exactly do you use md1 (which is
where you see the problem, right?)
--
.''`. martin f. krafft <madduck@d.o> Related projects:
: :' : proud Debian developer http://debiansystem.info
`. `'` http://people.debian.org/~madduck http://vcs-pkg.org
`- Debian - when you have better things to do than fixing systems
love your enemies; they'll go crazy
trying to figure out what you're up to.
[digital_signature_gpg.asc (application/pgp-signature, inline)]
Tags added: moreinfo
Request was from martin f krafft <madduck@debian.org>
to control@bugs.debian.org.
(Mon, 23 Mar 2009 09:15:06 GMT) (full text, mbox, link).
Noted your statement that Bug has been forwarded to neilb@suse.de.
Request was from martin f krafft <madduck@debian.org>
to control@bugs.debian.org.
(Mon, 23 Mar 2009 09:15:07 GMT) (full text, mbox, link).
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sat, 28 Mar 2009 19:54:02 GMT) (full text, mbox, link).
Acknowledgement sent
to Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sat, 28 Mar 2009 19:54:02 GMT) (full text, mbox, link).
Message #19 received at 518834@bugs.debian.org (full text, mbox, reply):
On Mon, 23 Mar 2009, martin f krafft wrote:
> tags 518834 moreinfo
> forwarded 518834 neilb@suse.de
> thanks
>
> also sprach Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com> [2009.03.08.2318 +0100]:
> > I changed the cron job so that it runs every Sunday.
> > And I see those mismatches (128 and other powers of 2) every Sunday :(
>
> Are you using swap on RAID?
Yes.
> How exactly do you use md1 (which is where you see the problem, right?)
Right. lvm over raid1 on md1.
Cheers,
--
Cristian
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Wed, 08 Apr 2009 21:57:08 GMT) (full text, mbox, link).
Acknowledgement sent
to Arthur de Jong <adejong@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Wed, 08 Apr 2009 21:57:10 GMT) (full text, mbox, link).
Message #24 received at 518834@bugs.debian.org (full text, mbox, reply):
[Message part 1 (text/plain, inline)]
Subject: mdadm: I'm also seeing this since the upgrade to lenny
Followup-For: Bug #518834
Package: mdadm
Version: 2.6.7.2-1
I'm also seeing this problem (increasing values in
/sys/block/md0/md/mismatch_cnt) on my system, ever since the upgrade from
etch to lenny.
I use mirroring on /dev/sda1 and /dev/sdb1 with LVM on top of that.
Swap is also on there (is that a problem?), as well as root, /var, /home
and almost everything else.
/dev/sda is a Maxtor 6V080E0 which does about 55 MB/sec, while /dev/sdb
is a MAXTOR STM380211AS which does about 65 MB/sec (speeds are hdparm -t
values). I'm doing S.M.A.R.T. monitoring and there is nothing wrong with
the drives as far as I can determine.
This is from my syslog archive (note that before the Mar 1 check I was
running etch):
Nov 2 01:32:59 bobo mdadm: RebuildFinished event detected on md device /dev/md0
(probably some logs missing due to incomplete archive)
Feb 1 01:28:27 bobo mdadm: RebuildFinished event detected on md device /dev/md0
Mar 1 01:21:06 bobo mdadm[6692]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 3968
Apr 5 01:17:25 bobo mdadm[6692]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 7168
Apr 6 11:40:02 bobo mdadm[6692]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 7168
Apr 6 14:57:53 bobo mdadm[6692]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 7296
Apr 6 20:13:50 bobo mdadm[6734]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 7296
Apr 6 21:07:49 bobo mdadm[6734]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 7424
Apr 6 21:33:03 bobo mdadm[6734]: RebuildFinished event detected on md device /dev/md0
Apr 6 23:01:04 bobo mdadm[6734]: RebuildFinished event detected on md device /dev/md0
Apr 7 09:59:47 bobo mdadm[6734]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 128
Apr 7 13:57:41 bobo mdadm[6734]: RebuildFinished event detected on md device /dev/md0
Apr 7 20:14:15 bobo mdadm[11310]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 256
Apr 7 21:37:02 bobo mdadm[11310]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 640
Apr 8 10:19:19 bobo mdadm[11310]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 384
Before Apr 5 this amounts to almost one increment of 128 a day. This is
kind of worrying. I have not experienced (as far as I know) any data loss,
but having two disks grow silently out of sync is not good.
I've done some testing and performed a repair after Apr 6 21:07, but as
you can see the number still keeps going up after that. It also goes down
sometimes, which is probably due to the problem blocks being rewritten.
I have also performed some checks with cmp -lb /dev/sda1 /dev/sdb1 to
confirm that there were actual differences. If this is interesting, I
can provide those files (the last one I did just now was in single-user
mode with only read-only mounted root on the RAID device and the diff
was about 142K).
If you know a way to map these byte offsets to actual logical volumes
inside LVM and maybe even files in the filesystem I could narrow it
down. There always seems to be a difference at the very end of the
device but that is probably the RAID metadata.
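In case it helps, a rough sketch of how one might chase such an offset down
by hand, assuming 0.90 metadata (superblock at the end of the partition, so
an offset into a component equals the same offset into /dev/md0), the "main"
volume group from this report, and ext3's default 4 KiB block size; which LV
covers the offset still has to be read off the extent maps manually:

  # OFFSET: a byte offset printed by cmp -l /dev/sda1 /dev/sdb1
  # (cmp counts bytes from 1, so subtract 1 first)
  OFFSET=123456789                      # placeholder value

  # layout of the LVM data area on the PV and of each LV's extents
  pvs --units b -o pv_name,pe_start     # where the LVM data starts on /dev/md0
  vgdisplay main | grep 'PE Size'       # size of one extent
  lvdisplay --maps /dev/mapper/main-root   # repeat for the other LVs

  # once the offset is known to lie N bytes into a particular ext3 LV,
  # turn it into a filesystem block and ask ext3 which inode owns it,
  # then which path that inode has
  N=987654321                           # placeholder: offset relative to the LV start
  debugfs -R "icheck $((N / 4096))" /dev/mapper/main-root
  debugfs -R "ncheck 12345" /dev/mapper/main-root   # inode number from icheck output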
If there is any more information that is needed I will do my best to
provide it.
Thanks.
-- Package-specific info:
--- mount output
/dev/mapper/main-root on / type ext3 (rw,noatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/mapper/main-tmp on /tmp type ext3 (rw,nosuid,noatime)
/dev/mapper/main-var on /var type ext3 (rw,noatime)
/dev/mapper/main-home on /home type ext3 (rw,nosuid,nodev,noatime)
/dev/mapper/main-srv on /srv type ext3 (rw,nosuid,nodev,noatime)
/dev/mapper/main-squid on /var/spool/squid type reiserfs (rw,noexec,nosuid,nodev,noatime)
/dev/mapper/main-netsniff on /var/log/netsniff type ext3 (rw,noexec,nosuid,nodev,noatime)
/dev/sdc1 on /backup type ext3 (rw,nosuid,nodev,noatime)
/dev/sdc2 on /local type jfs (rw,nosuid,noatime)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
--- mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=4f391048:6f5bd779:2f0b4b8d:cf2aba3f
# This file was auto-generated on Thu, 03 May 2007 23:08:25 +0200
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
--- /proc/mdstat:
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
58596992 blocks [2/2] [UU]
unused devices: <none>
--- /proc/partitions:
major minor #blocks name
8 0 80043264 sda
8 1 58597056 sda1
8 2 21438742 sda2
8 16 78150744 sdb
8 17 58597056 sdb1
8 18 19551105 sdb2
8 32 244198584 sdc
8 33 67464936 sdc1
8 34 23358510 sdc2
9 0 58596992 md0
253 0 2097152 dm-0
253 1 1572864 dm-1
253 2 2097152 dm-2
253 3 524288 dm-3
253 4 31457280 dm-4
253 5 2097152 dm-5
253 6 6291456 dm-6
253 7 2097152 dm-7
--- initrd.img-2.6.26-1-amd64:
42132 blocks
etc/mdadm
etc/mdadm/mdadm.conf
sbin/mdadm
scripts/local-top/mdadm
lib/modules/2.6.26-1-amd64/kernel/drivers/md/dm-log.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid1.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/dm-mirror.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid456.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/dm-snapshot.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/md-mod.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/dm-mod.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid0.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/multipath.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/raid10.ko
lib/modules/2.6.26-1-amd64/kernel/drivers/md/linear.ko
--- /proc/modules:
dm_mirror 20608 0 - Live 0xffffffffa0140000
dm_log 13956 1 dm_mirror, Live 0xffffffffa013b000
dm_snapshot 19400 0 - Live 0xffffffffa0135000
dm_mod 58864 20 dm_mirror,dm_log,dm_snapshot, Live 0xffffffffa0125000
raid1 24192 1 - Live 0xffffffffa011e000
md_mod 80164 2 raid1, Live 0xffffffffa0109000
--- /var/log/syslog:
--- volume detail:
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : 4f391048:6f5bd779:2f0b4b8d:cf2aba3f
Creation Time : Sun Apr 29 20:01:14 2007
Raid Level : raid1
Used Dev Size : 58596992 (55.88 GiB 60.00 GB)
Array Size : 58596992 (55.88 GiB 60.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Wed Apr 8 22:48:04 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : f990d538 - correct
Events : 345934
Number Major Minor RaidDevice State
this 0 8 1 0 active sync /dev/sda1
0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
--
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 4f391048:6f5bd779:2f0b4b8d:cf2aba3f
Creation Time : Sun Apr 29 20:01:14 2007
Raid Level : raid1
Used Dev Size : 58596992 (55.88 GiB 60.00 GB)
Array Size : 58596992 (55.88 GiB 60.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Wed Apr 8 22:48:04 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : f990d54a - correct
Events : 345934
Number Major Minor RaidDevice State
this 1 8 17 1 active sync /dev/sdb1
0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
--
--- /proc/cmdline
auto BOOT_IMAGE=Linux ro root=fd00
--- lilo:
root=/dev/mapper/main-root
-- System Information:
Debian Release: 5.0
APT prefers stable
APT policy: (500, 'stable')
Architecture: i386 (x86_64)
Kernel: Linux 2.6.26-1-amd64 (SMP w/2 CPU cores)
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)
Shell: /bin/sh linked to /bin/bash
Versions of packages mdadm depends on:
ii debconf 1.5.24 Debian configuration management sy
ii libc6 2.7-18 GNU C Library: Shared libraries
ii lsb-base 3.2-20 Linux Standard Base 3.2 init scrip
ii makedev 2.3.1-88 creates device files in /dev
ii udev 0.125-7 /dev/ and hotplug management daemo
Versions of packages mdadm recommends:
ii module-init-tools 3.4-1 tools for managing Linux kernel mo
ii postfix [mail-transport-agent 2.5.5-1.1 High-performance mail transport ag
mdadm suggests no packages.
-- debconf information:
* mdadm/autostart: true
* mdadm/initrdstart: all
* mdadm/initrdstart_notinconf: false
mdadm/initrdstart_msg_errexist:
mdadm/initrdstart_msg_intro:
mdadm/initrdstart_msg_errblock:
* mdadm/start_daemon: true
* mdadm/mail_to: root
mdadm/initrdstart_msg_errmd:
mdadm/initrdstart_msg_errconf:
* mdadm/autocheck: true
--
-- arthur - adejong@debian.org - http://people.debian.org/~adejong --
[signature.asc (application/pgp-signature, inline)]
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Mon, 27 Apr 2009 08:57:05 GMT) (full text, mbox, link).
Acknowledgement sent
to Berni Elbourn <berni@elbournb.fsnet.co.uk>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Mon, 27 Apr 2009 08:57:05 GMT) (full text, mbox, link).
Message #29 received at 518834@bugs.debian.org (full text, mbox, reply):
I see these on my recently upgraded lenny system, which does not have LVM,
on plain old PATA:
Apr 25 10:22:29 queeg mdadm[4704]: RebuildStarted event detected on md device /dev/md0
Apr 25 10:25:29 queeg mdadm[4704]: Rebuild20 event detected on md device /dev/md0
Apr 25 10:28:29 queeg mdadm[4704]: Rebuild40 event detected on md device /dev/md0
Apr 25 10:31:29 queeg mdadm[4704]: Rebuild60 event detected on md device /dev/md0
Apr 25 10:33:29 queeg mdadm[4704]: Rebuild80 event detected on md device /dev/md0
Apr 25 10:36:04 queeg mdadm[4704]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 512
Filesystem Size Used Avail Use% Mounted on
/dev/md0 46G 29G 16G 66% /
tmpfs 998M 16K 998M 1% /lib/init/rw
udev 10M 152K 9.9M 2% /dev
tmpfs 998M 0 998M 0% /dev/shm
/dev/md2 404G 187G 197G 49% /home
swapon -s
Filename Type Size Used Priority
/dev/md1 partition 9767416 0 -1
Linux queeg 2.6.26-1-686 #1 SMP Fri Mar 13 18:08:45 UTC 2009 i686 GNU/Linux
mdadm --version
mdadm - v2.6.7.2 - 14th November 2008
queeg:/var/log# smartctl -l error /dev/hda
smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged
queeg:/var/log# smartctl -l error /dev/hdd
smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged
Can anyone explain what these mismatches are...is my data at risk?
ta
Berni
Forcibly Merged 405919 518834.
Request was from martin f krafft <madduck@madduck.net>
to control@bugs.debian.org.
(Wed, 29 Apr 2009 13:51:04 GMT) (full text, mbox, link).
Severity set to `wishlist' from `normal'
Request was from martin f krafft <madduck@madduck.net>
to control@bugs.debian.org.
(Wed, 29 Apr 2009 13:51:05 GMT) (full text, mbox, link).
Tags set to: confirmed, upstream
Request was from martin f krafft <madduck@madduck.net>
to control@bugs.debian.org.
(Wed, 29 Apr 2009 13:51:06 GMT) (full text, mbox, link).
Message sent on
to Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>:
Bug#518834.
(Wed, 29 Apr 2009 13:51:12 GMT) (full text, mbox, link).
Message #38 received at 518834-submitter@bugs.debian.org (full text, mbox, reply):
[Message part 1 (text/plain, inline)]
forcemerge 405919 518834
retitle 405919 please explain mismatch_cnt so I can sleep better at night
severity 405919 wishlist
tags 405919 = confirmed upstream
thanks
The following explanation from upstream might help you get better
sleep at night. I am also forwarding it so that I can present people
asking on IRC with a link. We'll work on folding this into
documentation for an upcoming release.
I'll start with the last point from Neil's reply before getting to
the details:
> Can anyone explain what these mismatches are...is my data at risk?
Data is not at risk.
\o/
----- Forwarded message from Neil Brown <neilb@suse.de> -----
> I keep getting more and more requests about those mismatches from
> people, and I cannot quite explain it correctly. I think
> I understand what they are -- redundancy that isn't anymore, but
> I don't know how they come to be, and I don't know why md doesn't
> fix them automatically, or try to be louder about them.
>
> They're bad, aren't they? And how does one recover from them?
> I mean, if you have a RAID5 and the bits should be [110] across the
> components, but they end up as, say [100], how do you recover from
> them? [100] is a mismatch, just as much as [010] or [111] or [001]
> would be, no? Isn't the original data then lost since you don't
> actually know which bit flipped?
>
> It would be great if you could help me understand. I'd then take the
> time to write it up for everyone...
For RAID5, unexpected mismatches would be a problem, yes.
But that is not what is being reported. All the reports relate to
raid1, though raid10 could be affected as well.
These mismatches can happen for a number of different reasons, but are
most likely when swap is on the array.
It goes like this:
- We have a page of memory that hasn't been changed in a while.
  The VM notices and decides to write it out to the swap device so
  that the memory can be freed more easily.
- The write is sent to the raid1, which creates two write requests for
  that page, one to each device.
- These write requests sit on the queue for a little while and
  eventually get processed. Only when their turn arrives is the data
  copied (probably by DMA) out of the page and into a buffer in the
  controller.
  These two copies will almost certainly happen at different times.
- While the requests are sitting in the queue, the application that
  owns the page happens to wake up and starts writing to the page.
  It is entirely possible that it will make some change to the page
  between the two copies (DMAs) out to the controller(s). So
  the two pages that are written are different.
- The VM notices that the page has been changed, and so forgets the
  fact that the data has been written to swap. If it ever decides
  that the page is suitable for swap-out again, it will write the
  data out again. In particular it will never try to read in that
  page which is different on the two devices. If it never decides
  to write any page out to that location again, then the location
  stays out-of-sync.
- As no-one will ever read either of the blocks that are different,
  the fact that they are different isn't really important. Except
  that md check/repair will notice and report it.
This can conceivably happen without swap being part of the picture,
if memory-mapped files are used. However, in this case it is less
likely to remain out-of-sync, since dirty file data will be written out
soon, whereas there is no guarantee that dirty anonymous pages will
be written to swap in any particular hurry, or at any particular
location.
md/raid1 could avoid this by copying the page once into a temporary
buffer, then doing all writes from that buffer. That is effectively
what raid5 does, which is why raid5 doesn't suffer from the symptom.
However, that would cause a measurable performance decrease with no
significant value.
A slightly less intrusive 'fix' could be to check each page when an IO
completes. If the page is 'dirty', schedule a 'repair' for just that
page of the array. This might work, but feels like a layering
violation (a block device driver has no business looking at any of the
page flags) and would be extra complexity for minimal gain.
Is that sufficiently clear? Maybe it should go in md.4.
NeilBrown
----- End forwarded message -----
--
martin | http://madduck.net/ | http://two.sentenc.es/
"everyone smiles as you drift past the flower
that grows so incredibly high."
-- the beatles
spamtraps: madduck.bogus@madduck.net
[digital_signature_gpg.asc (application/pgp-signature, inline)]
Information stored
:
Bug#518834; Package mdadm.
(Thu, 30 Apr 2009 06:39:03 GMT) (full text, mbox, link).
Acknowledgement sent
to Arthur de Jong <adejong@debian.org>:
Extra info received and filed, but not forwarded.
(Thu, 30 Apr 2009 06:39:03 GMT) (full text, mbox, link).
Message #43 received at 518834-quiet@bugs.debian.org (full text, mbox, reply):
[Message part 1 (text/plain, inline)]
On Wed, 2009-04-29 at 15:46 +0200, martin f krafft wrote:
> The following explanation from upstream might help you get better
> sleep at night. I am also forwarding it so that I can present people
> asking on IRC with a link. We'll work on folding this into
> documentation for an upcoming release.
>
> I'll start with the last point from Neil's reply before getting to
> the details:
[...]
> It goes like this:
> - We have a page of memory that hasn't been changed in a while.
>   The VM notices and decides to write it out to the swap device so
>   that the memory can be freed more easily.
> - The write is sent to the raid1, which creates two write requests for
>   that page, one to each device.
> - These write requests sit on the queue for a little while and
>   eventually get processed. Only when their turn arrives is the data
>   copied (probably by DMA) out of the page and into a buffer in the
>   controller.
>   These two copies will almost certainly happen at different times.
> - While the requests are sitting in the queue, the application that
>   owns the page happens to wake up and starts writing to the page.
>   It is entirely possible that it will make some change to the page
>   between the two copies (DMAs) out to the controller(s). So
>   the two pages that are written are different.
> - The VM notices that the page has been changed, and so forgets the
>   fact that the data has been written to swap. If it ever decides
>   that the page is suitable for swap-out again, it will write the
>   data out again. In particular it will never try to read in that
>   page which is different on the two devices. If it never decides
>   to write any page out to that location again, then the location
>   stays out-of-sync.
> - As no-one will ever read either of the blocks that are different,
>   the fact that they are different isn't really important. Except
>   that md check/repair will notice and report it.
[...]
> Is that sufficiently clear? Maybe it should go in md.4.
It's clear to me, thanks for this explanation. I think this should
indeed be documented somewhere.
An additional question would be: is there an easy way to perform a
consistency check that excludes these unused blocks? I would like to be
able to tell these mismatches apart from "real" corruptions so the
monthly checks are meaningful again to me.
The ideal check would only verify consistency for blocks in filesystems
and swap areas that are actually allocated, but I imagine that would be
very difficult (e.g. I use LVM on top of my RAID device).
An easier way to map mismatches to filesystems or to files would be very
welcome, but that should probably be implemented in other packages. One
aspect of this that does affect the RAID code is logging more detail as
to which blocks contain mismatches when performing a check.
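A crude stop-gap, sketched below, is at least to make the counter easy to
watch: read mismatch_cnt for every array after the monthly check and mail a
warning only when it is above a threshold. This does not restrict the check
to allocated blocks; it only filters the reporting. The threshold and the
mail command are placeholders.

  #!/bin/sh
  # rough sketch: report only arrays whose last check left mismatches behind
  THRESHOLD=0
  for sysdir in /sys/block/md*/md; do
      [ -e "$sysdir/mismatch_cnt" ] || continue
      count=$(cat "$sysdir/mismatch_cnt")
      array=$(basename "$(dirname "$sysdir")")
      if [ "$count" -gt "$THRESHOLD" ]; then
          echo "$array: mismatch_cnt=$count" \
              | mail -s "md mismatches on $(hostname)" root
      fi
  done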
Thanks.
--
-- arthur - adejong@debian.org - http://people.debian.org/~adejong --
[signature.asc (application/pgp-signature, inline)]
Information stored
:
Bug#518834; Package mdadm.
(Thu, 30 Apr 2009 08:18:08 GMT) (full text, mbox, link).
Acknowledgement sent
to martin f krafft <madduck@debian.org>:
Extra info received and filed, but not forwarded.
(Thu, 30 Apr 2009 08:18:09 GMT) (full text, mbox, link).
Message #48 received at 518834-quiet@bugs.debian.org (full text, mbox, reply):
[Message part 1 (text/plain, inline)]
also sprach Arthur de Jong <adejong@debian.org> [2009.04.30.0835 +0200]:
> An additional question would be: is there an easy way to perform a
> consistency check that excludes these unused blocks? I would like to be
> able to tell these mismatches apart from "real" corruptions so the
> monthly checks are meaningful again to me.
>
> The ideal check would only verify consistency for blocks in filesystems
> and swap areas that are actually allocated, but I imagine that would be
> very difficult (e.g. I use LVM on top of my RAID device).
>
> An easier way to map mismatches to filesystems or to files would be very
> welcome, but that should probably be implemented in other packages. One
> aspect of this that does affect the RAID code is logging more detail as
> to which blocks contain mismatches when performing a check.
Might I suggest you bring up this question on
linux-raid@vger.kernel.org, instead of letting me proxy? Please keep
the bug report on CC though.
--
.''`. martin f. krafft <madduck@d.o> Related projects:
: :' : proud Debian developer http://debiansystem.info
`. `'` http://people.debian.org/~madduck http://vcs-pkg.org
`- Debian - when you have better things to do than fixing systems
"all i know is that i'm being sued for unfair business
practices by micro$oft. hello pot? it's kettle on line two."
-- michael robertson
[digital_signature_gpg.asc (application/pgp-signature, inline)]
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sun, 03 May 2009 08:48:02 GMT) (full text, mbox, link).
Acknowledgement sent
to Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sun, 03 May 2009 08:48:02 GMT) (full text, mbox, link).
Message #53 received at 518834@bugs.debian.org (full text, mbox, reply):
On Wed, 29 Apr 2009, martin f krafft wrote:
> The following explanation from upstream might help you get better
> sleep at night. I am also forwarding it so that I can present people
> asking on IRC with a link. We'll work on folding this into
> documentation for an upcoming release.
>
> I'll start with the last point from Neil's reply before getting to
> the details:
>
> > Can anyone explain what these mismatches are...is my data at risk?
>
> Data is not at risk.
Alright. I guess what you're saying is that mismatch reports may be
safely ignored, right?
Still, there are a few spots in Neil's reply that are confusing me a
bit, like this:
> For RAID5, unexpected mismatches would be a problem, yes.
> But that is not what is being reported. All the reports relate to
^^^^^^^^^^^^^^^^^^^^^^^^^^
> raid1, though raid10 could be affected as well.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Affected? In the direction of "unexpected mismatches would be a problem"
like for raid5?
> These mismatches can happen for a number of different reasons,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> but are most likely when swap is on the array.
Got this with the swap on the md array.
> This can conceivably happen without swap being part of the picture,
> if memory-mapped files are used.
Yes. I see such an example right now. 128 mismatches are reported on
md0 (raid1), on a box partitioned like this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg0-root 4128448 239588 3679148 7% /
tmpfs 2041940 0 2041940 0% /lib/init/rw
udev 10240 68 10172 1% /dev
tmpfs 2041940 4 2041936 1% /dev/shm
/dev/md0 505508 26613 452796 6% /boot
/dev/mapper/vg0-home 103212320 32262236 65707204 33% /home
/dev/mapper/vg0-tmp 2064208 78352 1881000 4% /tmp
/dev/mapper/vg0-usr 10321208 2560548 7236372 27% /usr
/dev/mapper/vg0-local  10321208    666104   9130816   7% /usr/local
/dev/mapper/vg0-src 10321208 235028 9561892 3% /usr/src
/dev/mapper/vg0-var 10321208 434184 9362736 5% /var
Filename Type Size Used Priority
/dev/mapper/vg0-swap partition 4194296 72 -1
Please note, md0 here is a partition where there are no other files than
these:
/boot/System.map-2.6.26-1-686-bigmem
/boot/config-2.6.26-1-686-bigmem
/boot/grub/default
/boot/grub/device.map
/boot/grub/e2fs_stage1_5
/boot/grub/fat_stage1_5
/boot/grub/jfs_stage1_5
/boot/grub/menu.lst
/boot/grub/menu.lst.pre_fcopy
/boot/grub/menu.lst~
/boot/grub/minix_stage1_5
/boot/grub/reiserfs_stage1_5
/boot/grub/stage1
/boot/grub/stage2
/boot/grub/xfs_stage1_5
/boot/initrd.img-2.6.26-1-686-bigmem
/boot/initrd.img-2.6.26-1-686-bigmem.bak
/boot/memtest86+.bin
/boot/vmlinuz-2.6.26-1-686-bigmem
AFAICU, none of these files should need to be mmapped at this particular
point in time:
# uptime
10:00:07 up 24 days, 23:07, 8 users, load average: 0.15, 0.03, 0.01
I'm missing something, I can see that :( But what?
Is it just because I'm not a "believer"? Sorry.
The problem is still nagging me. OK, I'll repair that, then wait to see
if it comes back. My picture is that mismatches should be even less
probable in the above context. What do you think?
> NeilBrown
Cheers,
--
Cristian
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sun, 10 May 2009 09:33:02 GMT) (full text, mbox, link).
Acknowledgement sent
to Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sun, 10 May 2009 09:33:03 GMT) (full text, mbox, link).
Message #58 received at 518834@bugs.debian.org (full text, mbox, reply):
And here is another example:
# cat /sys/block/md2/md/mismatch_cnt
128
on another box, partitioned in a different manner, with swap outside the
arrays:
# cat /proc/mdstat
Personalities : [raid1]
md8 : active raid1 sda12[0] sdb12[1]
31085632 blocks [2/2] [UU]
md7 : active raid1 sda11[0] sdb11[1]
16008640 blocks [2/2] [UU]
md6 : active raid1 sda10[0] sdb10[1]
16008640 blocks [2/2] [UU]
md5 : active raid1 sda9[0] sdb9[1]
2008000 blocks [2/2] [UU]
md4 : active raid1 sda8[0] sdb8[1]
4008064 blocks [2/2] [UU]
md3 : active raid1 sda7[0] sdb7[1]
4008064 blocks [2/2] [UU]
md2 : active raid1 sda6[0] sdb6[1]
2008000 blocks [2/2] [UU]
md1 : active raid1 sda5[0] sdb5[1]
1003904 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
1003904 blocks [2/2] [UU]
unused devices: <none>
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md0 1003868 671780 332088 67% /
tmpfs 1557604 0 1557604 0% /lib/init/rw
udev 10240 128 10112 2% /dev
tmpfs 65536 4 65532 1% /dev/shm
/dev/md6 16008144 14961904 1046240 94% /backup
/dev/md8 31084676 26304040 4780636 85% /data
/dev/md7 16008144 9332344 6675800 59% /home
/dev/md5 2007932 546084 1461848 28% /opt
/dev/md1 1003868 33504 970364 4% /tmp
/dev/md3 4007936 3361124 646812 84% /usr
/dev/md4 4007936 3416788 591148 86% /usr/local
/dev/md2 2007932 922100 1085832 46% /var
# swapon -s
Filename Type Size Used Priority
/dev/sda2 partition 1004052 128 -1
/dev/sdb2 partition 1004052 0 -2
Cheers,
--
Cristian
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sat, 01 Aug 2009 23:54:02 GMT) (full text, mbox, link).
Acknowledgement sent
to DXPUBLICA@telefonica.net:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sat, 01 Aug 2009 23:54:03 GMT) (full text, mbox, link).
Message #63 received at 518834@bugs.debian.org (full text, mbox, reply):
Hi,
I have an NSLU2 (armel). I have md2 mounted as /var/log, and I get this:
Aug 2 01:17:01 caixa /USR/SBIN/CRON[3641]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 2 01:17:01 caixa CRON[3640]: pam_unix(cron:session): session closed for user root
Aug 2 01:17:04 caixa mdadm[1956]: Rebuild80 event detected on md device /dev/md0
Aug 2 01:21:46 caixa kernel: [43247416.220000] md: md0: data-check done.
Aug 2 01:21:46 caixa kernel: [43247416.490000] md: delaying data-check of md2 until md1 has finished (they share one or more physical units)
Aug 2 01:21:46 caixa kernel: [43247416.500000] md: data-check of RAID array md1
Aug 2 01:21:46 caixa kernel: [43247416.510000] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Aug 2 01:21:46 caixa kernel: [43247416.520000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
Aug 2 01:21:46 caixa kernel: [43247416.530000] md: using 128k window, over a total of 3903680 blocks.
Aug 2 01:21:46 caixa kernel: [43247416.870000] RAID1 conf printout:
Aug 2 01:21:46 caixa kernel: [43247416.880000] --- wd:2 rd:2
Aug 2 01:21:46 caixa kernel: [43247416.880000] disk 0, wo:0, o:1, dev:sda1
Aug 2 01:21:46 caixa kernel: [43247416.890000] disk 1, wo:0, o:1, dev:sdb1
Aug 2 01:21:46 caixa mdadm[1956]: RebuildFinished event detected on md device /dev/md0
Aug 2 01:21:46 caixa mdadm[1956]: RebuildStarted event detected on md device /dev/md1
Aug 2 01:23:46 caixa mdadm[1956]: Rebuild20 event detected on md device /dev/md1
Aug 2 01:25:46 caixa mdadm[1956]: Rebuild40 event detected on md device /dev/md1
Aug 2 01:27:46 caixa mdadm[1956]: Rebuild60 event detected on md device /dev/md1
Aug 2 01:29:46 caixa mdadm[1956]: Rebuild80 event detected on md device /dev/md1
Aug 2 01:31:03 caixa kernel: [43247974.680000] md: md1: data-check done.
Aug 2 01:31:03 caixa kernel: [43247974.930000] md: data-check of RAID array md2
Aug 2 01:31:03 caixa kernel: [43247974.930000] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Aug 2 01:31:03 caixa kernel: [43247974.940000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
Aug 2 01:31:03 caixa kernel: [43247974.950000] md: using 128k window, over a total of 1204800 blocks.
Aug 2 01:31:03 caixa kernel: [43247974.990000] RAID1 conf printout:
Aug 2 01:31:03 caixa kernel: [43247974.990000] --- wd:2 rd:2
Aug 2 01:31:03 caixa kernel: [43247975.000000] disk 0, wo:0, o:1, dev:sda3
Aug 2 01:31:03 caixa kernel: [43247975.000000] disk 1, wo:0, o:1, dev:sdb3
Aug 2 01:31:03 caixa mdadm[1956]: RebuildFinished event detected on md device /dev/md1
Aug 2 01:31:03 caixa mdadm[1956]: RebuildStarted event detected on md device /dev/md2
Aug 2 01:32:03 caixa mdadm[1956]: Rebuild20 event detected on md device /dev/md2
Aug 2 01:33:03 caixa mdadm[1956]: Rebuild60 event detected on md device /dev/md2
Aug 2 01:34:03 caixa mdadm[1956]: Rebuild80 event detected on md device /dev/md2
Aug 2 01:34:07 caixa kernel: [43248159.200000] md: md2: data-check done.
Aug 2 01:34:08 caixa kernel: [43248159.480000] RAID1 conf printout:
Aug 2 01:34:08 caixa kernel: [43248159.480000] --- wd:2 rd:2
Aug 2 01:34:08 caixa kernel: [43248159.490000] disk 0, wo:0, o:1, dev:sda4
Aug 2 01:34:08 caixa kernel: [43248159.490000] disk 1, wo:0, o:1, dev:sdb4
Aug 2 01:34:08 caixa mdadm[1956]: RebuildFinished event detected on md device /dev/md2, component device mismatches found: 128
Aug 2 01:37:44 caixa sudo: xan : TTY=pts/0 ; PWD=/home/xan ; USER=root ; COMMAND=/bin/cat /var/log/total.log
xan@caixa:~$ mount
/dev/md0 on / type ext3 (rw,noatime,nodiratime,commit=7,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md1 on /chrut type ext3 (rw,noatime,nodiratime,commit=7)
/dev/md2 on /var/log type ext3 (rw,noexec,nosuid,nodev,noatime,nodiratime,commit=7)
xan@caixa:~$
So this happens not only with swap. I have RAID1.
Regards,
Xan.
Added tag(s) fixed-upstream.
Request was from martin f. krafft <madduck@debian.org>
to control@bugs.debian.org.
(Thu, 28 Jan 2010 21:33:02 GMT) (full text, mbox, link).
Marked as found in versions mdadm/3.1.4-1+8efb9d1 and reopened.
Request was from Andreas Beckmann <anbe@debian.org>
to control@bugs.debian.org.
(Sat, 02 Nov 2013 15:57:12 GMT) (full text, mbox, link).
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sat, 02 Nov 2013 16:57:09 GMT) (full text, mbox, link).
Acknowledgement sent
to Michael Tokarev <mjt@tls.msk.ru>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sat, 02 Nov 2013 16:57:09 GMT) (full text, mbox, link).
Message #72 received at 518834@bugs.debian.org (full text, mbox, reply):
Control: tag -1 + wontfix - fixed-upstream
Since this is a kernel issue, and the only issue per se is the mismatch_cnt
itself, not data safety, and since there's nothing mdadm can do here, and
since I don't see how this issue has been "fixed upstream", marking this
(newly reopened) bug as "wontfix".
The version graph looks quite fun at this time.
Initially the issue was reported for 2.5.6-7.
In 3.1.2-1 it was reported to be fixed.
And now in 3.1.4-1 it exists again.
So the _kernel_ issue has been fixed in mdadm between
versions 3.1.2 and 3.1.4, and earlier and later versions
are buggy. So be it.
(Cc'ing anbe@ because apparently the tagging was done at his request)
Thanks,
/mjt
Added tag(s) wontfix.
Request was from Michael Tokarev <mjt@tls.msk.ru>
to 518834-submit@bugs.debian.org.
(Sat, 02 Nov 2013 16:57:09 GMT) (full text, mbox, link).
Removed tag(s) fixed-upstream.
Request was from Michael Tokarev <mjt@tls.msk.ru>
to 518834-submit@bugs.debian.org.
(Sat, 02 Nov 2013 16:57:10 GMT) (full text, mbox, link).
Information forwarded
to debian-bugs-dist@lists.debian.org, Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>:
Bug#518834; Package mdadm.
(Sat, 02 Nov 2013 18:12:05 GMT) (full text, mbox, link).
Acknowledgement sent
to Andreas Beckmann <anbe@debian.org>:
Extra info received and forwarded to list. Copy sent to Debian mdadm maintainers <pkg-mdadm-devel@lists.alioth.debian.org>.
(Sat, 02 Nov 2013 18:12:05 GMT) (full text, mbox, link).
Message #81 received at 518834@bugs.debian.org (full text, mbox, reply):
Control: notfound -1 3.1.4-1+8efb9d1
Control: close -1
On 2013-11-02 17:53, Michael Tokarev wrote:
> Since this is a kernel issue, and the only issue per se is the mismatch_cnt
> itself, not data safety, and since there's nothing mdadm can do here, and
> since I don't see how this issue has been "fixed upstream", marking this
> (newly reopened) bug as "wontfix".
Thanks for the clarification :-) These two merged bugs were too unclear
for me to properly fix them up for archival.
> The version graph looks quite fun at this time.
> Initially the issue was reported for 2.5.6-7.
> In 3.1.2-1 it was reported to be fixed.
> And now in 3.1.4-1 it exists again.
> So the _kernel_ issue has been fixed in mdadm between
> versions 3.1.2 and 3.1.4, and earlier and later versions
> are buggy. So be it.
There was also http://bugs.debian.org/405919#78 that looked like it was
missing a reopen ...
(IIRC the two merged bugs even had disagreeing sets of found+fixed versions)
OK, closing and fixing up the history so that this ancient bug can
finally be archived in 4 weeks :-)
Andreas
No longer marked as found in versions mdadm/3.1.4-1+8efb9d1.
Request was from Andreas Beckmann <anbe@debian.org>
to 518834-submit@bugs.debian.org.
(Sat, 02 Nov 2013 18:12:05 GMT) (full text, mbox, link).
Marked Bug as done
Request was from Andreas Beckmann <anbe@debian.org>
to 518834-submit@bugs.debian.org.
(Sat, 02 Nov 2013 18:12:06 GMT) (full text, mbox, link).
Notification sent
to Cristian Ionescu-Idbohrn <cristian.ionescu-idbohrn@axis.com>:
Bug acknowledged by developer.
(Sat, 02 Nov 2013 18:12:07 GMT) (full text, mbox, link).
Bug archived.
Request was from Debbugs Internal Request <owner@bugs.debian.org>
to internal_control@bugs.debian.org.
(Sun, 01 Dec 2013 07:30:09 GMT) (full text, mbox, link).