Any linux bods out there??
Comments
From what has been posted so far, the partitions you listed with fdisk earlier appear to be part of a RAID configuration (those are the md# devices), and the sda4 you suggested holds your data is likely to be part of md3, which is mounted as /var. 'mdadm --detail /dev/md3' would confirm that, assuming the command works, and would also tell you the status of the RAID array.
You could try looking under /var to see whether any of the directories look related to what was shared, in case you have simply lost the share points but the data has remained. If the data is still there, you can pull it off using sftp or scp.
Failing that, I agree with what others here have said: you will need to remove the drives and mount them on a system with software that can recover data from a Linux RAID configuration.
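A rough sketch of those steps over the same PuTTY session, assuming the share data does turn up somewhere under /var (the directory name and the destination machine's address below are placeholders, not values from this thread):
mdadm --detail /dev/md3    # confirm the state of the array mounted as /var
ls /var    # look for directories matching the old share names
scp -r /var/name_of_share user@192.168.0.10:/some/backup/dir    # copy a candidate directory to another machine over SSH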
Hi Mondez, here's the output from mdadm. Is this what you'd expect to see?
Version : 00.90.03
Creation Time : Wed Jul 15 06:23:56 2009
Raid Level : raid1
Array Size : 987904 (964.91 MiB 1011.61 MB)
Device Size : 987904 (964.91 MiB 1011.61 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Thu Dec 8 23:15:04 2011
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : d8fcca71:7bb5075e:f91bee64:4cc42c0a
Events : 0.5862
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
Looks like the RAID was running as a mirrored set; I assume there are (or were) two drives in the system. It says the state is clean but degraded, which means the array is working but has lost its redundancy: normally each disk keeps a copy of what is on the other, so the array can keep going if one fails. In theory your data should still be available if all that happened was one drive failing. What happens if you run 'fdisk -l /dev/sdb'? It should list the partitions on the second disk and look similar to plain 'fdisk -l'. If it fails, try 'dmesg | grep sdb' to see if there is any mention of problems with the second disk.
Edit:
Sorry, just noticed: md3 is made up of /dev/sda3 rather than sda4 as I expected, and it is only 1GB in size, so it is definitely not the data partition. Try running mdadm again, but on /dev/md4, just to see whether it exists and, if it does, which partitions make it up. Could you also run 'mount' on its own and tell me what its output is? The sdb results are still useful to know, but not really relevant to seeing whether the data is still on there right now.
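For reference, the checks being asked for here in one place (device names as already discussed in this thread):
fdisk -l /dev/sdb    # list partitions on the possible second disk
dmesg | grep sdb    # look for kernel messages about problems with sdb
mdadm --detail /dev/md4    # see whether the array expected to hold the data exists
mount    # show everything that is currently mounted, and where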
Mondez
I've tried the sdb stuff, but got no output from either command. As far as I'm aware, though, the NAS unit only has a single 1TB drive.
As for the others, mount gives me:
/dev/root on / type ext3 (rw,noatime,data=ordered)
proc on /proc type proc (rw)
sys on /sys type sysfs (rw)
/dev/pts on /dev/pts type devpts (rw)
securityfs on /sys/kernel/security type securityfs (rw)
/dev/md3 on /var type ext3 (rw,noatime,data=ordered)
/dev/ram0 on /mnt/ram type tmpfs (rw)
and 'mdadm /dev/md4':
/dev/md4: is an md device which is not active
Have you physically opened the NAS?
The WD NAS can come in 1- and 2-disk formats.
It sounds like the drive that holds md4 has failed (or been marked as failed by the OS), so you might need to open it up, remove the drive and see what happens.
Laters
Sol
"Have you found the secrets of the universe? Asked Zebade "I'm sure I left them here somewhere"0 -
As yet I've not opened it up; I'm trying to hold that back as a last resort, but I'm quite willing to give it a go at some point.
Looks like it either doesn't have a second drive, or the second drive has failed and isn't even seen by the OS any more. When you initially configured the NAS, what capacity did it format to, 1TB or 2TB, and do you remember what RAID type it was configured as?
The output from mdadm suggests that it had 2 drives in RAID1 when the array was created, so it seems likely that one has failed.
If you do remove the remaining drive and put it in a PC (possibly running Linux temporarily via a LiveCD), you will probably have to overcome the problem that the drive is marked as a RAID device (in the MBR, I ~think~), so you may have trouble mounting it.
I had a similar problem (fortunately just with test data when I was assessing whether mdadm was suitable for me to use), and never found a way to unset this flag without destroying the data on the drive. I'm sure there must be a way, though...
[Edit - just noticed that you can "Manage Flags" for each partition with GPARTED, and that one of those flags is "raid", so it seems to be a partition attribute in the partition table. Not sure if the solution is as simple as unsetting that flag.]
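One approach that is often suggested for this situation (only a sketch; it assumes the data partition shows up as something like /dev/sdb4 once the drive is in the PC, so adjust the device name to whatever the LiveCD actually assigns) is to assemble the degraded mirror and mount it read-only, rather than mounting the raw partition:
mdadm --examine /dev/sdb4    # check the partition still carries an md RAID superblock
mdadm --assemble --run /dev/md0 /dev/sdb4    # start the RAID1 array with only one member present
mount -o ro /dev/md0 /mnt    # mount the array read-only and copy the data off
Going via the md device sidesteps the raid flag in the partition table, since it is the assembled array rather than the bare partition that gets mounted.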
What OS do you have on your PC, Windows or Linux?
If Windows, you should see the NAS on your network.
If Linux, you'll need to mount it using either CIFS/SMB or NFS.
I think you're getting the 'drive busy' message because it's already mounted as far as the subset of Linux on the NAS is concerned.
mount.cifs //ip.address.of.mybook/public /mnt -o username=admin,password=admin_passwd_on_mybook
or
mount -t cifs -o username=admin,password=... //ip.address.of.mybook/public /mnt
or
mount -t nfs ip.address.of.mybook:/nfs/public /mnt
One by one the penguins are slowly stealing my sanity.
The OP is connecting directly into the NAS via PuTTY, so he should be able to mount the partition, but is getting an error.
The problem is either:
If it's a 2-drive NAS, then a drive has failed.
If it's a single-drive NAS, then the partition table has been damaged or the drive itself is faulty.
Laters
Sol
"Have you found the secrets of the universe? Asked Zebade "I'm sure I left them here somewhere"0