The need for a universal file system format

After the past 24 hours, I’ve come to the conclusion that there needs to be a universal file system format with the same level of support on every operating system.

My main “server” system here at the house is a Dell PowerEdge SC420 with a 2.5GHz Celeron-D CPU, 1G RAM, 160G SATA HD, and GigE. Since I got the machine (for $250 during one of Dell’s CRAAAZY DEALS last year), I’ve been running Fedora Core 4 on it with no problems. On top of FC4, I use rsnapshot to do nightly backups of my colocated server and some client machines.

I decided a few days ago that it was time to ditch FC4 and put Solaris 10 on the machine now that all the hardware is fully supported. First, however, I needed to get my rsnapshot repository off the machine. That was accomplished with a 250G SATA hard drive and a SATA to USB2 adapter with power supply. Now I had my critical data on an ext2-formatted hard drive.

I proceeded to reinstall the Dell with the latest Solaris Express release. I then installed the ext2fs drivers for Solaris 10 and attempted to rsync the data back off the hard drive. Five minutes in, the system wedged hard and required a reboot.

Okay, so that’s not going to work. I carry the HD and adapter back into the other room, plug it into the Mac, and install ext2fsx. When I try to mount the drive, it complains about a bad superblock. So, a couple of hours of forced fsck_ext2 runs later, I can mount the drive.

When I try to rsync from the Mac over the network to the Dell, the Mac gripes about filenames on the ext2 partition. Crap. That’s not going to work either, and I don’t have another Linux box to mount the HD on.

It was then that I realized I didn’t *need* another permanent Linux installation. I downloaded Knoppix, booted it on my AMD64 Windows gaming box, then plugged the USB/SATA HD in. It was detected and mounted right up, and has been happily rsync-ing everything back to the Dell/Solaris system for the past couple of hours.
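One gotcha worth noting for anyone repeating this: rsync does not preserve hard links unless you pass -H, so without it a snapshot repository gets copied out as full, independent trees and balloons in size. A minimal local sketch of the difference (temp paths, not my actual mount points):

```shell
# rsync -a alone breaks hard links; -aH carries them across (local demo).
src=$(mktemp -d); dst=$(mktemp -d)
echo "snapshot data" > "$src/a"
ln "$src/a" "$src/b"             # a and b now share one inode

rsync -aH "$src/" "$dst/"        # -H = --hard-links

stat -c '%h' "$dst/a"            # link count 2: the link survived the copy
```

Drop the -H and the copy still "works", but $dst/a and $dst/b become two separate files taking twice the space.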

I know that in my situation, having a couple of big disks sitting on an NFS server would have been the easiest way to do things. Others might have suggested FAT32, but my rsnapshot backup repository makes heavy use of UNIX hard links and would not be “portable” to FAT32.
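For the curious, rsnapshot’s space savings come from exactly this: an unchanged file in the next snapshot is just another directory entry pointing at the same inode, which FAT32 has no way to represent. A quick sketch (file names are made up, not rsnapshot’s real layout):

```shell
# Two names, one inode -- the trick rsnapshot's rotated snapshots rely on.
tmp=$(mktemp -d)
echo "unchanged file" > "$tmp/daily.0-file"
ln "$tmp/daily.0-file" "$tmp/daily.1-file"     # "copy" into the next snapshot for free

stat -c '%h' "$tmp/daily.0-file"               # link count is now 2
stat -c '%i' "$tmp/daily.0-file" "$tmp/daily.1-file"   # same inode number twice
```

Delete either name and the data survives under the other; the blocks are only freed when the last link goes away.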

This all demonstrates the need for a truly portable filesystem that can be easily transported between operating systems without resorting to ugly hacks. I’m hoping that ZFS might eventually be the solution, if Sun ports it to Linux as rumored, and maybe even to OS X.

I’m wondering if it would be usable on single disks, since everything I’ve seen emphasizes its mirroring/redundancy and multi-disk pool handling over its endianness independence and portability between CPU architectures.

4 thoughts on “The need for a universal file system format”

  1. The only problem with UDF is that I’ve got more than 4.7/9G of data to move (57G, to be exact), unless there’s some way to format hard drives as UDF.

  2. There will not be a universal filesystem, because the necessary features don’t exist everywhere. The best we can hope for is a lowest common denominator that we can use to at least transfer data, but we can’t use it for anything metadata-ish (like hard links) simply because the metadata varies so much. A universal fs will *never* work properly because there are concepts in (e.g.) the Unix world which don’t exist in the Windows world, such as hard links, and concepts in the Windows world which don’t exist in the Unix world, such as multiple streams.

    That said, if you’re using rsnapshot then, because of the tools it wraps, you’re basically restricting yourself to the Unix world anyway, and so ZFS or NFS would indeed do the trick.

    The problems you had with ext2 sound to me like they might be caused by dodgy ext2 drivers (and perhaps dodgy USB/SATA drivers too), as well as the usual character set brokenness that we get all over the place and which is more an OS configuration problem than a filesystem problem. The filesystem just stores the bits it’s told to store.

  3. Somewhat related – I have the same problem with differing filesystem formats _locally_, but all of my most important data is offloaded to an offsite filesystem at

    I mount it locally with sshfs when I want to, otherwise I just do backups to it with rsync or manipulate individual files with scp/sftp. You should check it out – it’s nice to have one filesystem that never changes and is always safe and available.
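For anyone who wants to try the same setup, an sshfs mount is a one-liner on top of FUSE. The hostname and paths below are placeholders, not the commenter’s actual server:

```shell
# Mount a remote directory over plain SSH via sshfs/FUSE.
# "backuphost" and the paths are hypothetical examples.
mkdir -p ~/mnt/offsite
sshfs backuphost:/home/me ~/mnt/offsite

# ...work with the files as if they were local...

fusermount -u ~/mnt/offsite    # unmount (Linux); "umount ~/mnt/offsite" elsewhere
```

Anything that speaks SSH on the server side works; no special filesystem support is needed on the remote end, which is the whole appeal.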

Comments are closed.