HFS+ fails to read a filesystem (possibly because it's journaled?)
Project: | JNode FS |
Component: | Code |
Category: | bug report |
Priority: | critical |
Assigned: | galatnm |
Status: | active |
I created a test HFS+ filesystem using Disk Utility and put in a bunch of files with interesting attributes to test with, expecting JNode to read the data and perhaps some of the metadata.
It logs a warning about going read-only because the device is journaled, which is fine, but then I get an error when opening the root entry:
java.io.IOException: devOffset < 0
at org.jnode.driver.block.BlockDeviceAPIHelper.checkBounds(BlockDeviceAPIHelper.java:39)
at org.jnode.driver.block.BlockDeviceAPIHelper.checkBounds(BlockDeviceAPIHelper.java:55)
at com.acme.data.diskimage.jnode.ByteRegionBlockDevice.read(ByteRegionBlockDevice.java:86)
at org.jnode.fs.hfsplus.catalog.Catalog.getRecord(Catalog.java:163)
at org.jnode.fs.hfsplus.HfsPlusFileSystem.createRootEntry(HfsPlusFileSystem.java:105)
at org.jnode.fs.hfsplus.HfsPlusFileSystem.createRootEntry(HfsPlusFileSystem.java:38)
at org.jnode.fs.spi.AbstractFileSystem.getRootEntry(AbstractFileSystem.java:103)
Attachment | Size |
---|---|
40MB Mac Disk.zip | 55.91 KB |
#1
Here's another error from a different disk image:
java.lang.ClassCastException: [Lorg.jnode.fs.hfsplus.tree.NodeRecord; cannot be cast to [Lorg.jnode.fs.hfsplus.tree.LeafRecord;
at org.jnode.fs.hfsplus.catalog.Catalog.getRecords(Catalog.java:215)
at org.jnode.fs.hfsplus.catalog.Catalog.getRecords(Catalog.java:184)
at org.jnode.fs.hfsplus.HFSPlusDirectory.readEntries(HFSPlusDirectory.java:116)
at org.jnode.fs.spi.AbstractFSDirectory.checkEntriesLoaded(AbstractFSDirectory.java:215)
at org.jnode.fs.spi.AbstractFSDirectory.iterator(AbstractFSDirectory.java:93)
Here it seems like the code is basically working, but something is trying to cast a NodeRecord[] to a LeafRecord[] (which isn't going to work any better than trying to cast an Object[] to a String[]...).
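For reference, this failure mode is easy to reproduce in isolation. The NodeRecord/LeafRecord classes below are placeholders, not the actual JNode tree types: an array cast checks the array's runtime component type, not the types of its elements.

```java
// Placeholder classes standing in for the JNode tree record types.
class NodeRecord {}
class LeafRecord extends NodeRecord {}

public class ArrayCastDemo {
    public static void main(String[] args) {
        // Runtime type is NodeRecord[], so the downcast throws at runtime,
        // even though the only element is actually a LeafRecord.
        NodeRecord[] records = new NodeRecord[] { new LeafRecord() };
        try {
            LeafRecord[] leaves = (LeafRecord[]) records;
            System.out.println(leaves.length);
        } catch (ClassCastException e) {
            System.out.println("cast failed: " + e.getMessage());
        }

        // Runtime type is LeafRecord[], so the same-looking cast succeeds.
        NodeRecord[] actualLeaves = new LeafRecord[] { new LeafRecord() };
        LeafRecord[] leaves2 = (LeafRecord[]) actualLeaves;
        System.out.println("cast succeeded: " + leaves2.length); // prints "cast succeeded: 1"
    }
}
```

So the fix is either to allocate the array as LeafRecord[] up front, or to copy the leaf records into a new LeafRecord[] instead of casting.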
#2
The read-only flag is normal, because the writing methods for HFS+ are not yet implemented, but I don't understand the second one. I will take a look and test with your disk image.
Fabien L.
#3
It looks like the class cast exception is fixed in r5845, but now there are no files or directories found in the file system. I've attached the sample FS to help you test the problem.
The other "java.io.IOException: devOffset < 0" issue is still happening in the first disk image.
Thanks,
Luke
#4
I think I've found the cause of the problem here. The CatalogKey's keyLength doesn't seem to include the two bytes for the size field in the overall size of the key. Adding a +2 there seems to have fixed the problem for me.
I've attached my fix and some additional tests for HfsPlusFileSystemTest. Sorry it's not a patch; I mangled my checkout a bit...
Cheers,
Luke
#5
A big thanks to you, Luke ^_^. I have included your fix and tested it; it works. The fix is committed, but I would like to rewrite the tests a little before committing them.
Fabien L.
#6
I wrote new unit tests for CatalogKey and HfsUnicodeString and discovered that the problem is not the length of the catalog key but the length of the unicode string. Basically, the unicode string stores its length in its first two bytes, but the getLength method returns the length of the string itself. The problem should be fixed now, but I would like to check the behavior for empty strings.
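The two lengths can be sketched like this — a minimal standalone sketch assuming the HFSUniStr255-style layout from the published HFS+ specification (a big-endian two-byte character count followed by UTF-16BE code units); the method names here are illustrative, not JNode's actual HfsUnicodeString API:

```java
import java.nio.charset.StandardCharsets;

public class UnicodeStringDemo {
    /** Character count stored in the first two bytes (big-endian). */
    static int charCount(byte[] data, int offset) {
        return ((data[offset] & 0xff) << 8) | (data[offset + 1] & 0xff);
    }

    /** Total size of the structure in bytes: 2 length bytes + 2 per character. */
    static int structSize(byte[] data, int offset) {
        return 2 + charCount(data, offset) * 2;
    }

    /** Decode the UTF-16BE characters that follow the length field. */
    static String decode(byte[] data, int offset) {
        int n = charCount(data, offset);
        return new String(data, offset + 2, n * 2, StandardCharsets.UTF_16BE);
    }

    public static void main(String[] args) {
        // "abc" in this layout: character count 3, then UTF-16BE bytes.
        byte[] data = { 0, 3, 0, 'a', 0, 'b', 0, 'c' };
        System.out.println(charCount(data, 0));  // 3  (what getLength-style code returns)
        System.out.println(structSize(data, 0)); // 8  (bytes the structure occupies on disk)
        System.out.println(decode(data, 0));     // abc
    }
}
```

The bug described above is exactly the gap between those two numbers: on-disk offsets need the structure size (2 + 2n bytes), not the character count.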
Fabien L.
#7
I've put my code up on github. I think a change is still required, see my comments here and let me know what you think:
https://github.com/tmyroadctfig/jnode/commit/44c9798a6c0e6db1a7dcbca8925...
With those changes all my local tests pass but I'm still in the process of setting up to test the code more thoroughly. Also I've added initial GPT partition support on that github repo.
Cheers,
Luke
#8
Actually, I think the confusion comes from the fact that there are two pieces of length information.
The structure of the catalog key starts with a two-byte key length field, followed by the parent ID and the node name; the minimum length for a key is 6 bytes. The name is itself a length-prefixed structure: its first two bytes contain the character count, followed by the UTF-16 characters. So, the getLength() method of HFSUnicodeString returns the real length of the string (x/2 characters) and not the length of the structure (2+x bytes).
That is the explanation for this code: this.keyLength = MINIMUM_KEY_LENGTH + (name.getLength() * 2) + 2;
For the CatalogKey constructor that takes a byte array, the first two bytes should already contain the correct length of the key (in the unit test: 6 + 2 + 16 = 24).
So I think the '+2' is not necessary in this code: keyLength = BigEndian.getInt16(ck, 0);
It's possible that I am missing something, but if that's the case I don't see what -_-'.
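The arithmetic above can be written out as a tiny worked example. MINIMUM_KEY_LENGTH = 6 is taken from the "6 + 2 + 16 = 24" unit-test figures, and the helper name is hypothetical, not JNode's actual CatalogKey code:

```java
public class CatalogKeyLengthDemo {
    // Assumption from the comment above: the fixed part of a catalog key
    // is 6 bytes before the name's character data.
    static final int MINIMUM_KEY_LENGTH = 6;

    /**
     * Key length as computed by the CatalogKey constructor under discussion:
     * fixed minimum + 2 bytes for the string's own length field + 2 bytes
     * per character of the name.
     */
    static int constructedKeyLength(int nameCharCount) {
        return MINIMUM_KEY_LENGTH + (nameCharCount * 2) + 2;
    }

    public static void main(String[] args) {
        // An 8-character name reproduces the unit test's value: 6 + 2 + 16 = 24.
        System.out.println(constructedKeyLength(8)); // 24
    }
}
```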
#9
Ah, yes. You're correct about the 'this.keyLength = MINIMUM_KEY_LENGTH + (name.getLength() * 2) + 2;'
But when HfsPlusDirectory reads the data from HfsPlusEntry.getData(), the offsets are off by two.
Perhaps changing the key length isn't the correct fix, but it does allow the folder entry to read the data out correctly.
#10
I will keep my fix for the moment, as it implements the specification correctly, and try to find the problem with the catalog folder. I have already noticed that the type of the leaf record is not correct, probably due to this unintended shift.
#12
I've fixed another problem where the volume can't be read because the catalog file spans more than one extent. Also there seemed to be a few incorrect offsets in the B-Tree header record. Please review my changes here when you have some spare time:
https://github.com/tmyroadctfig/jnode/commits/
revisions 436047e2c8 to b197f93e33
With the above fix I was able to load a 90GB HFS+ image and read all the data out.
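The multi-extent issue can be sketched as follows. This is a hedged illustration, not the actual JNode fix; the Extent type and field names are assumptions. A file's fork is a list of (startBlock, blockCount) runs, so a logical block index has to be walked through the list instead of assuming one contiguous run:

```java
public class ExtentLookupDemo {
    /** One contiguous run of allocation blocks (illustrative type). */
    static final class Extent {
        final long startBlock;  // first allocation block of this run
        final long blockCount;  // number of blocks in this run
        Extent(long startBlock, long blockCount) {
            this.startBlock = startBlock;
            this.blockCount = blockCount;
        }
    }

    /** Map a logical block index within the file to a physical block. */
    static long toPhysicalBlock(Extent[] extents, long logicalBlock) {
        long remaining = logicalBlock;
        for (Extent e : extents) {
            if (remaining < e.blockCount) {
                return e.startBlock + remaining;
            }
            remaining -= e.blockCount;  // skip past this run
        }
        throw new IllegalArgumentException("block beyond mapped extents");
    }

    public static void main(String[] args) {
        // A catalog file split across two runs: blocks 100-103 and 500-507.
        Extent[] extents = { new Extent(100, 4), new Extent(500, 8) };
        System.out.println(toPhysicalBlock(extents, 2)); // 102 (in the first run)
        System.out.println(toPhysicalBlock(extents, 5)); // 501 (spills into the second run)
    }
}
```

Reading only the first extent works for small volumes but silently truncates a catalog file that spills into a second run, which matches the symptom described above.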
Thanks,
Luke
#13
I really should check my code twice before committing ^_^. Thanks for your fixes; I've made most of them in my code, but I have a little question about your last commit. Shouldn't we use getUInt8 instead of getInt8, since "0xff & xx" represents an unsigned byte and not a signed one?
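The difference is easy to demonstrate in isolation. These getInt8/getUInt8 helpers are stand-ins with the semantics described above, not JNode's actual BigEndian source:

```java
public class ByteReadDemo {
    /** Signed read: Java's byte-to-int widening sign-extends, so 0xff -> -1. */
    static int getInt8(byte[] data, int offset) {
        return data[offset];
    }

    /** Unsigned read: masking with 0xff keeps the low 8 bits, so 0xff -> 255. */
    static int getUInt8(byte[] data, int offset) {
        return data[offset] & 0xff;
    }

    public static void main(String[] args) {
        byte[] data = { (byte) 0xff };
        System.out.println(getInt8(data, 0));  // -1
        System.out.println(getUInt8(data, 0)); // 255
    }
}
```

For values below 0x80 the two reads agree, which is why a signed read of an unsigned on-disk field can pass small-volume tests and only break later.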
Fabien L.
#14
Ah, yes, that looks like it should be getUInt8(). I've checked in that change too.
Cheers,
Luke