HFS+ fails to read a filesystem (possibly because it's journaled?)

Project: JNode FS
Component: Code
Category: bug report
Priority: critical
Assigned: galatnm
Status: active
Description

I created a test HFS+ filesystem using Disk Utility and put in a bunch of files with interesting attributes on them for testing, expecting JNode to get the data back with maybe some of the metadata.

It logs a warning about going read-only because the device is journaled, which is fine, but then I get an error when opening the root entry:


java.io.IOException: devOffset < 0
at org.jnode.driver.block.BlockDeviceAPIHelper.checkBounds(BlockDeviceAPIHelper.java:39)
at org.jnode.driver.block.BlockDeviceAPIHelper.checkBounds(BlockDeviceAPIHelper.java:55)
at com.acme.data.diskimage.jnode.ByteRegionBlockDevice.read(ByteRegionBlockDevice.java:86)
at org.jnode.fs.hfsplus.catalog.Catalog.getRecord(Catalog.java:163)
at org.jnode.fs.hfsplus.HfsPlusFileSystem.createRootEntry(HfsPlusFileSystem.java:105)
at org.jnode.fs.hfsplus.HfsPlusFileSystem.createRootEntry(HfsPlusFileSystem.java:38)
at org.jnode.fs.spi.AbstractFileSystem.getRootEntry(AbstractFileSystem.java:103)

Attachment: 40MB Mac Disk.zip (55.91 KB)

#1

Here's another error from a different disk image:


java.lang.ClassCastException: [Lorg.jnode.fs.hfsplus.tree.NodeRecord; cannot be cast to [Lorg.jnode.fs.hfsplus.tree.LeafRecord;
at org.jnode.fs.hfsplus.catalog.Catalog.getRecords(Catalog.java:215)
at org.jnode.fs.hfsplus.catalog.Catalog.getRecords(Catalog.java:184)
at org.jnode.fs.hfsplus.HFSPlusDirectory.readEntries(HFSPlusDirectory.java:116)
at org.jnode.fs.spi.AbstractFSDirectory.checkEntriesLoaded(AbstractFSDirectory.java:215)
at org.jnode.fs.spi.AbstractFSDirectory.iterator(AbstractFSDirectory.java:93)

Here it seems like the code is working, but someone is trying to cast a NodeRecord[] to LeafRecord[] (which isn't going to work any better than trying to cast an Object[] to String[]...)
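
For reference, a minimal standalone sketch (toy classes, not the JNode ones) of why that array cast fails at runtime:

// Standalone illustration (not JNode code): a Java array cast is checked
// against the array object's runtime type, so a Base[] can never be cast to
// Sub[], even when every element it holds is actually a Sub. The same applies
// to casting a NodeRecord[] to a LeafRecord[].
class Base {}
class Sub extends Base {}

public class ArrayCastDemo {
    public static void main(String[] args) {
        Base[] records = new Base[] { new Sub(), new Sub() };
        Sub[] subs = (Sub[]) records; // throws java.lang.ClassCastException
        System.out.println(subs.length);
    }
}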

#2

Priority: normal » critical
Assigned to: Anonymous » galatnm

The read-only flag is normal because the write methods for HFS+ are not yet implemented, but I don't understand the second one. I will take a look and test with your disk image.

Fabien L.

#3

It looks like the class cast exception is fixed in r5845, but now there are no files or directories found in the file system. I've attached the sample FS to help you test the problem.

The other "java.io.IOException: devOffset < 0" issue is still happening in the first disk image.

Thanks,

Luke

Attachment: test.hfsplus.zip (85.11 KB)

#4

I think I've found the cause of the problem here. The CatalogKey's keyLength doesn't seem to include the two bytes for the size field in the overall size of the key. Adding a +2 there seems to have fixed the problem for me.

I've attached my fix and some additional tests for HfsPlusFileSystemTest. Sorry it's not a patch, I mangled my checkout a bit...

Cheers,

Luke

Attachment: hfs-fix.zip (13.78 KB)

#5

A big thanks to you Luke ^_^. I have included your fix and tested it; it works. The fix is committed, but I would like to rewrite the tests a little before committing them.

Fabien L.

#6

I wrote new unit tests for CatalogKey and HfsUnicodeString and discovered that the problem is not the length of the catalog key but the length of the unicode string. Basically, the unicode string stores its length in its first two bytes, but the getLength method returns the length of the string itself. The problem should be fixed now, but I would like to check the behaviour for empty strings.
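
For illustration, a minimal sketch of the two lengths involved (hypothetical code, not the actual JNode HfsUnicodeString class):

import java.nio.charset.StandardCharsets;

// Hypothetical sketch: an HFS+ unicode string is stored as a 2-byte big-endian
// character count followed by the UTF-16BE characters. getLength() here returns
// the character count, while the on-disk size of the whole structure is
// 2 + 2 * length bytes.
public class UnicodeStringSketch {
    private final int length;   // number of UTF-16 characters
    private final String value;

    public UnicodeStringSketch(byte[] data, int offset) {
        length = ((data[offset] & 0xFF) << 8) | (data[offset + 1] & 0xFF);
        value = new String(data, offset + 2, length * 2, StandardCharsets.UTF_16BE);
    }

    public int getLength() {
        return length;          // characters, not bytes
    }

    public int getStructureSize() {
        return 2 + length * 2;  // length field plus the characters themselves
    }

    public String getValue() {
        return value;
    }
}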

Fabien L.

#7

I've put my code up on GitHub. I think a change is still required; see my comments here and let me know what you think:

https://github.com/tmyroadctfig/jnode/commit/44c9798a6c0e6db1a7dcbca8925...

With those changes all my local tests pass, but I'm still in the process of setting up to test the code more thoroughly. Also, I've added initial GPT partition support to that GitHub repo.

Cheers,

Luke

#8

Actually, I think the confusion comes from the fact that there are two pieces of length information.

The structure of the catalog key is the following:

  • 2 bytes for the length of the key itself
  • 4 bytes for the catalog node id
  • x bytes for the name

-> so the minimum length for a key is 6 bytes.

The name is represented by the following structure:

  • 2 bytes for the length of the string
  • x bytes for the string itself

So the getLength() method of HFSUnicodeString returns the real length of the string (x/2 characters) and not the length of the structure (2 + x bytes).

That explains this code: this.keyLength = MINIMUM_KEY_LENGTH + (name.getLength() * 2) + 2;

For the CatalogKey constructor that takes a byte array, the first 2 bytes should already contain the correct length of the key (in the unit test: 6 + 2 + 16 = 24).

So I think the '+2' is not necessary in this code: keyLength = BigEndian.getInt16(ck, 0);

It's possible that I'm missing something, but if that's the case I don't see what -_-'.
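
To make the arithmetic concrete, here is a minimal sketch of that interpretation (hypothetical code, not the JNode CatalogKey class); whether the on-disk value really matches this formula is exactly what the next comments dig into:

// Hypothetical sketch of the key-length arithmetic described above.
public class CatalogKeyLengthSketch {

    // 2-byte key length field + 4-byte parent catalog node id
    static final int MINIMUM_KEY_LENGTH = 6;

    // keyLength = MINIMUM_KEY_LENGTH + (name.getLength() * 2) + 2
    // i.e. the minimum key, plus the name's characters, plus the name's own
    // 2-byte length field.
    static int keyLengthFor(int nameCharacters) {
        return MINIMUM_KEY_LENGTH + nameCharacters * 2 + 2;
    }

    // Reading the stored value back: a big-endian 16-bit integer at offset 0.
    // Under the interpretation above the stored value is already complete,
    // so no '+2' is added here.
    static int readKeyLength(byte[] ck) {
        return ((ck[0] & 0xFF) << 8) | (ck[1] & 0xFF);
    }

    public static void main(String[] args) {
        // The unit-test example from the comment: an 8-character name
        // gives 6 + 2 + 16 = 24.
        System.out.println(keyLengthFor(8)); // 24
    }
}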

#9

Ah, yes. You're correct about the 'this.keyLength = MINIMUM_KEY_LENGTH + (name.getLength() * 2) + 2;'

But when HfsPlusDirectory reads the data from HfsPlusEntry.getData(), the offsets are off by two. See below:

Record offsets: 0x10, 0x78, 0x94, 0x1b0, etc

Data:
00000000 00 00 00 00 00 00 00 00 FF 01 00 08 00 00 

First leaf record:
Offset = 0xe
Key length = 0x10
Parent ID = 0x1
String length = 0x5 wchars
String = "Kenny"

                                                   00 10 ................
00000010 00 00 00 01 00 05 00 4B 00 65 00 6E 00 6E 00 79 .......K.e.n.n.y <-- Data ends here at offset 0x1f, 
                                                                              i.e. 0xe + 0x10 + 0x2
CatalogFolder follows:
Without the 'keyLength = BigEndian.getInt16(ck, 0) + 2' CatalogFolder starts reading two bytes too early;
it starts [00 79 00 01 ... ] instead of [ 00 01 00 00 ... ]
00000020 00 01 00 00 00 00 00 03 00 00 00 02 CA 5F 03 10 ............._..
00000030 CA 5F 03 F8 CA 5F 03 F8 00 00 00 00 00 00 00 00 ._..._..........
00000040 00 00 00 00 00 00 00 00 00 00 41 ED 00 00 00 00 ..........A.....
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000070 00 00 00 00 00 00 00 00 

                                 00 06 00 00 00 02 00 00 ................
00000080 00 03 00 00 00 00 00 01 00 05 00 4B 00 65 00 6E ...........K.e.n
00000090 00 6E 00 79 
                     00 22 00 00 00 02 00 0E 00 73 00 6F .n.y.".......s.o
000000A0 00 75 00 74 00 68 00 70 00 61 00 72 00 6B 00 2E .u.t.h.p.a.r.k..
000000B0 00 6A 00 70 00 65 00 67 00 02 00 02 00 00 00 00 .j.p.e.g........
000000C0 00 00 00 15 CA 5F 03 F8 CA 5F 03 F8 CA 5F 03 F8 ....._..._..._..
000000D0 CA 5F 03 F8 00 00 00 00 00 00 00 00 00 00 00 00 ._..............
000000E0 00 00 81 A4 00 00 00 01 3F 3F 3F 3F 3F 3F 3F 3F ........????????
000000F0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000100 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

Perhaps changing the key length isn't the correct fix but it does allow the folder entry to read the data out correctly.
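
Spelling the arithmetic out from the dump: the record starts at 0xe, the 2-byte length field holds 0x10, and the key data itself (4-byte parent ID + 2-byte string length + 10 bytes of "Kenny") is exactly 0x10 bytes, so the CatalogFolder record should start at 0xe + 2 + 0x10 = 0x20. Skipping only 0xe + 0x10 = 0x1e lands two bytes early, inside the last character of the name, which is why the folder data comes out as [00 79 00 01 ...].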

#10

I will keep my fix for the moment, as it implements the specification correctly, and try to find the problem with the catalog folder. I have already noticed that the type of the leaf record is not correct, probably due to this unintended shift.

#12

I've fixed another problem where the volume can't be read because the catalog file spans more than one extent. Also there seemed to be a few incorrect offsets in the B-Tree header record. Please review my changes here when you have some spare time:

https://github.com/tmyroadctfig/jnode/commits/
revisions 436047e2c8 to b197f93e33

With the above fix I was able to load a 90GB HFS+ image and read all the data out. :)
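
For illustration, a minimal sketch of what reading across extents involves (hypothetical types, not the JNode extent API):

import java.util.List;

// Hypothetical sketch: an HFS+ fork is a sequence of extents, each covering a
// contiguous run of allocation blocks. Reading past the first extent requires
// walking the list rather than assuming the fork is one contiguous run, which
// is the failure described above.
public class ForkOffsetSketch {

    record Extent(long startBlock, long blockCount) {}

    // Translate a logical block index within the fork to a volume block index.
    static long logicalToVolumeBlock(List<Extent> extents, long logicalBlock) {
        long remaining = logicalBlock;
        for (Extent extent : extents) {
            if (remaining < extent.blockCount()) {
                return extent.startBlock() + remaining;
            }
            remaining -= extent.blockCount();
        }
        throw new IllegalArgumentException("Block " + logicalBlock + " is past the end of the fork");
    }

    public static void main(String[] args) {
        List<Extent> catalogExtents = List.of(new Extent(100, 8), new Extent(500, 8));
        System.out.println(logicalToVolumeBlock(catalogExtents, 3));  // 103
        System.out.println(logicalToVolumeBlock(catalogExtents, 10)); // 502
    }
}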

Thanks,

Luke

#13

I really should check my code twice before committing ^_^. Thanks for your fixes; I've made most of them in my code, but I have a little question about your last commit. Shouldn't we use getUInt8 instead of getInt8, because "0xff & xx" represents an unsigned byte and not a signed byte?
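
For reference, a tiny standalone example of the difference (not the JNode BigEndian class):

// Standalone illustration: a Java byte is signed, so values 0x80..0xFF come
// back negative unless they are masked with 0xFF, which is what an unsigned
// 8-bit read should do.
public class UnsignedByteDemo {
    public static void main(String[] args) {
        byte raw = (byte) 0xF8;

        int signed = raw;          // -8: sign-extended
        int unsigned = raw & 0xFF; // 248: treated as an unsigned byte

        System.out.println(signed);
        System.out.println(unsigned);
    }
}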

Fabien L.

#14

Ah, yes, that looks like it should be getUInt8(); I've checked in that change too.

Cheers,

Luke