The good of blogs

An anonymous poster suggested a program I had completely missed, called 'fileop', which ships with the Iozone distribution. From the source:
 * Usage:  fileop X
*
* X is a force factor. The total number of files will
* be X * X * X ( X ^ 3 )
* The structure of the file tree is:
* X number of Level 1 directorys, with X number of
* level 2 directories, with X number of files in each
* of the level 2 directories.
*
* Example: fileop 2
*
*             dir_1                        dir_2
*            /     \                      /     \
*      sdir_1       sdir_2          sdir_1       sdir_2
*     /     \      /     \         /     \      /     \
*  file_1 file_2 file_1 file_2  file_1 file_2 file_1 file_2
*
* Each file will be created, and then 1 byte is written to the file.
*
This happens to be a great way to test whonking huge directory trees. Balanced trees, of course, but still whonking huge. We're seeing a problem with directories containing thousands of directory entries, and this tool could be used to check whether their behavior changes when viewed over the network versus locally.
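To picture what a run of "fileop X" actually builds, here is a minimal Python sketch of the same layout: X level-1 directories, each holding X level-2 directories, each holding X one-byte files, for X^3 files total. It only reproduces the tree shape described in the quoted comment, not the benchmark itself.

#!/usr/bin/env python3
# Sketch of the fileop-style tree: X^3 one-byte files in a two-level layout.
import os
import sys

def build_tree(root, x):
    count = 0
    for i in range(1, x + 1):
        for j in range(1, x + 1):
            sdir = os.path.join(root, f"dir_{i}", f"sdir_{j}")
            os.makedirs(sdir, exist_ok=True)
            for k in range(1, x + 1):
                # each file gets a single byte, as in the quoted comment
                with open(os.path.join(sdir, f"file_{k}"), "wb") as f:
                    f.write(b"x")
                count += 1
    return count

if __name__ == "__main__":
    x = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    made = build_tree("fileop_test", x)
    print(f"created {made} files ({x}^3 = {x**3})")

Run with a force factor of 20 or so and you already have 8,000 files spread over 400 leaf directories, which is the kind of tree that starts to show the directory-entry problem.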

Unfortunately, my test blade is currently being used as part of the BlackBoard upgrade project, so I won't get it back until June. But I plan on running some tests to see how NSS vs. Reiser vs. Reiser with a separate journal behave.
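When the blade comes back, the comparison could be as simple as timing the same fileop run against each filesystem. A rough sketch of that harness is below; the fileop path and mount points are placeholders, and it assumes fileop builds its tree under the working directory it's launched from.

#!/usr/bin/env python3
# Rough timing harness: run the same force factor on each mount and report wall time.
import subprocess
import time

FILEOP = "/usr/local/bin/fileop"   # assumed install location
MOUNTS = {                         # placeholder mount points
    "nss":            "/mnt/nss/test",
    "reiser":         "/mnt/reiser/test",
    "reiser-journal": "/mnt/reiser_j/test",
}
FORCE = 20  # 20^3 = 8000 files

for name, path in MOUNTS.items():
    start = time.time()
    # assumes fileop creates its tree under the current working directory
    subprocess.run([FILEOP, str(FORCE)], cwd=path, check=True)
    print(f"{name}: {time.time() - start:.1f}s")

Nothing fancy, but it should be enough to show whether the separate journal makes a visible difference on heavy metadata churn.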