That went well

Remember what I was talking about yesterday, about converting my volume-space tracker to use a database back-end? Well, I did just that yesterday, and it worked. It took about three hours to:
  • Reactivate the commented-out database bits in the script
  • Convert the database bits to use dbi:ODBC instead of dbi:Oracle
  • Handle a lookup for a volume-name/server-name pair
  • Generate a new unique ID for a new volume-name/server-name pair (both are sketched in the code after this list)
  • Get the database table created, indexed, and linked appropriately
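
The lookup/unique-ID part is the interesting bit, so here's a rough sketch of how that kind of lookup-or-create works with DBI. Fair warning: the DSN, credentials, table name (volumes), and columns (vol_id, server_name, volume_name) are placeholders I made up for illustration, not what's in the real script, and last_insert_id only works where the driver and database support it.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Connect through ODBC; the DSN name and credentials here are
    # made-up placeholders.
    my $dbh = DBI->connect( 'dbi:ODBC:voltrack', 'user', 'password',
        { RaiseError => 1, AutoCommit => 1 } );

    # Return the unique ID for a server-name/volume-name pair,
    # creating a row the first time the pair shows up.
    sub vol_id {
        my ( $server, $volume ) = @_;

        my ($id) = $dbh->selectrow_array(
            'SELECT vol_id FROM volumes
              WHERE server_name = ? AND volume_name = ?',
            undef, $server, $volume );
        return $id if defined $id;

        $dbh->do(
            'INSERT INTO volumes (server_name, volume_name) VALUES (?, ?)',
            undef, $server, $volume );
        return $dbh->last_insert_id( undef, undef, 'volumes', 'vol_id' );
    }
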
I was very happy to see that the problems I was having with the directory quotas didn't happen here. Perhaps it has something to do with only INSERTing 15 rows at a whack instead of 23,000. Who knows. Whatever it was, this works for the scale I'm at.
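
For a sense of that scale: the nightly run boils down to a handful of executes against one prepared INSERT, one per volume. This continues the sketch above, and the volume_space table and its columns are, again, invented names.

    use POSIX qw(strftime);

    # Continuing from the sketch above: @samples would be filled by
    # the collection pass, one [server, volume, used_kb, total_kb]
    # row per volume. Table and column names are invented.
    my @samples;
    my $today = strftime( '%Y-%m-%d', localtime );

    # One row per volume per run, roughly 15 INSERTs total.
    my $ins = $dbh->prepare(
        'INSERT INTO volume_space (vol_id, sample_date, used_kb, total_kb)
         VALUES (?, ?, ?, ?)' );

    for my $s (@samples) {
        my ( $server, $volume, $used, $total ) = @$s;
        $ins->execute( vol_id( $server, $volume ), $today, $used, $total );
    }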

The thing I worked on this morning was importing the existing CSV file the script has been dumping to and getting it into the table. Writing the required transformations in Perl took about 45 minutes. I couldn't use the database's native import tools since I ran smack into some data-type issues, and the CSV file has a servername/volumename pair instead of the unique ID. Once I got those bugs worked out, it imported just peachy.
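
The backfill itself is nothing fancy; here's roughly the shape of it. I'm assuming Text::CSV for the parsing and guessing at the file name and column order, and it leans on the $dbh handle and vol_id() from the first sketch. Adding zero to a field is one way to coerce Perl's stringy CSV values into numbers before they hit the typed columns.

    use Text::CSV;

    my $csv = Text::CSV->new( { binary => 1 } )
        or die Text::CSV->error_diag;

    my $ins = $dbh->prepare(
        'INSERT INTO volume_space (vol_id, sample_date, used_kb, total_kb)
         VALUES (?, ?, ?, ?)' );

    # File name and column order are guesses at the old dump format.
    open my $fh, '<', 'volume-space.csv' or die "open: $!";
    while ( my $row = $csv->getline($fh) ) {
        my ( $server, $volume, $date, $used, $total ) = @$row;

        # Swap the servername/volumename pair for its unique ID, and
        # coerce the numeric fields so the typed columns don't complain.
        $ins->execute( vol_id( $server, $volume ), $date,
            $used + 0, $total + 0 );
    }
    close $fh;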

Once this all hits production, my boss is going to be s-o-o-o happy. He loves data like this. He has already used the volume-space tracker to justify several thousand dollars' worth of infrastructure upgrades to his boss. Yay!