A hash is a numerical representation of some string of bytes. The purpose is to give quick access to data. Ignoring the mechanics for a moment, let's say that we have an algorithm that calculates "orange" as the number 7, and "apple" as 200. We'd store "orange" in record 7 of our file, "apple" in record 200, and that would let us quickly find where either of those was stored. A hash in Perl (the array type that looks like "%mystuff") is an example of this kind of storage.
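Here's a minimal sketch of that idea in Python (the hash function and its results are made up for illustration; this is not Perl's actual algorithm, and it won't produce the 7 and 200 used above):

```python
# Toy example: compute a record number from a string, then store and
# retrieve data by that number instead of searching for it.
def toy_hash(s, table_size=1000):
    # Sum the byte values, then fold the sum into the table size.
    return sum(s.encode()) % table_size

table = {}                    # stand-in for a file of numbered records
for word in ("orange", "apple"):
    table[toy_hash(word)] = word

# Lookup goes straight to the computed record -- no scanning.
print(table[toy_hash("orange")])
```

The point is that retrieval recomputes the same number the store did, so access is direct rather than a search through every record.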
Bytes are of course just numbers to start with, but we need some sort of boundary for the numbers of a hash. A 4 byte string is a 32 bit number, which is already getting fairly large. We couldn't easily store the record number represented by "Lawrence, Anthony P.". That's why hash algorithms always use some sort of folding to make a smaller number. A simplistic hash might just take the first 4 bytes of that string and use that number as our index. However, that introduces the problem of non-unique hash results: "Lawrence, Anthony P." and "Lawrence, Kansas" will produce exactly the same number. Perl and other systems that use hashes have to prepare for such collisions by having overflow storage locations: the algorithm checks the default location to see if it holds what you asked for, but if not goes to a specific other location to look again.
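A sketch of that simplistic "first 4 bytes" hash, plus linear probing as one common kind of overflow handling (the function names and table size here are invented for the example):

```python
def first4_hash(key):
    # Use only the first 4 bytes of the key as a 32-bit number.
    return int.from_bytes(key.encode()[:4], "big")

# Both keys start with "Lawr", so they hash identically:
assert first4_hash("Lawrence, Anthony P.") == first4_hash("Lawrence, Kansas")

SIZE = 8
slots = [None] * SIZE          # each slot holds a (key, value) pair

def store(key, value):
    i = first4_hash(key) % SIZE
    # Collision: step to the next slot until we find room (or the key).
    while slots[i] is not None and slots[i][0] != key:
        i = (i + 1) % SIZE
    slots[i] = (key, value)

def fetch(key):
    i = first4_hash(key) % SIZE
    # Check the default location; on a mismatch, look in the overflow slots.
    while slots[i] is not None:
        if slots[i][0] == key:
            return slots[i][1]
        i = (i + 1) % SIZE
    return None

store("Lawrence, Anthony P.", "person")
store("Lawrence, Kansas", "place")
```

Despite hashing to the same number, both keys are retrievable: the second one simply lives in the next slot over.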
Good design of hash algorithms (the formula that produces the hash number) can minimize clashes (prime-number table sizes come into use here, because they spread keys more evenly). On Unix systems, when files keyed by a hash function are written to disk, you will often have unused records: no data hashed to match some of the record numbers. This can produce "sparse" files: files that appear to be larger than the actual disk blocks they consume. That's a function of how Unix file systems work; there's nothing special about the file itself except that sections of it are unallocated.
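You can see sparse allocation directly: seek far past the start of a new file and write one byte. A sketch in Python (the apparent size is guaranteed; how much disk space is actually allocated depends on the filesystem, and some don't support sparse files at all):

```python
import os
import tempfile

# Create an empty scratch file.
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "wb") as f:
    f.seek(1_000_000)        # skip a million bytes without writing them
    f.write(b"x")            # write a single byte at that offset

st = os.stat(path)
print(st.st_size)            # apparent size: 1000001 bytes
print(st.st_blocks * 512)    # bytes actually allocated; on most Unix
                             # filesystems this is far smaller
os.remove(path)
```

`ls -l` reports the apparent size; `du` reports the allocated blocks, which is why the two can disagree for hash-keyed data files.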
Got something to add? Send me email.
More Articles by Tony Lawrence © 2011-07-06 Tony Lawrence