A checksum is created by taking specific bits of data within a file and putting them through a function. For example, you might have a "checksum" that was simply every eighth bit of a file strung together (although that would be a pretty awful one). If the file were FF 01 FC 3F 00 00 D2 8E, your "checksum" would be the final bit of each byte in order: 11010000, which could be written in hex as D0. So you've condensed the significantly larger FF 01 FC 3F 00 00 D2 8E into a much smaller D0 checksum, and any significant change to the file is also likely to change the checksum. Of course, my example checksum function is a terrible one for which that isn't really true: there are plenty of plausible ways FF 01 FC 3F 00 00 D2 8E could be corrupted while leaving every eighth bit alone. I only chose it because it's easy to demonstrate.

You can compare the checksum of a copy of a file with the original checksum, and if they don't match, you know the two files are not the exact copies they should be. If they do match and you've chosen your checksum function wisely, you can feel reasonably confident that no random error crept in to make the two files differ. (A malicious human could, of course, have deliberately changed one of them in a way that still gives the same checksum; a checksum is not generally a security feature.)
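To make the toy scheme concrete, here's a minimal Python sketch of it (the name `toy_checksum` is my own invention, not a real algorithm). It also demonstrates the weakness mentioned above: a corruption that only touches the other seven bits of each byte goes completely unnoticed.

```python
def toy_checksum(data: bytes) -> int:
    """Toy checksum: string together the final (lowest) bit of every byte.
    Deliberately weak -- shown only to illustrate the general idea."""
    result = 0
    for byte in data:
        result = (result << 1) | (byte & 1)  # append this byte's last bit
    return result

original = bytes.fromhex("FF01FC3F0000D28E")
print(f"{toy_checksum(original):02X}")  # D0, matching the example above

# A corruption that leaves every eighth bit alone slips through undetected:
corrupted = bytes.fromhex("FF01FC3F00FED28E")  # sixth byte 00 -> FE
print(toy_checksum(corrupted) == toy_checksum(original))  # True: missed!
```

A real checksum function (such as CRC-32) mixes every bit of the input into the result, so a single flipped bit anywhere almost certainly changes the output.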