The concept of storing data as a 'seed' to be extrapolated into a 'decompressed' file by a complex algorithm isn't new. That said, the more sophisticated the algorithm, the more arbitrary the data sets you can ultimately render. The technique mentioned here is not unlike the work done by another group that used computer models of the acoustics of instruments to generate music from scores, rather than storing the actual audio file.
In essence, what you have is the DNA 'seed' of a target file, and an organism (albeit a virtual one) that renders that DNA into the target file. The result is that the information is effectively split across two places: the compact DNA of the file and the complex mechanism that renders it into a useful form.
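To make the split concrete, here is a minimal Python sketch of the seed-and-renderer idea. The render() and find_seed() names, and the SHA-256-based generator, are my own illustration and have nothing to do with the actual work being discussed; the point is just that the receiver already holds the heavy 'organism' (render), so only the tiny seed ever needs to be transmitted.

```python
import hashlib
from typing import Optional


def render(seed: bytes, length: int) -> bytes:
    """The 'organism': deterministically expand a short seed into a byte stream.

    Hypothetical generator for illustration only; it chains SHA-256, so it can
    only reproduce targets that happen to lie in its output space.
    """
    out = bytearray()
    block = seed
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:length])


def find_seed(target: bytes, max_seed: int = 2 ** 20) -> Optional[bytes]:
    """Brute-force search for a seed whose rendering matches the target.

    This is the catch: for truly arbitrary data the search is hopeless, which
    is why such schemes only pay off when the renderer is tailored to the data
    (instrument models for music, a virtual organism for a DNA-like seed).
    """
    for i in range(max_seed):
        seed = i.to_bytes(4, "big")
        if render(seed, len(target)) == target:
            return seed
    return None


if __name__ == "__main__":
    # Artificial example: the "file" is something the renderer can express.
    original_seed = (0x1234).to_bytes(4, "big")
    big_file = render(original_seed, 64)

    # Sender burns CPU to find the seed, then transmits only 4 bytes.
    seed = find_seed(big_file)
    assert seed is not None

    # Receiver, which already has render(), regenerates the full payload.
    assert render(seed, len(big_file)) == big_file
    print(f"{len(seed)}-byte seed reproduced a {len(big_file)}-byte payload")
```

The demo also shows the trade-off discussed below: the transmitted data is tiny, but both ends pay for it in computation.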
Unfortunately, this sort of technology is only really useful where bandwidth is at a premium but computing power and memory at either end are abundant, which is not the typical situation today. If we ever hit some sort of limit on satellite bandwidth (planet to planet or system to system, for all you sci-fi fans out there), this sort of technology will become very important. Star Trek transporters, anyone?
