True parallel programming (actually utilizing multiple cores) is hard: most widely used programming languages are ill-suited for it, and not every task can even be parallelized. Programmers have to be especially careful when multiple threads access the same data, and must make sure access protocols are in place when two cores want to work with the same data.
In most of these cases programmers serialize access to shared data with semaphores and mutexes. These in turn reduce the gain you get from parallelization (someone has to wait and do nothing), and in some cases (with small data sets or coarse global locks) all the management overhead may even make the program slower than a single-threaded version.
Debugging errors in such programs can be a real pain in the ass because of all the added complexity: race conditions and deadlocks often show up only intermittently, under timing conditions that are hard to reproduce.
Most game engines process a relatively static data set (textures, geometry, UI) and do little heavy computation on that data (aside from object positioning and perhaps inverse kinematics for animation, which may just as well be baked as static animations). There is also no gain in pre-computing additional frames, as most of the heavy lifting is already done on the GPU today.