First of all, sorry for the long delay in posting.
As I mentioned in Multi-threading for improved render times!, I had an i5-4430 with its stock cooler. I have since upgraded my rig with a more powerful processor, an i7-7700K (with liquid cooling), and also installed a GTX 1080.
Calculating a single pixel is a very simple task, and the outcome for one pixel cannot affect another pixel's calculation in any way. This makes the workload ideal for hardware with a very large number of cores, such as a GPU, which has hundreds (or thousands) of processing cores. So I wanted to shift the computation to my GPU and see how the CPU fares against it.
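The per-pixel work here is the standard escape-time iteration. As a rough sketch in plain C++ (the actual CUDA kernel in the linked repo may differ in details like coloring and precision), it looks like this:

```cpp
// Escape-time iteration for one pixel of the Mandelbrot set.
// Returns the number of iterations before |z| exceeds 2, or max_iter
// if the point did not escape (i.e. it appears to be inside the set).
int escape_time(double cx, double cy, int max_iter) {
    double zx = 0.0, zy = 0.0;
    for (int i = 0; i < max_iter; ++i) {
        // z = z^2 + c, expanded into real arithmetic
        double nzx = zx * zx - zy * zy + cx;
        double nzy = 2.0 * zx * zy + cy;
        zx = nzx;
        zy = nzy;
        if (zx * zx + zy * zy > 4.0)  // |z| > 2 means the point escapes
            return i;
    }
    return max_iter;
}
```

Note that this function touches nothing but its own arguments and locals, which is exactly why one copy of it can run per pixel with no synchronization at all.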
I want to keep this short, so here is the code: Github – mandelbrot_gpu (uses CUDA 8.0). Note that I enabled maximum optimization during compilation using -O3.
I used two zoom sequences of 1000 frames each. The first spans a large area and is mostly empty space, so it is easy to calculate. The second requires heavy computation, since it stays mostly near the boundary of the set.
| Platform | Seq 1 average per frame | Seq 2 average per frame |
| --- | --- | --- |
| CPU | 486 ms | 2158 ms |
| GPU | 15 ms | 43 ms |
This is the finished video (with a lot of compression artifacts):
The GPU blew the CPU out of the water. So why do we still have CPUs? Why isn't all computation done on GPUs? Because most tasks cannot be parallelized to this extent. They need an elaborate instruction set and sequential execution of code, which is best handled by a few high-powered cores, as in a CPU. Tasks that can be parallelized, though (such as many ML workloads), are faster and more efficient when run on a GPU.
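To make the "embarrassingly parallel" point concrete, here is a hedged sketch (not the repo's actual code) that splits image rows across CPU threads with `std::async`. A CUDA kernel takes the same idea to its extreme, launching one thread per pixel; the view rectangle and resolution below are arbitrary illustration values.

```cpp
#include <algorithm>
#include <future>
#include <vector>

// Escape-time iteration for one pixel (the same core Mandelbrot loop).
static int escape_time(double cx, double cy, int max_iter) {
    double zx = 0.0, zy = 0.0;
    for (int i = 0; i < max_iter; ++i) {
        double nzx = zx * zx - zy * zy + cx;
        double nzy = 2.0 * zx * zy + cy;
        zx = nzx; zy = nzy;
        if (zx * zx + zy * zy > 4.0) return i;
    }
    return max_iter;
}

// Render rows [y0, y1) of a width x height view of the complex plane.
// Because no pixel depends on any other, rows can be computed in any
// order, on any thread -- this independence is what makes the task
// map so well onto a GPU.
static void render_rows(std::vector<int>& img, int width, int height,
                        int y0, int y1, int max_iter) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x) {
            double cx = -2.0 + 3.0 * x / width;   // real axis: [-2, 1]
            double cy = -1.5 + 3.0 * y / height;  // imag axis: [-1.5, 1.5]
            img[y * width + x] = escape_time(cx, cy, max_iter);
        }
}

// Divide the image into horizontal chunks, one per worker thread.
std::vector<int> render_parallel(int width, int height, int max_iter,
                                 int num_threads) {
    std::vector<int> img(width * height);
    std::vector<std::future<void>> tasks;
    int rows_per = (height + num_threads - 1) / num_threads;
    for (int t = 0; t < num_threads; ++t) {
        int y0 = t * rows_per;
        int y1 = std::min(height, y0 + rows_per);
        if (y0 >= y1) break;
        tasks.push_back(std::async(std::launch::async, render_rows,
                                   std::ref(img), width, height,
                                   y0, y1, max_iter));
    }
    for (auto& f : tasks) f.get();  // wait for every chunk to finish
    return img;
}
```

Notice that the threads never coordinate with each other; each writes to its own disjoint slice of the image. Tasks with heavy branching or with steps that depend on earlier results do not decompose this cleanly, and that is where a CPU's few fast cores win.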