Automatic parallelization, also called auto parallelization or autoparallelization, refers to converting sequential code into multi-threaded and/or vectorized code in order to utilize multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. The goal of automatic parallelization is to relieve programmers from the tedious and error-prone manual parallelization process. Although the quality of automatic parallelization has improved over the past several decades, fully automatic parallelization of sequential programs by compilers remains a grand challenge, because it requires complex program analysis and depends on factors (such as the input data range) that are unknown at compile time.
The programming control structures on which autoparallelization focuses most are loops, because, in general, most of a program's execution time takes place inside some form of loop. A parallelizing compiler tries to split up a loop so that its iterations can be executed concurrently on separate processors.
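To make the transformation concrete, here is a minimal sketch in Python of what a parallelizing compiler aims to do automatically: a sequential loop is split into chunks of its iteration space, each chunk runs concurrently, and the partial results are combined. The function names and the chunking scheme are illustrative choices, not part of any particular compiler.

```python
from concurrent.futures import ThreadPoolExecutor

def sequential():
    # Original sequential loop: sum of squares of 0..999.
    total = 0
    for i in range(1000):
        total += i * i
    return total

def chunk_sum(lo, hi):
    # One worker's share of the iteration space [lo, hi).
    s = 0
    for i in range(lo, hi):
        s += i * i
    return s

def parallel(num_threads=4):
    # Hand-parallelized equivalent: split the iterations into
    # num_threads contiguous chunks, run each chunk on its own
    # thread, then reduce the partial sums.
    n = 1000
    step = n // num_threads
    bounds = [(t * step, n if t == num_threads - 1 else (t + 1) * step)
              for t in range(num_threads)]
    with ThreadPoolExecutor(max_workers=num_threads) as ex:
        partials = ex.map(lambda b: chunk_sum(*b), bounds)
    return sum(partials)
```

This split-and-reduce is valid here only because the loop iterations are independent; proving such independence (dependence analysis) is precisely the hard part of compiler parallelization.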