In this specific case, the only syntactic difference between OpenMP and FastFlow is that FastFlow provides programmers with C++ templates instead of compiler pragmas. It is worth noticing that, despite the similar syntax, the implementation of ''parallel_for'' and of all the other high-level patterns in FastFlow is quite different from OpenMP and other mainstream programming frameworks (Intel TBB, etc.). Instead of relying on a general task execution engine, FastFlow generates at compile time a specific streaming network based on core patterns for each high-level pattern. In the case of ''parallel_for'', this network is a parametric master-worker with an active or passive (in-memory) task scheduler (more details in the [[http://calvados.di.unipi.it/storage/paper_files/2014_ff_looppar_pdp.pdf|PDP2014 paper]]).
As in OpenMP, ''parallel_for'' comes in many variants (see the [[ffnamespace:refman|reference manual]]). Other patterns at this level, to date, are: ''parallel_reduce'', ''mdf'' (macro-data-flow), ''pool evolution'' (genetic algorithms), and ''stencil''. They cover the most common parallel programming paradigms in data, stream and task parallelism. Notably, FastFlow patterns are C++ class templates and can be extended by end users according to the Object-Oriented methodology.
Iterative execution of kernels on GPGPUs is addressed by a single but very flexible pattern, i.e. ''stencil-reduce'', which also takes care of feeding the GPGPUs with data and of D2H/H2D synchronisations. More details can be found in the [[http://calvados.di.unipi.it/storage/talks/2014_S4585-Marco-Aldinucci.pdf|GTC 2014 talk]].