c++ - Parallelizing loops containing function calls


Is OpenMP equally suitable for parallelizing loops that contain function calls, or is it only convenient when the loop body performs the operations directly?

For example, does it handle a construct like the one below?

    main() {
        ...
        #pragma omp parallel for
        for (i = 0; i < 100; i++) {
            a[i] = foo(&datatype, ...);
            ...
        }
        ...
    }

    int foo(datatype *a, ...) {
        // complex operations are done here
        // calls to other functions, etc.
    }
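For concreteness, here is a compilable variant of that sketch; the contents of the datatype struct and the body of foo() are invented stand-ins, not from the original code:

    #include <stdio.h>

    typedef struct { double scale; } datatype;

    /* stand-in for the complex per-element work; calls to other
     * functions would go here */
    double foo(const datatype *d, int i)
    {
        return d->scale * i * i;
    }

    int main(void)
    {
        datatype d = { 0.5 };
        double a[100];
        int i;

        /* each iteration only writes a[i], so the calls are independent */
        #pragma omp parallel for
        for (i = 0; i < 100; i++)
            a[i] = foo(&d, i);

        printf("a[99] = %f\n", a[99]);
        return 0;
    }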

Thanks Will, Richard and Fakler, these comments were helpful, and I have ordered the book that was suggested for a deeper look. But before then, I want to make the current C code (essentially one big loop at the top level of the program) parallel with OpenMP, if possible.

At this point I need help making at least some parts of the loop parallel, rather than completely restructuring the serial code. To simplify things, how can I parallelize just part of the loop body, as below?

    for (i = 0; i < n; i++) {
        work1();        // (serial)
        work2();        // (serial)
        work3();        // (PARALLEL)
        work4();        // (serial)
    }

    // Does this work, if a "#pragma omp parallel private(ptr)" region is
    // added and everything except #3 is placed in single blocks?

    #pragma omp parallel private(ptr)
    for (i = 0; i < n; i++) {
        #pragma omp single
        {
            work1();    // (serial)
            work2();    // (serial)
        }
        work3(ptr);     // (parallel)
        #pragma omp single
        {
            work4();    // (serial)
        }
    }
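One alternative is to parallelize only the parallel-friendly step and keep the outer loop serial. A minimal compilable sketch, assuming work3() itself loops over independent elements (all the function bodies below are invented stand-ins):

    #include <stdio.h>

    #define N 8
    #define M 1000000

    static double data[M];

    void work1(void) { /* serial setup for this iteration */ }
    void work2(void) { /* serial setup for this iteration */ }
    void work4(void) { /* serial wrap-up for this iteration */ }

    /* the only step that is parallel-friendly: each element is independent */
    void work3(double *ptr)
    {
        int j;
        #pragma omp parallel for
        for (j = 0; j < M; j++)
            ptr[j] = ptr[j] * 2.0 + 1.0;
    }

    int main(void)
    {
        int i;
        for (i = 0; i < N; i++) {   /* outer loop stays serial */
            work1();
            work2();
            work3(data);            /* threads are used only here */
            work4();
        }
        printf("data[0] = %f\n", data[0]);
        return 0;
    }

The parallel region is opened and closed once per call to work3(), so this only pays off if work3() does enough work per iteration to amortize that overhead.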

Three things you need to know:

  1. Does foo() keep state between calls, i.e. is it re-entrant?
  2. Does foo() touch shared data, and if so, is there locking around it? (See the sketch after this list.)
  3. How long does the loop take to run without any OpenMP?
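To illustrate point 2, a minimal sketch (the foo() body and the shared_total variable are hypothetical): if foo() updates shared state, that update must be protected, for example with a critical section, or the parallel loop will race.

    static double shared_total = 0.0;   /* shared between all threads */

    double foo(const double *a, int i)
    {
        double r = a[i] * a[i];         /* independent work: safe */

        #pragma omp critical            /* shared update: must be serialized */
        shared_total += r;

        return r;
    }

    void run(const double *a, double *out, int n)
    {
        int i;
        #pragma omp parallel for
        for (i = 0; i < n; i++)
            out[i] = foo(a, i);  /* safe only because foo() locks its shared write */
    }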

Look for work that takes a long time (many seconds or more) and that can be divided into independent parts, sometimes by refactoring, for example by splitting the jobs apart and then combining the results of each job. And, incidentally: profile!
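A minimal sketch of that split-and-combine pattern, assuming the work can be refactored into independent per-iteration jobs whose results are summed (partial_job() is a hypothetical stand-in):

    #include <stdio.h>

    /* stand-in for one independent chunk of the real work */
    double partial_job(int i)
    {
        return (double)i * 0.5;
    }

    int main(void)
    {
        const int n = 1000;
        double total = 0.0;
        int i;

        /* iterations are independent; the reduction combines each job's result */
        #pragma omp parallel for reduction(+:total)
        for (i = 0; i < n; i++)
            total += partial_job(i);

        printf("total = %f\n", total);
        return 0;
    }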
