The difference between MapReduce and the map-reduce combination in functional programming


I read the Wikipedia article on MapReduce, with its example of counting words across many documents, and I understood the example. However, I did not understand the following line:

Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.

Can anyone explain the differences again (the MapReduce framework vs. the map and reduce combination)? In particular, what does reduce do in functional programming?

Thanks a great deal.

The main difference would be that MapReduce is apparently patentable. (Couldn't help myself, sorry...)

On a more serious note, the MapReduce paper, as I remember it, describes a methodology for performing large-scale computations in a massively parallel fashion. This methodology builds upon the map / reduce construct, which was well known for years before, but goes further into matters such as distributing the data. It also imposes some constraints on the structure of the data being operated on and returned by the map-like and reduce-like parts of the computation (the part about the data coming in lists of key/value pairs), so you could say that MapReduce is a massively-parallel-friendly specialisation of the map & reduce combination.
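To make the key/value constraint concrete, here is a minimal single-process sketch in Python of the three MapReduce phases for word counting. All names here (map_phase, shuffle, reduce_phase) are illustrative, not taken from the paper or any framework:

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (key, value) pair for every word: (word, 1).
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all values for one key into a single value.
    return key, sum(values)

documents = ["the cat sat", "the dog sat"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
# counts == {"the": 2, "cat": 1, "sat": 2, "dog": 1}
```

The point is the shape of the data: map must emit key/value pairs, and reduce sees all values for one key at a time. A real framework distributes the map and reduce calls across machines; the structure stays the same.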

As for the Wikipedia comment about the function being mapped in functional programming's map / reduce construct producing one value per input... well, sure it does, but there are no constraints at all on the type of that value. In particular, it could be a complex data structure, such as a list of things to which you could again apply a map / reduce transformation. Going back to the "counting words" example, you could very well have a function which, for a given portion of text, produces a data structure mapping words to occurrence counts, map that over your documents (or chunks of documents, as the case may be), and reduce the results.
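That approach can be sketched in a few lines of Python (a rough illustration, not anyone's official implementation): each mapped chunk yields a whole word-to-count mapping, and reduce merges those mappings:

```python
from collections import Counter
from functools import reduce

def count_words(chunk):
    # The mapped function returns a complex value:
    # a mapping from word to occurrence count.
    return Counter(chunk.split())

def merge_counts(a, b):
    # The reducing function combines two mappings into one;
    # Counter addition sums counts per key.
    return a + b

chunks = ["the cat sat", "the dog sat"]
total = reduce(merge_counts, map(count_words, chunks))
# total == Counter({"the": 2, "sat": 2, "cat": 1, "dog": 1})
```

Note that plain map and reduce suffice here precisely because the value produced per input is allowed to be arbitrarily structured.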

In fact, that's exactly what happens in this article by Phil Hagelberg. It's a fun and supremely short example of a MapReduce-worthy computation, implemented in Clojure with map and something equivalent to reduce (the (apply + (merge-with ...)) bit -- merge-with is implemented in clojure.core). The only difference between this and the Wikipedia example is that the objects being counted are URLs instead of words -- other than that, you've got a counting-words algorithm implemented with map and reduce, MapReduce-style, right there. The reason it might not fully qualify as an instance of MapReduce is that there's no complex distribution of workloads involved; it's all happening on a single box... albeit using all the CPUs the box provides.
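For readers unfamiliar with Clojure, here is a rough Python analogue of what merge-with does (the function name is borrowed from Clojure; the URL data is made up for illustration): it merges several dictionaries, combining the values of duplicate keys with a supplied function.

```python
def merge_with(f, *dicts):
    # Analogue of Clojure's merge-with: merge dictionaries,
    # combining values for duplicate keys with f.
    result = {}
    for d in dicts:
        for key, value in d.items():
            result[key] = f(result[key], value) if key in result else value
    return result

counts = merge_with(lambda a, b: a + b,
                    {"a.com": 1, "b.com": 2},
                    {"b.com": 3, "c.com": 1})
# counts == {"a.com": 1, "b.com": 5, "c.com": 1}
```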

For an in-depth treatment of the reduce function -- also known as fold -- see Graham Hutton's tutorial paper on fold. It's Haskell-based, but it should be readable even if you don't know the language, as long as you're willing to look up a Haskell thing or two as you go... things like ++ being list concatenation; there's no deep Haskell magic involved.
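If Haskell isn't your thing, the same ideas can be sketched with Python's functools.reduce (a hand-rolled illustration of fold's generality, following the spirit of Hutton's tutorial rather than any code from it): fold collapses a list with a binary function and an initial value, and it is general enough that map itself can be expressed as a fold.

```python
from functools import reduce

# fold/reduce collapses a list with a binary function and an initial value.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
# total == 10

# Fold's universality: map can itself be written as a fold.
def map_via_fold(f, xs):
    return reduce(lambda acc, x: acc + [f(x)], xs, [])

doubled = map_via_fold(lambda x: x * 2, [1, 2, 3])
# doubled == [2, 4, 6]
```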

