hadoop - STREAM keyword in Pig script that runs on Amazon MapReduce
I have a Pig script that uses STREAM to invoke another Python program. It works in my own environment, but it always fails when I run the script on Amazon Elastic MapReduce.
The log says:
org.apache.pig.backend.executionengine.ExecException: ERROR 2090: Received Error while processing the reduce plan: '' failed with exit status: 127
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.runPipeline(PigMapReduce.java:347)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.processOnePackageOutput(PigMapReduce.java:288)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.reduce(PigMapReduce.java:260)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.reduce(PigMapReduce.java:142)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:321)
    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:216)
Any ideas?
Did you make sure that the script is shipped with the Elastic MapReduce job? Exit status 127 generally means the shell could not find the command, which suggests the Python program is not present on the task nodes.
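A minimal sketch of how that might look in the Pig script, using a DEFINE with a SHIP clause so the streaming command is copied to every task node; the script name, aliases, and S3 paths here are placeholders, not taken from the question:

```pig
-- 'myscript.py' is a hypothetical name for the Python program;
-- SHIP(...) bundles it with the job so each node can execute it.
DEFINE mycmd `myscript.py` SHIP('myscript.py');

data = LOAD 's3://mybucket/input' AS (line:chararray);
out  = STREAM data THROUGH mycmd;
STORE out INTO 's3://mybucket/output';
```

Without the SHIP clause, the streaming command is only resolved on the machine where it happens to exist, which matches the "works locally, fails on the cluster" symptom. The script may also need a shebang line and execute permission to be runnable on the nodes.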