If I start pyspark and then run this command:
import my_script; spark = my_script.Sparker(sc); spark.collapse('./data/')
everything works fine. However, if I try to do the same thing from the command line with spark-submit, I get an error:
Command: /usr/local/spark/bin/spark-submit my_script.py collapse ./data/
  File "/usr/local/spark/python/pyspark/rdd.py", line 352, in func
    return f(iterator)
  File "/usr/local/spark/python/pyspark/rdd.py", line 1576, in combineLocally
    merger.mergeValues(iterator)
  File "/usr/local/spark/python/pyspark/shuffle.py", line 245, in mergeValues
    for k, v in iterator:
  File "/.../my_script.py", line 173, in _json_args_to_arr
    js = cls._json(line)
RuntimeError: uninitialized staticmethod object
my_script.py:
...
if __name__ == "__main__":
    args = sys.argv[1:]
    if args[0] == 'collapse':
        directory = args[1]
        from pyspark import SparkContext
        sc = SparkContext(appName="Collapse")
        spark = Sparker(sc)
        spark.collapse(directory)
        sc.stop()
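For context, the `__main__` guard is the only part that spark-submit exercises and a pyspark import does not: spark-submit runs the file as a top-level script, so the guard fires, while `import my_script` skips it. Below is a minimal sketch of that dispatch, with the Spark call replaced by a hypothetical stub so it runs without a Spark installation:

```python
import sys

def collapse(directory):
    # Hypothetical stand-in for Sparker(sc).collapse(directory); the real
    # script creates a SparkContext and does the actual work here.
    return "collapsed:" + directory

def main(argv):
    # Mirrors the script's argument handling: argv[0] is the subcommand,
    # argv[1] the data directory.
    if not argv or argv[0] != "collapse":
        raise SystemExit("usage: my_script.py collapse <directory>")
    return collapse(argv[1])

if __name__ == "__main__":
    # Entered by `spark-submit my_script.py collapse ./data/`,
    # but NOT by `import my_script` inside the pyspark shell.
    print(main(sys.argv[1:]))
```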
Why does this happen? What is the difference between running pyspark and running spark-submit that causes this divergence? And how can I make this work under spark-submit?
Edit: I tried running this from the bash shell with pyspark my_script.py collapse ./data/ and got the same error. The only time everything works is when I am in the python shell and import the script.
Solution:
> If you have built a Spark application, you need to run it with spark-submit
> the code can be written in Python or Scala
> the mode can be local or cluster
> If you just want to test or run a few individual commands, you can use the shells that Spark provides
> pyspark (for Spark in Python)
> spark-shell (for Spark in Scala)
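Concretely, the two workflows look like this (paths are the ones from the question; the --master flag is illustrative):

```shell
# Run a complete application with spark-submit (Python or Scala, local or cluster mode):
/usr/local/spark/bin/spark-submit --master local[*] my_script.py collapse ./data/

# Or explore interactively in a shell, where `sc` is already defined:
pyspark        # Spark shell for Python
spark-shell    # Spark shell for Scala
```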