python – Explode in PySpark

I want to go from a DataFrame that contains lists of words to a DataFrame with each word in its own row.

How do I explode on a column in a DataFrame?

Below are some of my attempts; you can uncomment each code line and get the error listed in the comment that follows. I'm using PySpark with Spark 1.6.1 on Python 2.7.

from pyspark.sql.functions import split, explode
DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat', )], ['word'])
print 'Dataset:'
DF.show()
print '\n\n Trying to do explode: \n'
DFsplit_explode = (
 DF
 .select(split(DF['word'], ' '))
#  .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
#   .map(explode)  # AttributeError: 'PipelinedRDD' object has no attribute 'show'
#   .explode()  # AttributeError: 'DataFrame' object has no attribute 'explode'
).show()

# Trying without split
print '\n\n Only explode: \n'

DFsplit_explode = (
 DF 
 .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
).show()

Any advice would be appreciated.

Solution:

explode and split are SQL functions. Both operate on a SQL Column. split takes a Java regular expression as its second argument. If you want to separate data on arbitrary whitespace, you need something like this:

from pyspark.sql.functions import col, explode, split

df = sqlContext.createDataFrame(
    [('cat \n\n elephant rat \n rat cat', )], ['word']
)

df.select(explode(split(col("word"), r"\s+")).alias("word")).show()

## +--------+
## |    word|
## +--------+
## |     cat|
## |elephant|
## |     rat|
## |     rat|
## |     cat|
## +--------+
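The `\s+` pattern is what lets a single split handle both runs of spaces and the embedded newlines in the sample string. As a plain-Python sketch of the same regex behavior (Python's `re` engine rather than the Java one Spark uses, but `\s+` matches whitespace the same way here):

```python
import re

text = 'cat \n\n elephant rat \n rat cat'

# Splitting on a single space leaves bare '\n' tokens behind.
naive = text.split(' ')
print(naive)   # ['cat', '\n\n', 'elephant', 'rat', '\n', 'rat', 'cat']

# Splitting on one-or-more whitespace characters collapses spaces
# and newlines together, yielding only the words.
words = re.split(r'\s+', text)
print(words)   # ['cat', 'elephant', 'rat', 'rat', 'cat']
```

This is exactly why `split(col("word"), r"\s+")` produces a clean array for `explode` to unnest, one word per row.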