mysql – Error using INSERT INTO … ON DUPLICATE KEY while looping over an array with a for loop

I am updating a MySQL database using the PySpark framework, running on the AWS Glue service.

I have a DataFrame as follows:

df2 = sqlContext.createDataFrame(
    [("xxx1", "81A01", "TERR NAME 55", "NY"),
     ("xxx2", "81A01", "TERR NAME 55", "NY"),
     ("x103", "81A01", "TERR NAME 01", "NJ")],
    ["zip_code", "territory_code", "territory_name", "state"])

# Print out information about this data
df2.show()
+--------+--------------+--------------+-----+
|zip_code|territory_code|territory_name|state|
+--------+--------------+--------------+-----+
|    xxx1|         81A01|  TERR NAME 55|   NY|
|    xxx2|         81A01|  TERR NAME 55|   NY|
|    x103|         81A01|  TERR NAME 01|   NJ|
+--------+--------------+--------------+-----+

ZIP_CODE is the primary key, and I need to make sure there are no duplicate-key / primary-key violations, so I am using INSERT INTO … ON DUPLICATE KEY UPDATE.

Since I have multiple rows to insert/update, I collect the records into a Python array, loop over them, and perform the INSERT against the database. The code is as follows:

sarry = df2.collect()
for r in sarry:
    db = MySQLdb.connect("xxxx.rds.amazonaws.com", "username", "password",
                         "databasename")
    cursor = db.cursor()
    insertQry = "INSERT INTO ZIP_TERR(zip_code, territory_code, territory_name, state) VALUES(r.zip_code, r.territory_code, r.territory_name, r.state) ON DUPLICATE KEY UPDATE territory_name = VALUES(territory_name), state = VALUES(state);"
    n = cursor.execute(insertQry)
    db.commit()
    db.close()

When I run the insert query above, I get the following error message and cannot find any clue as to the cause. Please help.

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2291407229037300959.py", line 367, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2291407229037300959.py", line 360, in <module>
    exec(code, _zcUserQueryNameSpace)
  File "<stdin>", line 8, in <module>
  File "/usr/local/lib/python2.7/site-packages/pymysql/cursors.py", line 170, in execute
    result = self._query(query)
  File "/usr/local/lib/python2.7/site-packages/pymysql/cursors.py", line 328, in _query
    conn.query(q)
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 893, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 1103, in _read_query_result
    result.read()
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 1396, in read
    first_packet = self.connection._read_packet()
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 1059, in _read_packet
    packet.check_error()
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 384, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/local/lib/python2.7/site-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
InternalError: (1054, u"Unknown column 'r.zip_code' in 'field list'")

If I simply try to print the values for a row, they print fine, as follows:

print('zip_code_new: ', r.zip_code, r.territory_code, r.territory_name, r.state)

zip_code_new:  xxx1 81A01 TERR NAME 55 NY

Thanks. Since I am on AWS Glue / PySpark, I need to use native Python libraries.

Solution:

The following insert query works inside the for loop.

insertQry = "INSERT INTO ZIP_TERR(zip_code, territory_code, territory_name, state) VALUES(%s, %s, %s, %s) ON DUPLICATE KEY UPDATE territory_name = %s, state = %s;"

n=cursor.execute(insertQry, (r.zip_code, r.territory_code, r.territory_name, r.state, r.territory_name, r.state))
print (" CURSOR status :", n)

Resulting output:

CURSOR status : 2
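For reference, the error arose because the text r.zip_code sat inside the SQL string literal, so MySQL received it verbatim and parsed it as a column name; the %s placeholders let the driver substitute (and escape) the Python values instead. A minimal illustration of the difference, no database needed (the "rendered" line is a simplification of what the driver actually does):

```python
# The broken query embeds the *name* r.zip_code, not its value:
broken = "INSERT INTO ZIP_TERR(zip_code) VALUES(r.zip_code)"
assert "r.zip_code" in broken      # MySQL sees this literal text -> error 1054

# With a %s placeholder, the driver substitutes an escaped value at execute time:
template = "INSERT INTO ZIP_TERR(zip_code) VALUES(%s)"
rendered = template % ("'xxx1'",)  # simplified stand-in for driver-side escaping
assert rendered == "INSERT INTO ZIP_TERR(zip_code) VALUES('xxx1')"
```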

Thanks. Hope this helps others.
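As a further refinement, a sketch of a batched variant (the table name matches the post, but the connection values are placeholders): open the connection once instead of per row, send all rows in a single executemany() call, and use VALUES() in the UPDATE clause so each value is passed only once.

```python
# Sketch: batch the upsert with one connection and one executemany() call.
# The connection parameters below are placeholders, not real credentials.

UPSERT = (
    "INSERT INTO ZIP_TERR (zip_code, territory_code, territory_name, state) "
    "VALUES (%s, %s, %s, %s) "
    "ON DUPLICATE KEY UPDATE territory_name = VALUES(territory_name), "
    "state = VALUES(state)"
)

def rows_to_params(rows):
    """Turn collected Row objects (or plain tuples) into parameter tuples."""
    return [(r[0], r[1], r[2], r[3]) for r in rows]

def upsert_all(rows):
    """Open one connection and send every row in a single executemany()."""
    import pymysql  # imported lazily so the helpers above have no DB dependency
    db = pymysql.connect(host="xxxx.rds.amazonaws.com", user="username",
                         password="password", database="databasename")
    try:
        with db.cursor() as cursor:
            n = cursor.executemany(UPSERT, rows_to_params(rows))
        db.commit()
        return n
    finally:
        db.close()
```

pymysql's executemany() can rewrite a query of this shape into one multi-row INSERT, cutting per-row round-trips; for very large frames, writing through Spark's JDBC connector is usually the better fit.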
