```
pyspark >>> hiveContext.sql("select from_unixtime(cast(1509672916 as bigint),'yyyy-MM-dd HH:mm:ss.SSS')").show(truncate=False)
+-----------------------+
|_c0                    |
+-----------------------+
|2017-11-02 21:35:16.000|
+-----------------------+
pyspark >>> hiveContext.sql("select from_unixtime(cast(<unix-timestamp-column-name> as ...
```
The output file is CSV because it will later be loaded into a Hive database. The problem occurs when I want to convert the "datetime" column from a Unix timestamp to a regular date-time value. Based on this solution: https://community.hortonworks.com...
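For reference, the conversion that Hive's `from_unixtime(cast(... as bigint), 'yyyy-MM-dd HH:mm:ss.SSS')` performs can be mirrored in plain Python. This is only a sketch: Hive formats the epoch in the session time zone, while the code below fixes the zone to UTC, so the wall-clock result can differ from the Hive output above by the zone offset.

```python
from datetime import datetime, timezone

def from_unixtime_millis(ts: int) -> str:
    """Mirror of Hive's from_unixtime(..., 'yyyy-MM-dd HH:mm:ss.SSS').

    Assumes UTC; Hive uses the session time zone instead, so the
    rendered hour may differ from Hive's output for the same epoch.
    """
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    # from_unixtime takes whole seconds, so the millisecond part is always .000
    return dt.strftime("%Y-%m-%d %H:%M:%S") + ".000"

print(from_unixtime_millis(1509672916))
```

Applied to the example timestamp 1509672916, this prints the UTC rendering `2017-11-03 01:35:16.000`, whereas the Hive session above (apparently running in UTC-4) showed `2017-11-02 21:35:16.000`.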