Fw: Flink SQL writing to Hive issue


Fw: Flink SQL writing to Hive issue

潘永克







-------- Forwarded message --------
From: "潘永克" <[hidden email]>
Date: 2021-02-11 11:12:39
To: [hidden email]
Subject: Flink SQL writing to Hive issue

I have a question about Flink. Flink SQL can write data into a Hive table, but the data in the Hive table only ever consists of ".part-...inprogress..." files. This is Flink 1.12.0 built against CDH 6.2.0; the Hive version is 2.1.1 and Hadoop is 3.0.0. Screenshots of the problem are below:

Creating the Hive table:
SET table.sql-dialect=hive;
CREATE TABLE hive_table (
  user_id STRING,
  order_amount DOUBLE
) PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='1 min',
  'sink.partition-commit.policy.kind'='metastore,success-file'
);
Inserting the data:
INSERT INTO TABLE hive_table 
SELECT user_id, order_amount, DATE_FORMAT(log_ts, 'yyyy-MM-dd'), DATE_FORMAT(log_ts, 'HH')
FROM kafka_table;

The files are never finalized; they always remain as ".part-...inprogress..." files.


Re: Fw: Flink SQL writing to Hive issue

macdoor
Do you have checkpointing enabled?
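For context on this hint: Flink's streaming file and Hive sinks only rename in-progress part files and run the partition-commit policies when a checkpoint completes, so with checkpointing disabled the output stays as ".part-...inprogress..." files indefinitely. A minimal sketch of enabling it, assuming the option can be set from the SQL Client session before submitting the INSERT job (the 60-second interval is illustrative, not a recommendation):

```sql
-- Enable periodic checkpoints so the sink can finalize part files
-- and trigger partition commits (metastore + success-file).
SET execution.checkpointing.interval=60s;
```

If the SQL Client build does not accept this option via SET, the same setting can go into flink-conf.yaml as `execution.checkpointing.interval: 60s` before starting the cluster.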



--
Sent from: http://apache-flink.147419.n8.nabble.com/