Flink 1.11: how to take the latest row per key (event time descending) after a Tumble Window for aggregation

Flink 1.11: how to take the latest row per key (event time descending) after a Tumble Window for aggregation

Hush
Hi all,


I want to window Kafka data into 5-minute windows. Since the stream is DTS-synced message data, it contains updates and deletes, so among rows with the same user_id I need the first row in event-time descending order (i.e. the latest row), and then count how many users end up in each final status (a state field).


marketingMapDS: DataStream[(String, String, Long)]

    tEnv.createTemporaryView("test", marketingMapDS, $"status", $"upd_user_id", $"upd_time".rowtime)
    val resultSQL =
      """
        |SELECT t.status,
        |       COUNT(t.upd_user_id) AS num
        |FROM (
        |    SELECT *,
        |           ROW_NUMBER() OVER (PARTITION BY upd_user_id ORDER BY upd_time DESC) AS row_num
        |    FROM test
        |) t
        |WHERE t.row_num = 1
        |GROUP BY t.status, TUMBLE(t.upd_time, INTERVAL '5' MINUTE)
        |""".stripMargin
    val table2 = tEnv.sqlQuery(resultSQL)
    val resultDS = tEnv.toRetractStream[Row](table2)


This fails with the following error:
Exception in thread "main" org.apache.flink.table.api.TableException: GroupWindowAggregate doesn't support consuming update and delete changes which is produced by node Rank(strategy=[UndefinedStrategy], rankType=[ROW_NUMBER], rankRange=[rankStart=1, rankEnd=1], partitionBy=[upd_user_id], orderBy=[upd_time DESC], select=[status, upd_user_id, upd_time])


So how else can this requirement be implemented?
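One possible rewrite (an untested sketch; it assumes the Blink planner's built-in LAST_VALUE aggregate and TUMBLE_ROWTIME, and that each user's DTS records arrive in event-time order, which per-key Kafka partitioning usually guarantees) is to deduplicate with a nested group window instead of ROW_NUMBER, so every operator stays insert-only:

    val workaroundSQL =
      """
        |SELECT t.status,
        |       COUNT(t.upd_user_id) AS num
        |FROM (
        |    -- inner 5-minute window: one row per user, keeping the last status seen
        |    SELECT upd_user_id,
        |           LAST_VALUE(status) AS status,
        |           TUMBLE_ROWTIME(upd_time, INTERVAL '5' MINUTE) AS rowtime
        |    FROM test
        |    GROUP BY upd_user_id, TUMBLE(upd_time, INTERVAL '5' MINUTE)
        |) t
        |GROUP BY t.status, TUMBLE(t.rowtime, INTERVAL '5' MINUTE)
        |""".stripMargin
    val workaroundTable = tEnv.sqlQuery(workaroundSQL)

TUMBLE_ROWTIME exposes the inner window's end as a new rowtime attribute, so the outer tumble groups exactly the same 5-minute periods.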


Can the Table API express something like ROW_NUMBER() OVER?
    val table = tEnv.fromDataStream(marketingMapDS, $"status", $"upd_user_id", $"upd_time".rowtime)
      .window(Tumble over 5.minutes on $"upd_time" as "w")
      .groupBy($"w")
    ???
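I'm not sure the 1.11 Table API DSL exposes ROW_NUMBER directly. Alternatively, the same logic on the plain DataStream API might look like this (an untested sketch; it assumes (status, upd_user_id, upd_time) tuples with timestamps/watermarks already assigned upstream, and it ignores DTS delete records, which would need an op flag to filter on):

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
    import org.apache.flink.streaming.api.windowing.time.Time

    // marketingMapDS: DataStream[(String, String, Long)] = (status, upd_user_id, upd_time)
    val latestPerUser = marketingMapDS
      .keyBy(_._2)                                          // key by upd_user_id
      .window(TumblingEventTimeWindows.of(Time.minutes(5)))
      .reduce((a, b) => if (a._3 >= b._3) a else b)         // keep the row with the latest upd_time

    val numPerStatus = latestPerUser
      .map(r => (r._1, 1L))                                 // (status, 1)
      .keyBy(_._1)                                          // key by status
      .window(TumblingEventTimeWindows.of(Time.minutes(5)))
      .reduce((a, b) => (a._1, a._2 + b._2))                // user count per status

Elements emitted by the first window carry the window-end timestamp, so the second window aligns on the same 5-minute periods.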


I'm a Flink newbie... any pointers from the experts would be appreciated!


Re: Flink 1.11: how to take the latest row per key (event time descending) after a Tumble Window for aggregation

HunterXHunter
GroupWindowAggregate does not support a source that produces update or delete changes. The inner ROW_NUMBER() ... WHERE row_num = 1 deduplication emits retractions, which is why the group window cannot consume its output.
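(For the same reason, a plan that stays insert-only end to end, like the nested-window rewrite sketched above, can be converted with toAppendStream instead of toRetractStream; using the hypothetical names from that sketch:)

    val resultDS = tEnv.toAppendStream[Row](workaroundTable)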


