Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Yang Peng
Hi, a question for the list. We have a job whose state backend is RocksDB with incremental checkpoints. Flink reads from Kafka, processes the data, and writes it back to Kafka, with EOS (exactly-once semantics) enabled on the producer. Recently the job showed backpressure and logs were piling up on the source side, so we planned to reallocate and add resources (parallelism unchanged, code unchanged) and restore the job from a checkpoint. After the job was cancelled, it would not come back up from the checkpoint; we tried twice in a row and failed both times. The client log retention is too short and we did not get to the client logs in time, so we have no client logs.
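
For context, a minimal sketch of the setup being described, assuming Flink's universal Kafka connector (FlinkKafkaProducer); the checkpoint path, topic and broker addresses are placeholders:

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EosJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint every 60s
        // 'true' turns on incremental checkpoints for RocksDB.
        env.setStateBackend(new RocksDBStateBackend("hdfs://<namenode>/flink/checkpoints", true));

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "<brokers>");
        // For EXACTLY_ONCE this must not exceed the broker's transaction.max.timeout.ms.
        props.setProperty("transaction.timeout.ms", "900000");

        FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
                "<output-topic>",
                (KafkaSerializationSchema<String>) (value, ts) ->
                        new ProducerRecord<>("<output-topic>", value.getBytes(StandardCharsets.UTF_8)),
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE); // the EOS producer described above

        // The real job: env.addSource(kafkaConsumer)... .addSink(producer); env.execute(...);
    }
}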
Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

宇
Could it be that no uids were assigned to the operators and the DAG changed, so the state could not be restored?
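
If that is the cause: without explicit uids Flink derives operator IDs from the graph topology, so any change to the DAG changes the IDs and the state can no longer be matched. A minimal, runnable sketch of pinning uids (the source and sink here are trivial stand-ins for the job's actual Kafka source and EOS sink):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b").uid("kafka-source")   // stand-in source; stable ID for its state (e.g. offsets)
           .map(String::toUpperCase).uid("parse-map")    // stable ID for this operator's state
           .print().uid("kafka-sink");                   // stand-in sink; the real EOS sink needs a uid too
        env.execute("uid-sketch");
    }
}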



Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Yang Peng
The operators in this job have no uids assigned. We never assigned uids before either, yet the job recovered from checkpoints successfully several times; only this time it failed.

Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

JasonLee
hi

Without logs it is hard to pin down the cause of the failure, but if uids are not set, a restart can indeed fail. It is best to set a uid on every operator.



Best Wishes
JasonLee
Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Yang Peng
OK, thanks.

Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Congxian Qiu
Hi
   Do you still have the JM and TM logs of the failed job? If so, those two logs should show why the recovery did not succeed. Since you say the code was not changed at all and the recovery still failed, this is rather strange.
Best,
Congxian


Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Yang Peng
We submit jobs in detached mode from our in-house platform, so after submission we cannot see any further client logs. This problem occurred twice that day. Could using incremental checkpoints cause this kind of recovery failure?

Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Congxian Qiu
Hi
   If the job runs on YARN with log aggregation enabled [1], you should still be able to retrieve the JM/TM logs (for example with: yarn logs -applicationId <appId>).
   As far as I know there is currently no known issue that makes an incremental checkpoint unrecoverable. If you can confirm that the problem you hit does make an incremental checkpoint fail to restore, consider opening an Issue.

[1]
https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/yarn_setup.html#log-files
Best,
Congxian


Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Yang Peng
Found them. Here is the log:

2020-08-13 19:45:21,932 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error occurred in the cluster entrypoint.
org.apache.flink.runtime.dispatcher.DispatcherException: Failed to take leadership with session id 98a2a688-266b-4929-9442-1f0b559ade43.
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$null$30(Dispatcher.java:915)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
	at org.apache.flink.runtime.concurrent.FutureUtils$WaitingConjunctFuture.handleCompletedFuture(FutureUtils.java:691)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
	at akka.actor.ActorCell.invoke(ActorCell.scala:561)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
	at akka.dispatch.Mailbox.run(Mailbox.scala:225)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
	at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
	... 4 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
	at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
	at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
	at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
	... 7 more
Caused by: java.io.FileNotFoundException: Cannot find meta data file '_metadata' in directory 'hdfs:xxxxxxxxxx/flink/checkpoints/7226f43179649162e6bae2573a952e60/chk-167'. Please try to load the checkpoint/savepoint directly from the metadata file instead of the directory.
	at org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpointPointer(AbstractFsCheckpointStorage.java:258)
	at org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpoint(AbstractFsCheckpointStorage.java:110)
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1129)
	at org.apache.flink.runtime.scheduler.LegacyScheduler.tryRestoreExecutionGraphFromSavepoint(LegacyScheduler.java:237)
	at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:196)
	at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
	at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
	at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:275)
	at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:265)
	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
	at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
	at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
	... 10 more
2020-08-13 19:45:21,941 INFO  org.apache.flink.runtime.blob.BlobServer                      - Stopped BLOB server at 0.0.0.0:39267

The log above says the checkpoint file cannot be found on HDFS, but when I look in that HDFS directory the checkpoint directory does exist and contains sub-files.


Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Congxian Qiu
Hi
   1. The image did not come through.
   2. Can you find the file hdfs:xxxxxxxxxx/flink/checkpoints/7226f43179649162e6bae2573a952e60/chk-167/_metadata on HDFS?
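
For example, a quick check with the Hadoop FileSystem API (the path is a placeholder for the masked one above; hdfs dfs -ls on the same path from a shell works too):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckMetadata {
    public static void main(String[] args) throws Exception {
        // Placeholder path: substitute the real namenode/prefix for the masked part.
        Path meta = new Path("hdfs://<namenode>/flink/checkpoints/7226f43179649162e6bae2573a952e60/chk-167/_metadata");
        try (FileSystem fs = FileSystem.get(meta.toUri(), new Configuration())) {
            System.out.println("_metadata exists: " + fs.exists(meta));
        }
    }
}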
Best,
Congxian


Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Yang Peng
Thanks, Congxian. I checked, and that file is not there. Compared with the chk directories of the same job's currently running, healthy instance, this chk-167 directory contains far fewer files. At the time we watched the checkpoint complete, then cancelled the job, looked up this checkpoint path on HDFS, and restarted the job from it.

Re: Flink job writing to Kafka with EOS enabled, RocksDB state backend: backpressure and source-side log backlog, recovery from checkpoint fails

Congxian Qiu
Hi
   If that file does not exist, this checkpoint most likely never completed successfully, and restoring from it will fail. At the moment the community only supports stop with savepoint; if you want to restore from a checkpoint, you can only restore from a previously completed one, and if that checkpoint was taken a while ago, quite a lot of data will be replayed. There is an issue, FLINK-12619, that attempts stop with checkpoint (which would reduce the amount of replayed data); if you need this, you can comment on that issue.
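
For reference, restoring from a retained checkpoint means pointing the client at that checkpoint's _metadata file, e.g. flink run -s hdfs://<namenode>/flink/checkpoints/<job-id>/chk-166/_metadata ... (the -s/--fromSavepoint option also accepts checkpoint metadata paths; chk-166 and the path placeholders here only illustrate "the previous completed checkpoint").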
Best,
Congxian

