Flink 1.11: job fails with "Cannot have more than one execute() or executeAsync() call in a single environment"


Asahi Lee
Hello!
     My Flink application runs fine in IntelliJ IDEA, but after I submit it to the cluster it fails with: Cannot have more than one execute() or executeAsync() call in a single environment. Looking at the source code, the check depends on whether the JobManager is highly available. I don't understand what this has to do with high availability. Could someone explain? Thanks!
    The error is thrown in org.apache.flink.client.program.StreamContextEnvironment.validateAllowedExecution(), line 139.

Re: Flink 1.11: job fails with "Cannot have more than one execute() or executeAsync() call in a single environment"

Yang Wang
This error most likely means you are running with the application mode that was newly added in 1.11.

Currently, application mode cannot support running multiple jobs in one Flink cluster when HA is enabled, hence the error above.
The reason is that users may add if...else logic in the main() method to express dependencies between jobs, and in HA mode recovery can go wrong in that case,
so it is disabled for now. Without HA, all jobs are simply re-run on recovery, so the restriction does not apply.
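For illustration, a minimal sketch (class name, job names, and the branching condition are hypothetical, not from the original post) of the kind of main() this check guards against. The second execute() call in the same environment is what StreamContextEnvironment.validateAllowedExecution() rejects in application mode with HA enabled:

```java
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MultiExecuteJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3).print();
        // First job submission: allowed in any deployment mode.
        JobExecutionResult first = env.execute("job-1");

        // Dependency between jobs expressed in main(): run the second job
        // conditionally on the first. On an HA failover, main() is executed
        // again from the top, but "job-1" may already have completed, so
        // this branch could take a different path than in the original run.
        if (first.getNetRuntime() > 0) {
            env.fromElements(4, 5, 6).print();
            // Second execute() on the same environment: in application mode
            // with HA enabled, this call fails with the error above.
            env.execute("job-2");
        }
    }
}
```

This is why the check is tied to high availability: without HA the whole main() re-runs from scratch, so multiple execute() calls are deterministic and remain allowed.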


Best,
Yang

Asahi Lee <[hidden email]> wrote on Wed, Aug 19, 2020, 1:37 AM:


Re: Flink 1.11: job fails with "Cannot have more than one execute() or executeAsync() call in a single environment"

Jason Liao
In reply to this post by Asahi Lee
In my case, this error was caused by mixing StreamTableEnvironment.executeSql and StreamExecutionEnvironment.execute.
In the code, the executeSql calls and the DataStream pipeline produce independent, unrelated sinks.

This is how I resolved the error:
1. Replace each StreamTableEnvironment.executeSql call that contains an INSERT INTO statement with:
    final StreamStatementSet statementSet = tEnv.createStatementSet();
    statementSet.addInsertSql("insert into xxx");
    statementSet.attachAsDataStream();

2. Then submit everything through a single StreamExecutionEnvironment.execute:
    env.execute("DataStreamJob");
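Put together, a minimal sketch of the combined job under this approach (the table names and SQL are placeholders, not from the original post; StreamStatementSet.attachAsDataStream() requires Flink 1.15+):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamStatementSet;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class DataStreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Collect every INSERT INTO into one statement set instead of
        // submitting each via tEnv.executeSql(...), which would start a
        // separate job per statement.
        StreamStatementSet statementSet = tEnv.createStatementSet();
        statementSet.addInsertSql("INSERT INTO sink_a SELECT * FROM source_a"); // placeholder SQL
        // Attach the SQL pipelines to the StreamExecutionEnvironment's job graph.
        statementSet.attachAsDataStream();

        // Independent DataStream sink in the same job graph.
        DataStream<Integer> stream = env.fromElements(1, 2, 3);
        stream.print();

        // A single submission now covers both the SQL and DataStream sinks,
        // so there is only one execute() call in the environment.
        env.execute("DataStreamJob");
    }
}
```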

Older versions with the Blink planner probably don't have this problem, because according to the documentation:
The Blink planner will optimize multiple-sinks into one DAG (supported only on TableEnvironment, not on StreamTableEnvironment). The old planner will always optimize each sink into a new DAG, where all DAGs are independent of each other.

https://nightlies.apache.org/flink/flink-docs-release-1.10/dev/table/common.html