hello
When I use the Flink SQL connectors, the program runs fine if I put flink-sql-connector-elasticsearch6_2.11-1.11.1.jar under lib, but when I instead declare it in my pom I get the error below. The same problem occurs with the hbase and jdbc connectors. What could be causing this?

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create a sink for writing table 'default_catalog.default_database.cloud_behavior_sink'.

Table options are:

'connector'='elasticsearch-6'
'document-type'='cdbp'
'hosts'='http://10.2.11.116:9200;http://10.2.11.117:9200;http://10.2.11.118:9200;http://10.2.11.119:9200'
'index'='flink_sql_test'
'sink.bulk-flush.max-actions'='100'
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
Caused by: org.apache.flink.table.api.ValidationException: Unable to create a sink for writing table 'default_catalog.default_database.cloud_behavior_sink'.

Table options are:

'connector'='elasticsearch-6'
'document-type'='cdbp'
'hosts'='http://10.2.11.116:9200;http://10.2.11.117:9200;http://10.2.11.118:9200;http://10.2.11.119:9200'
'index'='flink_sql_test'
'sink.bulk-flush.max-actions'='100'
	at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:164)
	at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:344)
	at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
	at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
	at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1264)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:700)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:787)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:690)
	at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.callDML(SqlSubmit.java:97)
	at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.callCommand(SqlSubmit.java:72)
	at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.run(SqlSubmit.java:53)
	at com.intsig.flink.streaming.streaming_project.common.sql_submit.SqlSubmit.main(SqlSubmit.java:24)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
	... 11 more
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option ''connector'='elasticsearch-6''.
	at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
	at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:157)
	... 37 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'elasticsearch-6' that implements 'org.apache.flink.table.factories.DynamicTableSinkFactory' in the classpath.

Available factory identifiers are:

blackhole
hbase-1.4
jdbc
kafka
	at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:240)
	at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:326)
	... 38 more
Hi, 奔跑的小飞袁
Flink's classloading is child-first, so try to avoid declaring Flink-related dependencies in your own pom; that avoids conflicts with the Flink cluster environment. I recommend putting the connector jar under lib and letting Flink load it.

Best,
Robin
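To illustrate the child-first point, a minimal sketch of the usual pom arrangement: Flink's own modules are marked `provided` so that only the cluster's copies are on the classpath at runtime, while connector jars go into lib. The coordinates below simply mirror the Scala 2.11 / Flink 1.11.1 build named in the first mail; adjust to the modules the job actually uses.

    <!-- Sketch: keep Flink's own jars out of the fat jar. The cluster
         ships them, and under child-first loading a bundled copy would
         compete with the cluster's copy. -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner-blink_2.11</artifactId>
        <version>1.11.1</version>
        <scope>provided</scope>
    </dependency>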
Right now there is no Elasticsearch connector under my lib directory; I only reference it in the pom. Would that alone cause a conflict? And if it is a conflict, where might it be coming from?
Hi, 奔跑的小飞袁
I haven't tried integrating Flink with ES myself, so I can't dig very deep into the details, but here are two leads:

1. Check whether the ES dependency in your pom is declared with a scope that keeps it from being packaged (see the sketch below);
2. If the dependency is packaged correctly and it still fails, while placing the same jar under lib works, it is almost certainly a dependency conflict. Which class causes it I can't tell at this point; better ideas welcome.

Best,
Robin
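To make point 1 concrete, a sketch of how the connector dependency would have to look for its factory to be packaged into the job jar (the coordinates come from the jar name in the first mail):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-sql-connector-elasticsearch6_2.11</artifactId>
        <version>1.11.1</version>
        <!-- leave the default (compile) scope: 'provided' or 'test'
             would keep the connector, and with it its table factory,
             out of the packaged job jar -->
    </dependency>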
In reply to this post by 奔跑的小飞袁
Hi,
Are you running it directly from IDEA, or packaging it into a jar and running that? Given the error, check whether the relevant classes actually made it into the jar. You can use `jar -tf xxx.jar | grep xxx.class` to check whether a given class is present in the jar. If you suspect a conflict, you can also add the JVM flag `-verbose:class` to see where each class is loaded from.

Best,
Hailong Wang
In reply to this post by LiangbinZhang
For now I can only make it work by putting the required jars under the lib directory. I have confirmed that the class in question is packaged into my jar, and that there is only one copy of it, but so far I haven't found where the conflict is.
Hi,
Try unpacking the jar and check whether META-INF/services contains the file `org.apache.flink.table.factories.TableFactory`, and whether that file lists the entry `org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6UpsertTableSinkFactory`.

Best,
Hailong Wang
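Two notes on this check. First, the `'connector'='elasticsearch-6'` option in the error is resolved via the newer `DynamicTableSinkFactory` SPI, so the file to inspect for it is `META-INF/services/org.apache.flink.table.factories.Factory` rather than the legacy `TableFactory` file. Second, a common cause of exactly this symptom (factory class present in the jar, yet not discovered) is the maven-shade-plugin keeping only one connector's services file when several connectors are bundled. A hedged sketch of the usual remedy, assuming the job jar is built with shade:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <configuration>
            <transformers>
                <!-- merge the META-INF/services entries of all bundled
                     connectors instead of letting one file win -->
                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
            </transformers>
        </configuration>
    </plugin>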