flink1.10 upgraded to flink1.11: submission to YARN fails

flink1.10 upgraded to flink1.11: submission to YARN fails

Zhou Zach
hi all,
Jobs that submitted fine with 1.10 in per-job mode now fail to submit with 1.11 in application mode, and the logs don't make the cause clear.
YARN log:
Log Type: jobmanager.err
Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
Log Length: 785

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/nm/usercache/hdfs/appcache/application_1594271580406_0010/filecache/11/data-flow-1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.flink.runtime.entrypoint.ClusterEntrypoint).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Log Type: jobmanager.out
Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
Log Length: 0

Log Type: prelaunch.err
Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
Log Length: 0

Log Type: prelaunch.out
Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
Log Length: 70

Setting up env variables
Setting up job resources
Launching container


Local log:
2020-07-09 21:02:41,015 INFO  org.apache.flink.client.cli.CliFrontend                      [] - --------------------------------------------------------------------------------
2020-07-09 21:02:41,020 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: jobmanager.rpc.address, localhost
2020-07-09 21:02:41,020 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: jobmanager.rpc.port, 6123
2020-07-09 21:02:41,021 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: jobmanager.memory.process.size, 1600m
2020-07-09 21:02:41,021 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: taskmanager.memory.process.size, 1728m
2020-07-09 21:02:41,021 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2020-07-09 21:02:41,021 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: parallelism.default, 1
2020-07-09 21:02:41,021 INFO  org.apache.flink.configuration.GlobalConfiguration           [] - Loading configuration property: jobmanager.execution.failover-strategy, region
2020-07-09 21:02:41,164 INFO  org.apache.flink.runtime.security.modules.HadoopModule       [] - Hadoop user set to hdfs (auth:SIMPLE)
2020-07-09 21:02:41,172 INFO  org.apache.flink.runtime.security.modules.JaasModule         [] - Jaas file will be created as /tmp/jaas-2213111423022415421.conf.
2020-07-09 21:02:41,181 INFO  org.apache.flink.client.cli.CliFrontend                      [] - Running 'run-application' command.
2020-07-09 21:02:41,194 INFO  org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer [] - Submitting application in 'Application Mode'.
2020-07-09 21:02:41,201 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/opt/flink-1.11.0/conf') already contains a LOG4J config file.If you want to use logback, then please delete or rename the log configuration file.
2020-07-09 21:02:41,537 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2020-07-09 21:02:41,665 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm220
2020-07-09 21:02:41,717 INFO  org.apache.hadoop.conf.Configuration                         [] - resource-types.xml not found
2020-07-09 21:02:41,718 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils           [] - Unable to find 'resource-types.xml'.
2020-07-09 21:02:41,755 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=2048, taskManagerMemoryMB=4096, slotsPerTaskManager=1}
2020-07-09 21:02:42,723 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting application master application_1594271580406_0010
2020-07-09 21:02:42,969 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted application application_1594271580406_0010
2020-07-09 21:02:42,969 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for the cluster to be allocated
2020-07-09 21:02:42,971 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying cluster, current state ACCEPTED
2020-07-09 21:02:47,619 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - YARN application has been deployed successfully.
2020-07-09 21:02:47,620 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Found Web Interface cdh003:38716 of application 'application_1594271580406_0010'

Re: flink1.10 upgraded to flink1.11: submission to YARN fails

Congxian Qiu
Hi

This looks like it was submitted to YARN; you need to check the JM log for the concrete cause. Also, could the logs be incomplete? Here I can only see the local log, and otherwise just a small part of the jobmanager.err log.

Best,
Congxian

Re: Re: flink1.10 upgraded to flink1.11: submission to YARN fails

Zhou Zach
The logs are complete; this is the full log from the YARN UI, and the yarn logs command gives the same output. It's too brief to tell where the error is...


I submitted another job that used to run on flink 1.10; running it on flink 1.11 throws an exception:


SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/flink-1.11.0/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]


------------------------------------------------------------
 The program finished with the following exception:


org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:49)
at org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.findAndCreateLegacyTableSource(LegacyCatalogSourceTable.scala:190)
at org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.toRel(LegacyCatalogSourceTable.scala:89)
at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3492)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2415)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2102)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:661)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:642)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3345)
at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:568)
at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
at org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:747)
at cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV$.main(FromKafkaSinkJdbcForUserUV.scala:78)
at cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV.main(FromKafkaSinkJdbcForUserUV.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
... 11 more
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.


Reason: Required context properties mismatch.


The following properties are requested:
connector.properties.bootstrap.servers=cdh1:9092,cdh2:9092,cdh3:9092
connector.properties.group.id=user_flink
connector.properties.zookeeper.connect=cdh1:2181,cdh2:2181,cdh3:2181
connector.startup-mode=latest-offset
connector.topic=user
connector.type=kafka
connector.version=universal
format.derive-schema=true
format.type=json
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=uid
schema.1.data-type=VARCHAR(2147483647)
schema.1.name=sex
schema.2.data-type=INT
schema.2.name=age
schema.3.data-type=TIMESTAMP(3)
schema.3.name=created_time
schema.4.data-type=TIMESTAMP(3) NOT NULL
schema.4.expr=PROCTIME()
schema.4.name=proctime
schema.watermark.0.rowtime=created_time
schema.watermark.0.strategy.data-type=TIMESTAMP(3)
schema.watermark.0.strategy.expr=`created_time` - INTERVAL '3' SECOND


The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:46)
... 37 more
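
For context, the requested properties above map one-to-one onto a legacy (1.10-style) 'connector.type' DDL. Below is a minimal sketch, reconstructed purely from those properties, of what the job presumably registers before the failing INSERT; the actual FromKafkaSinkJdbcForUserUV source is not part of this thread, so table name, fields, and options here are assumptions.

```
// Hypothetical reconstruction (not the actual job code): a Kafka source table
// whose legacy 'connector.*' properties match the "requested properties" above.
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object UserSourceSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
    val tEnv = StreamTableEnvironment.create(env, settings)

    // Legacy 'connector.type' options are resolved through TableFactoryService;
    // that lookup is what throws NoMatchingTableFactoryException when no Kafka
    // table factory jar is on the classpath.
    tEnv.sqlUpdate(
      """CREATE TABLE `user` (
        |  uid STRING,
        |  sex STRING,
        |  age INT,
        |  created_time TIMESTAMP(3),
        |  proctime AS PROCTIME(),
        |  WATERMARK FOR created_time AS created_time - INTERVAL '3' SECOND
        |) WITH (
        |  'connector.type' = 'kafka',
        |  'connector.version' = 'universal',
        |  'connector.topic' = 'user',
        |  'connector.startup-mode' = 'latest-offset',
        |  'connector.properties.bootstrap.servers' = 'cdh1:9092,cdh2:9092,cdh3:9092',
        |  'connector.properties.group.id' = 'user_flink',
        |  'connector.properties.zookeeper.connect' = 'cdh1:2181,cdh2:2181,cdh3:2181',
        |  'format.type' = 'json',
        |  'format.derive-schema' = 'true'
        |)""".stripMargin)
  }
}
```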

I removed the provided scope from all of the Maven dependencies:
<properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <flink.version>1.11.0</flink.version>
        <hive.version>2.1.1</hive.version>
        <java.version>1.8</java.version>
        <scala.version>2.11.12</scala.version>
        <scala.binary.version>2.11</scala.binary.version>
        <maven.compiler.source>${java.version}</maven.compiler.source>
        <maven.compiler.target>${java.version}</maven.compiler.target>
    </properties>


    <repositories>
        <repository>
            <id>maven-net-cn</id>
            <name>Maven China Mirror</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>


        <repository>
            <id>apache.snapshots</id>
            <name>Apache Development Snapshot Repository</name>
            <url>https://repository.apache.org/content/repositories/snapshots/</url>
            <releases>
                <enabled>false</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>


    <dependencies>
        <!-- Apache Flink dependencies -->
        <!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>${flink.version}</version>
<!--            <scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>${flink.version}</version>
<!--            <scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>${flink.version}</version>
<!--            <scope>provided</scope>-->
        </dependency>


        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-common</artifactId>
            <version>${flink.version}</version>
<!--            <scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_2.11</artifactId>
            <version>${flink.version}</version>
<!--            <scope>provided</scope>-->
        </dependency>




        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-avro</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-csv</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-json</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>




        <dependency>
            <groupId>org.apache.bahir</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.3.0</version>
        </dependency>


        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-hbase_2.11</artifactId>
            <version>1.11-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>2.1.0</version>
        </dependency>


        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
            <scope>provided</scope>
        </dependency>


        <dependency>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
            <version>5.3.1.RELEASE</version>
        </dependency>


        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13</version>
            <!--<scope>test</scope>-->
        </dependency>


        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-email</artifactId>
            <version>1.5</version>
        </dependency>


        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.0.0-cdh6.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.0.0-cdh6.3.2</version>
        </dependency>




        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-hive_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>${hive.version}</version>
            <scope>provided</scope>
        </dependency>


        <!-- Add logging framework, to produce console output when running in the IDE. -->
        <!-- These dependencies are excluded from the application JAR by default. -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.7</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
            <scope>runtime</scope>
        </dependency>


        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.68</version>
        </dependency>


        <dependency>
            <groupId>com.jayway.jsonpath</groupId>
            <artifactId>json-path</artifactId>
            <version>2.4.0</version>
        </dependency>


        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-jdbc_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.46</version>
        </dependency>
        <dependency>
            <groupId>io.vertx</groupId>
            <artifactId>vertx-core</artifactId>
            <version>3.9.1</version>
        </dependency>
        <dependency>
            <groupId>io.vertx</groupId>
            <artifactId>vertx-jdbc-client</artifactId>
            <version>3.9.1</version>
        </dependency>


    </dependencies>

flink-1.11.0/lib/ on the cluster nodes:
-rw-r--r-- 1 root root    197597 Jun 30 10:28 flink-clients_2.11-1.11.0.jar
-rw-r--r-- 1 root root     90782 Jun 30 17:46 flink-csv-1.11.0.jar
-rw-r--r-- 1 root root 108349203 Jun 30 17:52 flink-dist_2.11-1.11.0.jar
-rw-r--r-- 1 root root     94863 Jun 30 17:45 flink-json-1.11.0.jar
-rw-r--r-- 1 root root   7712156 Jun 18 10:42 flink-shaded-zookeeper-3.4.14.jar
-rw-r--r-- 1 root root  33325754 Jun 30 17:50 flink-table_2.11-1.11.0.jar
-rw-r--r-- 1 root root     47333 Jun 30 10:38 flink-table-api-scala-bridge_2.11-1.11.0.jar
-rw-r--r-- 1 root root  37330521 Jun 30 17:50 flink-table-blink_2.11-1.11.0.jar
-rw-r--r-- 1 root root    754983 Jun 30 12:29 flink-table-common-1.11.0.jar
-rw-r--r-- 1 root root     67114 Apr 20 20:47 log4j-1.2-api-2.12.1.jar
-rw-r--r-- 1 root root    276771 Apr 20 20:47 log4j-api-2.12.1.jar
-rw-r--r-- 1 root root   1674433 Apr 20 20:47 log4j-core-2.12.1.jar
-rw-r--r-- 1 root root     23518 Apr 20 20:47 log4j-slf4j-impl-2.12.1.jar


I downloaded all of the table-related jars, but it still reports the same error. Very strange...

Re: Re: flink1.10 upgraded to flink1.11: submission to YARN fails

Congxian Qiu
Hi

Judging from the exception, it may be caused by a missing dependency, much like [1]; you may need to compare and work out which required jar is missing.

PS: the stack trace mentions csv-related factories, so check the csv-related jars first.

```
The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:46)
... 37 more
```

[1] http://apache-flink.147419.n8.nabble.com/flink-1-11-td4471.html
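
To see which factories are actually visible on the classpath, here is a small debug sketch (not from the original thread) that lists the TableFactory implementations discoverable through Java's ServiceLoader, the same SPI mechanism TableFactoryService uses; if no Kafka factory is printed, the jar is indeed missing:

```
// Debug sketch: print every TableFactory implementation visible on the
// classpath. Run with the same classpath as the failing job.
import java.util.ServiceLoader
import org.apache.flink.table.factories.TableFactory
import scala.collection.JavaConverters._

object ListTableFactories {
  def main(args: Array[String]): Unit = {
    ServiceLoader.load(classOf[TableFactory]).asScala
      .foreach(f => println(f.getClass.getName))
  }
}
```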
Best,
Congxian

Re: Re: flink1.10 upgraded to flink1.11: submission to YARN fails

Shuiqiang Chen
Hi,
It looks like the Kafka table source was not created successfully. You may need to put the jar for

<dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
</dependency>

into the FLINK_HOME/lib directory.
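
For anyone migrating the DDL itself: Flink 1.11 also introduces new-style connector options that are resolved through the new factory discovery rather than the legacy TableFactoryService. A hedged sketch for the same assumed table (the real DDL is not shown in the thread):

```
// Hedged sketch: the same assumed Kafka table in 1.11's new option style,
// resolved via DynamicTableFactory discovery instead of the legacy
// 'connector.type' path shown in the exception.
tEnv.executeSql(
  """CREATE TABLE `user` (
    |  uid STRING,
    |  sex STRING,
    |  age INT,
    |  created_time TIMESTAMP(3),
    |  proctime AS PROCTIME(),
    |  WATERMARK FOR created_time AS created_time - INTERVAL '3' SECOND
    |) WITH (
    |  'connector' = 'kafka',
    |  'topic' = 'user',
    |  'scan.startup.mode' = 'latest-offset',
    |  'properties.bootstrap.servers' = 'cdh1:9092,cdh2:9092,cdh3:9092',
    |  'properties.group.id' = 'user_flink',
    |  'format' = 'json'
    |)""".stripMargin)
```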


Re: Upgrading flink1.10 to flink1.11: submission to YARN fails

Leonard Xu
Hello, Zach

>>> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException:
>>> Could not find a suitable table factory for
>>> 'org.apache.flink.table.factories.TableSourceFactory' in
>>> the classpath.
>>>
>>> Reason: Required context properties mismatch.

This error usually means a SQL program is missing a SQL connector or format dependency. The following two dependencies in your pom,

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>

conflict when used together: flink-sql-connector-kafka_2.11 shades the Kafka dependency, while flink-connector-kafka_2.11 does not. Pick whichever matches your job: the first for a SQL program, the second for a DataStream job.

Best,
Leonard Xu
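
For contrast, a hedged DataStream sketch of what the unshaded flink-connector-kafka_2.11 artifact is for. The topic, brokers, and group id reuse the values seen in the thread; everything else (class name, string schema, printing sink) is illustrative:

```scala
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

object KafkaDataStreamSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val props = new Properties()
    props.setProperty("bootstrap.servers", "cdh1:9092,cdh2:9092,cdh3:9092")
    props.setProperty("group.id", "user_flink")

    // FlinkKafkaConsumer lives in the unshaded flink-connector-kafka_2.11
    // artifact and is referenced directly from user code; the shaded SQL
    // connector jar is meant for FLINK_HOME/lib and SQL DDL instead.
    val source = new FlinkKafkaConsumer[String]("user", new SimpleStringSchema(), props)
    env.addSource(source).print()

    env.execute("kafka datastream sketch")
  }
}
```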


> 在 2020年7月10日,11:08,Shuiqiang Chen <[hidden email]> 写道:
>
> Hi,
> 看样子是kafka table source没有成功创建,也许你需要将
> <dependency>
>            <groupId>org.apache.flink</groupId>
>            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
>            <version>${flink.version}</version>
> </dependency>
>
> 这个jar 放到 FLINK_HOME/lib 目录下
>
> Congxian Qiu <[hidden email]> 于2020年7月10日周五 上午10:57写道:
>
>> Hi
>>
>> 从异常看,可能是某个包没有引入导致的,和这个[1]比较像,可能你需要对比一下需要的是哪个包没有引入。
>>
>> PS 从栈那里看到是 csv 相关的,可以优先考虑下 cvs 相关的包
>>
>> ```
>> The following factories have been considered:
>> org.apache.flink.table.sources.CsvBatchTableSourceFactory
>> org.apache.flink.table.sources.CsvAppendTableSourceFactory
>> org.apache.flink.table.filesystem.FileSystemTableFactory
>> at
>>
>> org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>> at
>>
>> org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>> at
>>
>> org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>> at
>>
>> org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
>> at
>>
>> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:46)
>> ... 37 more
>> ```
>>
>> [1] http://apache-flink.147419.n8.nabble.com/flink-1-11-td4471.html
>> Best,
>> Congxian
>>
>>
>> Zhou Zach <[hidden email]> 于2020年7月10日周五 上午10:39写道:
>>
>>> 日志贴全了的,这是从yarn ui贴的full log,用yarn logs命令也是这些log,太简短,看不出错误在哪。。。
>>>
>>>
>>> 我又提交了另外之前用flink1.10跑过的任务,现在用flink1.11跑,报了异常:
>>>
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>>
>> [jar:file:/opt/flink-1.11.0/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>>
>> [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> SLF4J: Actual binding is of type
>>> [org.apache.logging.slf4j.Log4jLoggerFactory]
>>>
>>>
>>> ------------------------------------------------------------
>>> The program finished with the following exception:
>>>
>>>
>>> org.apache.flink.client.program.ProgramInvocationException: The main
>>> method caused an error: findAndCreateTableSource failed.
>>> at
>>>
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>>> at
>>>
>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>>> at
>> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
>>> at
>>>
>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
>>> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
>>> at
>>>
>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>>> at
>>>
>> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>> at
>>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>>> at
>>>
>> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
>>> Caused by: org.apache.flink.table.api.TableException:
>>> findAndCreateTableSource failed.
>>> at
>>>
>> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:49)
>>> at
>>>
>> org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.findAndCreateLegacyTableSource(LegacyCatalogSourceTable.scala:190)
>>> at
>>>
>> org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.toRel(LegacyCatalogSourceTable.scala:89)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3492)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2415)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2102)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:661)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:642)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3345)
>>> at
>>>
>> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:568)
>>> at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org
>>>
>> $apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
>>> at
>>>
>> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
>>> at
>>>
>> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
>>> at
>>>
>> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
>>> at
>>>
>> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>>> at
>>>
>> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>>> at
>>>
>> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>>> at
>>>
>> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>>> at
>>>
>> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:747)
>>> at
>>>
>> cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV$.main(FromKafkaSinkJdbcForUserUV.scala:78)
>>> at
>>>
>> cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV.main(FromKafkaSinkJdbcForUserUV.scala)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at
>>>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>> at
>>>
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>>> ... 11 more
>>> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException:
>>> Could not find a suitable table factory for
>>> 'org.apache.flink.table.factories.TableSourceFactory' in
>>> the classpath.
>>>
>>>
>>> Reason: Required context properties mismatch.
>>>
>>>
>>> The following properties are requested:
>>> connector.properties.bootstrap.servers=cdh1:9092,cdh2:9092,cdh3:9092
>>> connector.properties.group.id=user_flink
>>> connector.properties.zookeeper.connect=cdh1:2181,cdh2:2181,cdh3:2181
>>> connector.startup-mode=latest-offset
>>> connector.topic=user
>>> connector.type=kafka
>>> connector.version=universal
>>> format.derive-schema=true
>>> format.type=json
>>> schema.0.data-type=VARCHAR(2147483647)
>>> schema.0.name=uid
>>> schema.1.data-type=VARCHAR(2147483647)
>>> schema.1.name=sex
>>> schema.2.data-type=INT
>>> schema.2.name=age
>>> schema.3.data-type=TIMESTAMP(3)
>>> schema.3.name=created_time
>>> schema.4.data-type=TIMESTAMP(3) NOT NULL
>>> schema.4.expr=PROCTIME()
>>> schema.4.name=proctime
>>> schema.watermark.0.rowtime=created_time
>>> schema.watermark.0.strategy.data-type=TIMESTAMP(3)
>>> schema.watermark.0.strategy.expr=`created_time` - INTERVAL '3' SECOND
>>>
>>>
>>> The following factories have been considered:
>>> org.apache.flink.table.sources.CsvBatchTableSourceFactory
>>> org.apache.flink.table.sources.CsvAppendTableSourceFactory
>>> org.apache.flink.table.filesystem.FileSystemTableFactory
>>> at
>>>
>> org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>>> at
>>>
>> org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>>> at
>>>
>> org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>>> at
>>>
>> org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
>>> at
>>>
>> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:46)
>>> ... 37 more
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 我把maven依赖的provide范围全部去掉了:
>>> <properties>
>>>
>> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
>>>        <flink.version>1.11.0</flink.version>
>>>        <hive.version>2.1.1</hive.version>
>>>        <java.version>1.8</java.version>
>>>        <scala.version>2.11.12</scala.version>
>>>        <scala.binary.version>2.11</scala.binary.version>
>>>        <maven.compiler.source>${java.version}</maven.compiler.source>
>>>        <maven.compiler.target>${java.version}</maven.compiler.target>
>>>    </properties>
>>>
>>>
>>>    <repositories>
>>>        <repository>
>>>            <id>maven-net-cn</id>
>>>            <name>Maven China Mirror</name>
>>>            <url>http://maven.aliyun.com/nexus/content/groups/public/
>>> </url>
>>>            <releases>
>>>                <enabled>true</enabled>
>>>            </releases>
>>>            <snapshots>
>>>                <enabled>false</enabled>
>>>            </snapshots>
>>>        </repository>
>>>
>>>
>>>        <repository>
>>>            <id>apache.snapshots</id>
>>>            <name>Apache Development Snapshot Repository</name>
>>>            <url>
>>> https://repository.apache.org/content/repositories/snapshots/</url>
>>>            <releases>
>>>                <enabled>false</enabled>
>>>            </releases>
>>>            <snapshots>
>>>                <enabled>true</enabled>
>>>            </snapshots>
>>>        </repository>
>>>    </repositories>
>>>
>>>
>>>    <dependencies>
>>>        <!-- Apache Flink dependencies -->
>>>        <!-- These dependencies are provided, because they should not be
>>> packaged into the JAR file. -->
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-scala_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>> <!--            <scope>provided</scope>-->
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-streaming-scala_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>> <!--            <scope>provided</scope>-->
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-clients_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>> <!--            <scope>provided</scope>-->
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-table-common</artifactId>
>>>            <version>${flink.version}</version>
>>> <!--            <scope>provided</scope>-->
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-table-planner-blink_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>> <!--            <scope>provided</scope>-->
>>>        </dependency>
>>>
>>>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-connector-kafka_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-avro</artifactId>
>>>            <version>${flink.version}</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-csv</artifactId>
>>>            <version>${flink.version}</version>
>>>            <scope>provided</scope>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-json</artifactId>
>>>            <version>${flink.version}</version>
>>>            <scope>provided</scope>
>>>        </dependency>
>>>
>>>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.bahir</groupId>
>>>            <artifactId>flink-connector-redis_2.11</artifactId>
>>>            <version>1.0</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.commons</groupId>
>>>            <artifactId>commons-pool2</artifactId>
>>>            <version>2.8.0</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>redis.clients</groupId>
>>>            <artifactId>jedis</artifactId>
>>>            <version>3.3.0</version>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-connector-hbase_2.11</artifactId>
>>>            <version>1.11-SNAPSHOT</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.hbase</groupId>
>>>            <artifactId>hbase-client</artifactId>
>>>            <version>2.1.0</version>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.projectlombok</groupId>
>>>            <artifactId>lombok</artifactId>
>>>            <version>1.18.12</version>
>>>            <scope>provided</scope>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>io.lettuce</groupId>
>>>            <artifactId>lettuce-core</artifactId>
>>>            <version>5.3.1.RELEASE</version>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>junit</groupId>
>>>            <artifactId>junit</artifactId>
>>>            <version>4.13</version>
>>>            <!--<scope>test</scope>-->
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.commons</groupId>
>>>            <artifactId>commons-email</artifactId>
>>>            <version>1.5</version>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.hadoop</groupId>
>>>            <artifactId>hadoop-common</artifactId>
>>>            <version>3.0.0-cdh6.3.2</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.hadoop</groupId>
>>>            <artifactId>hadoop-hdfs</artifactId>
>>>            <version>3.0.0-cdh6.3.2</version>
>>>        </dependency>
>>>
>>>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-connector-hive_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>org.apache.hive</groupId>
>>>            <artifactId>hive-exec</artifactId>
>>>            <version>${hive.version}</version>
>>>            <scope>provided</scope>
>>>        </dependency>
>>>
>>>
>>>        <!-- Add logging framework, to produce console output when
>> running
>>> in the IDE. -->
>>>        <!-- These dependencies are excluded from the application JAR by
>>> default. -->
>>>        <dependency>
>>>            <groupId>org.slf4j</groupId>
>>>            <artifactId>slf4j-log4j12</artifactId>
>>>            <version>1.7.7</version>
>>>            <scope>runtime</scope>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>log4j</groupId>
>>>            <artifactId>log4j</artifactId>
>>>            <version>1.2.17</version>
>>>            <scope>runtime</scope>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>com.alibaba</groupId>
>>>            <artifactId>fastjson</artifactId>
>>>            <version>1.2.68</version>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>com.jayway.jsonpath</groupId>
>>>            <artifactId>json-path</artifactId>
>>>            <version>2.4.0</version>
>>>        </dependency>
>>>
>>>
>>>        <dependency>
>>>            <groupId>org.apache.flink</groupId>
>>>            <artifactId>flink-connector-jdbc_2.11</artifactId>
>>>            <version>${flink.version}</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>mysql</groupId>
>>>            <artifactId>mysql-connector-java</artifactId>
>>>            <version>5.1.46</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>io.vertx</groupId>
>>>            <artifactId>vertx-core</artifactId>
>>>            <version>3.9.1</version>
>>>        </dependency>
>>>        <dependency>
>>>            <groupId>io.vertx</groupId>
>>>            <artifactId>vertx-jdbc-client</artifactId>
>>>            <version>3.9.1</version>
>>>        </dependency>
>>>
>>>
>>>    </dependencies>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 集群节点flink-1.11.0/lib/:
>>> -rw-r--r-- 1 root root    197597 6月  30 10:28
>> flink-clients_2.11-1.11.0.jar
>>> -rw-r--r-- 1 root root     90782 6月  30 17:46 flink-csv-1.11.0.jar
>>> -rw-r--r-- 1 root root 108349203 6月  30 17:52 flink-dist_2.11-1.11.0.jar
>>> -rw-r--r-- 1 root root     94863 6月  30 17:45 flink-json-1.11.0.jar
>>> -rw-r--r-- 1 root root   7712156 6月  18 10:42
>>> flink-shaded-zookeeper-3.4.14.jar
>>> -rw-r--r-- 1 root root  33325754 6月  30 17:50 flink-table_2.11-1.11.0.jar
>>> -rw-r--r-- 1 root root     47333 6月  30 10:38
>>> flink-table-api-scala-bridge_2.11-1.11.0.jar
>>> -rw-r--r-- 1 root root  37330521 6月  30 17:50
>>> flink-table-blink_2.11-1.11.0.jar
>>> -rw-r--r-- 1 root root    754983 6月  30 12:29
>> flink-table-common-1.11.0.jar
>>> -rw-r--r-- 1 root root     67114 4月  20 20:47 log4j-1.2-api-2.12.1.jar
>>> -rw-r--r-- 1 root root    276771 4月  20 20:47 log4j-api-2.12.1.jar
>>> -rw-r--r-- 1 root root   1674433 4月  20 20:47 log4j-core-2.12.1.jar
>>> -rw-r--r-- 1 root root     23518 4月  20 20:47 log4j-slf4j-impl-2.12.1.jar
>>>
>>>
>>> 把table相关的包都下载下来了,还是报同样的错,好奇怪。。。
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 在 2020-07-10 10:24:02,"Congxian Qiu" <[hidden email]> 写道:
>>>> Hi
>>>>
>>>> 这个看上去是提交到 Yarn 了,具体的原因需要看下 JM log 是啥原因。另外是否是日志没有贴全,这里只看到本地 log,其他的就只有小部分
>>>> jobmanager.err 的 log。
>>>>
>>>> Best,
>>>> Congxian
>>>>
>>>>
>>>> Zhou Zach <[hidden email]> 于2020年7月9日周四 下午9:23写道:
>>>>
>>>>> hi all,
>>>>> 原来用1.10使用per job模式,可以提交的作业,现在用1.11使用应用模式提交失败,看日志,也不清楚原因,
>>>>> yarn log:
>>>>> Log Type: jobmanager.err
>>>>>
>>>>>
>>>>> Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
>>>>>
>>>>>
>>>>> Log Length: 785
>>>>>
>>>>>
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in
>>>>>
>>>
>> [jar:file:/yarn/nm/usercache/hdfs/appcache/application_1594271580406_0010/filecache/11/data-flow-1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>>
>>>
>> [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>>>> explanation.
>>>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>>>> log4j:WARN No appenders could be found for logger
>>>>> (org.apache.flink.runtime.entrypoint.ClusterEntrypoint).
>>>>> log4j:WARN Please initialize the log4j system properly.
>>>>> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>>> for
>>>>> more info.
>>>>>
>>>>>
>>>>> Log Type: jobmanager.out
>>>>>
>>>>>
>>>>> Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
>>>>>
>>>>>
>>>>> Log Length: 0
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Log Type: prelaunch.err
>>>>>
>>>>>
>>>>> Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
>>>>>
>>>>>
>>>>> Log Length: 0
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Log Type: prelaunch.out
>>>>>
>>>>>
>>>>> Log Upload Time: Thu Jul 09 21:02:48 +0800 2020
>>>>>
>>>>>
>>>>> Log Length: 70
>>>>>
>>>>>
>>>>> Setting up env variables
>>>>> Setting up job resources
>>>>> Launching container
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 本地log:
>>>>> 2020-07-09 21:02:41,015 INFO  org.apache.flink.client.cli.CliFrontend
>>>>>                [] -
>>>>>
>>>
>> --------------------------------------------------------------------------------
>>>>> 2020-07-09 21:02:41,020 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: jobmanager.rpc.address, localhost
>>>>> 2020-07-09 21:02:41,020 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: jobmanager.rpc.port, 6123
>>>>> 2020-07-09 21:02:41,021 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: jobmanager.memory.process.size, 1600m
>>>>> 2020-07-09 21:02:41,021 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: taskmanager.memory.process.size, 1728m
>>>>> 2020-07-09 21:02:41,021 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: taskmanager.numberOfTaskSlots, 1
>>>>> 2020-07-09 21:02:41,021 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: parallelism.default, 1
>>>>> 2020-07-09 21:02:41,021 INFO
>>>>> org.apache.flink.configuration.GlobalConfiguration           [] -
>>> Loading
>>>>> configuration property: jobmanager.execution.failover-strategy, region
>>>>> 2020-07-09 21:02:41,164 INFO
>>>>> org.apache.flink.runtime.security.modules.HadoopModule       [] -
>> Hadoop
>>>>> user set to hdfs (auth:SIMPLE)
>>>>> 2020-07-09 21:02:41,172 INFO
>>>>> org.apache.flink.runtime.security.modules.JaasModule         [] - Jaas
>>> file
>>>>> will be created as /tmp/jaas-2213111423022415421.conf.
>>>>> 2020-07-09 21:02:41,181 INFO  org.apache.flink.client.cli.CliFrontend
>>>>>                [] - Running 'run-application' command.
>>>>> 2020-07-09 21:02:41,194 INFO
>>>>>
>>>
>> org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer
>>>>> [] - Submitting application in 'Application Mode'.
>>>>> 2020-07-09 21:02:41,201 WARN
>>>>> org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The
>>>>> configuration directory ('/opt/flink-1.11.0/conf') already contains a
>>> LOG4J
>>>>> config file.If you want to use logback, then please delete or rename
>> the
>>>>> log configuration file.
>>>>> 2020-07-09 21:02:41,537 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - No path for the flink jar passed. Using the
>>> location
>>>>> of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
>>>>> 2020-07-09 21:02:41,665 INFO
>>>>> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] -
>>>>> Failing over to rm220
>>>>> 2020-07-09 21:02:41,717 INFO  org.apache.hadoop.conf.Configuration
>>>>>                 [] - resource-types.xml not found
>>>>> 2020-07-09 21:02:41,718 INFO
>>>>> org.apache.hadoop.yarn.util.resource.ResourceUtils           [] -
>>> Unable to
>>>>> find 'resource-types.xml'.
>>>>> 2020-07-09 21:02:41,755 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - Cluster specification:
>>>>> ClusterSpecification{masterMemoryMB=2048, taskManagerMemoryMB=4096,
>>>>> slotsPerTaskManager=1}
>>>>> 2020-07-09 21:02:42,723 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - Submitting application master
>>>>> application_1594271580406_0010
>>>>> 2020-07-09 21:02:42,969 INFO
>>>>> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] -
>>> Submitted
>>>>> application application_1594271580406_0010
>>>>> 2020-07-09 21:02:42,969 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - Waiting for the cluster to be allocated
>>>>> 2020-07-09 21:02:42,971 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - Deploying cluster, current state ACCEPTED
>>>>> 2020-07-09 21:02:47,619 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - YARN application has been deployed successfully.
>>>>> 2020-07-09 21:02:47,620 INFO
>>> org.apache.flink.yarn.YarnClusterDescriptor
>>>>>                [] - Found Web Interface cdh003:38716 of application
>>>>> 'application_1594271580406_0010'
>>>
>>


Re: Re: Upgrading flink1.10 to flink1.11: submission to YARN fails

Zhou Zach
Hello, Leonard. The error reported here was: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSinkFactory' in the classpath.

Still, following your hint I downloaded flink-connector-jdbc_2.11-1.11.0.jar, put it into /opt/flink-1.11.0/lib/, and the job ran successfully! The first job I ran this morning failed for a similar reason and was fixed by downloading the hbase connector. Thanks for the help!
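
A minimal sketch of the sink side that drives this TableSinkFactory lookup, reusing the tEnv from the Kafka sketch earlier in the thread; the sink table, columns, and MySQL coordinates are placeholders, not the actual job:

```scala
// Hypothetical sink DDL using the legacy JDBC properties.
tEnv.sqlUpdate(
  """CREATE TABLE user_jdbc_sink (
    |  uid STRING,
    |  age INT
    |) WITH (
    |  'connector.type' = 'jdbc',
    |  'connector.url' = 'jdbc:mysql://cdh1:3306/test',
    |  'connector.table' = 'user_copy',
    |  'connector.username' = 'root',
    |  'connector.password' = '***'
    |)""".stripMargin)

// The TableSinkFactory lookup fires when the INSERT is planned; with
// flink-connector-jdbc_2.11-1.11.0.jar missing from /opt/flink-1.11.0/lib/
// it throws the NoMatchingTableFactoryException quoted above.
tEnv.sqlUpdate("INSERT INTO user_jdbc_sink SELECT uid, age FROM `user`")

// Under the deprecated sqlUpdate API the job is submitted with:
tEnv.execute("kafka to jdbc sketch")
```

With the JDBC connector jar (plus a MySQL driver) on the classpath, the same lookup resolves the legacy JDBC factory and the INSERT runs.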











在 2020-07-10 11:31:39,"Leonard Xu" <[hidden email]> 写道:

>Hello,Zach
>
>>>> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException:
>>>> Could not find a suitable table factory for
>>>> 'org.apache.flink.table.factories.TableSourceFactory' in
>>>> the classpath.
>>>>
>>>>
>>>> Reason: Required context properties mismatch.
>这个错误,一般是SQL 程序缺少了SQL connector 或 format的依赖,你pom里下面的这两个依赖,
>
>      <dependency>
>           <groupId>org.apache.flink</groupId>
>           <artifactId>flink-sql-connector-kafka_2.11</artifactId>
>           <version>${flink.version}</version>
>       </dependency>
>       <dependency>
>           <groupId>org.apache.flink</groupId>
>           <artifactId>flink-connector-kafka_2.11</artifactId>
>           <version>${flink.version}</version>
>       </dependency>
>
>放在一起是会冲突的,flink-sql-connector-kafka_2.11 shaded 了kafka的依赖, flink-connector-kafka_2.11 是没有shade的。
>你根据你的需要,如果是SQL 程序用第一个, 如果是 dataStream 作业 使用第二个。
>
>祝好,
>Leonard Xu
>
>
>> 在 2020年7月10日,11:08,Shuiqiang Chen <[hidden email]> 写道:
>>
>> Hi,
>> 看样子是kafka table source没有成功创建,也许你需要将
>> <dependency>
>>            <groupId>org.apache.flink</groupId>
>>            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
>>            <version>${flink.version}</version>
>> </dependency>
>>
>> 这个jar 放到 FLINK_HOME/lib 目录下
>>
>> Congxian Qiu <[hidden email]> 于2020年7月10日周五 上午10:57写道:
>>
>>> Hi
>>>
>>> 从异常看,可能是某个包没有引入导致的,和这个[1]比较像,可能你需要对比一下需要的是哪个包没有引入。
>>>
>>> PS 从栈那里看到是 csv 相关的,可以优先考虑下 cvs 相关的包
>>>
>>> ```
>>> The following factories have been considered:
>>> org.apache.flink.table.sources.CsvBatchTableSourceFactory
>>> org.apache.flink.table.sources.CsvAppendTableSourceFactory
>>> org.apache.flink.table.filesystem.FileSystemTableFactory
>>> at
>>>
>>> org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>>> at
>>>
>>> org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>>> at
>>>
>>> org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>>> at
>>>
>>> org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
>>> at
>>>
>>> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:46)
>>> ... 37 more
>>> ```
>>>
>>> [1] http://apache-flink.147419.n8.nabble.com/flink-1-11-td4471.html
>>> Best,
>>> Congxian
>>>
>>>
>>> Zhou Zach <[hidden email]> 于2020年7月10日周五 上午10:39写道:
>>>
>>>> 日志贴全了的,这是从yarn ui贴的full log,用yarn logs命令也是这些log,太简短,看不出错误在哪。。。
>>>>
>>>>
>>>> 我又提交了另外之前用flink1.10跑过的任务,现在用flink1.11跑,报了异常:
>>>>
>>>>
>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>> SLF4J: Found binding in
>>>>
>>> [jar:file:/opt/flink-1.11.0/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: Found binding in
>>>>
>>> [jar:file:/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>>> explanation.
>>>> SLF4J: Actual binding is of type
>>>> [org.apache.logging.slf4j.Log4jLoggerFactory]
>>>>
>>>>
>>>> ------------------------------------------------------------
>>>> The program finished with the following exception:
>>>>
>>>> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
>>>>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>>>>     at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>>>>     at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
>>>>     at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
>>>>     at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
>>>>     at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>>>>     at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>     at javax.security.auth.Subject.doAs(Subject.java:422)
>>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>>>>     at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>>     at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
>>>> Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
>>>>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:49)
>>>>     at org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.findAndCreateLegacyTableSource(LegacyCatalogSourceTable.scala:190)
>>>>     at org.apache.flink.table.planner.plan.schema.LegacyCatalogSourceTable.toRel(LegacyCatalogSourceTable.scala:89)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3492)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2415)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2102)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:661)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:642)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3345)
>>>>     at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:568)
>>>>     at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
>>>>     at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
>>>>     at org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
>>>>     at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
>>>>     at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>>>>     at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>>>>     at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>>>>     at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>>>>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:747)
>>>>     at cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV$.main(FromKafkaSinkJdbcForUserUV.scala:78)
>>>>     at cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV.main(FromKafkaSinkJdbcForUserUV.scala)
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>     at java.lang.reflect.Method.invoke(Method.java:498)
>>>>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>>>>     ... 11 more
>>>> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
>>>>
>>>>
>>>> Reason: Required context properties mismatch.
>>>>
>>>>
>>>> The following properties are requested:
>>>> connector.properties.bootstrap.servers=cdh1:9092,cdh2:9092,cdh3:9092
>>>> connector.properties.group.id=user_flink
>>>> connector.properties.zookeeper.connect=cdh1:2181,cdh2:2181,cdh3:2181
>>>> connector.startup-mode=latest-offset
>>>> connector.topic=user
>>>> connector.type=kafka
>>>> connector.version=universal
>>>> format.derive-schema=true
>>>> format.type=json
>>>> schema.0.data-type=VARCHAR(2147483647)
>>>> schema.0.name=uid
>>>> schema.1.data-type=VARCHAR(2147483647)
>>>> schema.1.name=sex
>>>> schema.2.data-type=INT
>>>> schema.2.name=age
>>>> schema.3.data-type=TIMESTAMP(3)
>>>> schema.3.name=created_time
>>>> schema.4.data-type=TIMESTAMP(3) NOT NULL
>>>> schema.4.expr=PROCTIME()
>>>> schema.4.name=proctime
>>>> schema.watermark.0.rowtime=created_time
>>>> schema.watermark.0.strategy.data-type=TIMESTAMP(3)
>>>> schema.watermark.0.strategy.expr=`created_time` - INTERVAL '3' SECOND
>>>>
>>>>
>>>> The following factories have been considered:
>>>> org.apache.flink.table.sources.CsvBatchTableSourceFactory
>>>> org.apache.flink.table.sources.CsvAppendTableSourceFactory
>>>> org.apache.flink.table.filesystem.FileSystemTableFactory
>>>>     at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>>>>     at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>>>>     at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>>>>     at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
>>>>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:46)
>>>>     ... 37 more
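>>>>
>>>> For context on what is failing here: the requested properties above come from a table declared with 'connector.type' = 'kafka' (a legacy CREATE TABLE ... WITH (...) definition), which is served by the KafkaTableSourceSinkFactory shipped in flink-connector-kafka_2.11. TableFactoryService locates factories through Java's ServiceLoader, i.e. through META-INF/services/org.apache.flink.table.factories.TableFactory entries on the classpath. The "factories considered" list contains only the CSV and filesystem factories that ship with Flink itself, so the Kafka factory's service entry is not visible at runtime; that usually means either the connector jar is missing from the effective classpath, or the service files were overwritten while the fat jar was assembled (see the build sketch after the POM below).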
>>>>
>>>>
>>>> I removed the provided scope from all of the Maven dependencies:
>>>> <properties>
>>>>     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
>>>>     <flink.version>1.11.0</flink.version>
>>>>     <hive.version>2.1.1</hive.version>
>>>>     <java.version>1.8</java.version>
>>>>     <scala.version>2.11.12</scala.version>
>>>>     <scala.binary.version>2.11</scala.binary.version>
>>>>     <maven.compiler.source>${java.version}</maven.compiler.source>
>>>>     <maven.compiler.target>${java.version}</maven.compiler.target>
>>>> </properties>
>>>>
>>>>
>>>>    <repositories>
>>>>        <repository>
>>>>            <id>maven-net-cn</id>
>>>>            <name>Maven China Mirror</name>
>>>>            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
>>>>            <releases>
>>>>                <enabled>true</enabled>
>>>>            </releases>
>>>>            <snapshots>
>>>>                <enabled>false</enabled>
>>>>            </snapshots>
>>>>        </repository>
>>>>
>>>>
>>>>        <repository>
>>>>            <id>apache.snapshots</id>
>>>>            <name>Apache Development Snapshot Repository</name>
>>>>            <url>https://repository.apache.org/content/repositories/snapshots/</url>
>>>>            <releases>
>>>>                <enabled>false</enabled>
>>>>            </releases>
>>>>            <snapshots>
>>>>                <enabled>true</enabled>
>>>>            </snapshots>
>>>>        </repository>
>>>>    </repositories>
>>>>
>>>>
>>>>    <dependencies>
>>>>        <!-- Apache Flink dependencies -->
>>>>        <!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-scala_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>> <!--            <scope>provided</scope>-->
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-streaming-scala_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>> <!--            <scope>provided</scope>-->
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-clients_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>> <!--            <scope>provided</scope>-->
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-table-common</artifactId>
>>>>            <version>${flink.version}</version>
>>>> <!--            <scope>provided</scope>-->
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-table-planner-blink_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>> <!--            <scope>provided</scope>-->
>>>>        </dependency>
>>>>
>>>>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-sql-connector-kafka_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-connector-kafka_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-avro</artifactId>
>>>>            <version>${flink.version}</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-csv</artifactId>
>>>>            <version>${flink.version}</version>
>>>>            <scope>provided</scope>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-json</artifactId>
>>>>            <version>${flink.version}</version>
>>>>            <scope>provided</scope>
>>>>        </dependency>
>>>>
>>>>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.bahir</groupId>
>>>>            <artifactId>flink-connector-redis_2.11</artifactId>
>>>>            <version>1.0</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.commons</groupId>
>>>>            <artifactId>commons-pool2</artifactId>
>>>>            <version>2.8.0</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>redis.clients</groupId>
>>>>            <artifactId>jedis</artifactId>
>>>>            <version>3.3.0</version>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-connector-hbase_2.11</artifactId>
>>>>            <version>1.11-SNAPSHOT</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.hbase</groupId>
>>>>            <artifactId>hbase-client</artifactId>
>>>>            <version>2.1.0</version>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.projectlombok</groupId>
>>>>            <artifactId>lombok</artifactId>
>>>>            <version>1.18.12</version>
>>>>            <scope>provided</scope>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>io.lettuce</groupId>
>>>>            <artifactId>lettuce-core</artifactId>
>>>>            <version>5.3.1.RELEASE</version>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>junit</groupId>
>>>>            <artifactId>junit</artifactId>
>>>>            <version>4.13</version>
>>>>            <!--<scope>test</scope>-->
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.commons</groupId>
>>>>            <artifactId>commons-email</artifactId>
>>>>            <version>1.5</version>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.hadoop</groupId>
>>>>            <artifactId>hadoop-common</artifactId>
>>>>            <version>3.0.0-cdh6.3.2</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.hadoop</groupId>
>>>>            <artifactId>hadoop-hdfs</artifactId>
>>>>            <version>3.0.0-cdh6.3.2</version>
>>>>        </dependency>
>>>>
>>>>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-connector-hive_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>org.apache.hive</groupId>
>>>>            <artifactId>hive-exec</artifactId>
>>>>            <version>${hive.version}</version>
>>>>            <scope>provided</scope>
>>>>        </dependency>
>>>>
>>>>
>>>>        <!-- Add logging framework, to produce console output when running in the IDE. -->
>>>>        <!-- These dependencies are excluded from the application JAR by default. -->
>>>>        <dependency>
>>>>            <groupId>org.slf4j</groupId>
>>>>            <artifactId>slf4j-log4j12</artifactId>
>>>>            <version>1.7.7</version>
>>>>            <scope>runtime</scope>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>log4j</groupId>
>>>>            <artifactId>log4j</artifactId>
>>>>            <version>1.2.17</version>
>>>>            <scope>runtime</scope>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>com.alibaba</groupId>
>>>>            <artifactId>fastjson</artifactId>
>>>>            <version>1.2.68</version>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>com.jayway.jsonpath</groupId>
>>>>            <artifactId>json-path</artifactId>
>>>>            <version>2.4.0</version>
>>>>        </dependency>
>>>>
>>>>
>>>>        <dependency>
>>>>            <groupId>org.apache.flink</groupId>
>>>>            <artifactId>flink-connector-jdbc_2.11</artifactId>
>>>>            <version>${flink.version}</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>mysql</groupId>
>>>>            <artifactId>mysql-connector-java</artifactId>
>>>>            <version>5.1.46</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>io.vertx</groupId>
>>>>            <artifactId>vertx-core</artifactId>
>>>>            <version>3.9.1</version>
>>>>        </dependency>
>>>>        <dependency>
>>>>            <groupId>io.vertx</groupId>
>>>>            <artifactId>vertx-jdbc-client</artifactId>
>>>>            <version>3.9.1</version>
>>>>        </dependency>
>>>>
>>>>
>>>>    </dependencies>
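>>>>
>>>>
>>>> One more thing worth checking in this POM: there is no <build> section (unless it was trimmed from the mail), so a plain mvn package produces a thin jar, and none of the compile-scope connector dependencies above would actually reach the cluster. Below is a minimal maven-shade-plugin sketch, with the main class assumed from the stack trace; the ServicesResourceTransformer is the important part, because it merges the META-INF/services files from all bundled jars instead of letting one overwrite another, which is what table factory discovery depends on:
>>>>
>>>>    <build>
>>>>        <plugins>
>>>>            <plugin>
>>>>                <groupId>org.apache.maven.plugins</groupId>
>>>>                <artifactId>maven-shade-plugin</artifactId>
>>>>                <version>3.2.4</version>
>>>>                <executions>
>>>>                    <execution>
>>>>                        <phase>package</phase>
>>>>                        <goals>
>>>>                            <goal>shade</goal>
>>>>                        </goals>
>>>>                        <configuration>
>>>>                            <transformers>
>>>>                                <!-- Merge META-INF/services entries from all bundled jars
>>>>                                     so no TableFactory registration is lost. -->
>>>>                                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
>>>>                                <!-- Main class assumed from the stack trace above. -->
>>>>                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
>>>>                                    <mainClass>cn.ibobei.qile.dataflow.sql.FromKafkaSinkJdbcForUserUV</mainClass>
>>>>                                </transformer>
>>>>                            </transformers>
>>>>                        </configuration>
>>>>                    </execution>
>>>>                </executions>
>>>>            </plugin>
>>>>        </plugins>
>>>>    </build>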
>>>>
>>>>
>>>> flink-1.11.0/lib/ on the cluster nodes:
>>>> -rw-r--r-- 1 root root    197597 Jun 30 10:28 flink-clients_2.11-1.11.0.jar
>>>> -rw-r--r-- 1 root root     90782 Jun 30 17:46 flink-csv-1.11.0.jar
>>>> -rw-r--r-- 1 root root 108349203 Jun 30 17:52 flink-dist_2.11-1.11.0.jar
>>>> -rw-r--r-- 1 root root     94863 Jun 30 17:45 flink-json-1.11.0.jar
>>>> -rw-r--r-- 1 root root   7712156 Jun 18 10:42 flink-shaded-zookeeper-3.4.14.jar
>>>> -rw-r--r-- 1 root root  33325754 Jun 30 17:50 flink-table_2.11-1.11.0.jar
>>>> -rw-r--r-- 1 root root     47333 Jun 30 10:38 flink-table-api-scala-bridge_2.11-1.11.0.jar
>>>> -rw-r--r-- 1 root root  37330521 Jun 30 17:50 flink-table-blink_2.11-1.11.0.jar
>>>> -rw-r--r-- 1 root root    754983 Jun 30 12:29 flink-table-common-1.11.0.jar
>>>> -rw-r--r-- 1 root root     67114 Apr 20 20:47 log4j-1.2-api-2.12.1.jar
>>>> -rw-r--r-- 1 root root    276771 Apr 20 20:47 log4j-api-2.12.1.jar
>>>> -rw-r--r-- 1 root root   1674433 Apr 20 20:47 log4j-core-2.12.1.jar
>>>> -rw-r--r-- 1 root root     23518 Apr 20 20:47 log4j-slf4j-impl-2.12.1.jar
>>>>
>>>>
>>>> I downloaded all of the table-related jars, and it still fails with the same error, which is strange...
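>>>>
>>>> Note that none of the jars in this lib/ listing contains a Kafka table factory: flink-table_2.11 and flink-table-blink_2.11 bundle the planners and the built-in CSV and filesystem factories from the "considered" list, but no connectors. For the 'connector.type' = 'kafka' lookup to succeed, flink-sql-connector-kafka_2.11-1.11.0.jar would also have to be placed in lib/, or the flink-connector-kafka and flink-json dependencies, service files included, shaded into the job jar as sketched above.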
>>>>
>>>>
>>>> On 2020-07-10 10:24:02, "Congxian Qiu" <[hidden email]> wrote:
>>>>> Hi
>>>>>
>>>>> This looks like the job was submitted to YARN; the JM log is needed to see the actual cause. Also, the logs may not have been pasted in full: only the local log and a small portion of the jobmanager.err log are shown here.
>>>>>
>>>>> Best,
>>>>> Congxian
>>>>>
>>>>>