Flink 1.12: submitting a job in yarn-application mode fails

Flink 1.12: submitting a job in yarn-application mode fails

todd
The Flink job is submitted via a script with the following command:
/bin/flink run-application -t yarn-application
-Dyarn.provided.lib.dirs="hdfs://xx/flink120/" hdfs://xx/flink-example.jar
--sqlFilePath   /xxx/kafka2print.sql

The Flink lib directory and the user jar have already been uploaded to HDFS, but the following error is thrown:
-----------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
        at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:465)
        at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
        at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1061)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://xx/flink120/, expected: file:///
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:648)
        at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82)
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1425)
        at org.apache.flink.yarn.YarnApplicationFileUploader.lambda$getAllFilesInProvidedLibDirs$2(YarnApplicationFileUploader.java:469)
        at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedConsumer$3(FunctionUtils.java:93)
        at java.util.ArrayList.forEach(ArrayList.java:1257)
        at org.apache.flink.yarn.YarnApplicationFileUploader.getAllFilesInProvidedLibDirs(YarnApplicationFileUploader.java:466)
        at org.apache.flink.yarn.YarnApplicationFileUploader.<init>(YarnApplicationFileUploader.java:106)
        at org.apache.flink.yarn.YarnApplicationFileUploader.from(YarnApplicationFileUploader.java:381)
        at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:789)
        at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:592)
        at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:458)
        ... 9 more



Re: Flink 1.12: submitting a job in yarn-application mode fails

Congxian Qiu
Hi
    From your log, the reason the job fails to start is:
    Caused by: java.lang.IllegalArgumentException: Wrong FS:
    hdfs://xx/flink120/, expected: file:///
    It looks like the path you configured does not match the filesystem scheme that is expected; you need to fix that mismatch first.
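
In practice, a RawLocalFileSystem handling an hdfs:// path in this stack trace usually means the Flink client did not pick up your Hadoop configuration, so fs.defaultFS fell back to file:///. A minimal sketch of the submission environment (the /etc/hadoop/conf path is an assumption; point it at the directory that actually holds core-site.xml and hdfs-site.xml on the submit host):

export HADOOP_CONF_DIR=/etc/hadoop/conf      # assumption: your Hadoop client configuration directory
export HADOOP_CLASSPATH=$(hadoop classpath)  # lets the Flink client load the HDFS filesystem classes

/bin/flink run-application -t yarn-application \
    -Dyarn.provided.lib.dirs="hdfs://xx/flink120/" \
    hdfs://xx/flink-example.jar --sqlFilePath /xxx/kafka2print.sql

With the Hadoop configuration visible to the client shell, hdfs:// paths should resolve to the DistributedFileSystem instead of the local one.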

Best,
Congxian


todd <[hidden email]> wrote on Monday, March 15, 2021 at 2:22 PM:

> The Flink job is submitted via a script with the following command:
> /bin/flink run-application -t yarn-application
> -Dyarn.provided.lib.dirs="hdfs://xx/flink120/" hdfs://xx/flink-example.jar
> --sqlFilePath   /xxx/kafka2print.sql
Re: Flink 1.12: submitting a job in yarn-application mode fails

todd
I set the following options in the Flink yaml configuration file:
HADOOP_CONF_DIR:
execution.target: yarn-application
yarn.provided.lib.dirs: hdfs://...
pipeline.jars: hdfs://...

So I am not sure how you configure yarn-application on your side.
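
For reference, a quick way to check whether the shell that runs bin/flink can actually see HDFS (a sketch; it assumes the hadoop/hdfs CLI is installed on the submit host). Note that HADOOP_CONF_DIR normally takes effect as an environment variable of that shell rather than as a plain key in flink-conf.yaml (if I recall correctly, the corresponding yaml key is env.hadoop.conf.dir):

echo "$HADOOP_CONF_DIR"             # should print the directory holding core-site.xml / hdfs-site.xml
hdfs dfs -ls hdfs://xx/flink120/    # should list the uploaded Flink lib jars if the client config is picked up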


