Submitting a jar to a Flink on Yarn cluster via the local API fails with: Error: Could not find or load main class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint


凌战

        List<URL> userClassPaths = new ArrayList<>();
        // Collect every jar under the "lib" directory on the application classpath.
        File file = ResourceUtils.getFile(new URL(
                Objects.requireNonNull(this.getClass().getClassLoader().getResource("")).toString() + "lib"));
        File[] jars = file.listFiles();
        if (file.isDirectory() && jars != null) {
            for (File ele : jars) {
                userClassPaths.add(ele.toURI().toURL());
            }
        }

        // 1. Build the PackagedProgram.
        PackagedProgram packagedProgram =
                PackagedProgram.newBuilder()
                        .setJarFile(jar)
                        .setUserClassPaths(userClassPaths)
                        .build();

        // 2. Load the global configuration (flink-conf.yaml) into a Configuration.
        String configurationDirectory = CliFrontend.getConfigurationDirectoryFromEnv();
        Configuration configuration = GlobalConfiguration.loadConfiguration(configurationDirectory);

        // 3. Register the job jar and its dependencies.
        ConfigUtils.encodeCollectionToConfig(
                configuration,
                PipelineOptions.JARS,
                packagedProgram.getJobJarAndDependencies(),
                URL::toString);

        ConfigUtils.encodeCollectionToConfig(
                configuration,
                PipelineOptions.CLASSPATHS,
                packagedProgram.getClasspaths(),
                URL::toString);

        // 4. Extract the Pipeline from the user program under its class loader.
        Pipeline pipeline = this.wrapClassLoader(packagedProgram.getUserCodeClassLoader(), () -> {
            try {
                return PackagedProgramUtils.getPipelineFromProgram(
                        packagedProgram, configuration, 10, false);
            } catch (ProgramInvocationException e) {
                e.printStackTrace();
                return null;
            }
        });

        // 5. Submit in yarn-per-job mode.
        return new PlatformAbstractJobClusterExecutor<>(new YarnClusterClientFactory())
                .execute(pipeline, configuration, packagedProgram.getUserCodeClassLoader())
                .get()
                .getJobID()
                .toString();
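Since the failure is YARN not finding YarnJobClusterEntrypoint, a useful pre-flight check on the submitting side is to confirm that the class file is actually inside one of the jars being shipped. This is a minimal stdlib-only sketch; the `lib/flink-dist_2.11-1.10.1.jar` path and the `containsClass` helper are illustrative, not Flink API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipFile;

public class JarEntryCheck {

    /** Returns true if the given jar contains the .class entry for className. */
    static boolean containsClass(Path jar, String className) throws IOException {
        // Class files live in the jar under the package path, e.g. org/apache/.../Foo.class
        String entryName = className.replace('.', '/') + ".class";
        try (ZipFile zip = new ZipFile(jar.toFile())) {
            return zip.getEntry(entryName) != null;
        }
    }

    public static void main(String[] args) throws IOException {
        // Illustrative path: point this at the jar you actually ship to YARN.
        Path dist = Paths.get("lib", "flink-dist_2.11-1.10.1.jar");
        String entrypoint = "org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint";
        if (Files.exists(dist)) {
            System.out.println(entrypoint + " present: " + containsClass(dist, entrypoint));
        } else {
            System.out.println("jar not found: " + dist);
        }
    }
}
```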



The dependency jars I added are as follows (the attachment did not come through; see the follow-up message):


But the following error occurs:

2021-02-23 18:49:23.404  INFO 26116 --- [nio-8080-exec-4] o.a.flink.api.java.utils.PlanGenerator   : The job has 0 registered types and 0 default Kryo serializers
2021-02-23 18:50:38.746  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2021-02-23 18:50:38.747  INFO 26116 --- [nio-8080-exec-4] .u.c.m.MemoryBackwardsCompatibilityUtils : 'jobmanager.memory.process.size' is not specified, use the configured deprecated task manager heap value (1024.000mb (1073741824 bytes)) for it.
2021-02-23 18:50:38.747  INFO 26116 --- [nio-8080-exec-4] o.a.f.r.u.c.memory.ProcessMemoryUtils    : The derived from fraction jvm overhead memory (102.400mb (107374184 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
2021-02-23 18:50:41.159  INFO 26116 --- [nio-8080-exec-4] .h.y.c.ConfiguredRMFailoverProxyProvider : Failing over to rm80
2021-02-23 18:50:41.848  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=2048, slotsPerTaskManager=1}
2021-02-23 18:50:41.849  WARN 26116 --- [nio-8080-exec-4] o.apache.flink.core.plugin.PluginConfig  : The plugins directory [plugins] does not exist.
2021-02-23 18:50:42.555  WARN 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Environment variable 'FLINK_LIB_DIR' not set and ship files have not been provided manually. Not shipping any library files.
2021-02-23 18:50:42.556  WARN 26116 --- [nio-8080-exec-4] o.apache.flink.core.plugin.PluginConfig  : The plugins directory [plugins] does not exist.
2021-02-23 18:50:48.552  INFO 26116 --- [nio-8080-exec-4] .u.c.m.MemoryBackwardsCompatibilityUtils : 'jobmanager.memory.process.size' is not specified, use the configured deprecated task manager heap value (1024.000mb (1073741824 bytes)) for it.
2021-02-23 18:50:48.552  INFO 26116 --- [nio-8080-exec-4] o.a.f.r.u.c.memory.ProcessMemoryUtils    : The derived from fraction jvm overhead memory (102.400mb (107374184 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
2021-02-23 18:50:48.554  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Submitting application master application_1610671284452_0243
2021-02-23 18:50:49.133  INFO 26116 --- [nio-8080-exec-4] o.a.h.y.client.api.impl.YarnClientImpl   : Submitted application application_1610671284452_0243
2021-02-23 18:50:49.133  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Waiting for the cluster to be allocated
2021-02-23 18:50:49.281  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Deploying cluster, current state ACCEPTED
2021-02-23 18:51:00.820 ERROR 26116 --- [nio-8080-exec-4] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy Yarn job cluster.] with root cause


The error shown on the YARN side: Error: Could not find or load main class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint

Which jar is missing, or is something else wrong?
Re: Submitting a jar to a Flink on Yarn cluster via the local API fails with: Error: Could not find or load main class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint

凌战
The jar list above did not show up, so to supplement: besides the user jar, the dependency jars added are
flink-dist_2.11-1.10.1.jar
flink-queryable-state-runtime_2.11-1.10.1.jar
flink-shaded-hadoop-2-uber-2.7.5-10.0.jar
flink-table-blink_2.11-1.10.1.jar
flink-table_2.11-1.10.1.jar
flink-yarn_2.11-1.11.1.jar


But submission to Flink on YARN still fails with the same error.
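One detail worth checking in the list above: flink-yarn is at 1.11.1 while every other Flink jar is 1.10.1. Mixed Flink versions on one classpath are a common cause of class resolution problems. A small stdlib-only helper to flag this (the filename pattern assumes the standard `flink-<module>_<scala>-<version>.jar` naming convention; `flinkVersions` is an illustrative name):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FlinkJarVersions {

    // Matches e.g. "flink-yarn_2.11-1.11.1.jar" and captures the version "1.11.1".
    private static final Pattern VERSION =
            Pattern.compile("flink-.*_\\d+\\.\\d+-(\\d+\\.\\d+\\.\\d+)\\.jar");

    /** Returns the distinct Flink versions found among the given jar file names. */
    static Set<String> flinkVersions(List<String> jarNames) {
        Set<String> versions = new TreeSet<>();
        for (String name : jarNames) {
            Matcher m = VERSION.matcher(name);
            if (m.matches()) {
                versions.add(m.group(1));
            }
        }
        return versions;
    }

    public static void main(String[] args) {
        List<String> jars = Arrays.asList(
                "flink-dist_2.11-1.10.1.jar",
                "flink-queryable-state-runtime_2.11-1.10.1.jar",
                "flink-table-blink_2.11-1.10.1.jar",
                "flink-table_2.11-1.10.1.jar",
                "flink-yarn_2.11-1.11.1.jar");
        Set<String> versions = flinkVersions(jars);
        if (versions.size() > 1) {
            System.out.println("Mixed Flink versions on the classpath: " + versions);
        }
    }
}
```

Note that flink-shaded-hadoop-2-uber-2.7.5-10.0.jar deliberately does not match the pattern: it uses its own shaded versioning scheme and is not tied to the Flink release number.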


Re: Submitting a jar to a Flink on Yarn cluster via the local API fails with: Error: Could not find or load main class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint

Smile
Hi,

The class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint lives in the flink-yarn module, and when the lib directory is assembled it is bundled into flink-dist as a dependency.

Why are you adding both flink-dist_2.11-1.10.1.jar and flink-yarn_2.11-1.11.1.jar at the same time? Won't they conflict?

Smile
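This point can be verified from the submitting JVM before deploying anything: if the entrypoint class cannot be resolved locally against the same set of jars you intend to ship, YARN will not find it either. A minimal stdlib-only sketch (the `isLoadable` helper is illustrative, not Flink API):

```java
public class EntrypointCheck {

    /** Returns true if the class can be resolved by the given class loader. */
    static boolean isLoadable(String className, ClassLoader loader) {
        try {
            // initialize=false: we only want resolution, not static initializers.
            Class.forName(className, false, loader);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String entrypoint = "org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint";
        // In the submitter, check against the loader that sees your lib/ jars.
        boolean ok = isLoadable(entrypoint, EntrypointCheck.class.getClassLoader());
        System.out.println(entrypoint + " loadable: " + ok);
    }
}
```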
--
Sent from: http://apache-flink.147419.n8.nabble.com/