Submitting a jar to a Flink on YARN cluster through the local API fails with "Error: Could not find or load main class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint"


凌战
        // Collect every jar under the local "lib" directory as user classpath entries
        List<URL> userClassPaths = new ArrayList<>();
        File file = ResourceUtils.getFile(new URL(Objects.requireNonNull(
                this.getClass().getClassLoader().getResource("")).toString() + "lib"));
        if (file.isDirectory() && file.listFiles() != null) {
            for (File ele : Objects.requireNonNull(file.listFiles())) {
                userClassPaths.add(ele.toURI().toURL());
            }
        }

        // 1. Build the PackagedProgram
        PackagedProgram packagedProgram =
                PackagedProgram.newBuilder()
                .setJarFile(jar)
                .setUserClassPaths(userClassPaths)
                .build();

        // Locate the Flink configuration directory
        String configurationDirectory = CliFrontend.getConfigurationDirectoryFromEnv();

        // 2. Load the global configuration (flink-conf.yaml) into a Configuration
        Configuration configuration = GlobalConfiguration.loadConfiguration(configurationDirectory);


        // 3. Register the job jar, its dependencies and the extra classpaths in the configuration
        ConfigUtils.encodeCollectionToConfig(
                configuration,
                PipelineOptions.JARS,
                packagedProgram.getJobJarAndDependencies(),
                URL::toString
        );

        ConfigUtils.encodeCollectionToConfig(
                configuration,
                PipelineOptions.CLASSPATHS,
                packagedProgram.getClasspaths(),
                URL::toString
        );


        // Extract the Pipeline from the user program under its user-code classloader
        Pipeline pipeline = this.wrapClassLoader(packagedProgram.getUserCodeClassLoader(), () -> {
            try {
                return PackagedProgramUtils.getPipelineFromProgram(
                        packagedProgram,
                        configuration,
                        10,
                        false);
            } catch (ProgramInvocationException e) {
                e.printStackTrace();
                return null;
            }
        });


        // yarn-per-job mode: deploy the job cluster and return the JobID
        return new PlatformAbstractJobClusterExecutor<>(new YarnClusterClientFactory())
                .execute(pipeline, configuration, packagedProgram.getUserCodeClassLoader())
                .get()
                .getJobID()
                .toString();
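
For reference, a hypothetical check after the two encodeCollectionToConfig calls can print what actually ended up in PipelineOptions.JARS and PipelineOptions.CLASSPATHS, to verify which jars will be shipped with the job:

        // Hypothetical debugging aid: dump the jar and classpath entries that were
        // just written into the Configuration by encodeCollectionToConfig above.
        configuration.get(PipelineOptions.JARS).forEach(System.out::println);
        configuration.get(PipelineOptions.CLASSPATHS).forEach(System.out::println);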



The dependency jars added here are as follows:


But the following error occurs:

2021-02-23 18:49:23.404  INFO 26116 --- [nio-8080-exec-4] o.a.flink.api.java.utils.PlanGenerator   : The job has 0 registered types and 0 default Kryo serializers
2021-02-23 18:50:38.746  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2021-02-23 18:50:38.747  INFO 26116 --- [nio-8080-exec-4] .u.c.m.MemoryBackwardsCompatibilityUtils : 'jobmanager.memory.process.size' is not specified, use the configured deprecated task manager heap value (1024.000mb (1073741824 bytes)) for it.
2021-02-23 18:50:38.747  INFO 26116 --- [nio-8080-exec-4] o.a.f.r.u.c.memory.ProcessMemoryUtils    : The derived from fraction jvm overhead memory (102.400mb (107374184 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
2021-02-23 18:50:41.159  INFO 26116 --- [nio-8080-exec-4] .h.y.c.ConfiguredRMFailoverProxyProvider : Failing over to rm80
2021-02-23 18:50:41.848  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=2048, slotsPerTaskManager=1}
2021-02-23 18:50:41.849  WARN 26116 --- [nio-8080-exec-4] o.apache.flink.core.plugin.PluginConfig  : The plugins directory [plugins] does not exist.
2021-02-23 18:50:42.555  WARN 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Environment variable 'FLINK_LIB_DIR' not set and ship files have not been provided manually. Not shipping any library files.
2021-02-23 18:50:42.556  WARN 26116 --- [nio-8080-exec-4] o.apache.flink.core.plugin.PluginConfig  : The plugins directory [plugins] does not exist.
2021-02-23 18:50:48.552  INFO 26116 --- [nio-8080-exec-4] .u.c.m.MemoryBackwardsCompatibilityUtils : 'jobmanager.memory.process.size' is not specified, use the configured deprecated task manager heap value (1024.000mb (1073741824 bytes)) for it.
2021-02-23 18:50:48.552  INFO 26116 --- [nio-8080-exec-4] o.a.f.r.u.c.memory.ProcessMemoryUtils    : The derived from fraction jvm overhead memory (102.400mb (107374184 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
2021-02-23 18:50:48.554  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Submitting application master application_1610671284452_0243
2021-02-23 18:50:49.133  INFO 26116 --- [nio-8080-exec-4] o.a.h.y.client.api.impl.YarnClientImpl   : Submitted application application_1610671284452_0243
2021-02-23 18:50:49.133  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Waiting for the cluster to be allocated
2021-02-23 18:50:49.281  INFO 26116 --- [nio-8080-exec-4] o.a.flink.yarn.YarnClusterDescriptor     : Deploying cluster, current state ACCEPTED
2021-02-23 18:51:00.820 ERROR 26116 --- [nio-8080-exec-4] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy Yarn job cluster.] with root cause


The error shown on the YARN side: Error: Could not find or load main class org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint

Is some jar missing, or is something else wrong?
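
For reference, the warnings above ("No path for the flink jar passed", "Environment variable 'FLINK_LIB_DIR' not set and ship files have not been provided manually") suggest that the Flink runtime jars may never reach the YARN application master, which would explain why YarnJobClusterEntrypoint cannot be found. A minimal, hypothetical sketch of pointing the same Configuration at a local Flink distribution (assuming org.apache.flink.yarn.configuration.YarnConfigOptions is available and using placeholder paths; exact option names can vary between Flink versions):

        // Placeholder paths for illustration only; adjust to the local Flink install.
        // Tell the descriptor where flink-dist is and ship the lib directory so the
        // application master classpath contains the Flink runtime classes.
        configuration.set(
                YarnConfigOptions.FLINK_DIST_JAR,
                "/opt/flink-1.12.0/lib/flink-dist_2.11-1.12.0.jar");
        configuration.set(
                YarnConfigOptions.SHIP_FILES,
                java.util.Collections.singletonList("/opt/flink-1.12.0/lib"));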