flink on yarn任务启动报错 The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.


zjfplayer@hotmail.com
Hi all,

When starting a Flink-on-YARN job, it failed with the error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

Environment: Flink 1.8.1, CDH 5.14.2, Kafka 0.10, JDK 1.8.0_241

The YARN log from the job (Flink 1.8.1):

20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: --------------------------------------------------------------------------------
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @ 23:04:28 CST)
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user: cloudera-scm
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current Hadoop/Kerberos user: root
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size: 406 MiBytes
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME: /usr/java/default
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments: (none)
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Classpath: core-1.8.0_release.jar:flink-shaded-hadoop-2-uber-2.6.5-7.0.jar:kafka10-source-1.8.0_release.jar:log4j-1.2.17.jar:mysql-all-side-1.8.0_release.jar:mysql-sink-1.8.0_release.jar:slf4j-log4j12-1.7.15.jar:sql.launcher-1.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml:job.graph::/etc/hadoop/conf.cloudera.yarn:/run/cloudera-scm-agent/process/1129-yarn-NODEMANAGER:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-azure-datalake.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-aws-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-auth-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-annotations-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-tools.jar:/opt/cl
oudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-thrift.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-pig.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-generator.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-column.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jacks
on-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/logredactor-
1.0.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/azure-data-lake-store-sdk-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/aws-java-sdk-bundle-1.11.134.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parce
ls/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:
/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.1
4.2.p0.3/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhi
storyservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parc
els/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt
/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: --------------------------------------------------------------------------------
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX signal handlers for [TERM, HUP, INT]
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is running as: root Yarn client user obtainer: root
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: time.characteristic, EventTime
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: internal.cluster.execution-mode, DETACHED
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: high-availability.cluster-id, application_1579661300080_0005
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: taskmanager.numberOfTaskSlots, 1
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: taskmanager.heap.size, 1024m
20/01/22 11:07:53 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting directories for temporary files to: /yarn/nm/usercache/root/appcache/application_1579661300080_0005
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting YarnJobClusterEntrypoint.
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default filesystem.
20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root (auth:SIMPLE)
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster services.
20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor system at uf30-3:0
20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on addresses :[akka.tcp://flink@uf30-3:61028]
20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at akka.tcp://flink@uf30-3:61028
20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.port'
20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage directory /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter configured, no metrics will be exposed/reported.
20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start actor system at uf30-3:0
20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on addresses :[akka.tcp://flink-metrics@uf30-3:26151]
20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started at akka.tcp://flink-metrics@uf30-3:26151
20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache storage directory /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload directory /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does not exist, or has been deleted externally. Previously uploaded files are no longer available.
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created directory /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for file uploads.
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting rest endpoint.
20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment variable 'log.file' is not set.
20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest endpoint listening at uf30-3:17001
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: http://uf30-3:17001 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend listening at http://uf30-3:17001.
20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for org.apache.flink.yarn.YarnResourceManager at akka://flink/user/resourcemanager .
20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for org.apache.flink.runtime.dispatcher.MiniDispatcher at akka://flink/user/dispatcher .
20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership with fencing token 00000000-0000-0000-0000-000000000000
20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all persisted jobs.
20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_0 .
20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000 msmaxFailuresPerInterval=3) for xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via failover strategy: full graph restart
20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm225
20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on master for job xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran initialization on master in 0 ms.
20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been configured, using default (Memory / JobManager) MemoryStateBackend (data in heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 'null', asynchronous: TRUE, maxStateSize: 5242880)
20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with session id 00000000-0000-0000-0000-000000000000 at akka.tcp://flink@uf30-3:61028/user/jobmanager_0.
20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job xctest (e1b2df526572dd9e93be25763519ee35) under job master id 00000000000000000000000000000000.
20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request, no ResourceManager connected. Adding as pending request [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager akka.tcp://flink@uf30-3:61028/user/resourcemanager(00000000000000000000000000000000)
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers from previous attempts ([]).
20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted leadership with fencing token 00000000000000000000000000000000
20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager address, beginning registration
20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at ResourceManager attempt 1 (timeout=100ms)
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager [hidden email]://flink@uf30-3:61028/user/jobmanager_0 for job e1b2df526572dd9e93be25763519ee35.
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager [hidden email]://flink@uf30-3:61028/user/jobmanager_0 for job e1b2df526572dd9e93be25763519ee35.
20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully registered at ResourceManager, leader id: 00000000000000000000000000000000.
20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} for job e1b2df526572dd9e93be25763519ee35 with allocation id 2394a48465851f57cb3592402df11112.
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new TaskExecutor container with resources <memory:1024, vCores:1>. Number pending requests 1.
20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for : uf30-3:8041
20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container: container_e10_1579661300080_0005_01_000002 - Remaining pending container requests: 1
20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container requests 0.
20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container launch context for TaskManagers
20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening proxy : uf30-3:8041
20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager with ResourceID container_e10_1579661300080_0005_01_000002 (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0) to container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor connection container_e10_1579661300080_0005_01_000002 because: The heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002  timed out.
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
org.apache.flink.util.FlinkException: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
        at org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
        at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
        at org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
        at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
        at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
        at akka.actor.ActorCell.invoke(ActorCell.scala:495)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
        at akka.dispatch.Mailbox.run(Mailbox.scala:224)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to FAILING.
org.apache.flink.util.FlinkException: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
        at org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
        at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
        at org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
        at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
        at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
        at akka.actor.ActorCell.invoke(ActorCell.scala:495)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
        at akka.dispatch.Mailbox.run(Mailbox.scala:224)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer possible.
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to RESTARTING.
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to CREATED.
20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.

jobmanager.err:
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: --------------------------------------------------------------------------------
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @ 23:04:28 CST)
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user: cloudera-scm
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current Hadoop/Kerberos user: root
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size: 406 MiBytes
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME: /usr/java/default
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments: (none)

taskmanager.err:
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner: --------------------------------------------------------------------------------
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @ 23:04:28 CST)
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user: cloudera-scm
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current Hadoop/Kerberos user: root
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size: 345 MiBytes
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME: /usr/java/default
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version: 2.6.5
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -XX:MaxDirectMemorySize=664m
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .

I searched around online, and this error is generally said to be memory-related. Could it be caused by the memory settings on YARN?
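One detail worth noting from the logs above: the TaskManager registered at 11:07:59 and its heartbeat was declared timed out at 11:08:49, exactly 50 seconds later, which matches Flink's default `heartbeat.timeout` of 50000 ms. That pattern is consistent with a TM that stalled (e.g. long GC pauses in its small 345 MiB heap) or died outright. If the TM turns out to be alive but pausing, a common first step is to give it more memory and/or loosen the heartbeat settings in `flink-conf.yaml`. The values below are purely illustrative, not a recommendation for this cluster:

```yaml
# flink-conf.yaml -- illustrative values only; tune for your workload
taskmanager.heap.size: 2048m   # current TM container only gets -Xmx360m
heartbeat.interval: 10000      # default: heartbeat every 10 s
heartbeat.timeout: 120000      # default 50000 ms; the observed 50 s gap matches that default
```

Raising the timeout only masks the symptom if the TM process is actually being killed; in that case the container exit reason in the NodeManager / aggregated YARN logs is the thing to look at.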

________________________________
[hidden email]

Re: Flink on YARN job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

tison
20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
connection container_e10_1579661300080_0005_01_000002 because: The
heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002
timed out.

When resources were requested, the slot request was routed to that machine, and then its heartbeat timed out. Check whether the TM actually came up properly, and whether it is short on resources or has crashed.
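To follow this advice, the TM container's logs and exit status can be pulled with the standard YARN CLI. These are generic YARN commands, not specific to this cluster; the application ID is taken from the logs above:

```
# Aggregated logs for the whole application (JobManager and TaskManager containers)
yarn logs -applicationId application_1579661300080_0005

# Application status and diagnostics, e.g. whether a container was
# killed for exceeding its memory limits
yarn application -status application_1579661300080_0005
```

If the TM container was killed by the NodeManager, the diagnostics usually say so explicitly (e.g. a "running beyond physical memory limits" message).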

Best,
tison.


郑 洁锋 <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:16 AM:

> Hi all,
>        When starting a Flink on YARN job, I got the error "The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed."
>        Environment: Flink 1.8.1, CDH 5.14.2, Kafka 0.10, JDK 1.8.0_241
>
> The Flink version is 1.8.1; the logs from YARN:
>
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX signal handlers for [TERM, HUP, INT]
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is running as: root Yarn client user obtainer: root
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: time.characteristic, EventTime
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: internal.cluster.execution-mode, DETACHED
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: high-availability.cluster-id, application_1579661300080_0005
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: taskmanager.numberOfTaskSlots, 1
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: taskmanager.heap.size, 1024m
> 20/01/22 11:07:53 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting directories for temporary files to: /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting YarnJobClusterEntrypoint.
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default filesystem.
> 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root (auth:SIMPLE)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster services.
> 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on addresses :[akka.tcp://flink@uf30-3:61028]
> 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at akka.tcp://flink@uf30-3:61028
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.port'
> 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage directory /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter configured, no metrics will be exposed/reported.
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start actor system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started at akka.tcp://flink-metrics@uf30-3:26151
> 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache storage directory /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload directory /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does not exist, or has been deleted externally. Previously uploaded files are no longer available.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created directory /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for file uploads.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting rest endpoint.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment variable 'log.file' is not set.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest endpoint
> listening at uf30-3:17001
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> http://uf30-3:17001 was granted leadership with
> leaderSessionID=00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> listening at http://uf30-3:17001.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.yarn.YarnResourceManager at
> akka://flink/user/resourcemanager .
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.dispatcher.MiniDispatcher at
> akka://flink/user/dispatcher .
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership with
> fencing token 00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all persisted
> jobs.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.jobmaster.JobMaster at
> akka://flink/user/jobmanager_0 .
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> msmaxFailuresPerInterval=3) for xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> failover strategy: full graph restart
> 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to rm225
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> master for job xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> initialization on master in 0 ms.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> configured, using default (Memory / JobManager) MemoryStateBackend (data in
> heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints:
> 'null', asynchronous: TRUE, maxStateSize: 5242880)
> 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> session id 00000000-0000-0000-0000-000000000000 at akka.tcp://flink@uf30-3
> :61028/user/jobmanager_0.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> 00000000000000000000000000000000.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> no ResourceManager connected. Adding as pending request
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> akka.tcp://flink@uf30-3
> :61028/user/resourcemanager(00000000000000000000000000000000)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> from previous attempts ([]).
> 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-cached-nodemanagers-proxies : 0
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted leadership
> with fencing token 00000000000000000000000000000000
> 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> address, beginning registration
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> ResourceManager attempt 1 (timeout=100ms)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> registered at ResourceManager, leader id: 00000000000000000000000000000000.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} for job
> e1b2df526572dd9e93be25763519ee35 with allocation id
> 2394a48465851f57cb3592402df11112.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> TaskExecutor container with resources <memory:1024, vCores:1>. Number
> pending requests 1.
> 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> uf30-3:8041
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> container_e10_1579661300080_0005_01_000002 - Remaining pending container
> requests: 1
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> requests 0.
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container launch
> context for TaskManagers
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : uf30-3:8041
> 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> with ResourceID container_e10_1579661300080_0005_01_000002
> (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0) to
> container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> connection container_e10_1579661300080_0005_01_000002 because: The
> heartbeat of TaskManager with id
> container_e10_1579661300080_0005_01_000002  timed out.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to FAILING.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> possible.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> RESTARTING.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> CREATED.
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
>
> jobmanager.err:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
>
> taskmanager.err:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> 23:04:28 CST)
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> cloudera-scm
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java HotSpot(TM)
> 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> 345 MiBytes
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version: 2.6.5
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
>  -XX:MaxDirectMemorySize=664m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
>
> I searched around, and this error is usually attributed to memory problems. Could it be caused by the YARN memory settings?
>
> ________________________________
> [hidden email]
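The memory hypothesis is plausible, and the usual knobs live in flink-conf.yaml. A minimal sketch with illustrative values (assumptions, not tuned recommendations) for keys that exist in Flink 1.8:

```yaml
# Give each TaskManager more heap so long GC pauses do not starve the heartbeat thread
taskmanager.heap.size: 2048m
# Flink 1.8 heartbeat defaults: interval 10000 ms, timeout 50000 ms.
# Raising the timeout tolerates longer pauses at the cost of slower failure detection.
heartbeat.interval: 10000
heartbeat.timeout: 120000
```

Whatever the values, the decisive evidence is the TaskManager container log itself, which can be pulled with e.g. `yarn logs -applicationId application_1579661300080_0005` once the application has finished.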
>

Re: Re: flink on yarn job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

zjfplayer@hotmail.com
The TM never came up. The server itself has plenty of memory and CPU — it is mostly idle.

________________________________
[hidden email]

From: tison<mailto:[hidden email]>
Sent: 2020-01-22 11:25
To: user-zh<mailto:[hidden email]>
Subject: Re: flink on yarn job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
connection container_e10_1579661300080_0005_01_000002 because: The
heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002
timed out.

Your slot request was routed to that machine, and then its heartbeat timed out. Check whether the TM actually came up, and whether it ran out of resources or crashed.
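One arithmetic detail in the log supports this reading: the TM registered at 11:07:59 and its connection was closed at 11:08:49 — exactly 50 s later, which matches Flink 1.8's default heartbeat.timeout of 50000 ms. That suggests the TM never answered a single heartbeat after registering. A quick sketch of that check:

```python
from datetime import datetime

# Timestamps taken from the YARN log in the original post
registered = datetime.strptime("11:07:59", "%H:%M:%S")  # TM registered at the RM
timed_out  = datetime.strptime("11:08:49", "%H:%M:%S")  # heartbeat timeout fired

gap_ms = (timed_out - registered).total_seconds() * 1000
print(gap_ms)  # 50000.0 -- one full default heartbeat.timeout in Flink 1.8
```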

Best,
tison.


郑 洁锋 <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:16 AM:

> Hi all,
>        When starting a Flink on YARN job, I hit the error: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>        Environment: flink1.8.1, cdh5.14.2, kafka0.10, jdk1.8.0_241
>
> The Flink version is 1.8.1. The YARN log:
>
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14
.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-guic
e-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/l
ib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX
> signal handlers for [TERM, HUP, INT]
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is
> running as: root Yarn client user obtainer: root
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: time.characteristic, EventTime
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: internal.cluster.execution-mode, DETACHED
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: high-availability.cluster-id,
> application_1579661300080_0005
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: taskmanager.numberOfTaskSlots, 1
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: taskmanager.heap.size, 1024m
> 20/01/22 11:07:53 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting
> directories for temporary files to:
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting
> YarnJobClusterEntrypoint.
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default
> filesystem.
> 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root
> (auth:SIMPLE)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster
> services.
> 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor
> system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> addresses :[akka.tcp://flink@uf30-3:61028]
> 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at
> akka.tcp://flink@uf30-3:61028
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.port'
> 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage
> directory
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at
> 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter
> configured, no metrics will be exposed/reported.
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start actor
> system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started
> at akka.tcp://flink-metrics@uf30-3:26151
> 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache storage
> directory
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload
> directory
> /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does
> not exist, or has been deleted externally. Previously uploaded files are no
> longer available.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created
> directory
> /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for
> file uploads.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting rest
> endpoint.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment
> variable 'log.file' is not set.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files
> are unavailable in the web dashboard. Log file location not found in
> environment variable 'log.file' or configuration key 'Key: 'web.log.path' ,
> default: null (fallback keys: [{key=jobmanager.web.log.path,
> isDeprecated=true}])'.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest endpoint
> listening at uf30-3:17001
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> http://uf30-3:17001 was granted leadership with
> leaderSessionID=00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> listening at http://uf30-3:17001.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.yarn.YarnResourceManager at
> akka://flink/user/resourcemanager .
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.dispatcher.MiniDispatcher at
> akka://flink/user/dispatcher .
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership with
> fencing token 00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all persisted
> jobs.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.jobmaster.JobMaster at
> akka://flink/user/jobmanager_0 .
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> msmaxFailuresPerInterval=3) for xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> failover strategy: full graph restart
> 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to rm225
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> master for job xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> initialization on master in 0 ms.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> configured, using default (Memory / JobManager) MemoryStateBackend (data in
> heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints:
> 'null', asynchronous: TRUE, maxStateSize: 5242880)
> 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> session id 00000000-0000-0000-0000-000000000000 at akka.tcp://flink@uf30-3
> :61028/user/jobmanager_0.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> 00000000000000000000000000000000.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> no ResourceManager connected. Adding as pending request
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> akka.tcp://flink@uf30-3
> :61028/user/resourcemanager(00000000000000000000000000000000)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> from previous attempts ([]).
> 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-cached-nodemanagers-proxies : 0
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted leadership
> with fencing token 00000000000000000000000000000000
> 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> address, beginning registration
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> ResourceManager attempt 1 (timeout=100ms)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> registered at ResourceManager, leader id: 00000000000000000000000000000000.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} for job
> e1b2df526572dd9e93be25763519ee35 with allocation id
> 2394a48465851f57cb3592402df11112.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> TaskExecutor container with resources <memory:1024, vCores:1>. Number
> pending requests 1.
> 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> uf30-3:8041
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> container_e10_1579661300080_0005_01_000002 - Remaining pending container
> requests: 1
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> requests 0.
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container launch
> context for TaskManagers
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : uf30-3:8041
> 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> with ResourceID container_e10_1579661300080_0005_01_000002
> (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0) to
> container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> connection container_e10_1579661300080_0005_01_000002 because: The
> heartbeat of TaskManager with id
> container_e10_1579661300080_0005_01_000002  timed out.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to FAILING.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> possible.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> RESTARTING.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> CREATED.
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
>
> jobmanager.err:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
>
> taskmanager.err:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> 23:04:28 CST)
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> cloudera-scm
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java HotSpot(TM)
> 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> 345 MiBytes
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version: 2.6.5
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
>  -XX:MaxDirectMemorySize=664m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
>
> I searched around online; this error is usually said to be a memory problem. Could it be caused by the YARN memory settings?
>
> ________________________________
> [hidden email]
>

Re: Re: Flink on YARN job fails at startup: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

tison
Then look at the TM log on the machine where that TaskManager ran. From the JM side, the TM did start successfully at one point and registered itself, so check how the TM died, or whatever else happened to it.
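As a side note, the logs of that specific container can usually be pulled from YARN directly. A minimal sketch, assuming log aggregation is enabled, using the application and container IDs that appear in the log above (on old Hadoop versions such as 2.6, `-containerId` also requires the NodeManager address, here taken from the "Opening proxy : uf30-3:8041" line):

```shell
# All aggregated logs for the application
yarn logs -applicationId application_1579661300080_0005

# Only the failed TaskManager container (NodeManager address
# required on older Hadoop versions)
yarn logs -applicationId application_1579661300080_0005 \
    -containerId container_e10_1579661300080_0005_01_000002 \
    -nodeAddress uf30-3:8041
```

If the application is still running, the same files can also be read on the node itself under the NodeManager's container log directory.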

Best,
tison.


郑 洁锋 <[hidden email]> wrote on Wednesday, January 22, 2020 at 11:54 AM:

> The TM did not come up. The server itself has plenty of memory and CPU, and is in fact mostly idle.
>
> ________________________________
> [hidden email]
>
> From: tison<mailto:[hidden email]>
> Sent: 2020-01-22 11:25
> To: user-zh<mailto:[hidden email]>
> Subject: Re: Flink on YARN job fails at startup: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> connection container_e10_1579661300080_0005_01_000002 because: The
> heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002
> timed out.
>
> When resources were requested, the slot request was sent to this machine, and then its heartbeat timed out. Check whether the TM actually came up properly, whether it ran short of resources, or whether it crashed.
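For what it's worth, the timeout in question is Flink's own heartbeat timeout, which in Flink 1.8 defaults to 50 s — and indeed the TM registered at 11:07:59 and was declared timed out at 11:08:49, exactly 50 s later. If the TM turns out to be alive but merely slow to respond (e.g. long GC pauses), the timeout can be raised in flink-conf.yaml; a sketch with an illustrative value:

```yaml
# Flink 1.8 defaults: heartbeat.interval 10000 ms, heartbeat.timeout 50000 ms
heartbeat.interval: 10000
heartbeat.timeout: 120000   # illustrative value; tune to your environment
```

That only helps if the TM process is still alive, though; if it exited, the TM and NodeManager logs are the place to look.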
>
> Best,
> tison.
>
>
> 郑 洁锋 <[hidden email]> wrote on Wednesday, January 22, 2020 at 11:16 AM:
>
> > Hi all,
> >        When starting a Flink on YARN job, it failed with: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >        Environment: Flink 1.8.1, CDH 5.14.2, Kafka 0.10, JDK 1.8.0_241
> >
> > flink版本为1.8.1,yarn上的日志:
> >
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> > YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac,
> Date:24.06.2019
> > @ 23:04:28 CST)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> > HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> > 406 MiBytes
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> > (none)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Classpath:
> >
> core-1.8.0_release.jar:flink-shaded-hadoop-2-uber-2.6.5-7.0.jar:... (remainder of the CDH classpath listing omitted; it repeats the one quoted at the top of the thread)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX
> > signal handlers for [TERM, HUP, INT]
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is
> > running as: root Yarn client user obtainer: root
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: time.characteristic, EventTime
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: internal.cluster.execution-mode, DETACHED
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: high-availability.cluster-id,
> > application_1579661300080_0005
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: taskmanager.numberOfTaskSlots, 1
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: taskmanager.heap.size, 1024m
> > 20/01/22 11:07:53 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.bind-port'
> > 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting
> > directories for temporary files to:
> > /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting
> > YarnJobClusterEntrypoint.
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default
> > filesystem.
> > 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root
> > (auth:SIMPLE)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster
> > services.
> > 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor
> > system at uf30-3:0
> > 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> > 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> > 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> > addresses :[akka.tcp://flink@uf30-3:61028]
> > 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at
> > akka.tcp://flink@uf30-3:61028
> > 20/01/22 11:07:54 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.port'
> > 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage
> > directory
> >
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> > 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at
> > 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> > 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter
> > configured, no metrics will be exposed/reported.
> > 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start
> actor
> > system at uf30-3:0
> > 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> > 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> > 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> > addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> > 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started
> > at akka.tcp://flink-metrics@uf30-3:26151
> > 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache
> storage
> > directory
> >
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> > 20/01/22 11:07:54 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.bind-port'
> > 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload
> > directory
> > /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does
> > not exist, or has been deleted externally. Previously uploaded files are
> no
> > longer available.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created
> > directory
> > /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for
> > file uploads.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting
> rest
> > endpoint.
> > 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment
> > variable 'log.file' is not set.
> > 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files
> > are unavailable in the web dashboard. Log file location not found in
> > environment variable 'log.file' or configuration key 'Key:
> 'web.log.path' ,
> > default: null (fallback keys: [{key=jobmanager.web.log.path,
> > isDeprecated=true}])'.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest
> endpoint
> > listening at uf30-3:17001
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> > http://uf30-3:17001 was granted leadership with
> > leaderSessionID=00000000-0000-0000-0000-000000000000
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> > listening at http://uf30-3:17001.
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.yarn.YarnResourceManager at
> > akka://flink/user/resourcemanager .
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.runtime.dispatcher.MiniDispatcher at
> > akka://flink/user/dispatcher .
> > 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> > akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership
> with
> > fencing token 00000000-0000-0000-0000-000000000000
> > 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all
> persisted
> > jobs.
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.runtime.jobmaster.JobMaster at
> > akka://flink/user/jobmanager_0 .
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> > (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> > FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> > msmaxFailuresPerInterval=3) for xctest
> (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> > failover strategy: full graph restart
> > 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> > over to rm225
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> > master for job xctest (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> > initialization on master in 0 ms.
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> > configured, using default (Memory / JobManager) MemoryStateBackend (data
> in
> > heap memory / checkpoints to JobManager) (checkpoints: 'null',
> savepoints:
> > 'null', asynchronous: TRUE, maxStateSize: 5242880)
> > 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> > job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> > session id 00000000-0000-0000-0000-000000000000 at
> akka.tcp://flink@uf30-3
> > :61028/user/jobmanager_0.
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> > xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> > 00000000000000000000000000000000.
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to
> RUNNING.
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> > 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> > no ResourceManager connected. Adding as pending request
> > [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> > akka.tcp://flink@uf30-3
> > :61028/user/resourcemanager(00000000000000000000000000000000)
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> > from previous attempts ([]).
> > 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> > yarn.client.max-cached-nodemanagers-proxies : 0
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> > akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted
> leadership
> > with fencing token 00000000000000000000000000000000
> > 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> > address, beginning registration
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> > ResourceManager attempt 1 (timeout=100ms)
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> > [hidden email]://flink@uf30-3
> :61028/user/jobmanager_0
> > for job e1b2df526572dd9e93be25763519ee35.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> > [hidden email]://flink@uf30-3
> :61028/user/jobmanager_0
> > for job e1b2df526572dd9e93be25763519ee35.
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> > registered at ResourceManager, leader id:
> 00000000000000000000000000000000.
> > 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> > [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> > ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> > nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with
> profile
> > ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> > nativeMemoryInMB=0, networkMemoryInMB=0} for job
> > e1b2df526572dd9e93be25763519ee35 with allocation id
> > 2394a48465851f57cb3592402df11112.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> > TaskExecutor container with resources <memory:1024, vCores:1>. Number
> > pending requests 1.
> > 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> > uf30-3:8041
> > 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> > container_e10_1579661300080_0005_01_000002 - Remaining pending container
> > requests: 1
> > 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> > request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> > requests 0.
> > 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container
> launch
> > context for TaskManagers
> > 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> > 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> > proxy : uf30-3:8041
> > 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> > with ResourceID container_e10_1579661300080_0005_01_000002
> > (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0)
> to
> > container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> > 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> > TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> > 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> > connection container_e10_1579661300080_0005_01_000002 because: The
> > heartbeat of TaskManager with id
> > container_e10_1579661300080_0005_01_000002  timed out.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> > org.apache.flink.util.FlinkException: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
> >         at
> >
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> >         at
> >
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> >         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> >         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> >         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> >         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> >         at
> > scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >         at
> > scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to
> FAILING.
> > org.apache.flink.util.FlinkException: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
> >         at
> >
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> >         at
> >
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> >         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> >         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> >         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> >         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> >         at
> > scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >         at
> > scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> > fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> > possible.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> > RESTARTING.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> > xctest (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> > CREATED.
> > 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to
> RUNNING.
> >
> > jobmanager.err:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> > YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac,
> Date:24.06.2019
> > @ 23:04:28 CST)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> > HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> > 406 MiBytes
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> > (none)
> >
> > taskmanager.err:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> > TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> > 23:04:28 CST)
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java
> HotSpot(TM)
> > 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> > 345 MiBytes
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> >  -XX:MaxDirectMemorySize=664m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
> >
> > I searched online and this error is usually attributed to memory problems. Could it be caused by the memory settings on YARN?
> >
> > ________________________________
> > [hidden email]
> >
>
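Since the question above is whether YARN's memory limits killed the container, a first step is to pull the aggregated container logs and look for the NodeManager's kill diagnostics. A diagnostic sketch (the application ID is taken from the logs above; the exact log location and output depend on your cluster):

```shell
# Fetch the aggregated YARN logs for the application. If the NodeManager
# killed the TaskManager container for exceeding its memory limit, the
# diagnostics will contain a line like
# "Container [pid=...,containerID=...] is running beyond physical memory limits".
yarn logs -applicationId application_1579661300080_0005 | grep -B2 -A5 "memory limits"
```

If no such message appears, the container was probably not killed by YARN for memory, and the cause of the heartbeat timeout lies elsewhere (GC pauses, network, or the process crashing on its own).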
Re: flink on yarn job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

zhisheng
In reply to this post by zjfplayer@hotmail.com
This is most likely because your job had already crashed before that point.
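One detail worth noting from the logs above: the TaskManager registered at 11:07:59 and the heartbeat timeout was logged at 11:08:49, exactly 50 seconds later, which matches Flink 1.8's default `heartbeat.timeout` of 50000 ms. If the TaskManager process was actually still alive but unresponsive (for example, stuck in a long GC pause) rather than killed, one thing to try is raising the heartbeat settings in flink-conf.yaml. A sketch with illustrative values, not a guaranteed fix:

```yaml
# flink-conf.yaml — illustrative values only.
# Flink 1.8 defaults: heartbeat.interval: 10000, heartbeat.timeout: 50000 (ms).
heartbeat.interval: 10000
heartbeat.timeout: 120000
```

This only masks the symptom if the TaskManager is genuinely dying; check the container exit reason first.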

郑 洁锋 <[hidden email]> wrote on Wednesday, January 22, 2020, at 11:16 AM:

> Hi all,
>        When starting a Flink on YARN job, I got the error: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>        Environment: flink 1.8.1, cdh 5.14.2, kafka 0.10, jdk 1.8.0_241
>
> The Flink version is 1.8.1. The logs from YARN:
>
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Classpath:
> core-1.8.0_release.jar:flink-shaded-hadoop-2-uber-2.6.5-7.0.jar:kafka10-source-1.8.0_release.jar:log4j-1.2.17.jar:mysql-all-side-1.8.0_release.jar:mysql-sink-1.8.0_release.jar:slf4j-log4j12-1.7.15.jar:sql.launcher-1.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml:job.graph: [...]
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX
> signal handlers for [TERM, HUP, INT]
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is
> running as: root Yarn client user obtainer: root
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: time.characteristic, EventTime
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: internal.cluster.execution-mode, DETACHED
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: high-availability.cluster-id,
> application_1579661300080_0005
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: taskmanager.numberOfTaskSlots, 1
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: taskmanager.heap.size, 1024m
> 20/01/22 11:07:53 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting
> directories for temporary files to:
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting
> YarnJobClusterEntrypoint.
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default
> filesystem.
> 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root
> (auth:SIMPLE)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster
> services.
> 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor
> system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> addresses :[akka.tcp://flink@uf30-3:61028]
> 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at
> akka.tcp://flink@uf30-3:61028
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.port'
> 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage
> directory
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at
> 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter
> configured, no metrics will be exposed/reported.
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start actor
> system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started
> at akka.tcp://flink-metrics@uf30-3:26151
> 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache storage
> directory
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload
> directory
> /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does
> not exist, or has been deleted externally. Previously uploaded files are no
> longer available.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created
> directory
> /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for
> file uploads.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting rest
> endpoint.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment
> variable 'log.file' is not set.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files
> are unavailable in the web dashboard. Log file location not found in
> environment variable 'log.file' or configuration key 'Key: 'web.log.path' ,
> default: null (fallback keys: [{key=jobmanager.web.log.path,
> isDeprecated=true}])'.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest endpoint
> listening at uf30-3:17001
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> http://uf30-3:17001 was granted leadership with
> leaderSessionID=00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> listening at http://uf30-3:17001.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.yarn.YarnResourceManager at
> akka://flink/user/resourcemanager .
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.dispatcher.MiniDispatcher at
> akka://flink/user/dispatcher .
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership with
> fencing token 00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all persisted
> jobs.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.jobmaster.JobMaster at
> akka://flink/user/jobmanager_0 .
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> msmaxFailuresPerInterval=3) for xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> failover strategy: full graph restart
> 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to rm225
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> master for job xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> initialization on master in 0 ms.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> configured, using default (Memory / JobManager) MemoryStateBackend (data in
> heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints:
> 'null', asynchronous: TRUE, maxStateSize: 5242880)
> 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> session id 00000000-0000-0000-0000-000000000000 at akka.tcp://flink@uf30-3
> :61028/user/jobmanager_0.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> 00000000000000000000000000000000.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> no ResourceManager connected. Adding as pending request
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> akka.tcp://flink@uf30-3
> :61028/user/resourcemanager(00000000000000000000000000000000)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> from previous attempts ([]).
> 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-cached-nodemanagers-proxies : 0
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted leadership
> with fencing token 00000000000000000000000000000000
> 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> address, beginning registration
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> ResourceManager attempt 1 (timeout=100ms)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> registered at ResourceManager, leader id: 00000000000000000000000000000000.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} for job
> e1b2df526572dd9e93be25763519ee35 with allocation id
> 2394a48465851f57cb3592402df11112.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> TaskExecutor container with resources <memory:1024, vCores:1>. Number
> pending requests 1.
> 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> uf30-3:8041
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> container_e10_1579661300080_0005_01_000002 - Remaining pending container
> requests: 1
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> requests 0.
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container launch
> context for TaskManagers
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : uf30-3:8041
> 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> with ResourceID container_e10_1579661300080_0005_01_000002
> (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0) to
> container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> connection container_e10_1579661300080_0005_01_000002 because: The
> heartbeat of TaskManager with id
> container_e10_1579661300080_0005_01_000002  timed out.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to FAILING.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> possible.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> RESTARTING.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> CREATED.
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
>
> jobmanager.err:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
>
> taskmanager.err:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> 23:04:28 CST)
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> cloudera-scm
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java HotSpot(TM)
> 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> 345 MiBytes
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version: 2.6.5
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
>  -XX:MaxDirectMemorySize=664m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
>
> I searched around online, and this error is usually reported as a memory problem. Could it be caused by the memory settings on YARN?
>
> ________________________________
> [hidden email]
>
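[Editorial note, not part of the original mail: the JM log above shows the TaskManager registering at 11:07:59 and timing out at 11:08:49, i.e. exactly 50 s, which matches Flink 1.8's default `heartbeat.timeout` of 50000 ms. If the root cause turned out to be long TM pauses (e.g. GC) rather than a container that actually died, a hedged mitigation sketch in `flink-conf.yaml` would be:]

```yaml
# Defaults in Flink 1.8 are heartbeat.interval: 10000 and heartbeat.timeout: 50000 (ms).
# Raising the timeout only papers over pauses -- it does not fix a TaskManager
# process that has actually crashed.
heartbeat.interval: 10000
heartbeat.timeout: 120000
```

This only helps if the TM process is alive but unresponsive; a dead container still needs to be diagnosed from the TM log itself.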
Re: Re: Flink on YARN job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

zjfplayer@hotmail.com
In reply to this post by tison
The logs are already in my earlier email in this thread.

________________________________
[hidden email]

From: tison<mailto:[hidden email]>
Date: 2020-01-22 12:10
To: user-zh<mailto:[hidden email]>
Subject: Re: Re: Flink on YARN job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
Then check the TM log on that machine. From the JM side, the TM did start successfully at some point and registered itself; look at how the TM went down, or whatever else happened there.

Best,
tison.
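[Editorial sketch, assuming the standard Hadoop CLI is available on a gateway node and YARN log aggregation is enabled: the TM container log that tison asks about can be pulled with `yarn logs`, using the application and container IDs plus the NodeManager address (uf30-3:8041) that all appear in the JM log above.]

```shell
# Fetch aggregated logs for the whole application
yarn logs -applicationId application_1579661300080_0005

# Narrow to the TaskManager container whose heartbeat timed out;
# -nodeAddress is the NodeManager host:port from the "Opening proxy" line
yarn logs -applicationId application_1579661300080_0005 \
  -containerId container_e10_1579661300080_0005_01_000002 \
  -nodeAddress uf30-3:8041
```

If aggregation is disabled, the same files live under the NodeManager's local log dir on uf30-3 (e.g. taskmanager.log / taskmanager.err for that container) until they are cleaned up.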


郑 洁锋 <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:54 AM:

> The TM did not come up. The server itself has enough memory and CPU, and is in fact mostly idle.
>
> ________________________________
> [hidden email]
>
> From: tison<mailto:[hidden email]>
> Date: 2020-01-22 11:25
> To: user-zh<mailto:[hidden email]>
> Subject: Re: Flink on YARN job startup error: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> connection container_e10_1579661300080_0005_01_000002 because: The
> heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002
> timed out.
>
> When you requested resources, the slot request was sent to this machine, and then its heartbeat timed out. Check whether the TM came up normally, whether it ran short of resources, or whether it crashed.
>
> Best,
> tison.
>
>
> 郑 洁锋 <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:16 AM:
>
> > Hi all,
> >        When starting a Flink on YARN job, I hit the error The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >        Environment: Flink 1.8.1, CDH 5.14.2, Kafka 0.10, JDK 1.8.0_241
parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14
.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-guic
e-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/l
ib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX
> > signal handlers for [TERM, HUP, INT]
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is
> > running as: root Yarn client user obtainer: root
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: time.characteristic, EventTime
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: internal.cluster.execution-mode, DETACHED
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: high-availability.cluster-id,
> > application_1579661300080_0005
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: taskmanager.numberOfTaskSlots, 1
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: taskmanager.heap.size, 1024m
> > 20/01/22 11:07:53 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.bind-port'
> > 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting
> > directories for temporary files to:
> > /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting
> > YarnJobClusterEntrypoint.
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default
> > filesystem.
> > 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root
> > (auth:SIMPLE)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster
> > services.
> > 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor
> > system at uf30-3:0
> > 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> > 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> > 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> > addresses :[akka.tcp://flink@uf30-3:61028]
> > 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at
> > akka.tcp://flink@uf30-3:61028
> > 20/01/22 11:07:54 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.port'
> > 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage
> > directory
> >
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> > 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at
> > 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> > 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter
> > configured, no metrics will be exposed/reported.
> > 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start
> actor
> > system at uf30-3:0
> > 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> > 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> > 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> > addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> > 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started
> > at akka.tcp://flink-metrics@uf30-3:26151
> > 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache
> storage
> > directory
> >
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> > 20/01/22 11:07:54 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.bind-port'
> > 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload
> > directory
> > /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does
> > not exist, or has been deleted externally. Previously uploaded files are
> no
> > longer available.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created
> > directory
> > /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for
> > file uploads.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting
> rest
> > endpoint.
> > 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment
> > variable 'log.file' is not set.
> > 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files
> > are unavailable in the web dashboard. Log file location not found in
> > environment variable 'log.file' or configuration key 'Key:
> 'web.log.path' ,
> > default: null (fallback keys: [{key=jobmanager.web.log.path,
> > isDeprecated=true}])'.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest
> endpoint
> > listening at uf30-3:17001
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> > http://uf30-3:17001 was granted leadership with
> > leaderSessionID=00000000-0000-0000-0000-000000000000
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> > listening at http://uf30-3:17001.
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.yarn.YarnResourceManager at
> > akka://flink/user/resourcemanager .
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.runtime.dispatcher.MiniDispatcher at
> > akka://flink/user/dispatcher .
> > 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> > akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership
> with
> > fencing token 00000000-0000-0000-0000-000000000000
> > 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all
> persisted
> > jobs.
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.runtime.jobmaster.JobMaster at
> > akka://flink/user/jobmanager_0 .
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> > (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> > FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> > msmaxFailuresPerInterval=3) for xctest
> (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> > failover strategy: full graph restart
> > 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> > over to rm225
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> > master for job xctest (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> > initialization on master in 0 ms.
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> > configured, using default (Memory / JobManager) MemoryStateBackend (data
> in
> > heap memory / checkpoints to JobManager) (checkpoints: 'null',
> savepoints:
> > 'null', asynchronous: TRUE, maxStateSize: 5242880)
> > 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> > job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> > session id 00000000-0000-0000-0000-000000000000 at
> akka.tcp://flink@uf30-3
> > :61028/user/jobmanager_0.
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> > xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> > 00000000000000000000000000000000.
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to
> RUNNING.
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> > 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> > no ResourceManager connected. Adding as pending request
> > [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> > akka.tcp://flink@uf30-3
> > :61028/user/resourcemanager(00000000000000000000000000000000)
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> > from previous attempts ([]).
> > 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> > yarn.client.max-cached-nodemanagers-proxies : 0
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> > akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted
> leadership
> > with fencing token 00000000000000000000000000000000
> > 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> > address, beginning registration
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> > ResourceManager attempt 1 (timeout=100ms)
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> > [hidden email]://flink@uf30-3
> :61028/user/jobmanager_0
> > for job e1b2df526572dd9e93be25763519ee35.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> > [hidden email]://flink@uf30-3
> :61028/user/jobmanager_0
> > for job e1b2df526572dd9e93be25763519ee35.
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> > registered at ResourceManager, leader id:
> 00000000000000000000000000000000.
> > 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> > [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> > ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> > nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with
> profile
> > ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> > nativeMemoryInMB=0, networkMemoryInMB=0} for job
> > e1b2df526572dd9e93be25763519ee35 with allocation id
> > 2394a48465851f57cb3592402df11112.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> > TaskExecutor container with resources <memory:1024, vCores:1>. Number
> > pending requests 1.
> > 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> > uf30-3:8041
> > 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> > container_e10_1579661300080_0005_01_000002 - Remaining pending container
> > requests: 1
> > 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> > request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> > requests 0.
> > 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container
> launch
> > context for TaskManagers
> > 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> > 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> > proxy : uf30-3:8041
> > 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> > with ResourceID container_e10_1579661300080_0005_01_000002
> > (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0)
> to
> > container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> > 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> > TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> > 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> > connection container_e10_1579661300080_0005_01_000002 because: The
> > heartbeat of TaskManager with id
> > container_e10_1579661300080_0005_01_000002  timed out.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> > org.apache.flink.util.FlinkException: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
> >         at
> >
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> >         at
> >
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> >         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> >         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> >         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> >         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> >         at
> > scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >         at
> > scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to
> FAILING.
> > org.apache.flink.util.FlinkException: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
> >         at
> >
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> >         at
> >
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> >         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> >         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> >         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> >         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> >         at
> > scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >         at
> > scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> > fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> > possible.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> > RESTARTING.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> > xctest (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> > CREATED.
> > 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to
> RUNNING.
> >
> > jobmanager.err:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> > YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac,
> Date:24.06.2019
> > @ 23:04:28 CST)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> > HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> > 406 MiBytes
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> > (none)
> >
> > taskmanager.err:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> > TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> > 23:04:28 CST)
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java
> HotSpot(TM)
> > 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> > 345 MiBytes
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> >  -XX:MaxDirectMemorySize=664m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
> >
> > I searched around online, and this error is usually attributed to memory problems. Could it be caused by the memory settings on YARN?
> >
> > ________________________________
> > [hidden email]
> >
>
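The log above shows the TaskManager heartbeat timing out roughly 50 seconds after the task switched to RUNNING, which matches Flink's default heartbeat.timeout of 50000 ms. As one possible mitigation (a sketch only — the values below are illustrative assumptions for this cluster, not a confirmed fix), flink-conf.yaml could raise the timeout and give the TaskManager more heap than the 1024m shown in the log:

```yaml
# Sketch only: key names exist in Flink 1.8; the values are assumptions to tune per cluster.
heartbeat.timeout: 120000        # ms before a missing TaskManager heartbeat is fatal (default 50000)
heartbeat.interval: 10000        # ms between heartbeat requests (default 10000)
taskmanager.heap.size: 2048m     # the log shows 1024m, which leaves little headroom
```

If long GC pauses or an OOM kill on the TaskManager container are the real cause, raising the timeout only hides the symptom; the container's GC log and the NodeManager log would show whether memory is actually the problem.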

Re: Re: flink on yarn job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

zjfplayer@hotmail.com
In reply to this post by zhisheng
You mean the job crashed earlier, and when it started up again the checkpoint files were gone? Is that what you mean?

________________________________
[hidden email]

From: zhisheng<mailto:[hidden email]>
Sent: 2020-01-22 16:45
To: user-zh<mailto:[hidden email]>
Subject: Re: flink on yarn job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
It is probably because your job crashed earlier.

郑 洁锋 <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:16 AM:

> Hi all,
>        When starting a flink on yarn job, we hit the error: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>        Environment: flink1.8.1, cdh5.14.2, kafka0.10, jdk1.8.0_241
>
> The flink version is 1.8.1; here are the logs from yarn:
>
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Classpath:
> core-1.8.0_release.jar:flink-shaded-hadoop-2-uber-2.6.5-7.0.jar:kafka10-source-1.8.0_release.jar:log4j-1.2.17.jar:mysql-all-side-1.8.0_release.jar:mysql-sink-1.8.0_release.jar:slf4j-log4j12-1.7.15.jar:sql.launcher-1.0-SNAPSHOT.jar:flink.jar:flink-conf.yaml:job.graph::/etc/hadoop/conf.cloudera.yarn:/run/cloudera-scm-agent/process/1129-yarn-NODEMANAGER:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-azure-datalake.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-aws-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-auth-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-annotations-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-tools.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-t
hrift.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-pig.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-generator.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-column.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0
.3/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib
/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/azure-data-lake-store-sdk-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/aws-java-sdk-bundle-1.11.134.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/activation-1.1.ja
r:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hd
fs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/
parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14
.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-guic
e-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/l
ib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX
> signal handlers for [TERM, HUP, INT]
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is
> running as: root Yarn client user obtainer: root
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: time.characteristic, EventTime
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: internal.cluster.execution-mode, DETACHED
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: high-availability.cluster-id,
> application_1579661300080_0005
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: taskmanager.numberOfTaskSlots, 1
> 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> configuration property: taskmanager.heap.size, 1024m
> 20/01/22 11:07:53 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting
> directories for temporary files to:
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting
> YarnJobClusterEntrypoint.
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default
> filesystem.
> 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root
> (auth:SIMPLE)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster
> services.
> 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor
> system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> addresses :[akka.tcp://flink@uf30-3:61028]
> 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at
> akka.tcp://flink@uf30-3:61028
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.port'
> 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage
> directory
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at
> 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter
> configured, no metrics will be exposed/reported.
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start actor
> system at uf30-3:0
> 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started
> at akka.tcp://flink-metrics@uf30-3:26151
> 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache storage
> directory
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> 20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated
> configuration key 'web.port' instead of proper key 'rest.bind-port'
> 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload
> directory
> /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does
> not exist, or has been deleted externally. Previously uploaded files are no
> longer available.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created
> directory
> /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for
> file uploads.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting rest
> endpoint.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment
> variable 'log.file' is not set.
> 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files
> are unavailable in the web dashboard. Log file location not found in
> environment variable 'log.file' or configuration key 'Key: 'web.log.path' ,
> default: null (fallback keys: [{key=jobmanager.web.log.path,
> isDeprecated=true}])'.
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest endpoint
> listening at uf30-3:17001
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> http://uf30-3:17001 was granted leadership with
> leaderSessionID=00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> listening at http://uf30-3:17001.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.yarn.YarnResourceManager at
> akka://flink/user/resourcemanager .
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.dispatcher.MiniDispatcher at
> akka://flink/user/dispatcher .
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership with
> fencing token 00000000-0000-0000-0000-000000000000
> 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all persisted
> jobs.
> 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> org.apache.flink.runtime.jobmaster.JobMaster at
> akka://flink/user/jobmanager_0 .
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> msmaxFailuresPerInterval=3) for xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> failover strategy: full graph restart
> 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to rm225
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> master for job xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> initialization on master in 0 ms.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> configured, using default (Memory / JobManager) MemoryStateBackend (data in
> heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints:
> 'null', asynchronous: TRUE, maxStateSize: 5242880)
> 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> session id 00000000-0000-0000-0000-000000000000 at akka.tcp://flink@uf30-3
> :61028/user/jobmanager_0.
> 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> 00000000000000000000000000000000.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
> 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> no ResourceManager connected. Adding as pending request
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> akka.tcp://flink@uf30-3
> :61028/user/resourcemanager(00000000000000000000000000000000)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> from previous attempts ([]).
> 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-cached-nodemanagers-proxies : 0
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted leadership
> with fencing token 00000000000000000000000000000000
> 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> address, beginning registration
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> ResourceManager attempt 1 (timeout=100ms)
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> [hidden email]://flink@uf30-3:61028/user/jobmanager_0
> for job e1b2df526572dd9e93be25763519ee35.
> 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> registered at ResourceManager, leader id: 00000000000000000000000000000000.
> 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with profile
> ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> nativeMemoryInMB=0, networkMemoryInMB=0} for job
> e1b2df526572dd9e93be25763519ee35 with allocation id
> 2394a48465851f57cb3592402df11112.
> 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> TaskExecutor container with resources <memory:1024, vCores:1>. Number
> pending requests 1.
> 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> uf30-3:8041
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> container_e10_1579661300080_0005_01_000002 - Remaining pending container
> requests: 1
> 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> requests 0.
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container launch
> context for TaskManagers
> 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy : uf30-3:8041
> 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> with ResourceID container_e10_1579661300080_0005_01_000002
> (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0) to
> container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> connection container_e10_1579661300080_0005_01_000002 because: The
> heartbeat of TaskManager with id
> container_e10_1579661300080_0005_01_000002  timed out.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1,
> a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to FAILING.
> org.apache.flink.util.FlinkException: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
>         at
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
>         at
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
>         at
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>         at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
>         at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
>         at
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>         at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>         at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>         at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>         at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> possible.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> RESTARTING.
> 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> xctest (e1b2df526572dd9e93be25763519ee35).
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> CREATED.
> 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
>
> jobmanager.err:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019
> @ 23:04:28 CST)
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> cloudera-scm
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> 406 MiBytes
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> (none)
>
> taskmanager.err:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> --------------------------------------------------------------------------------
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> 23:04:28 CST)
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> cloudera-scm
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> Hadoop/Kerberos user: root
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java HotSpot(TM)
> 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> 345 MiBytes
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> /usr/java/default
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version: 2.6.5
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
>  -XX:MaxDirectMemorySize=664m
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
>
> 网上搜了下,这报错一般都是内存的问题,请问下这个是跟yarn上的内存设置造成的吗?
>
> ________________________________
> [hidden email]
>
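The heartbeat timeout at 11:08:49 (roughly 50 s after the TaskManager registered) is what triggers the slot removal; a long GC pause or OOM kill in an undersized TaskManager is a common cause. A hedged sketch of flink-conf.yaml knobs one might try — the values below are illustrative assumptions, not recommendations from this thread:

```yaml
# flink-conf.yaml — illustrative values only
heartbeat.timeout: 180000      # ms; default is 50000 in Flink 1.8
heartbeat.interval: 10000      # ms; default is 10000
taskmanager.heap.size: 2048m   # more headroom than the 1024m seen in the log above
```

Raising heartbeat.timeout only masks the symptom if the container is actually being killed; the container's own log would confirm which is happening.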

Re: Re: flink on yarn job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

tison
What you posted above is taskmanager.err; what we need is taskmanager.log.

Best,
tison.
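After the application has finished, the full taskmanager.log (not just taskmanager.err) can be pulled from YARN log aggregation with the standard CLI — a sketch using the application id from the trace above:

```shell
# Fetch all aggregated container logs for the application
yarn logs -applicationId application_1579661300080_0005 > app.log

# Narrow down to the TaskManager container that timed out
grep -n "container_e10_1579661300080_0005_01_000002" app.log | head
```

This assumes YARN log aggregation is enabled on the cluster; otherwise the files live under the NodeManager's local log directory on uf30-3.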


郑 洁锋 <[hidden email]> wrote on Thu, Jan 23, 2020 at 10:22 PM:

> You mean it crashed earlier, and when it was started again the checkpoint files were gone? Is that what you're saying?
>
> ________________________________
> [hidden email]
>
> From: zhisheng<mailto:[hidden email]>
> Sent: 2020-01-22 16:45
> To: user-zh<mailto:[hidden email]>
> Subject: Re: flink on yarn job startup error: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
> It is probably because your job had already failed once before.
>
> 郑 洁锋 <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:16 AM:
>
> > Hi all,
> >        When starting a flink on yarn job, we hit the error: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >        Environment: flink1.8.1, cdh5.14.2, kafka0.10, jdk1.8.0_241
> >
> > The Flink version is 1.8.1; here are the logs from YARN:
> >
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> > YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac,
> Date:24.06.2019
> > @ 23:04:28 CST)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> > HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> > 406 MiBytes
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> > (none)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Classpath:
> >
> > [full CDH classpath listing trimmed — same classpath as quoted in the original message]
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX
> > signal handlers for [TERM, HUP, INT]
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is
> > running as: root Yarn client user obtainer: root
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: time.characteristic, EventTime
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: internal.cluster.execution-mode, DETACHED
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: high-availability.cluster-id,
> > application_1579661300080_0005
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: taskmanager.numberOfTaskSlots, 1
> > 20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading
> > configuration property: taskmanager.heap.size, 1024m
> > 20/01/22 11:07:53 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.bind-port'
> > 20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting
> > directories for temporary files to:
> > /yarn/nm/usercache/root/appcache/application_1579661300080_0005
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting
> > YarnJobClusterEntrypoint.
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default
> > filesystem.
> > 20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root
> > (auth:SIMPLE)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster
> > services.
> > 20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor
> > system at uf30-3:0
> > 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> > 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> > 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> > addresses :[akka.tcp://flink@uf30-3:61028]
> > 20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at
> > akka.tcp://flink@uf30-3:61028
> > 20/01/22 11:07:54 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.port'
> > 20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage
> > directory
> >
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
> > 20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at
> > 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
> > 20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter
> > configured, no metrics will be exposed/reported.
> > 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start
> actor
> > system at uf30-3:0
> > 20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
> > 20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
> > 20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on
> > addresses :[akka.tcp://flink-metrics@uf30-3:26151]
> > 20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started
> > at akka.tcp://flink-metrics@uf30-3:26151
> > 20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache
> storage
> > directory
> >
> /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
> > 20/01/22 11:07:54 WARN configuration.Configuration: Config uses
> deprecated
> > configuration key 'web.port' instead of proper key 'rest.bind-port'
> > 20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload
> > directory
> > /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does
> > not exist, or has been deleted externally. Previously uploaded files are
> no
> > longer available.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created
> > directory
> > /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for
> > file uploads.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting
> rest
> > endpoint.
> > 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment
> > variable 'log.file' is not set.
> > 20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files
> > are unavailable in the web dashboard. Log file location not found in
> > environment variable 'log.file' or configuration key 'Key:
> 'web.log.path' ,
> > default: null (fallback keys: [{key=jobmanager.web.log.path,
> > isDeprecated=true}])'.
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest
> endpoint
> > listening at uf30-3:17001
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint:
> > http://uf30-3:17001 was granted leadership with
> > leaderSessionID=00000000-0000-0000-0000-000000000000
> > 20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend
> > listening at http://uf30-3:17001.
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.yarn.YarnResourceManager at
> > akka://flink/user/resourcemanager .
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.runtime.dispatcher.MiniDispatcher at
> > akka://flink/user/dispatcher .
> > 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher
> > akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership
> with
> > fencing token 00000000-0000-0000-0000-000000000000
> > 20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all
> persisted
> > jobs.
> > 20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for
> > org.apache.flink.runtime.jobmaster.JobMaster at
> > akka://flink/user/jobmanager_0 .
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest
> > (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy
> > FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000
> > msmaxFailuresPerInterval=3) for xctest
> (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via
> > failover strategy: full graph restart
> > 20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> > over to rm225
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on
> > master for job xctest (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran
> > initialization on master in 0 ms.
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been
> > configured, using default (Memory / JobManager) MemoryStateBackend (data
> in
> > heap memory / checkpoints to JobManager) (checkpoints: 'null',
> savepoints:
> > 'null', asynchronous: TRUE, maxStateSize: 5242880)
> > 20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for
> > job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with
> > session id 00000000-0000-0000-0000-000000000000 at
> akka.tcp://flink@uf30-3
> > :61028/user/jobmanager_0.
> > 20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job
> > xctest (e1b2df526572dd9e93be25763519ee35) under job master id
> > 00000000000000000000000000000000.
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to
> RUNNING.
> > 20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
> > 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request,
> > no ResourceManager connected. Adding as pending request
> > [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager
> > akka.tcp://flink@uf30-3
> > :61028/user/resourcemanager(00000000000000000000000000000000)
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers
> > from previous attempts ([]).
> > 20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy:
> > yarn.client.max-cached-nodemanagers-proxies : 0
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager
> > akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted
> leadership
> > with fencing token 00000000000000000000000000000000
> > 20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager
> > address, beginning registration
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at
> > ResourceManager attempt 1 (timeout=100ms)
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager
> > 00000000000000000000000000000000@akka.tcp://flink@uf30-3:61028/user/jobmanager_0
> > for job e1b2df526572dd9e93be25763519ee35.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager
> > 00000000000000000000000000000000@akka.tcp://flink@uf30-3:61028/user/jobmanager_0
> > for job e1b2df526572dd9e93be25763519ee35.
> > 20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully
> > registered at ResourceManager, leader id:
> 00000000000000000000000000000000.
> > 20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot
> > [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile
> > ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> > nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with
> profile
> > ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0,
> > nativeMemoryInMB=0, networkMemoryInMB=0} for job
> > e1b2df526572dd9e93be25763519ee35 with allocation id
> > 2394a48465851f57cb3592402df11112.
> > 20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new
> > TaskExecutor container with resources <memory:1024, vCores:1>. Number
> > pending requests 1.
> > 20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for :
> > uf30-3:8041
> > 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container:
> > container_e10_1579661300080_0005_01_000002 - Remaining pending container
> > requests: 1
> > 20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container
> > request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container
> > requests 0.
> > 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container
> launch
> > context for TaskManagers
> > 20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
> > 20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening
> > proxy : uf30-3:8041
> > 20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager
> > with ResourceID container_e10_1579661300080_0005_01_000002
> > (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0)
> to
> > container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
> > 20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
> > 20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of
> > TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
> > 20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor
> > connection container_e10_1579661300080_0005_01_000002 because: The
> > heartbeat of TaskManager with id
> > container_e10_1579661300080_0005_01_000002  timed out.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source:
> > testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2,
> > a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1,
> > a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS
> > PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS
> r_v1,
> > a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time,
> > msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1)
> > (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
> > org.apache.flink.util.FlinkException: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
> >         at
> >
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> >         at
> >
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> >         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> >         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> >         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> >         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> >         at
> > scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >         at
> > scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to
> FAILING.
> > org.apache.flink.util.FlinkException: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
> >         at
> >
> org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
> >         at
> >
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
> >         at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
> >         at
> >
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> >         at
> >
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
> >         at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> >         at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
> >         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> >         at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> >         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> >         at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> >         at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> >         at
> > scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >         at
> > scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >         at
> >
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or
> > fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer
> > possible.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to
> > RESTARTING.
> > 20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job
> > xctest (e1b2df526572dd9e93be25763519ee35).
> > 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to
> > CREATED.
> > 20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest
> > (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to
> RUNNING.
> >
> > jobmanager.err:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting
> > YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac,
> Date:24.06.2019
> > @ 23:04:28 CST)
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java
> > HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size:
> > 406 MiBytes
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
> > 20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments:
> > (none)
> >
> > taskmanager.err:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> >
> --------------------------------------------------------------------------------
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN
> > TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @
> > 23:04:28 CST)
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user:
> > cloudera-scm
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current
> > Hadoop/Kerberos user: root
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java
> HotSpot(TM)
> > 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size:
> > 345 MiBytes
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME:
> > /usr/java/default
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version:
> 2.6.5
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:
> >  -XX:MaxDirectMemorySize=664m
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
> > 20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .
> >
> > Searching around, this error is usually said to be memory-related. Could it be caused by the memory settings on YARN?
> >
> > ________________________________
> > [hidden email]
> >
>
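For what it's worth, the timestamps in the quoted log already put a number on the failure: the TaskManager registered at 11:07:59 and the ResourceManager declared its heartbeat timed out at 11:08:49. A quick check (plain Python, nothing Flink-specific):

```python
from datetime import datetime

# Timestamps copied from the JobManager log above.
registered = datetime.strptime("11:07:59", "%H:%M:%S")  # TaskManager registered
timed_out = datetime.strptime("11:08:49", "%H:%M:%S")   # heartbeat timed out

gap = (timed_out - registered).total_seconds()
print(gap)  # 50.0 -- exactly Flink's default heartbeat.timeout (50000 ms)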
Re: Re: Flink on YARN job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.

zjfplayer@hotmail.com
There is no .log file, only .err and .out, and the .out file is empty.

________________________________
[hidden email]
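One plausible reason for a missing taskmanager.log (an assumption, not confirmed in this thread): Flink's YARN containers only write a .log file when a log4j.properties is on the container classpath and the launch command sets `-Dlog.file=...` — and the JobManager log above even warns that the 'log.file' variable is not set. For reference, a minimal log4j.properties modeled on the one Flink 1.8 ships under conf/:

```properties
# Modeled on Flink 1.8's default conf/log4j.properties.
# ${log.file} is supplied by the container launch command via -Dlog.file=...;
# if that system property is never set, no taskmanager.log is produced.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

The custom launcher visible in the classpath (sql.launcher-1.0-SNAPSHOT.jar) may simply not ship this file with the application.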

From: tison<mailto:[hidden email]>
Sent: 2020-01-24 10:03
To: user-zh<mailto:[hidden email]>
Cc: zhisheng2018<mailto:[hidden email]>
Subject: Re: Re: Flink on YARN job startup error: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
What you pasted above is taskmanager.err; what is needed is taskmanager.log.

Best,
tison.
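If log aggregation is enabled on the cluster, the full taskmanager.log can usually be pulled with the YARN CLI after the application finishes. A sketch, assuming a stock CDH setup — the application id, container id, and NodeManager address below are taken from the log in this thread:

```shell
# All aggregated container logs for the application:
yarn logs -applicationId application_1579661300080_0005

# Just the failed TaskManager container; on Hadoop 2.6 the -containerId
# option must be paired with -nodeAddress:
yarn logs -applicationId application_1579661300080_0005 \
    -containerId container_e10_1579661300080_0005_01_000002 \
    -nodeAddress uf30-3:8041
```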


Zheng Jiefeng <[hidden email]> wrote on Thu, Jan 23, 2020 at 10:22 PM:

> So the job crashed earlier, and when it started again the checkpoint files were gone? Is that what you mean?
>
> ________________________________
> [hidden email]
>
> From: zhisheng<mailto:[hidden email]>
> Sent: 2020-01-22 16:45
> To: user-zh<mailto:[hidden email]>
> Subject: Re: Flink on YARN job startup error: The assigned slot
> container_e10_1579661300080_0005_01_000002_0 was removed.
> It looks like your job crashed once before.
>
Zheng Jiefeng <[hidden email]> wrote on Wed, Jan 22, 2020 at 11:16 AM:
>
> > Hi all,
> >        When starting a Flink on YARN job, it failed with: The assigned slot
> > container_e10_1579661300080_0005_01_000002_0 was removed.
> >        Environment: flink 1.8.1, cdh 5.14.2, kafka 0.10, jdk 1.8.0_241
fs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/
parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14
.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-guic
e-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/l
ib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: --------------------------------------------------------------------------------
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Registered UNIX signal handlers for [TERM, HUP, INT]
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: YARN daemon is running as: root Yarn client user obtainer: root
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: time.characteristic, EventTime
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: internal.cluster.execution-mode, DETACHED
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: high-availability.cluster-id, application_1579661300080_0005
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: taskmanager.numberOfTaskSlots, 1
20/01/22 11:07:53 INFO configuration.GlobalConfiguration: Loading configuration property: taskmanager.heap.size, 1024m
20/01/22 11:07:53 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
20/01/22 11:07:53 INFO clusterframework.BootstrapTools: Setting directories for temporary files to: /yarn/nm/usercache/root/appcache/application_1579661300080_0005
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Starting YarnJobClusterEntrypoint.
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Install default filesystem.
20/01/22 11:07:53 INFO modules.HadoopModule: Hadoop user set to root (auth:SIMPLE)
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: Initializing cluster services.
20/01/22 11:07:53 INFO akka.AkkaRpcServiceUtils: Trying to start actor system at uf30-3:0
20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on addresses :[akka.tcp://flink@uf30-3:61028]
20/01/22 11:07:54 INFO akka.AkkaRpcServiceUtils: Actor system started at akka.tcp://flink@uf30-3:61028
20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.port'
20/01/22 11:07:54 INFO blob.BlobServer: Created BLOB server storage directory /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-bda7ba98-c1ee-4ad7-b04e-22b2fa1c6268
20/01/22 11:07:54 INFO blob.BlobServer: Started BLOB server at 0.0.0.0:15790 - max concurrent requests: 50 - max backlog: 1000
20/01/22 11:07:54 INFO metrics.MetricRegistryImpl: No metrics reporter configured, no metrics will be exposed/reported.
20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Trying to start actor system at uf30-3:0
20/01/22 11:07:54 INFO slf4j.Slf4jLogger: Slf4jLogger started
20/01/22 11:07:54 INFO remote.Remoting: Starting remoting
20/01/22 11:07:54 INFO remote.Remoting: Remoting started; listening on addresses :[akka.tcp://flink-metrics@uf30-3:26151]
20/01/22 11:07:54 INFO entrypoint.ClusterEntrypoint: Actor system started at akka.tcp://flink-metrics@uf30-3:26151
20/01/22 11:07:54 INFO blob.TransientBlobCache: Created BLOB cache storage directory /yarn/nm/usercache/root/appcache/application_1579661300080_0005/blobStore-cc2030ec-c73c-4383-a4df-30358745cd17
20/01/22 11:07:54 WARN configuration.Configuration: Config uses deprecated configuration key 'web.port' instead of proper key 'rest.bind-port'
20/01/22 11:07:54 WARN jobmaster.MiniDispatcherRestEndpoint: Upload directory /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload does not exist, or has been deleted externally. Previously uploaded files are no longer available.
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Created directory /tmp/flink-web-383e26d9-e789-4756-8f69-1b03462e27f6/flink-web-upload for file uploads.
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Starting rest endpoint.
20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: Log file environment variable 'log.file' is not set.
20/01/22 11:07:54 WARN webmonitor.WebMonitorUtils: JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Rest endpoint listening at uf30-3:17001
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: http://uf30-3:17001 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
20/01/22 11:07:54 INFO jobmaster.MiniDispatcherRestEndpoint: Web frontend listening at http://uf30-3:17001.
20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for org.apache.flink.yarn.YarnResourceManager at akka://flink/user/resourcemanager .
20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for org.apache.flink.runtime.dispatcher.MiniDispatcher at akka://flink/user/dispatcher .
20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Dispatcher akka.tcp://flink@uf30-3:61028/user/dispatcher was granted leadership with fencing token 00000000-0000-0000-0000-000000000000
20/01/22 11:07:54 INFO dispatcher.MiniDispatcher: Recovering all persisted jobs.
20/01/22 11:07:54 INFO akka.AkkaRpcService: Starting RPC endpoint for org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_0 .
20/01/22 11:07:54 INFO jobmaster.JobMaster: Initializing job xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:07:54 INFO jobmaster.JobMaster: Using restart strategy FailureRateRestartStrategy(failuresInterval=360000 msdelayInterval=10000 msmaxFailuresPerInterval=3) for xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job recovers via failover strategy: full graph restart
20/01/22 11:07:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm225
20/01/22 11:07:54 INFO jobmaster.JobMaster: Running initialization on master for job xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:07:54 INFO jobmaster.JobMaster: Successfully ran initialization on master in 0 ms.
20/01/22 11:07:54 INFO jobmaster.JobMaster: No state backend has been configured, using default (Memory / JobManager) MemoryStateBackend (data in heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 'null', asynchronous: TRUE, maxStateSize: 5242880)
20/01/22 11:07:54 INFO jobmaster.JobManagerRunner: JobManager runner for job xctest (e1b2df526572dd9e93be25763519ee35) was granted leadership with session id 00000000-0000-0000-0000-000000000000 at akka.tcp://flink@uf30-3:61028/user/jobmanager_0.
20/01/22 11:07:54 INFO jobmaster.JobMaster: Starting execution of job xctest (e1b2df526572dd9e93be25763519ee35) under job master id 00000000000000000000000000000000.
20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.
20/01/22 11:07:54 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from CREATED to SCHEDULED.
20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Cannot serve slot request, no ResourceManager connected. Adding as pending request [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}]
20/01/22 11:07:55 INFO jobmaster.JobMaster: Connecting to ResourceManager akka.tcp://flink@uf30-3:61028/user/resourcemanager(00000000000000000000000000000000)
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Recovered 0 containers from previous attempts ([]).
20/01/22 11:07:55 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
20/01/22 11:07:55 INFO yarn.YarnResourceManager: ResourceManager akka.tcp://flink@uf30-3:61028/user/resourcemanager was granted leadership with fencing token 00000000000000000000000000000000
20/01/22 11:07:55 INFO slotmanager.SlotManager: Starting the SlotManager.
20/01/22 11:07:55 INFO jobmaster.JobMaster: Resolved ResourceManager address, beginning registration
20/01/22 11:07:55 INFO jobmaster.JobMaster: Registration at ResourceManager attempt 1 (timeout=100ms)
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registering job manager [hidden email]://flink@uf30-3:61028/user/jobmanager_0 for job e1b2df526572dd9e93be25763519ee35.
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Registered job manager [hidden email]://flink@uf30-3:61028/user/jobmanager_0 for job e1b2df526572dd9e93be25763519ee35.
20/01/22 11:07:55 INFO jobmaster.JobMaster: JobManager successfully registered at ResourceManager, leader id: 00000000000000000000000000000000.
20/01/22 11:07:55 INFO slotpool.SlotPoolImpl: Requesting new slot [SlotRequestId{ff60413f2edc00a134b584d1a5953d77}] and profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Request slot with profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} for job e1b2df526572dd9e93be25763519ee35 with allocation id 2394a48465851f57cb3592402df11112.
20/01/22 11:07:55 INFO yarn.YarnResourceManager: Requesting new TaskExecutor container with resources <memory:1024, vCores:1>. Number pending requests 1.
20/01/22 11:07:56 INFO impl.AMRMClientImpl: Received new token for : uf30-3:8041
20/01/22 11:07:56 INFO yarn.YarnResourceManager: Received new container: container_e10_1579661300080_0005_01_000002 - Remaining pending container requests: 1
20/01/22 11:07:56 INFO yarn.YarnResourceManager: Removing container request Capability[<memory:1024, vCores:1>]Priority[1]. Pending container requests 0.
20/01/22 11:07:57 INFO yarn.YarnResourceManager: Creating container launch context for TaskManagers
20/01/22 11:07:57 INFO yarn.YarnResourceManager: Starting TaskManagers
20/01/22 11:07:57 INFO impl.ContainerManagementProtocolProxy: Opening proxy : uf30-3:8041
20/01/22 11:07:59 INFO yarn.YarnResourceManager: Registering TaskManager with ResourceID container_e10_1579661300080_0005_01_000002 (akka.tcp://flink@uf30-3:25536/user/taskmanager_0) at ResourceManager
20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from SCHEDULED to DEPLOYING.
20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Deploying Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (attempt #0) to container_e10_1579661300080_0005_01_000002 @ uf30-3 (dataPort=58080)
20/01/22 11:07:59 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from DEPLOYING to RUNNING.
20/01/22 11:08:49 INFO yarn.YarnResourceManager: The heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
20/01/22 11:08:49 INFO yarn.YarnResourceManager: Closing TaskExecutor connection container_e10_1579661300080_0005_01_000002 because: The heartbeat of TaskManager with id container_e10_1579661300080_0005_01_000002 timed out.
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Source: testFlink_kafkaTable -> Map -> to: Tuple2 -> Map -> from: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME) -> select: (a_v1, a_v2, a_v3, a_i1, curr_time, msg_index, send_time, PROCTIME(PROCTIME) AS PROCTIME) -> to: Tuple2 -> Map -> Flat Map -> Map -> select: (a_v1 AS r_v1, a_v2 AS r_v2, a_v3 AS r_v3, a_i1 AS r_i1, a_i2 AS r_i2, curr_time, msg_index, send_time) -> to: Tuple2 -> Sink: MyResult (1/1) (083db3e18b24bc9329931aa39bf3109e) switched from RUNNING to FAILED.
org.apache.flink.util.FlinkException: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
        at org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
        at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
        at org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
        at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
        at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
        at akka.actor.ActorCell.invoke(ActorCell.scala:495)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
        at akka.dispatch.Mailbox.run(Mailbox.scala:224)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state RUNNING to FAILING.
org.apache.flink.util.FlinkException: The assigned slot container_e10_1579661300080_0005_01_000002_0 was removed.
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlot(SlotManager.java:899)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.removeSlots(SlotManager.java:869)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.internalUnregisterTaskManager(SlotManager.java:1080)
        at org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager.unregisterTaskManager(SlotManager.java:391)
        at org.apache.flink.runtime.resourcemanager.ResourceManager.closeTaskManagerConnection(ResourceManager.java:845)
        at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1187)
        at org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:392)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:185)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:147)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
        at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
        at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
        at akka.actor.ActorCell.invoke(ActorCell.scala:495)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
        at akka.dispatch.Mailbox.run(Mailbox.scala:224)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Try to restart or fail the job xctest (e1b2df526572dd9e93be25763519ee35) if no longer possible.
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state FAILING to RESTARTING.
20/01/22 11:08:49 INFO executiongraph.ExecutionGraph: Restarting the job xctest (e1b2df526572dd9e93be25763519ee35).
20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state RESTARTING to CREATED.
20/01/22 11:08:59 INFO executiongraph.ExecutionGraph: Job xctest (e1b2df526572dd9e93be25763519ee35) switched from state CREATED to RUNNING.

jobmanager.err:
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint: --------------------------------------------------------------------------------
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Starting YarnJobClusterEntrypoint (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @ 23:04:28 CST)
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  OS current user: cloudera-scm
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Current Hadoop/Kerberos user: root
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Maximum heap size: 406 MiBytes
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JAVA_HOME: /usr/java/default
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Hadoop version: 2.6.5
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  JVM Options:
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xms424m
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:     -Xmx424m
20/01/22 11:07:53 INFO entrypoint.ClusterEntrypoint:  Program Arguments: (none)

taskmanager.err:
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner: --------------------------------------------------------------------------------
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Starting YARN TaskExecutor runner (Version: <unknown>, Rev:7297bac, Date:24.06.2019 @ 23:04:28 CST)
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  OS current user: cloudera-scm
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Current Hadoop/Kerberos user: root
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.241-b07
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Maximum heap size: 345 MiBytes
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JAVA_HOME: /usr/java/default
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Hadoop version: 2.6.5
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  JVM Options:
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xms360m
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -Xmx360m
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     -XX:MaxDirectMemorySize=664m
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:  Program Arguments:
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     --configDir
20/01/22 11:07:57 INFO yarn.YarnTaskExecutorRunner:     .

From searching around online, this error usually comes down to a memory problem. Could it be caused by the memory settings on YARN?
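For reference, the timestamps above show the TaskManager heartbeat timing out roughly 50 seconds after the task was deployed (11:07:59 to 11:08:49), which lines up with Flink's default heartbeat.timeout of 50000 ms, so the slot removal is a symptom of the TaskManager becoming unresponsive (commonly long GC pauses, or YARN killing the container for exceeding its memory limit) rather than a scheduling issue. A minimal flink-conf.yaml sketch of the memory- and heartbeat-related keys involved, with purely illustrative values rather than a verified fix:

```yaml
# flink-conf.yaml (Flink 1.8) -- illustrative values, not a verified fix
taskmanager.heap.size: 2048m          # the log shows 1024m; a larger TM heap reduces GC pressure
heartbeat.interval: 10000             # Flink default, in ms
heartbeat.timeout: 120000             # Flink default is 50000 ms; raising it tolerates longer pauses
containerized.heap-cutoff-ratio: 0.25 # fraction of the container memory reserved outside the JVM heap
```

If the NodeManager log on uf30-3 shows the container being killed for running beyond its physical memory limit, increasing the container size (for example via `-ytm` on `flink run`) would be the more relevant knob than the heartbeat settings.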