Hi all,
Environment: Hadoop 2.6.0+cdh5.15.2 + Isilon. Submitting a job with FlinkX (based on Flink 1.8.1) fails, and I have been stuck on this for quite a while. Kerberos authentication goes through fine at the point of submission, but a permission error is then raised during YARN resource localization, which I don't understand. Could anyone suggest some ideas for narrowing this down?

1. The FlinkX job itself is submitted to YARN normally;
2. Another test environment, also CDH + Kerberos, accepts FlinkX submissions without problems;
3. Upgrading to FlinkX 1.10.1 + Flink 1.10.1 gives exactly the same error.

Submit command:

/opt/flinkx/bin/flinkx -mode yarnPer -job /tmp/flink_tmp_json/mysql2mysql.json -pluginRoot /opt/flinkx/plugins -flinkconf /opt/flink-1.8.1/conf -flinkLibJar /opt/flink-1.8.1/lib -yarnconf /etc/hadoop/conf -queue root.liu -jobid 10698

The error output:

13:51:45.298 [main] INFO com.dtstack.flinkx.launcher.perjob.PerJobSubmitter - start to submit per-job task, LauncherOptions = com.dtstack.flinkx.options.Options@32eebfca
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13:51:45.530 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, localhost
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6124
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 1024
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8082
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.checkpoints.disable, true
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.archive.fs.dir, hdfs://xxx:8020/data/flink/flink-completed-jobs
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 2048
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 4
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: io.tmp.dirs, /tmp/flink/taskmanager
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 2
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.per-job-cluster.include-user-jar, last
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: akka.lookup.timeout, 30 s
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: web.checkpoints.history, 30
13:51:45.531 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: fs.hdfs.hadoopconf, /etc/hadoop/conf
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.backend, filesystem
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.backend.fs.checkpointdir, hdfs://xxx:8020/data/flink/flink-checkpoints
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.checkpoints.dir, hdfs://xxx:8020/data/flink/flink-checkpoints
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.checkpoints.num-retained, 5
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: state.savepoints.dir, hdfs://xxx:8020/data/flink/flink-savepoints
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: historyserver.archive.fs.dir, hdfs://xxx:8020/data/flink/flink-completed-jobs
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: historyserver.web.port, 16899
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: yarn.application-attempts, 10
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: restart-strategy.fixed-delay.attempts, 10000
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: restart-strategy.fixed-delay.delay, 30s
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.kerberos.login.contexts, Client
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.kerberos.login.keytab, /root/keytab/hive.keytab
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.kerberos.login.principal, [hidden email]
13:51:45.532 [main] INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.kerberos.login.use-ticket-cache, true
Debug is true storeKey true useTicketCache false useKeyTab true doNotPrompt true ticketCache is null isInitiator true KeyTab is /root/keytab/hive.keytab refreshKrb5Config is true principal is [hidden email] tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Refreshing Kerberos configuration
principal is [hidden email]
Will use keytab
Commit Succeeded
13:51:45.886 [main] INFO org.apache.flink.runtime.security.modules.HadoopModule - Hadoop user set to [hidden email] (auth:KERBEROS)
13:51:46.028 [main] INFO com.dtstack.flinkx.launcher.perjob.PerJobClusterClientBuilder - ----init yarn success ----
13:51:46.271 [main] WARN org.apache.flink.yarn.AbstractYarnClusterDescriptor - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
13:51:46.488 [main] WARN org.apache.flink.yarn.AbstractYarnClusterDescriptor - The JobManager or TaskManager memory is below the smallest possible YARN Container size. The value of 'yarn.scheduler.minimum-allocation-mb' is '4096'. Please increase the memory size. YARN will allocate the smaller containers but the scheduler will account for the minimum-allocation-mb, maybe not all instances you requested will start.
13:51:46.488 [main] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cluster specification: ClusterSpecification{masterMemoryMB=4096, taskManagerMemoryMB=4096, numberTaskManagers=1, slotsPerTaskManager=1, priority=0}
13:51:46.818 [main] WARN org.apache.flink.yarn.AbstractYarnClusterDescriptor - The configuration directory ('/opt/flink-1.8.1/conf') contains both LOG4J and Logback configuration files. Please delete or rename one of them.
13:51:48.019 [main] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Adding keytab /root/keytab/hive.keytab to the AM container local resource bucket
13:51:48.064 [main] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Adding delegation token to the AM container..
13:51:48.082 [main] INFO org.apache.flink.yarn.Utils - Attempting to obtain Kerberos security token for HBase
13:51:48.082 [main] INFO org.apache.flink.yarn.Utils - HBase is not available (not packaged with this application): ClassNotFoundException : "org.apache.hadoop.hbase.HBaseConfiguration".
13:51:48.085 [main] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Submitting application master application_1578634332134_3087235
13:51:48.304 [main] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Waiting for the cluster to be allocated
13:51:48.305 [main] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Deploying cluster, current state ACCEPTED
Exception in thread "main" org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy Yarn job cluster.
    at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:82)
    at com.dtstack.flinkx.launcher.perjob.PerJobSubmitter.submit(PerJobSubmitter.java:89)
    at com.dtstack.flinkx.launcher.Launcher.main(Launcher.java:139)
Caused by: org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1578634332134_3087235 failed 2 times due to AM Container for appattempt_1578634332134_3087235_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://hadoop-001.com:8088/proxy/application_1578634332134_3087235/ Then, click on links to logs of each attempt.
Diagnostics: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "hadoop-006.com/172.16.54.16"; destination host is: "xxx":8020;
java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "hadoop-006.com/172.16.54.16"; destination host is: "xxx":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1508)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:788)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2168)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:718)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:681)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:769)
    at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:396)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1557)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    ... 31 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:594)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:396)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:761)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:757)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
    ... 34 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:718)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:681)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:769)
    at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:396)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1557)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:788)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2168)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:594)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:396)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:761)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:757)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
    ... 34 more
Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:594)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:396)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:761)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:757)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
    at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:396)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1557)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:788)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2168)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
If log aggregation is enabled on your cluster, use this command to further investigate the issue:
yarn logs -applicationId application_1578634332134_3087235
    at org.apache.flink.yarn.AbstractYarnClusterDescriptor.startAppMaster(AbstractYarnClusterDescriptor.java:1169)
    at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:500)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:75)
    ... 2 more
13:51:56.362 [Thread-43] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cancelling deployment from Deployment Failure Hook
13:51:56.363 [Thread-43] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Killing YARN application
13:51:56.472 [Thread-43] INFO org.apache.flink.yarn.AbstractYarnClusterDescriptor - Deleting files in hdfs://xxx:8020/user/hive/.flink/application_1578634332134_3087235.
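Since the AM container exits with code -1000 during resource localization, the useful detail is in the container logs on the NodeManager rather than in the client output above. They can be fetched with the command the diagnostics already suggest; the local path in the second command is just the CDH default log directory and is an assumption on my part:

yarn logs -applicationId application_1578634332134_3087235
# or, if aggregation has not completed yet, directly on the NodeManager host:
ls /var/log/hadoop-yarn/container/application_1578634332134_3087235/

The localizer syslog from one of the failed containers is below.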
container-localizer-syslog:

2020-12-29 17:07:17,685 WARN [ContainerLocalizer Downloader] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hive (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-12-29 17:07:17,686 WARN [ContainerLocalizer Downloader] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-12-29 17:07:17,687 WARN [ContainerLocalizer Downloader] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hive (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-12-29 17:07:17,691 WARN [ContainerLocalizer Downloader] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hive (auth:SIMPLE) cause:java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "hadoop-042.com/172.16.54.52"; destination host is: "xxxx.qcc":8020;
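What jumps out at me is that every localizer line says "as:hive (auth:SIMPLE)": the ContainerLocalizer's UGI is not Kerberos-authenticated, which would explain why the SASL client has neither a TOKEN nor KERBEROS credentials to offer. If that reading is right, the NodeManager side may not be resolving hadoop.security.authentication to kerberos. A check I plan to run, comparing a failing host (hadoop-006.com / hadoop-042.com above) against the working test cluster; the config path is the standard CDH location, so an assumption:

# effective value as the Hadoop client libraries resolve it on that host
hdfs getconf -confKey hadoop.security.authentication
# raw value in the config directory the NodeManager is supposed to read
grep -A2 'hadoop.security.authentication' /etc/hadoop/conf/core-site.xml

If the NodeManager process reads a different configuration directory than /etc/hadoop/conf, the value it sees can differ from what grep shows, so the process environment is probably worth comparing too.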
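Independently of that, to rule out a node-local Kerberos problem (krb5.conf, KDC reachability) on the NodeManagers, a basic sanity check run on the failing host itself; the principal is masked above, so <principal> is a placeholder, and copying the keytab to that node for the test is assumed:

kinit -kt /root/keytab/hive.keytab <principal>
klist
# direct HDFS access from this node with the fresh ticket
hadoop fs -ls hdfs://xxx:8020/user/hive/.flink/

Does this direction make sense, or is there something else I should be looking at?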