Flink SQL error when connecting to HBase


flink_learner
Hello,
        I created a Flink SQL table with the DDL below, and running a query against it fails. Am I missing some configuration option? Flink goes to the "/hbase" znode to fetch HBase metadata, but the cluster's HBase actually registers under the "/hbase-unsecure" znode in ZooKeeper.


The Flink version is 1.10, and the HBase table t1 contains data.


create table t1 (
  rowkey string,
  f1 ROW<num BIGINT>
) WITH (
  'connector.type' = 'hbase',
  'connector.version' = '1.4.3',
  'connector.table-name' = 't1',
  'connector.zookeeper.quorum' = '10.101.236.2:2181,10.101.236.3:2181,10.101.236.4:2181',
  'connector.zookeeper.znode.parent' = '/hbase-unsecure',
  'connector.write.buffer-flush.max-size' = '10mb',
  'connector.write.buffer-flush.max-rows' = '1',
  'connector.write.buffer-flush.interval' = '2s'
);


When the query is executed, the error is as follows:
Flink SQL> create table t1 (

>   rowkey string,
>   f1 ROW<num BIGINT>
> ) WITH (
>   'connector.type' = 'hbase',
>   'connector.version' = '1.4.3',
>   'connector.table-name' = 't1',
>   'connector.zookeeper.quorum' = '10.101.236.2:2181,10.101.236.3:2181,10.101.236.4:2181',
>   'connector.zookeeper.znode.parent' = '/hbase-unsecure',
>   'connector.write.buffer-flush.max-size' = '10mb',
>   'connector.write.buffer-flush.max-rows' = '1',
>   'connector.write.buffer-flush.interval' = '2s'
> );
[INFO] Table has been created.


Flink SQL> select * from t1;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:152)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Can't get the location for replica 0
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:271)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:807)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:228)
at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:255)
at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:227)
at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:215)
at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120)
at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:105)
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:146)
... 10 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:332)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:268)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:311)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:596)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:754)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:670)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMetaForTableRegions(MetaTableAccessor.java:665)
at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:152)
at org.apache.hadoop.hbase.client.HRegionLocator.getStartEndKeys(HRegionLocator.java:118)
at org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:202)
at org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:44)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:257)
... 22 more
Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server
at org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2009)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:785)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:325)
... 37 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:168)
at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
at java.lang.Thread.run(Thread.java:748)


End of exception on server side>]

Re: Flink SQL error when connecting to HBase

Leonard Xu
Hello,

 This looks like a bug in the HBase connector [1]: user-supplied HBase configuration options such as connector.zookeeper.quorum (and, as in your case, connector.zookeeper.znode.parent) do not take effect, so the client falls back to the default "/hbase" znode. The bug is fixed in 1.11.0, so upgrading is the recommended fix.
 On 1.10.0, a workaround is to put these settings into an hbase-site.xml configuration file and add that file to the HADOOP_CLASSPATH, so that the Flink program also loads the correct configuration.
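 A minimal sketch of such an hbase-site.xml follows (the quorum hosts and the znode parent are copied from your DDL; hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and zookeeper.znode.parent are standard HBase client keys):

<?xml version="1.0"?>
<!-- hbase-site.xml: the directory containing this file must be on the
     HADOOP_CLASSPATH of the Flink client and cluster processes. -->
<configuration>
  <property>
    <!-- comma-separated quorum hosts, without ports -->
    <name>hbase.zookeeper.quorum</name>
    <value>10.101.236.2,10.101.236.3,10.101.236.4</value>
  </property>
  <property>
    <!-- the client port goes in its own key, not in the quorum list -->
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <!-- the parent znode your cluster actually uses -->
    <name>zookeeper.znode.parent</name>
    <value>/hbase-unsecure</value>
  </property>
</configuration>

 Then, before starting the SQL client, export HADOOP_CLASSPATH so it includes the directory holding this file, e.g. export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/hbase-conf (the path here is just an example).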


Best regards,
Leonard Xu
[1] https://issues.apache.org/jira/browse/FLINK-17968
