Flink SQL query against an HBase table fails


Flink SQL query against an HBase table fails

MuChen
Hi, All:

Could you help me analyze why Flink SQL reports an HBase connection failure when running a query?

Here are the steps I took and the cluster environment:



I deployed Flink on a master node of a Hadoop cluster.

First, I started a yarn-session on the cluster:

bin/yarn-session.sh -jm 1g -tm 4g -s 4 -qu root.flink -nm fsql-cli 2>&1 &

The following errors appeared at startup; I don't understand what they mean:

[admin@uhadoop-op3raf-master2 flink10]$
2020-06-23 09:30:56,402 ERROR org.apache.flink.shaded.curator.org.apache.curator.ConnectionState - Authentication failed
2020-06-23 09:30:56,515 ERROR org.apache.flink.shaded.curator.org.apache.curator.ConnectionState - Authentication failed
JobManager Web Interface: http://uhadoop-op3raf-core24:42976
Next, I started the SQL client:
bin/sql-client.sh embedded
Then, following the instructions on the official site for creating an HBase table, I created a table in the Flink SQL client:

CREATE TABLE hbase_video_pic_title_q70 (
  key string,
  cf1 ROW<vid string, q70 string>
) WITH (
  'connector.type' = 'hbase',
  'connector.version' = '1.4.3',
  'connector.table-name' = 'hbase_video_pic_title_q70',
  'connector.zookeeper.quorum' = 'uhadoop-op3raf-master1:2181,uhadoop-op3raf-master2:2181,uhadoop-op3raf-core1:2181',
  'connector.zookeeper.znode.parent' = '/hbase',
  'connector.write.buffer-flush.max-size' = '10mb',
  'connector.write.buffer-flush.max-rows' = '1000',
  'connector.write.buffer-flush.interval' = '2s'
);
After the table was created successfully, I ran a query:
select key from hbase_video_pic_title_q70;
The query failed with an error saying a connection to HBase could not be created:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)
        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
        ... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:152)
        at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
        ... 7 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'Source: HBaseTableSource[schema=[key, cf1], projectFields=[0]] -> SourceConversion(table=[default_catalog.default_database.hbase_video_pic_title_q70, source: [HBaseTableSource[schema=[key, cf1], projectFields=[0]]]], fields=[key]) -> SinkConversionToTuple2 -> Sink: SQL Client Stream Collect Sink': Configuring the input format (null) failed: Cannot create connection to HBase.
        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:216)
        at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:255)
        at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:227)
        at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:215)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120)
        at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:105)
        at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
        at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:146)
        ... 10 more
Caused by: java.lang.Exception: Configuring the input format (null) failed: Cannot create connection to HBase.
        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:80)
        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:212)
        ... 20 more
Caused by: java.lang.RuntimeException: Cannot create connection to HBase.
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:103)
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.configure(HBaseRowInputFormat.java:68)
        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:77)
        ... 21 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:96)
        ... 23 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
        ... 26 more
Caused by: java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:713)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:652)
        ... 31 more
End of exception on server side>]
Flink version: 1.10.0

HBase version: 1.2.0

HBase is deployed on the same Hadoop cluster.

The HBase table is an HBase-backed Hive table, and its data can be read through the HBase client:

# Hive table
CREATE TABLE edw.hbase_video_pic_title_q70(
    key string,
    vid string,
    q70 string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:vid,cf1:q70")
TBLPROPERTIES ("hbase.table.name" = "hbase_video_pic_title_q70");

# Inserting into the Hive table also writes into HBase
INSERT OVERWRITE TABLE edw.hbase_video_pic_title_q70
SELECT vid, vid, q70 FROM dw.video_pic_title_q70;
flink/lib contains the following jars:

flink-dist_2.11-1.10.0.jar
flink-hbase_2.11-1.10.0.jar
flink-metrics-influxdb-1.10.0.jar
flink-metrics-prometheus-1.10.0.jar
flink-shaded-hadoop-2-uber-2.7.5-7.0.jar
flink-table_2.11-1.10.0.jar
flink-table-blink_2.11-1.10.0.jar
hbase-client-1.2.0-cdh5.8.0.jar
hbase-common-1.2.0.jar
hbase-protocol-1.2.0.jar
jna-4.2.2.jar
jna-platform-4.2.2.jar
log4j-1.2.17.jar
oshi-core-3.4.0.jar
slf4j-log4j12-1.7.15.jar
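A side note on the jar list: the root cause in the stack trace is `java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH`, and a NoSuchFieldError at class-loading time usually means two incompatible builds of the same library are mixed on the classpath — note that the list contains the CDH build `hbase-client-1.2.0-cdh5.8.0.jar` next to the Apache builds `hbase-common-1.2.0.jar` and `hbase-protocol-1.2.0.jar`. One quick way to see which jars define a given class is to scan them; this is a minimal sketch (the glob path is illustrative, not taken from the post):

```python
import glob
import zipfile

def jars_containing(class_name, jar_paths):
    """Return the jars whose entries include the class file for class_name."""
    entry = class_name.replace(".", "/") + ".class"
    return [jar for jar in jar_paths
            if zipfile.is_zipfile(jar)
            and entry in zipfile.ZipFile(jar).namelist()]

# Illustrative usage: find every jar in flink/lib that bundles HConstants,
# the class where HBASE_CLIENT_PREFETCH would be defined.
# jars_containing("org.apache.hadoop.hbase.HConstants", glob.glob("flink/lib/*.jar"))
```

If more than one jar shows up for the same HBase class, aligning all HBase jars on a single build (all CDH or all Apache) is the usual fix for this class of error.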
Config file flink-conf.yaml:

jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.memory.process.size: 1568m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: uhadoop-op3raf-master1,uhadoop-op3raf-master2,uhadoop-op3raf-core1
state.checkpoints.dir: hdfs:///flink/checkpoint
state.savepoints.dir: hdfs:///flink/flink-savepoints
state.checkpoints.num-retained: 60
state.backend.incremental: true
jobmanager.execution.failover-strategy: region
jobmanager.archive.fs.dir: hdfs:///flink/flink-jobs/
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs:///flink/flink-jobs/
historyserver.archive.fs.refresh-interval: 10000
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: 10.42.63.116
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
metrics.reporter.influxdb.username: flink
metrics.reporter.influxdb.password: flink***
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: 10.42.63.116
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: tdflink_prom
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: true
metrics.system-resource: true
yarn.application-attempts: 1
Config file sql-client-defaults.yaml:

tables: [] # empty list
functions: [] # empty list
catalogs: [] # empty list
execution:
  # select the implementation responsible for planning table programs
  # possible values are 'blink' (used by default) or 'old'
  planner: blink
  # 'batch' or 'streaming' execution
  type: streaming
  # allow 'event-time' or only 'processing-time' in sources
  time-characteristic: event-time
  # interval in ms for emitting periodic watermarks
  periodic-watermarks-interval: 200
  # 'changelog' or 'table' presentation of results
  result-mode: table
  # maximum number of maintained rows in 'table' presentation of results
  max-table-result-rows: 1000000
  # parallelism of the program
  parallelism: 1
  # maximum parallelism
  max-parallelism: 128
  # minimum idle state retention in ms
  min-idle-state-retention: 0
  # maximum idle state retention in ms
  max-idle-state-retention: 0
  # current catalog ('default_catalog' by default)
  current-catalog: default_catalog
  # current database of the current catalog (default database of the catalog by default)
  current-database: default_database
  # controls how table programs are restarted in case of failures
  restart-strategy:
    # strategy type
    # possible values are "fixed-delay", "failure-rate", "none", or "fallback" (default)
    type: fallback
deployment:
  # general cluster communication timeout in ms
  response-timeout: 5000
  # (optional) address from cluster to gateway
  gateway-address: ""
  # (optional) port from cluster to gateway
  gateway-port: 0
echo $HADOOP_CLASSPATH:
[admin@uhadoop-op3raf-master2 flink10]$ echo $HADOOP_CLASSPATH
/home/hadoop/contrib/capacity-scheduler/*.jar:/home/hadoop/conf:/home/hadoop/share/hadoop/common/lib/*:/home/hadoop/share/hadoop/common/*:/home/hadoop/share/hadoop/hdfs:/home/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/share/hadoop/hdfs/*:/home/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/share/hadoop/yarn/*:/home/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/share/hadoop/mapreduce/*
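Because HADOOP_CLASSPATH is built from wildcard entries like these, another HBase build can leak onto the job's classpath without appearing in flink/lib. A small helper for enumerating and filtering the entries may make that easier to audit (a sketch; the example paths are not from this cluster, and wildcard entries would still need `glob` expansion before inspecting individual jars):

```python
def classpath_entries(classpath):
    """Split a colon-separated Java classpath into its non-empty entries."""
    return [p for p in classpath.split(":") if p]

def entries_matching(classpath, needle):
    """Entries whose path mentions the given substring, case-insensitively."""
    return [p for p in classpath_entries(classpath)
            if needle.lower() in p.lower()]

# Illustrative usage with os.environ["HADOOP_CLASSPATH"]:
# entries_matching(os.environ["HADOOP_CLASSPATH"], "hbase")
```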



Any help would be appreciated!

Best,

MuChen.

Re: Flink SQL query against an HBase table fails

Roc Marshal
Hi MuChen,

Based on the exception information you provided, please check the HBase meta information in ZooKeeper in detail, and confirm that the HBase source inside the Flink job can work properly before trying anything further. Also check whether the ZooKeeper cluster has authentication problems.

The above is for reference only. Best wishes.

Best,
Roc Marshal.
On 2020-06-23 10:17:35, "MuChen" <[hidden email]> wrote:


Re: Flink SQL query against an HBase table fails

MuChen
Hi, Roc Marshal:

Could you describe the solution in more detail? I don't quite follow. Thanks!


Best,
MuChen.




------------------ Original message ------------------
From: "Roc Marshal" <[hidden email]>
Date: Tuesday, June 23, 2020, 10:27 AM
To: "user-zh" <[hidden email]>
Subject: Re: Flink SQL query against an HBase table fails



MuChen,你好。<br/&gt;根据你提供的异常信息,请详细检查下HBase的zk部分的meta信息和Flink作业内部的Hbase Source能够正常工作之后再进行下一步尝试。此外,注意zk集群是否存在鉴权问题等。<br/&gt;以上,仅供参考,祝好。<br/&gt;<br/&gt;Best,<br/&gt;Roc Marshal.
在 2020-06-23 10:17:35,"MuChen" <[hidden email]&gt; 写道:
&gt;Hi, All:
&gt;
&gt;
&gt;麻烦大佬们帮我分析下,flinksql查询时报hbase连接失败的原因。
&gt;
&gt;
&gt;以下是我操作流程和集群环境:
&gt;
&gt;
&gt;
&gt;我在一个hadoop集群的master节点上部署了flink。
&gt;
&gt;先在集群上启动了yarn-session:
&gt;bin/yarn-session.sh -jm 1g -tm 4g -s 4 -qu root.flink -nm fsql-cli 2&amp;gt;&amp;amp;1 &amp;amp; # 启动时有如下错误,不明白什么意思。 [admin@uhadoop-op3raf-master2 flink10]$ 2020-06-23 09:30:56,402 ERROR org.apache.flink.shaded.curator.org.apache.curator.ConnectionState&nbsp; - Authentication failed 2020-06-23 09:30:56,515 ERROR org.apache.flink.shaded.curator.org.apache.curator.ConnectionState&nbsp; - Authentication failed JobManager Web Interface: http://uhadoop-op3raf-core24:42976 
&gt;接下来启动了sql-client:
&gt;bin/sql-client.sh embedded
&gt;然后,按照官网上的建hbase表的说明,在flinksql客户端建了表:
&gt;#&nbsp; CREATE TABLE hbase_video_pic_title_q70 (&nbsp;&nbsp; key string,&nbsp;&nbsp; cf1 ROW<vid string, q70 string&amp;gt; ) WITH (&nbsp;&nbsp; 'connector.type' = 'hbase',&nbsp;&nbsp; 'connector.version' = '1.4.3',&nbsp;&nbsp; 'connector.table-name' = 'hbase_video_pic_title_q70',&nbsp;&nbsp; 'connector.zookeeper.quorum' = 'uhadoop-op3raf-master1:2181,uhadoop-op3raf-master2:2181,uhadoop-op3raf-core1:2181',&nbsp;&nbsp; 'connector.zookeeper.znode.parent' = '/hbase',&nbsp;&nbsp; 'connector.write.buffer-flush.max-size' = '10mb',&nbsp;&nbsp; 'connector.write.buffer-flush.max-rows' = '1000',&nbsp;&nbsp;&nbsp; 'connector.write.buffer-flush.interval' = '2s' );
&gt;建表成功后,执行查询:
&gt;select key from hbase_video_pic_title_q70;
&gt;执行查询的时候报了连接HBase失败的错误:
&gt;[ERROR] Could not execute SQL statement. Reason: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ... 
6 more Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init&amp;gt;(JobManagerRunnerImpl.java:152)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ... 7 more Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'Source: HBaseTableSource[schema=[key, cf1], projectFields=[0]] -&amp;gt; SourceConversion(table=[default_catalog.default_database.hbase_video_pic_title_q70, source: [HBaseTableSource[schema=[key, cf1], projectFields=[0]]]], fields=[key]) -&amp;gt; SinkConversionToTuple2 -&amp;gt; Sink: SQL Client Stream Collect Sink': Configuring the input format (null) failed: Cannot create connection to HBase.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:216)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:255)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:227)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at org.apache.flink.runtime.scheduler.SchedulerBase.<init&amp;gt;(SchedulerBase.java:215)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; at 
        at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120)
        at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:105)
        at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
        at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:146)
        ... 10 more
Caused by: java.lang.Exception: Configuring the input format (null) failed: Cannot create connection to HBase.
        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:80)
        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:212)
        ... 20 more
Caused by: java.lang.RuntimeException: Cannot create connection to HBase.
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:103)
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.configure(HBaseRowInputFormat.java:68)
        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:77)
        ... 21 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:96)
        ... 23 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
        ... 26 more
Caused by: java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:713)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:652)
        ... 31 more
End of exception on server side>]
Flink version: 1.10.0

HBase version: 1.2.0

HBase is deployed on the same Hadoop cluster.

The HBase table is a Hive-linked table, and its data can be queried through the HBase client:
# Hive table
CREATE TABLE edw.hbase_video_pic_title_q70(
    key string,
    vid string,
    q70 string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:vid,cf1:q70")
TBLPROPERTIES ("hbase.table.name" = "hbase_video_pic_title_q70");
# Inserting into the Hive table also populates HBase
INSERT OVERWRITE TABLE edw.hbase_video_pic_title_q70
SELECT vid, vid, q70 FROM dw.video_pic_title_q70;
flink/lib contains the following jars:
flink-dist_2.11-1.10.0.jar
flink-hbase_2.11-1.10.0.jar
flink-metrics-influxdb-1.10.0.jar
flink-metrics-prometheus-1.10.0.jar
flink-shaded-hadoop-2-uber-2.7.5-7.0.jar
flink-table_2.11-1.10.0.jar
flink-table-blink_2.11-1.10.0.jar
hbase-client-1.2.0-cdh5.8.0.jar
hbase-common-1.2.0.jar
hbase-protocol-1.2.0.jar
jna-4.2.2.jar
jna-platform-4.2.2.jar
log4j-1.2.17.jar
oshi-core-3.4.0.jar
slf4j-log4j12-1.7.15.jar
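The root cause at the bottom of the trace, java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH, is the kind of error the JVM raises when a class was compiled against a field that the class actually loaded at runtime no longer has, i.e. classes from two different HBase builds are mixed on one classpath. Notably, hbase-client above is a CDH 5.8 build while hbase-common and hbase-protocol are stock 1.2.0, and the DDL declares connector version 1.4.3. A minimal diagnostic sketch (the jar names are copied from the listing above; the temporary directory merely stands in for flink/lib):

```shell
# A diagnostic sketch, not a fix: list the distinct version suffixes of the
# HBase jars. More than one distinct suffix (here 1.2.0 vs 1.2.0-cdh5.8.0)
# suggests mixed HBase builds, a classic cause of NoSuchFieldError at
# class-load time.
lib=$(mktemp -d)   # stands in for flink/lib
touch "$lib/hbase-client-1.2.0-cdh5.8.0.jar" \
      "$lib/hbase-common-1.2.0.jar" \
      "$lib/hbase-protocol-1.2.0.jar"
ls "$lib" | sed -E 's/^hbase-[a-z]+-([0-9].*)\.jar$/\1/' | sort -u
# Prints:
# 1.2.0
# 1.2.0-cdh5.8.0
```

If the suffixes disagree, a common remedy is to replace the HBase jars with a matching set taken from a single build, ideally one compatible with what 'connector.version' expects; which exact jars are right depends on the cluster, so treat this only as a pointer.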
Config file flink-conf.yaml:
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.memory.process.size: 1568m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: uhadoop-op3raf-master1,uhadoop-op3raf-master2,uhadoop-op3raf-core1
state.checkpoints.dir: hdfs:///flink/checkpoint
state.savepoints.dir: hdfs:///flink/flink-savepoints
state.checkpoints.num-retained:60
state.backend.incremental: true
jobmanager.execution.failover-strategy: region
jobmanager.archive.fs.dir: hdfs:///flink/flink-jobs/
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs:///flink/flink-jobs/
historyserver.archive.fs.refresh-interval: 10000
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: 10.42.63.116
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
metrics.reporter.influxdb.username: flink
metrics.reporter.influxdb.password: flink***
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: 10.42.63.116
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: tdflink_prom
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: true
metrics.system-resource: true
yarn.application-attempts: 1
Config file sql-client-defaults.yaml:
tables: []     # empty list
functions: []  # empty list
catalogs: []   # empty list
execution:
  # select the implementation responsible for planning table programs
  # possible values are 'blink' (used by default) or 'old'
  planner: blink
  # 'batch' or 'streaming' execution
  type: streaming
  # allow 'event-time' or only 'processing-time' in sources
  time-characteristic: event-time
  # interval in ms for emitting periodic watermarks
  periodic-watermarks-interval: 200
  # 'changelog' or 'table' presentation of results
  result-mode: table
  # maximum number of maintained rows in 'table' presentation of results
  max-table-result-rows: 1000000
  # parallelism of the program
  parallelism: 1
  # maximum parallelism
  max-parallelism: 128
  # minimum idle state retention in ms
  min-idle-state-retention: 0
  # maximum idle state retention in ms
  max-idle-state-retention: 0
  # current catalog ('default_catalog' by default)
  current-catalog: default_catalog
  # current database of the current catalog (default database of the catalog by default)
  current-database: default_database
  # controls how table programs are restarted in case of a failure
  restart-strategy:
    # strategy type
    # possible values are "fixed-delay", "failure-rate", "none", or "fallback" (default)
    type: fallback
deployment:
  # general cluster communication timeout in ms
  response-timeout: 5000
  # (optional) address from cluster to gateway
  gateway-address: ""
  # (optional) port from cluster to gateway
  gateway-port: 0
echo $HADOOP_CLASSPATH:
[admin@uhadoop-op3raf-master2 flink10]$ echo $HADOOP_CLASSPATH
/home/hadoop/contrib/capacity-scheduler/*.jar:/home/hadoop/conf:/home/hadoop/share/hadoop/common/lib/*:/home/hadoop/share/hadoop/common/*:/home/hadoop/share/hadoop/hdfs:/home/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/share/hadoop/hdfs/*:/home/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/share/hadoop/yarn/*:/home/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/share/hadoop/mapreduce/*
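Worth noting: the $HADOOP_CLASSPATH above contains only Hadoop entries and nothing from the HBase installation. A hedged sketch of how one might append the HBase classpath before starting the YARN session, assuming the standard `hbase classpath` CLI subcommand is available; the /opt/hbase/... fallback is a hypothetical path that only keeps the snippet runnable where the CLI is absent:

```shell
# Sketch: expose one consistent set of HBase classes to Flink via
# HADOOP_CLASSPATH. `hbase classpath` prints the classpath of the local
# HBase installation; the else-branch is a hypothetical stand-in.
if command -v hbase >/dev/null 2>&1; then
  hbase_cp=$(hbase classpath)
else
  hbase_cp="/opt/hbase/conf:/opt/hbase/lib/*"
fi
# Append, handling the case where HADOOP_CLASSPATH is unset.
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}$hbase_cp"
echo "$HADOOP_CLASSPATH"
```

Letting the cluster's own HBase installation supply the classes, instead of hand-copied jars in flink/lib, is one way to avoid the version mix implied by the NoSuchFieldError above.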
Any help would be appreciated!

Best,
MuChen.

Re: Re: Flink SQL query of HBase table fails

Roc Marshal
MuChen, hello.

1. Check whether connecting to the ZooKeeper ensemble that HBase uses requires authentication information; the session log shows: "org.apache.flink.shaded.curator.org.apache.curator.ConnectionState - Authentication failed  JobManager Web Interface: http://uhadoop-op3raf-core24:42976".
2. Regarding the HBase connection exception "Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'Source: HBaseTableSource[schema=[key, cf1], projectFields=[0]] -> SourceConversion(table=[default_catalog.default_database.hbase_video_pic_title_q70, source: [HBaseTableSource[schema=[key, cf1], projectFields=[0]]]], fields=[key]) -> SinkConversionToTuple2 -> Sink: SQL Client Stream Collect Sink': Configuring the input format (null) failed: Cannot create connection to HBase.", verify that the HBase configuration is correct, i.e. confirm that the current settings can actually reach HBase.

The above is for reference only.

Best,
Roc Marshal.
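Before digging into authentication, it may be worth confirming that the quorum is reachable at all from the Flink host. A small sketch using ZooKeeper's standard four-letter `ruok` command over plain TCP (host names copied from the DDL earlier in the thread; a healthy server replies "imok"):

```shell
# Sketch: probe each ZooKeeper server in the quorum from the DDL above.
# `ruok` is a built-in ZooKeeper health check; "imok" means the server is
# up. "unreachable" instead points at network/DNS problems rather than
# authentication.
for host in uhadoop-op3raf-master1 uhadoop-op3raf-master2 uhadoop-op3raf-core1; do
  reply=$(echo ruok | timeout 2 nc "$host" 2181 2>/dev/null || echo unreachable)
  echo "$host: ${reply:-no-reply}"
done
```

If the servers answer but the Curator log still shows "Authentication failed", the next place to look would be SASL/ACL settings on the /hbase znode.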
On 2020-06-23 11:05:43, "MuChen" <[hidden email]> wrote:

>Hi, Roc Marshal:
>Could you describe the solution in more detail? I don't quite follow it. Thanks!
>
>
>Best,
>MuChen.
>
>
>
>
>------------------ Original message ------------------
>From: "Roc Marshal" <[hidden email]>;
>Sent: Tuesday, June 23, 2020, 10:27 AM
>To: "user-zh" <[hidden email]>;
>
>Subject: Re: Flink SQL query of HBase table fails
>
>
>
>MuChen, hello.
>Based on the exception you provided, check in detail the HBase meta information in ZooKeeper and confirm that the HBase source inside the Flink job can work normally before the next attempt. Also, note whether the ZooKeeper ensemble has authentication problems.
>The above is for reference only; best wishes.
>
>Best,
>Roc Marshal.
>On 2020-06-23 10:17:35, "MuChen" <[hidden email]> wrote: