[Help] Flink Hadoop dependency problem


Z-Z
I'm using Flink 1.11.0, set up with docker-compose. The docker-compose file is as follows:
version: "2.1"
services:
  jobmanager:
    image: flink:1.11.0-scala_2.12
    expose:
      - "6123"
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
      - HADOOP_CLASSPATH=/data/hadoop-2.9.2/etc/hadoop:/data/hadoop-2.9.2/share/hadoop/common/lib/*:/data/hadoop-2.9.2/share/hadoop/common/*:/data/hadoop-2.9.2/share/hadoop/hdfs:/data/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/data/hadoop-2.9.2/share/hadoop/hdfs/*:/data/hadoop-2.9.2/share/hadoop/yarn:/data/hadoop-2.9.2/share/hadoop/yarn/lib/*:/data/hadoop-2.9.2/share/hadoop/yarn/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
    volumes:
      - ./jobmanager/conf:/opt/flink/conf
      - ./data:/data


  taskmanager:
    image: flink:1.11.0-scala_2.12
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - "jobmanager:jobmanager"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
    volumes:
      - ./taskmanager/conf:/opt/flink/conf
networks:
  default:
    external:
      name: flink-network



hadoop-2.9.2 has already been placed in the data directory, and HADOOP_CLASSPATH has been added to the environment variables of both the jobmanager and the taskmanager, but whether the job is submitted via the CLI or the web UI, the jobmanager still reports "Could not find a file system implementation for scheme 'hdfs'". Does anyone know what's going on?

Re: [Help] Flink Hadoop dependency problem

Roc Marshal

Hi Z-Z,

You can try downloading the matching uber jar from https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/ , placing the downloaded jar file under ${FLINK_HOME}/lib in the Flink image, and then starting the composed containers.
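For example, a rough sketch of the steps (the 2.8.3-10.0 version string below is only an illustration; pick whichever uber jar fits your setup from the link above):

# On the Docker host, in the directory containing docker-compose.yml:
wget https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
# Add a mount for the jar under "volumes:" in BOTH services, e.g.:
#   - ./flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
# Recreate the containers so the new classpath takes effect:
docker-compose up -d --force-recreate

Note that both the jobmanager and the taskmanager need the jar, since each process resolves the hdfs:// scheme on its own.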
Best,
Roc Marshal.

Re: [Help] Flink Hadoop dependency problem

Yang Wang
You can check inside the container whether the /data directory is mounted correctly, and also use ps inside the container to see what classpath the launched JVM process actually has, and whether it includes the Hadoop jars.
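For example, something along these lines (a sketch only; ps is not guaranteed to be installed in the stock flink image, so the last command reads /proc directly and assumes the JVM runs as PID 1, which it normally does when the entrypoint execs it):

# Is the Hadoop distribution actually visible inside the container?
docker-compose exec jobmanager ls /data/hadoop-2.9.2/share/hadoop/hdfs
# Did HADOOP_CLASSPATH make it into the container environment?
docker-compose exec jobmanager env | grep HADOOP_CLASSPATH
# Which classpath was the JVM really started with?
docker-compose exec jobmanager sh -c 'tr "\0" "\n" < /proc/1/cmdline | grep -A1 -- -classpath'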


Of course, Roc Marshal's suggestion of adding flink-shaded-hadoop under $FLINK_HOME/lib would also solve the problem.

Best,
Yang
