doesBucketExist on dsw-dia-test: com.amazonaws.AmazonClientException: No AWS Credentials provided by BasicAWSCredentialsProvider


levi-015

I'm currently testing Flink Native Kubernetes HA on Flink 1.13.1 and ran into the following error:

org.apache.flink.util.FlinkException: Could not create the ha services from the instantiated HighAvailabilityServicesFactory org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:268)
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:124)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:353)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:311)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:239)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:189)
        at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:186)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:600)
        at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:85)
Caused by: java.lang.NullPointerException
        at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:59)
        at org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.<init>(Fabric8FlinkKubeClient.java:87)
        at org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory.fromConfiguration(FlinkKubeClientFactory.java:106)
        at org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.createHAServices(KubernetesHaServicesFactory.java:37)
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:265)
        ... 9 more


The flink-conf.yaml is configured as follows:

    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    state.backend: filesystem
    state.checkpoints.dir: s3://dsw-dia-test/checkpoints
    state.backend.fs.checkpointdir: s3://dsw-dia-test/checkpoints
    s3.path.style.access: true
    fs.allowed-fallback-filesystems: s3
    s3.endpoint: s3.us-south.cloud-object-storage.appdomain.cloud
    s3.access-key: **********
    s3.secret-key: ************
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 2
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    queryable-state.proxy.ports: 6125
    jobmanager.memory.process.size: 1600m
    taskmanager.memory.process.size: 1728m
    parallelism.default: 2
    scheduler-mode: reactive
    execution.checkpointing.interval: 10s
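For what it's worth, the checkNotNull in the stack trace suggests a required option is unset: per the Flink 1.13 documentation, the Kubernetes HA services also need kubernetes.cluster-id and high-availability.storageDir, neither of which appears in the config above. A sketch with placeholder values:

    kubernetes.cluster-id: flink-test-cluster            # placeholder; any unique id for this cluster
    high-availability.storageDir: s3://dsw-dia-test/ha   # placeholder path for HA metadata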

The Job definition:

apiVersion: batch/v1
kind: Job
metadata:
  name: flink-jobmanager
spec:
  backoffLimit: 1
  template:
    spec:
      imagePullSecrets:
        - name: artifactory-container-registry
      restartPolicy: Never
      containers:
      - name: flink-jobmanager
        image: txo-dswim-esb-docker-local.artifactory.swg-devops.com/diak8scluster/flink-hadoop-app:fvt
        imagePullPolicy: Always
        command: ["/opt/flink/bin/standalone-job.sh" ]
        args: ["start-foreground",
                "--job-classname", "com.ibm.flink.BatchWordCount",
               "-Djobmanager.rpc.address=flink-jobmanager",
               "-Dparallelism.default=1",
               "-Dblob.server.port=6124",
               "-Dqueryable-state.server.ports=6125"]
        ports:
            - containerPort: 6123
              name: rpc
            - containerPort: 6124
              name: blob-server
            - containerPort: 8081
              name: webui
        livenessProbe:
            tcpSocket:
              port: 6123
            initialDelaySeconds: 30
            periodSeconds: 60
        volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf
        securityContext:
            runAsUser: 9999  # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
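If the s3.access-key / s3.secret-key pair in flink-conf.yaml is not reaching the S3 filesystem plugin (which would match the BasicAWSCredentialsProvider error in the subject line), one alternative worth trying is supplying the credentials through the standard AWS environment variables, which the default credential chain also checks. A sketch for the container spec, assuming a hypothetical Secret named s3-credentials holds the keys:

        env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: s3-credentials   # hypothetical Secret name
                key: access-key
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: s3-credentials
                key: secret-key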


Thanks!