How should PyFlink's `where` be used? How do I filter?

How should PyFlink's `where` be used? How do I filter?

洗你的头
Dear developers: I want to apply a conditional filter to the output table. I used a `where` clause, but it doesn't work.
My code is as follows:
# processing pipeline
t_env.from_path('mySource') \
    .select("pickup_datetime, dropoff_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, distance_meters(pickup_longitude, pickup_latitude) as O, distance_meters(dropoff_longitude, dropoff_latitude) as D, compute_duration_time(pickup_datetime, dropoff_datetime) as duration") \
    .where("duration >= 120 && duration <= 3600") \
    .select("pickup_datetime, dropoff_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, O, D, is_same_od(O, D) as same_od, duration") \
    .where("same_od == 0") \
    .select("pickup_datetime, dropoff_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, O, D, duration") \
    .insert_into('mySink')
Why doesn't `where` work when used like this? How should I filter out the results I want?
(If I remove the `where` calls, the job runs normally.)
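For reference, the two chained `.where()` calls above amount to a single row-level predicate. Here is a minimal sketch of that logic in plain Python, using hypothetical sample values (assuming `duration` is in seconds and `same_od` is a 0/1 flag, as the UDF names suggest):

```python
def keep_row(duration, same_od):
    # first filter: keep trips lasting between 2 minutes and 1 hour
    # second filter: keep trips whose origin and destination differ
    return 120 <= duration <= 3600 and same_od == 0

# hypothetical (duration, same_od) samples
rows = [(60, 0), (600, 0), (600, 1), (7200, 0)]
kept = [r for r in rows if keep_row(*r)]
print(kept)  # [(600, 0)]
```

The same condition could also be expressed as a single `where` call after the second `select`, which avoids splitting the filter across two stages.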
Re: How should PyFlink's `where` be used? How do I filter?

Xingbo Huang
Hi,

By "doesn't work", do you mean the job fails with an error (if so, please paste the error log), or that the result is not what you expected (the filter has no effect, or something else)?

Best,
Xingbo

On Sunday, November 1, 2020 at 10:16 AM, 洗你的头 <[hidden email]> wrote the message above.

Re: How should PyFlink's `where` be used? How do I filter?

洗你的头
Hello,
During data processing I want to filter the rows that get written out, and I used the `where` method for that. How should I change my code? (Thank you for your answer.)
The error is as follows:
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-11-1b38faf7ede7> in <module>
      1 # run and time the job
      2 start_time = time.time()
----> 3 t_env.execute("job")
      4 compute_time = time.time() - start_time
      5 print(compute_time, compute_time / 60)


F:\Anaconda3\envs\pyflink\lib\site-packages\pyflink\table\table_environment.py in execute(self, job_name)
   1055                     "use create_statement_set for multiple sinks.", DeprecationWarning)
   1056         self._before_execute()
-> 1057         return JobExecutionResult(self._j_tenv.execute(job_name))
   1058
   1059     def from_elements(self, elements, schema=None, verify_schema=True):


F:\Anaconda3\envs\pyflink\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
   1284         answer = self.gateway_client.send_command(command)
   1285         return_value = get_return_value(
-> 1286             answer, self.gateway_client, self.target_id, self.name)
   1287
   1288         for temp_arg in temp_args:


F:\Anaconda3\envs\pyflink\lib\site-packages\pyflink\util\exceptions.py in deco(*a, **kw)
    145     def deco(*a, **kw):
    146         try:
--> 147             return f(*a, **kw)
    148         except Py4JJavaError as e:
    149             s = e.java_exception.toString()


F:\Anaconda3\envs\pyflink\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(


Py4JJavaError: An error occurred while calling o1.execute.
: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
        at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1717)
        at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
        at org.apache.flink.table.planner.delegation.ExecutorBase.execute(ExecutorBase.java:52)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:1214)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
        at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
        at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
        at org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
        at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
        at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
        at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
        at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
        at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
        at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
        at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:892)
        at akka.dispatch.OnComplete.internal(Future.scala:264)
        at akka.dispatch.OnComplete.internal(Future.scala:261)
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
        at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
        at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
        at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
        at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
        at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
        at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
        at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
        at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
        at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
        at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
        at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
        at jdk.internal.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
        at akka.actor.ActorCell.invoke(ActorCell.scala:561)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        ... 4 more
Caused by: java.lang.RuntimeException: Failed to create stage bundle factory! INFO:root:Initializing python harness: F:\Anaconda3\envs\pyflink\lib\site-packages\pyflink\fn_execution\boot.py --id=29-1 --logging_endpoint=localhost:65021 --artifact_endpoint=localhost:65022 --provision_endpoint=localhost:65023 --control_endpoint=localhost:65020


        at org.apache.flink.python.AbstractPythonFunctionRunner.createStageBundleFactory(AbstractPythonFunctionRunner.java:197)
        at org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:164)
        at org.apache.flink.table.runtime.runners.python.scalar.AbstractGeneralPythonScalarFunctionRunner.open(AbstractGeneralPythonScalarFunctionRunner.java:65)
        at org.apache.flink.table.runtime.operators.python.AbstractStatelessFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractStatelessFunctionOperator.java:186)
        at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:143)
        at org.apache.flink.table.runtime.operators.python.AbstractStatelessFunctionOperator.open(AbstractStatelessFunctionOperator.java:131)
        at org.apache.flink.table.runtime.operators.python.scalar.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:88)
        at org.apache.flink.table.runtime.operators.python.scalar.AbstractRowDataPythonScalarFunctionOperator.open(AbstractRowDataPythonScalarFunctionOperator.java:80)
        at org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator.open(RowDataPythonScalarFunctionOperator.java:64)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:291)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
        at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Process died with exit code 0
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:331)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:320)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:250)
        at org.apache.flink.python.AbstractPythonFunctionRunner.createStageBundleFactory(AbstractPythonFunctionRunner.java:195)
        ... 16 more
Caused by: java.lang.IllegalStateException: Process died with exit code 0
        at org.apache.beam.runners.fnexecution.environment.ProcessManager$RunningProcess.isAliveOrThrow(ProcessManager.java:72)
        at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:137)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:200)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:184)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
        ... 24 more
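One thing worth ruling out, independent of the stack trace above: the `where` strings as rendered in the archive contain HTML-escaped operators (`&gt;=` for `>=`, `&amp;&amp;` for `&&`). These are almost certainly mail-archive artifacts rather than the real source, but if they ever did end up in a source file, the expression parser would reject the string. A quick standard-library check of what the escaped text decodes to:

```python
import html

# the condition exactly as it is rendered in the mail archive
escaped = "duration &gt;= 120 &amp;&amp; duration <= 3600"

# what the source file should actually contain
decoded = html.unescape(escaped)
print(decoded)  # duration >= 120 && duration <= 3600
```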

------------------ Original message ------------------
From: "user-zh" <[hidden email]>;
Sent: Monday, November 2, 2020, 9:36 AM
To: "user-zh" <[hidden email]>;

Subject: Re: How should PyFlink's `where` be used? How do I filter?



Reply | Threaded
Open this post in threaded view
|

回复: pyflink的where该如何使用?如何筛选?

洗你的头
In reply to this post by Xingbo Huang
您好,
我是想在数据处理的过程中,对输出进行筛选,然后使用的是where方法,请问我应该如何更改代码呢?(感谢您的回答)
报错如下:
---------------------------------------------------------------------------
Py4JJavaError&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Traceback (most recent call last)
<ipython-input-11-1b38faf7ede7&gt; in <module&gt;
&nbsp; &nbsp; &nbsp; 1 # 执行与计时
&nbsp; &nbsp; &nbsp; 2 start_time = time.time()
----&gt; 3 t_env.execute("job")
&nbsp; &nbsp; &nbsp; 4 compute_time = time.time() - start_time
&nbsp; &nbsp; &nbsp; 5 print(compute_time, compute_time / 60)


F:\Anaconda3\envs\pyflink\lib\site-packages\pyflink\table\table_environment.py in execute(self, job_name)
&nbsp; &nbsp;1055&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;"use create_statement_set for multiple sinks.", DeprecationWarning)
&nbsp; &nbsp;1056&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;self._before_execute()
-&gt; 1057&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;return JobExecutionResult(self._j_tenv.execute(job_name))
&nbsp; &nbsp;1058
&nbsp; &nbsp;1059&nbsp; &nbsp; &nbsp;def from_elements(self, elements, schema=None, verify_schema=True):


F:\Anaconda3\envs\pyflink\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
&nbsp; &nbsp;1284&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;answer = self.gateway_client.send_command(command)
&nbsp; &nbsp;1285&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;return_value = get_return_value(
-&gt; 1286&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;answer, self.gateway_client, self.target_id, self.name)
&nbsp; &nbsp;1287
&nbsp; &nbsp;1288&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;for temp_arg in temp_args:


F:\Anaconda3\envs\pyflink\lib\site-packages\pyflink\util\exceptions.py in deco(*a, **kw)
&nbsp; &nbsp; 145&nbsp; &nbsp; &nbsp;def deco(*a, **kw):
&nbsp; &nbsp; 146&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;try:
--&gt; 147&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;return f(*a, **kw)
&nbsp; &nbsp; 148&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;except Py4JJavaError as e:
&nbsp; &nbsp; 149&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;s = e.java_exception.toString()


F:\Anaconda3\envs\pyflink\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
&nbsp; &nbsp; 326&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;raise Py4JJavaError(
&nbsp; &nbsp; 327&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;"An error occurred while calling {0}{1}{2}.\n".
--&gt; 328&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;format(target_id, ".", name), value)
&nbsp; &nbsp; 329&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;else:
&nbsp; &nbsp; 330&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;raise Py4JError(


Py4JJavaError: An error occurred while calling o1.execute.
: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1717)
        at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
        at org.apache.flink.table.planner.delegation.ExecutorBase.execute(ExecutorBase.java:52)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:1214)
        at&nbsp;java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native&nbsp;Method)
        at&nbsp;java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at&nbsp;java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at&nbsp;java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
        at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
        at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
        at&nbsp;java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
        at org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
        at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
        at&nbsp;java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
        at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:892)
        at akka.dispatch.OnComplete.internal(Future.scala:264)
        at akka.dispatch.OnComplete.internal(Future.scala:261)
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
        at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
        at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
        at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
        at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
        at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
        at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
        at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
        at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
        at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
        at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
        at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
        at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
        at jdk.internal.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
        at&nbsp;java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at&nbsp;java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
        at akka.actor.ActorCell.invoke(ActorCell.scala:561)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        ... 4 more
Caused by: java.lang.RuntimeException: Failed to create stage bundle factory! INFO:root:Initializing python harness: F:\Anaconda3\envs\pyflink\lib\site-packages\pyflink\fn_execution\boot.py --id=29-1 --logging_endpoint=localhost:65021 --artifact_endpoint=localhost:65022 --provision_endpoint=localhost:65023 --control_endpoint=localhost:65020


        at org.apache.flink.python.AbstractPythonFunctionRunner.createStageBundleFactory(AbstractPythonFunctionRunner.java:197)
        at org.apache.flink.python.AbstractPythonFunctionRunner.open(AbstractPythonFunctionRunner.java:164)
        at org.apache.flink.table.runtime.runners.python.scalar.AbstractGeneralPythonScalarFunctionRunner.open(AbstractGeneralPythonScalarFunctionRunner.java:65)
        at org.apache.flink.table.runtime.operators.python.AbstractStatelessFunctionOperator$ProjectUdfInputPythonScalarFunctionRunner.open(AbstractStatelessFunctionOperator.java:186)
        at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.open(AbstractPythonFunctionOperator.java:143)
        at org.apache.flink.table.runtime.operators.python.AbstractStatelessFunctionOperator.open(AbstractStatelessFunctionOperator.java:131)
        at org.apache.flink.table.runtime.operators.python.scalar.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:88)
        at org.apache.flink.table.runtime.operators.python.scalar.AbstractRowDataPythonScalarFunctionOperator.open(AbstractRowDataPythonScalarFunctionOperator.java:80)
        at org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator.open(RowDataPythonScalarFunctionOperator.java:64)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:291)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
        at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Process died with exit code 0
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:331)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:320)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:250)
        at org.apache.flink.python.AbstractPythonFunctionRunner.createStageBundleFactory(AbstractPythonFunctionRunner.java:195)
        ... 16 more
Caused by: java.lang.IllegalStateException: Process died with exit code 0
        at org.apache.beam.runners.fnexecution.environment.ProcessManager$RunningProcess.isAliveOrThrow(ProcessManager.java:72)
        at org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:137)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:200)
        at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:184)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
        ... 24 more






Reply | Threaded
Open this post in threaded view
|

Re: Re: How should PyFlink's where be used? How do I filter?

Evan
First of all, your email contains a lot of "&nbsp;" entities, which makes it very hard to read.
From what I can tell from your email, your usage of where looks wrong; it seems you put two conditions inside a single where. I suggest checking the PyFlink API documentation for how where is used.
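To make that suggestion concrete, here is a small pure-Python sketch (not PyFlink itself; the rows and field values are made up to mirror the job above) showing that filtering in two chained single-condition steps is equivalent to one combined condition:

```python
# Pure-Python stand-in for the Table API pipeline above: each where()
# step keeps only the rows whose predicate holds, so chaining two
# single-condition filters is equivalent to one combined condition.
rows = [
    {"duration": 60,   "same_od": 0},   # too short -> dropped by step 1
    {"duration": 600,  "same_od": 1},   # same O/D  -> dropped by step 2
    {"duration": 600,  "same_od": 0},   # survives both filters
    {"duration": 7200, "same_od": 0},   # too long  -> dropped by step 1
]

# Step 1: duration between 120 and 3600 seconds (inclusive).
step1 = [r for r in rows if 120 <= r["duration"] <= 3600]
# Step 2: origin and destination differ.
step2 = [r for r in step1 if r["same_od"] == 0]

# A single combined filter gives the same result.
combined = [r for r in rows
            if 120 <= r["duration"] <= 3600 and r["same_od"] == 0]
assert step2 == combined
print(step2)  # [{'duration': 600, 'same_od': 0}]
```

If, as suggested above, only one predicate per where works in the string-expression syntax, the same split can be expressed on the Table as chained calls, e.g. `.where("duration >= 120").where("duration <= 3600")` (sketch only; whether the original `&&` form is also accepted depends on the PyFlink version).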



 
Reply | Threaded
Open this post in threaded view
|

Re: Re: How should PyFlink's where be used? How do I filter?

Xingbo Huang
In reply to this post by 洗你的头
Hi,
You can take a look at the JIRA issue below [1] and check whether it matches the problem you are running into.

[1] https://issues.apache.org/jira/browse/FLINK-19675

Best,
Xingbo

> at
> org.apache.flink.table.runtime.operators.python.AbstractStatelessFunctionOperator.open(AbstractStatelessFunctionOperator.java:131)
> at
> org.apache.flink.table.runtime.operators.python.scalar.AbstractPythonScalarFunctionOperator.open(AbstractPythonScalarFunctionOperator.java:88)
> at
> org.apache.flink.table.runtime.operators.python.scalar.AbstractRowDataPythonScalarFunctionOperator.open(AbstractRowDataPythonScalarFunctionOperator.java:80)
> at
> org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator.open(RowDataPythonScalarFunctionOperator.java:64)
> at
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:291)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by:
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
> java.lang.IllegalStateException: Process died with exit code 0
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
> at
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:331)
> at
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:320)
> at
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:250)
> at
> org.apache.flink.python.AbstractPythonFunctionRunner.createStageBundleFactory(AbstractPythonFunctionRunner.java:195)
> ... 16 more
> Caused by: java.lang.IllegalStateException: Process died with exit code 0
> at
> org.apache.beam.runners.fnexecution.environment.ProcessManager$RunningProcess.isAliveOrThrow(ProcessManager.java:72)
> at
> org.apache.beam.runners.fnexecution.environment.ProcessEnvironmentFactory.createEnvironment(ProcessEnvironmentFactory.java:137)
> at
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:200)
> at
> org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:184)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
> at
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
> ... 24 more
> [earlier messages in the thread, quoted verbatim, snipped]
Reply | Threaded
Open this post in threaded view
|

Re: Re: How should pyflink's where be used? How to filter?

洗你的头
Dear developers,
Yes, that does look like the issue; I am indeed using Python UDFs in the processing pipeline.
As the official documentation states, the select method returns a Table, so in principle where should work on its result.
But it raises an error. Is this a bug? How should I work around it?
Thank you.


Thanks also to Evan for the careful answer. I copied the error message straight from the command line and do not know why the formatting got mangled; I will watch out for that next time.
Your answer was helpful, but the problem does not seem to lie there: even with a single condition inside where, the error still occurs. Many thanks.
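
For what it's worth, the two-stage filtering the pipeline attempts (a duration window, then dropping same-OD trips) can be checked outside Flink. The sketch below is plain Python, not PyFlink: the record layout and the `is_same_od` helper are hypothetical stand-ins for the real UDFs, used only to pin down the intended where semantics.

```python
def is_same_od(o, d):
    # Stand-in for the pipeline's is_same_od UDF:
    # 1 when origin and destination cells coincide, else 0.
    return 1 if o == d else 0

def filter_trips(trips):
    """Apply the two where() stages: duration window, then drop same-OD trips."""
    stage1 = [t for t in trips if 120 <= t["duration"] <= 3600]
    return [t for t in stage1 if is_same_od(t["O"], t["D"]) == 0]

trips = [
    {"O": 10, "D": 20, "duration": 600},   # kept
    {"O": 10, "D": 10, "duration": 600},   # dropped: same origin/destination
    {"O": 10, "D": 20, "duration": 60},    # dropped: shorter than 120 s
    {"O": 10, "D": 20, "duration": 7200},  # dropped: longer than 3600 s
]
print(filter_trips(trips))  # only the first record survives
```

If this plain-Python version produces the rows you expect, the filtering logic itself is sound and the failure lies in the Table API execution path rather than in the conditions.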


------------------ Original Message ------------------
From: "user-zh" <[hidden email]>;
Date: Monday, November 2, 2020, 7:57 PM
To: "user-zh" <[hidden email]>;

Subject: Re: Re: How should pyflink's where be used? How to filter?



Hi,
You can take a look at the JIRA issue below [1] and check whether it is the problem you are hitting.

[1] https://issues.apache.org/jira/browse/FLINK-19675

Best,
Xingbo

Evan <[hidden email]> wrote on Mon, Nov 2, 2020 at 2:46 PM:

> First of all, your mail contains a lot of "&nbsp;" entities, which makes it hard to read.
> Judging from your mail, the where usage looks wrong: it seems you put two conditions inside a single where. I suggest checking the PyFlink API docs for how where is used.
>
>
>
> From: 洗你的头
> Sent: 2020-11-02 10:15
> To: user-zh
> Subject: Re: How should pyflink's where be used? How to filter?
> Hello,
> I want to filter the output during data processing, so I used the where method. How should I change my code? (Thank you for your answer.)
> The error is as follows:
> [stack trace and earlier quoted messages snipped; identical to the trace quoted earlier in the thread]