Flink 1.10: problem writing data with ParquetAvroWriters in schema mode


Flink 1.10: problem writing data with ParquetAvroWriters in schema mode

jsqf
Hi,
When writing Parquet files with ParquetAvroWriters.forGenericRecord(Schema schema), I get a ClassCastException.
Here is my code:

// transform to DataStream
// TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class,
//         BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(
        BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
DataStream testDataStream = flinkTableEnv.toAppendStream(test, tupleTypeInfo);
testDataStream.print().setParallelism(1);

ArrayList<org.apache.avro.Schema.Field> fields = new ArrayList<org.apache.avro.Schema.Field>();
fields.add(new org.apache.avro.Schema.Field("id",
        org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING),
        "id", JsonProperties.NULL_VALUE));
fields.add(new org.apache.avro.Schema.Field("time",
        org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING),
        "time", JsonProperties.NULL_VALUE));
org.apache.avro.Schema parquetSinkSchema = org.apache.avro.Schema.createRecord(
        "pi", "flinkParquetSink", "flink.parquet", true, fields);

String fileSinkPath = "./xxx.text/rs6/";
StreamingFileSink<GenericRecord> parquetSink = StreamingFileSink
        .forBulkFormat(new Path(fileSinkPath),
                ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
        .withRollingPolicy(OnCheckpointRollingPolicy.build())
        .build();
testDataStream.addSink(parquetSink).setParallelism(1);
flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");


Here is the exception:

09:29:50,283 INFO  org.apache.flink.runtime.taskmanager.Task - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
    at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
    at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
    at org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
    at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
09:29:50,284 INFO  org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a).
09:29:50,285 INFO  org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) [FAILED]
09:29:50,289 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Sink: Unnamed (1/1) 79505cb6ab2df38886663fd99461315a.
09:29:50,293 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to FAILED.
java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro.generic.IndexedRecord
    at org.apache.avro.generic.GenericData.getField(GenericData.java:697)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)


Am I using this incorrectly, or is something else going on?

Re: Flink 1.10: problem writing data with ParquetAvroWriters in schema mode

Shuai Xia
Hi, judging from the exception, this problem is caused by using TupleTypeInfo; try GenericRecordAvroTypeInfo instead.
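
For illustration, a minimal sketch of that suggestion in Java, reusing the schema and variable names from the original post (note that, as the follow-up further down shows, this direct Table-to-stream route only succeeds when the table has a single matching field):

import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;

// Sketch only: convert the Table with an Avro type info instead of TupleTypeInfo,
// so the stream carries GenericRecords rather than Tuple2s.
GenericRecordAvroTypeInfo avroTypeInfo = new GenericRecordAvroTypeInfo(parquetSinkSchema);
DataStream<GenericRecord> recordStream = flinkTableEnv.toAppendStream(test, avroTypeInfo);
recordStream.addSink(parquetSink).setParallelism(1);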




Re: Flink 1.10: problem writing data with ParquetAvroWriters in schema mode

Shuai Xia
Hi, I gave it a try: the pure-DataStream approach works; for a concrete example see `flink-formats\flink-parquet\src\test\java\org\apache\flink\formats\parquet\avro\ParquetStreamingFileSinkITCase`.
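
For illustration, here is a minimal self-contained sketch of that pure-DataStream path, loosely modeled on the test above; the schema mirrors the one from the original post, while the output path and sample values are placeholders:

import java.util.Collections;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

public class PureDataStreamParquetSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // OnCheckpointRollingPolicy only rolls part files on checkpoints,
        // so checkpointing must be enabled for files to be finalized.
        env.enableCheckpointing(1000);

        Schema schema = SchemaBuilder.record("pi").namespace("flink.parquet")
                .fields().requiredString("id").requiredString("time").endRecord();

        GenericRecord record = new GenericData.Record(schema);
        record.put("id", "1");
        record.put("time", "2020-06-28 17:38:00");

        // The Avro type info keeps the elements typed as GenericRecords end to end.
        env.fromCollection(Collections.singletonList(record),
                        new GenericRecordAvroTypeInfo(schema))
                .addSink(StreamingFileSink
                        .forBulkFormat(new Path("/tmp/parquet-out"),
                                ParquetAvroWriters.forGenericRecord(schema))
                        .withRollingPolicy(OnCheckpointRollingPolicy.build())
                        .build());

        env.execute("PureDataStreamParquetSketch");
    }
}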

For the Table-to-DataStream route, I first convert the Table into a DataStream[Row] and then map it into a DataStream[GenericRecord]:
dataStream.map(x => {
  ...
  val fields = new util.ArrayList[Schema.Field]
  fields.add(new Schema.Field("platform", create(org.apache.avro.Schema.Type.STRING), "platform", null))
  fields.add(new Schema.Field("event", create(org.apache.avro.Schema.Type.STRING), "event", null))
  fields.add(new Schema.Field("dt", create(org.apache.avro.Schema.Type.STRING), "dt", null))
  val parquetSinkSchema: Schema = createRecord("pi", "flinkParquetSink",
    "flink.parquet", true, fields)
  val record = new GenericData.Record(parquetSinkSchema).asInstanceOf[GenericRecord]
  record.put("platform", x.get(0))
  record.put("event", x.get(1))
  record.put("dt", x.get(2))
  record
})
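
In Java, the same Row-to-GenericRecord conversion could look like the sketch below; `rowStream` and `schemaJson` are hypothetical names. Parsing the schema once in open() avoids rebuilding it for every record, and the explicit returns(...) keeps the stream typed with GenericRecordAvroTypeInfo instead of falling back to a generic serializer:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;

// Hypothetical helper: the schema travels as a JSON string so the function serializes cleanly.
public class RowToGenericRecord extends RichMapFunction<Row, GenericRecord> {
    private final String schemaJson;
    private transient Schema schema;

    public RowToGenericRecord(String schemaJson) {
        this.schemaJson = schemaJson;
    }

    @Override
    public void open(Configuration parameters) {
        // Parse once per task instead of once per record.
        schema = new Schema.Parser().parse(schemaJson);
    }

    @Override
    public GenericRecord map(Row row) {
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", row.getField(0));
        record.put("time", row.getField(1));
        return record;
    }
}

// Usage (rowStream is the DataStream<Row> obtained from toAppendStream):
DataStream<GenericRecord> records = rowStream
        .map(new RowToGenericRecord(schemaJson))
        .returns(new GenericRecordAvroTypeInfo(new Schema.Parser().parse(schemaJson)));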



------------------------------------------------------------------
From: yingbo yang <[hidden email]>
Sent: Monday, June 29, 2020, 10:04
To: Shuai Xia <[hidden email]>
Cc: user-zh <[hidden email]>
Subject: Re: Flink 1.10: problem writing data with ParquetAvroWriters in schema mode

Hi,
GenericRecordAvroTypeInfo can be used, but it only works when the table has a single field; otherwise an exception is thrown.
Code:
ArrayList<org.apache.avro.Schema.Field> fields = new ArrayList<org.apache.avro.Schema.Field>();
fields.add(new org.apache.avro.Schema.Field("id", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id", JsonProperties.NULL_VALUE));
fields.add(new org.apache.avro.Schema.Field("time", org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time", JsonProperties.NULL_VALUE));
org.apache.avro.Schema parquetSinkSchema = org.apache.avro.Schema.createRecord("pi", "flinkParquetSink", "flink.parquet", true, fields);
String fileSinkPath = "./xxx.text/rs6/";


GenericRecordAvroTypeInfo genericRecordAvroTypeInfo = new GenericRecordAvroTypeInfo(parquetSinkSchema);
DataStream testDataStream1 = flinkTableEnv.toAppendStream(test, genericRecordAvroTypeInfo);

testDataStream1.print().setParallelism(1);


StreamingFileSink<GenericRecord> parquetSink = StreamingFileSink.
        forBulkFormat(new Path(fileSinkPath),
                ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
        .withRollingPolicy(OnCheckpointRollingPolicy.build())
        .build();
testDataStream1.addSink(parquetSink).setParallelism(1);
flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");

Exception:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/yyb/Software/localRepository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/yyb/Software/localRepository/org/apache/logging/log4j/log4j-slf4j-impl/2.6.2/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
root
 |-- id: STRING
 |-- time: STRING

09:40:35,872 INFO  org.apache.flink.api.java.typeutils.TypeExtractor             - class org.apache.flink.types.Row does not contain a getter for field fields
09:40:35,874 INFO  org.apache.flink.api.java.typeutils.TypeExtractor             - class org.apache.flink.types.Row does not contain a setter for field fields
09:40:35,874 INFO  org.apache.flink.api.java.typeutils.TypeExtractor             - Class class org.apache.flink.types.Row cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance.
09:40:36,191 INFO  org.apache.flink.api.java.typeutils.TypeExtractor             - class org.apache.flink.types.Row does not contain a getter for field fields
09:40:36,191 INFO  org.apache.flink.api.java.typeutils.TypeExtractor             - class org.apache.flink.types.Row does not contain a setter for field fields
09:40:36,191 INFO  org.apache.flink.api.java.typeutils.TypeExtractor             - Class class org.apache.flink.types.Row cannot be used as a POJO type because not all fields are valid POJO fields, and must be processed as GenericType. Please read the Flink documentation on "Data Types & Serialization" for details of the effect on performance.
Exception in thread "main" org.apache.flink.table.api.TableException: Arity [2] of result [[Lorg.apache.flink.api.common.typeinfo.TypeInformation;@2149594a] does not match the number[1] of requested type [GenericRecord("{"type":"error","name":"pi","namespace":"flink.parquet","doc":"flinkParquetSink","fields":[{"name":"id","type":"string","doc":"id","default":null},{"name":"time","type":"string","doc":"time","default":null}]}")].
 at org.apache.flink.table.planner.Conversions$.generateRowConverterFunction(Conversions.scala:66)
 at org.apache.flink.table.planner.DataStreamConversions$.getConversionMapper(DataStreamConversions.scala:135)
 at org.apache.flink.table.planner.DataStreamConversions$.convert(DataStreamConversions.scala:91)
 at org.apache.flink.table.planner.StreamPlanner.translateOptimized(StreamPlanner.scala:413)
 at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:402)
 at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:180)
 at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
 at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.Iterator$class.foreach(Iterator.scala:893)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
 at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
 at scala.collection.AbstractTraversable.map(Traversable.scala:104)
 at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
 at org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.java:351)
 at org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.java:259)
 at com.yyb.flink10.table.blink.stream.FileSystem.ReadFromKafkaConnectorWriteToLocalParquetFileJava.main(ReadFromKafkaConnectorWriteToLocalParquetFileJava.java:96)

Process finished with exit code 1
