public class GroupReadSupport extends ReadSupport<Group>
Nested classes/interfaces inherited from class ReadSupport: ReadSupport.ReadContext

Fields inherited from class ReadSupport: PARQUET_READ_SCHEMA

| Constructor and Description |
|---|
| GroupReadSupport() |
| Modifier and Type | Method and Description |
|---|---|
| ReadSupport.ReadContext | init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema)<br>Called in InputFormat.getSplits(org.apache.hadoop.mapreduce.JobContext) in the front end |
| RecordMaterializer<Group> | prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext)<br>Called in RecordReader.initialize(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) in the back end; the returned RecordMaterializer will materialize the records and add them to the destination |
Methods inherited from class ReadSupport: getSchemaForRead, getSchemaForRead, init

public ReadSupport.ReadContext init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema)

Called in InputFormat.getSplits(org.apache.hadoop.mapreduce.JobContext) in the front end.

Overrides: init in class ReadSupport<Group>

Parameters:
configuration - the job configuration
keyValueMetaData - the app specific metadata from the file
fileSchema - the schema of the file

public RecordMaterializer<Group> prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext)

Called in RecordReader.initialize(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) in the back end. The returned RecordMaterializer will materialize the records and add them to the destination.

Specified by: prepareForRead in class ReadSupport<Group>

Parameters:
configuration - the job configuration
keyValueMetaData - the app specific metadata from the file
fileSchema - the schema of the file
readContext - returned by the init method

Copyright © 2019 The Apache Software Foundation. All rights reserved.
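In typical use you do not call init or prepareForRead yourself; a reader such as ParquetReader invokes them for you. The following is a minimal sketch of reading a file's records as Group objects via GroupReadSupport, assuming the parquet-hadoop and parquet-column jars (plus Hadoop client libraries) are on the classpath and that a Parquet file path is supplied as the first program argument.

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

public class GroupReadExample {
    public static void main(String[] args) throws Exception {
        // Path to an existing Parquet file (program argument, not hard-coded)
        Path path = new Path(args[0]);

        // ParquetReader drives the lifecycle described above: it calls
        // init(...) on the front end and prepareForRead(...) on the back end,
        // then uses the returned RecordMaterializer to produce Group records.
        try (ParquetReader<Group> reader =
                 ParquetReader.builder(new GroupReadSupport(), path).build()) {
            Group group;
            while ((group = reader.read()) != null) {
                // Each Group is one materialized record from the file
                System.out.println(group);
            }
        }
    }
}
```

Because GroupReadSupport honors the PARQUET_READ_SCHEMA configuration key inherited from ReadSupport, a projection schema can be set on the job configuration before building the reader to read only a subset of columns.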