Package org.apache.parquet.hadoop
Class ParquetRecordWriter<T>

java.lang.Object
  org.apache.hadoop.mapreduce.RecordWriter<Void,T>
    org.apache.parquet.hadoop.ParquetRecordWriter<T>

Type Parameters:
T - the type of the materialized records

public class ParquetRecordWriter<T>
extends org.apache.hadoop.mapreduce.RecordWriter<Void,T>

Writes records to a Parquet file.

See Also:
ParquetOutputFormat
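In normal use this class is not constructed directly; the MapReduce framework creates it through ParquetOutputFormat. A minimal job-configuration sketch, assuming a hypothetical WriteSupport<T> implementation named ExampleWriteSupport and an existing Configuration conf:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.parquet.hadoop.ParquetOutputFormat;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

// Configure a MapReduce job whose output is written by ParquetRecordWriter.
// ExampleWriteSupport is a placeholder for your own WriteSupport<T> subclass.
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "parquet-write");
job.setOutputFormatClass(ParquetOutputFormat.class);
ParquetOutputFormat.setWriteSupportClass(job, ExampleWriteSupport.class);
ParquetOutputFormat.setCompression(job, CompressionCodecName.SNAPPY);
ParquetOutputFormat.setBlockSize(job, 128 * 1024 * 1024); // row-group size, approximate
```

The deprecated constructors below expose the same knobs (block size, page size, compressor, dictionary settings) that this configuration path sets for you.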
Constructor Summary

Constructors:

ParquetRecordWriter(ParquetFileWriter w, WriteSupport<T> writeSupport, org.apache.parquet.schema.MessageType schema, Map<String,String> extraMetaData, int blockSize, int pageSize, CodecFactory.BytesCompressor compressor, int dictionaryPageSize, boolean enableDictionary, boolean validating, org.apache.parquet.column.ParquetProperties.WriterVersion writerVersion)
Deprecated.

ParquetRecordWriter(ParquetFileWriter w, WriteSupport<T> writeSupport, org.apache.parquet.schema.MessageType schema, Map<String,String> extraMetaData, long blockSize, int pageSize, CodecFactory.BytesCompressor compressor, int dictionaryPageSize, boolean enableDictionary, boolean validating, org.apache.parquet.column.ParquetProperties.WriterVersion writerVersion, MemoryManager memoryManager)
Deprecated.
Method Summary

All Methods / Instance Methods / Concrete Methods:

void close(org.apache.hadoop.mapreduce.TaskAttemptContext context)
void write(Void key, T value)
Constructor Detail
ParquetRecordWriter

@Deprecated
public ParquetRecordWriter(ParquetFileWriter w,
                           WriteSupport<T> writeSupport,
                           org.apache.parquet.schema.MessageType schema,
                           Map<String,String> extraMetaData,
                           int blockSize,
                           int pageSize,
                           CodecFactory.BytesCompressor compressor,
                           int dictionaryPageSize,
                           boolean enableDictionary,
                           boolean validating,
                           org.apache.parquet.column.ParquetProperties.WriterVersion writerVersion)

Deprecated.

Parameters:
w - the file to write to
writeSupport - the class to convert incoming records
schema - the schema of the records
extraMetaData - extra metadata to write in the footer of the file
blockSize - the size of a block in the file (approximate)
pageSize - the size of a page in the file (approximate)
compressor - the compressor used to compress the pages
dictionaryPageSize - the threshold size for the dictionary
enableDictionary - whether to enable dictionary encoding
validating - whether schema validation should be turned on
writerVersion - writer compatibility version
ParquetRecordWriter

@Deprecated
public ParquetRecordWriter(ParquetFileWriter w,
                           WriteSupport<T> writeSupport,
                           org.apache.parquet.schema.MessageType schema,
                           Map<String,String> extraMetaData,
                           long blockSize,
                           int pageSize,
                           CodecFactory.BytesCompressor compressor,
                           int dictionaryPageSize,
                           boolean enableDictionary,
                           boolean validating,
                           org.apache.parquet.column.ParquetProperties.WriterVersion writerVersion,
                           MemoryManager memoryManager)

Deprecated.

Parameters:
w - the file to write to
writeSupport - the class to convert incoming records
schema - the schema of the records
extraMetaData - extra metadata to write in the footer of the file
blockSize - the size of a block in the file (approximate)
pageSize - the size of a page in the file (approximate)
compressor - the compressor used to compress the pages
dictionaryPageSize - the threshold size for the dictionary
enableDictionary - whether to enable dictionary encoding
validating - whether schema validation should be turned on
writerVersion - writer compatibility version
memoryManager - memory manager for the write
Method Detail
close

public void close(org.apache.hadoop.mapreduce.TaskAttemptContext context)
           throws IOException, InterruptedException

Specified by:
close in class org.apache.hadoop.mapreduce.RecordWriter<Void,T>

Throws:
IOException
InterruptedException
write

public void write(Void key, T value)
           throws IOException, InterruptedException

Specified by:
write in class org.apache.hadoop.mapreduce.RecordWriter<Void,T>

Throws:
IOException
InterruptedException
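Because the key type is Void, callers always pass null as the key; the record itself carries all the data. A usage sketch inside a task, assuming a hypothetical record type MyRecord, an iterable named records, and a framework-supplied TaskAttemptContext named context:

```java
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.parquet.hadoop.ParquetOutputFormat;

// The framework hands tasks a RecordWriter via the output format;
// ParquetOutputFormat returns a ParquetRecordWriter under the hood.
ParquetOutputFormat<MyRecord> outputFormat = new ParquetOutputFormat<>();
RecordWriter<Void, MyRecord> writer = outputFormat.getRecordWriter(context);
try {
    for (MyRecord r : records) {
        writer.write(null, r);    // Void key: always pass null
    }
} finally {
    writer.close(context);        // flushes buffered pages and writes the file footer
}
```

Closing the writer is what finalizes the file: Parquet buffers pages in memory per row group, so records are not durable until close(context) writes out the remaining data and the footer.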