org.apache.spark.sql.execution.datasources.binaryfile
BinaryFileFormat
Companion object BinaryFileFormat
class BinaryFileFormat extends FileFormat with DataSourceRegister
The binary file data source.
It reads binary files and converts each file into a single record that contains the raw content and metadata of the file.
Example:
// Scala
val df = spark.read.format("binaryFile")
  .load("/path/to/fileDir")

// Java
Dataset<Row> df = spark.read().format("binaryFile")
  .load("/path/to/fileDir");
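Each file becomes one row in the resulting DataFrame. A useful companion is the general data source option pathGlobFilter, which restricts which files are loaded without changing partition discovery behavior; a minimal sketch (the path and pattern below are placeholders):

// Scala: read only PNG files from the directory
val pngs = spark.read.format("binaryFile")
  .option("pathGlobFilter", "*.png")
  .load("/path/to/fileDir")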
Inheritance
- BinaryFileFormat
- DataSourceRegister
- FileFormat
- AnyRef
- Any
Instance Constructors
- new BinaryFileFormat()
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) => Iterator[InternalRow]
Returns a function that can be used to read a single file in as an Iterator of InternalRow (see the usage sketch after the parameter list).
- dataSchema
The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.
- partitionSchema
The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.
- requiredSchema
The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.
- filters
A set of filters that can optionally be used to reduce the number of rows output.
- options
A set of string -> string configuration options.
- Attributes
- protected
- Definition Classes
- BinaryFileFormat → FileFormat
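buildReader is a protected internal API and is not called by user code. As a rough, user-level illustration of the concepts it is handed (column pruning narrows requiredSchema, and pushed filters reduce rows before they are produced), the following sketch assumes a Spark version where filters on the length column are pushed down for this source; the path is a placeholder:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("binary-read-sketch").getOrCreate()

// Selecting a subset of columns narrows requiredSchema relative to dataSchema;
// the filter on length may be pushed into the reader so that file content is
// never materialized for files that fail the predicate.
val smallFiles = spark.read.format("binaryFile")
  .load("/path/to/fileDir")
  .filter(col("length") < 1024 * 1024)
  .select("path", "length", "modificationTime")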
- def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) => Iterator[InternalRow]
Exactly the same as buildReader except that the reader function returned by this method appends partition values to InternalRows produced by the reader function buildReader returns.
- Definition Classes
- FileFormat
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def inferSchema(sparkSession: SparkSession, options: Map[String, String], files: Seq[FileStatus]): Option[StructType]
When possible, this method should return the schema of the given files. When the format does not support inference, or no valid files are given, it should return None; in these cases Spark will require that the user specify the schema manually.
- Definition Classes
- BinaryFileFormat → FileFormat
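For BinaryFileFormat the schema is not derived from file contents; every read produces the same four columns. A quick check (exact nullability output may vary slightly by Spark version; the path is a placeholder):

// Scala
spark.read.format("binaryFile").load("/path/to/fileDir").printSchema()
// root
//  |-- path: string (nullable = true)
//  |-- modificationTime: timestamp (nullable = true)
//  |-- length: long (nullable = true)
//  |-- content: binary (nullable = true)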
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean
Returns whether a file with path could be split or not.
- Definition Classes
- BinaryFileFormat → FileFormat
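Since each binary file maps to exactly one record, splitting a file across tasks would break the one-file-one-row contract, so this override presumably just disables splitting. A minimal sketch of such an override inside a FileFormat subclass (not the actual Spark source):

import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

// A binary file is read whole into a single row, so it is never splitable.
override def isSplitable(
    sparkSession: SparkSession,
    options: Map[String, String],
    path: Path): Boolean = false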
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory
Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here; for example, a user-defined output committer can be configured by setting the output committer class in the conf of spark.sql.sources.outputCommitterClass.
- Definition Classes
- BinaryFileFormat → FileFormat
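The binary file data source is read-only, so write attempts are expected to fail at job preparation time. A minimal sketch, assuming the failure surfaces as an UnsupportedOperationException (the exact exception type and message may vary by Spark version; paths are placeholders):

// Scala
val df = spark.read.format("binaryFile").load("/path/to/fileDir")
try {
  df.write.format("binaryFile").save("/path/to/out")
} catch {
  case e: UnsupportedOperationException =>
    println(s"Write rejected: ${e.getMessage}")
}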
- def shortName(): String
The string that represents the format that this data source provider uses. This is overridden by children to provide a nice alias for the data source. For example:
override def shortName(): String = "parquet"
- Definition Classes
- BinaryFileFormat → DataSourceRegister
- Since
1.5.0
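The alias is discovered through Java's ServiceLoader: the implementing class must be listed in a provider-configuration resource for org.apache.spark.sql.sources.DataSourceRegister. A minimal sketch for a hypothetical custom format (class name and alias are invented):

import org.apache.spark.sql.execution.datasources.FileFormat
import org.apache.spark.sql.sources.DataSourceRegister

// Hypothetical format; the alias lets users call
// spark.read.format("myformat") instead of the fully qualified class name.
abstract class MyFileFormat extends FileFormat with DataSourceRegister {
  override def shortName(): String = "myformat"
}

// Registration: a resource file named
//   META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
// containing the single line:
//   com.example.MyFileFormat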
- def supportBatch(sparkSession: SparkSession, dataSchema: StructType): Boolean
Returns whether this format supports returning columnar batch or not. If columnar batch output is requested, users shall supply FileFormat.OPTION_RETURNING_BATCH -> true in relation options when calling buildReaderWithPartitionValues. This should only be passed as true if it can actually be supported. For ParquetFileFormat and OrcFileFormat, passing this option is required.
TODO: we should just have different traits for the different formats.
- Definition Classes
- FileFormat
- def supportDataType(dataType: DataType): Boolean
Returns whether this format supports the given DataType in read/write path. By default all data types are supported.
- Definition Classes
- FileFormat
- def supportFieldName(name: String): Boolean
Returns whether this format supports the given field name in read/write path. By default all field names are supported.
- Definition Classes
- FileFormat
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- def vectorTypes(requiredSchema: StructType, partitionSchema: StructType, sqlConf: SQLConf): Option[Seq[String]]
Returns concrete column vector class names for each column to be used in a columnar batch if this format supports returning columnar batch.
- Definition Classes
- FileFormat
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()