FileSystem SQL Connector

    The file system connector itself is included in Flink and does not require an additional dependency. A corresponding format needs to be specified for reading and writing rows from and to a file system.

    The file system connector allows for reading and writing from a local or distributed filesystem. A filesystem table can be defined as:
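    A minimal sketch of such a definition follows; the table name, columns, path, and choice of csv format are illustrative placeholders, not requirements:

    ```sql
    CREATE TABLE MyUserTable (
      column_name1 INT,
      column_name2 STRING,
      part_name1 INT,
      part_name2 STRING
    ) PARTITIONED BY (part_name1, part_name2) WITH (
      'connector' = 'filesystem',           -- required: use the file system connector
      'path' = 'file:///path/to/whatever',  -- required: path to a directory
      'format' = 'csv'                      -- required: the connector needs a format
    );
    ```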

    Flink’s file system partition support uses the standard Hive format. However, it does not require partitions to be pre-registered with a table catalog. Partitions are discovered and inferred from the directory structure. For example, a table partitioned over the directories below would be inferred to contain datetime and hour partitions.

    path
    └── datetime=2019-08-25
        └── hour=11
            ├── part-0.parquet
            ├── part-1.parquet
        └── hour=12
            ├── part-0.parquet
    └── datetime=2019-08-26
        └── hour=6
            ├── part-0.parquet

    The file system table supports both partition inserting and overwrite inserting. See INSERT Statement. When you insert overwrite to a partitioned table, only the corresponding partition will be overwritten, not the entire table.
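    For illustration, an overwrite of a single partition might look like the following sketch (the table and column names are hypothetical); only the named partition is replaced, while all other partitions are untouched:

    ```sql
    -- Replace the contents of partition (dt='2019-08-25', hour='11') only.
    INSERT OVERWRITE myfstable PARTITION (dt='2019-08-25', `hour`='11')
    SELECT user_id, order_amount FROM orders;
    ```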

    File Formats

    The file system connector supports multiple formats:

    • CSV: RFC-4180. Uncompressed.
    • JSON: Note that the JSON format for the file system connector is not a typical JSON file but uncompressed newline-delimited JSON.
    • Avro: Apache Avro. Supports compression by configuring avro.codec.
    • Parquet: Apache Parquet. Compatible with Hive.
    • Orc: Apache Orc. Compatible with Hive.
    • Debezium-JSON: debezium-json.
    • Canal-JSON: canal-json.
    • Raw: raw format.

    The file system connector can be used to read single files or entire directories into a single table.

    When using a directory as the source path, there is no defined order of ingestion for the files inside the directory.

    Streaming Sink

    You can write SQL directly to insert stream data into a non-partitioned table. If it is a partitioned table, you can configure partition-related operations; see the partition commit sections below for details.

    Data within the partition directories is split into part files. Each partition will contain at least one part file for each subtask of the sink that has received data for that partition. The in-progress part file is closed and an additional part file is created according to the configurable rolling policy. The policy rolls part files based on size and on a timeout that specifies the maximum duration for which a file can be open.

    NOTE: For bulk formats (parquet, orc, avro), the rolling policy in combination with the checkpoint interval (pending files become finished on the next checkpoint) controls the size and number of these part files.

    NOTE: For row formats (csv, json), you can set the parameter sink.rolling-policy.file-size or sink.rolling-policy.rollover-interval in the connector properties together with the parameter execution.checkpointing.interval in flink-conf.yaml if you don’t want to wait a long period before the data becomes visible in the file system. For other formats (avro, orc), you can just set the parameter execution.checkpointing.interval in flink-conf.yaml.
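    As a sketch, the rolling options from the note above could be set in a table’s WITH clause; the table name, path, and the concrete values are illustrative:

    ```sql
    CREATE TABLE csv_sink (
      user_id STRING,
      order_amount DOUBLE
    ) WITH (
      'connector' = 'filesystem',
      'path' = 'file:///tmp/csv_sink',                     -- illustrative path
      'format' = 'csv',
      'sink.rolling-policy.file-size' = '128MB',           -- roll when a part file reaches 128 MB
      'sink.rolling-policy.rollover-interval' = '15 min'   -- roll at the latest after 15 minutes
    );
    ```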

    The file sink supports file compactions, which allows applications to have smaller checkpoint intervals without generating a large number of files.

    • auto-compaction (Boolean, default: false): Whether to enable automatic compaction in the streaming sink. The data is first written to temporary files; after a checkpoint completes, the temporary files generated by that checkpoint are compacted. Temporary files are invisible before compaction.
    • compaction.file-size (MemorySize, default: none): The compaction target file size; by default, the rolling file size.

    If enabled, file compaction will merge multiple small files into larger files based on the target file size. When running file compaction in production, please be aware that:

    • Only files within a single checkpoint are compacted; that is, at least as many files are generated as there are checkpoints.
    • Files are invisible before merging, so the latency until a file becomes visible may be: checkpoint interval + compaction time.
    • If the compaction takes too long, it will backpressure the job and lengthen the checkpoint duration.
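    Enabling compaction in a table definition might look like the following sketch; the table name, path, and target size are illustrative:

    ```sql
    CREATE TABLE compacted_sink (
      user_id STRING,
      order_amount DOUBLE
    ) WITH (
      'connector' = 'filesystem',
      'path' = 'file:///tmp/compacted_sink',  -- illustrative path
      'format' = 'parquet',
      'auto-compaction' = 'true',             -- merge small files after each checkpoint
      'compaction.file-size' = '128MB'        -- target size for compacted files
    );
    ```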

    After writing a partition, it is often necessary to notify downstream applications, for example, to add the partition to a Hive metastore or to write a _SUCCESS file in the directory. The file system sink contains a partition commit feature that allows configuring custom policies. Commit actions are based on a combination of triggers and policies.

    • Trigger: The timing of the commit of the partition can be determined by the watermark with the time extracted from the partition, or by processing time.

    NOTE: Partition Commit only works in dynamic partition inserting.

    Partition commit trigger

    To define when to commit a partition, provide a partition commit trigger:

    There are two types of trigger:

    • The first is the partition processing time trigger. It requires neither partition time extraction nor watermark generation. It commits a partition based on the partition creation time and the current system time. This trigger is more universal but less precise; for example, data delay or failover can lead to premature partition commits.
    • The second trigger commits a partition according to the time extracted from partition values and the watermark. It requires that your job generates watermarks and that the table is partitioned by time, such as hourly or daily partitions.

    If you want to let downstream see the partition as soon as possible, no matter whether its data is complete or not:

    • ‘sink.partition-commit.trigger’=‘process-time’ (Default value)
    • ‘sink.partition-commit.delay’=‘0s’ (Default value) As soon as there is data in the partition, it will immediately be committed. Note: the partition may be committed multiple times.

    If you want to let downstream see the partition only when its data is complete, your job has watermark generation, and you can extract the time from partition values:

    • ‘sink.partition-commit.trigger’=‘partition-time’
    • ‘sink.partition-commit.delay’=‘1h’ (use ‘1h’ if your partitions are hourly; the value depends on your partition granularity) This is the most accurate way to commit partitions, and it tries to ensure that committed partitions are as complete as possible.

    If you want to let downstream see the partition only when its data is complete, but there is no watermark, or the time cannot be extracted from partition values:

    • ‘sink.partition-commit.trigger’=‘process-time’ (Default value)
    • ‘sink.partition-commit.delay’=‘1h’ (use ‘1h’ if your partitions are hourly; the value depends on your partition granularity) This tries to commit partitions accurately, but data delay or failover may lead to premature partition commits.
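    As an illustration of this last scenario, the two options could appear together in a table definition; the table name and path are hypothetical, and the other options are reduced to the minimum:

    ```sql
    CREATE TABLE delayed_sink (
      user_id STRING,
      dt STRING
    ) PARTITIONED BY (dt) WITH (
      'connector' = 'filesystem',
      'path' = 'file:///tmp/delayed_sink',   -- illustrative path
      'format' = 'parquet',
      'sink.partition-commit.trigger' = 'process-time',
      'sink.partition-commit.delay' = '1h',  -- commit roughly one hour after partition creation
      'sink.partition-commit.policy.kind' = 'success-file'
    );
    ```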

    Late data processing: when a record is supposed to be written into a partition that has already been committed, the record is still written into that partition, and the commit of that partition is triggered again.

    Partition Time Extractor

    Time extractors define how to extract time from partition values.

    • partition.time-extractor.kind (String, default: default): The time extractor used to extract time from partition values. Supports default and custom. For default, a timestamp pattern can be configured. For custom, an extractor class must be configured.
    • partition.time-extractor.class (String, default: none): The extractor class implementing the PartitionTimeExtractor interface.
    • partition.time-extractor.timestamp-pattern (String, default: none): The ‘default’ construction allows users to build a legal timestamp pattern from partition fields. By default it supports ‘yyyy-MM-dd hh:mm:ss’ from the first field. If the timestamp should be extracted from a single partition field ‘dt’, configure ‘$dt’. If it should be extracted from multiple partition fields, say ‘year’, ‘month’, ‘day’ and ‘hour’, configure ‘$year-$month-$day $hour:00:00’. If it should be extracted from the two partition fields ‘dt’ and ‘hour’, configure ‘$dt $hour:00:00’.

    The default extractor is based on a timestamp pattern composed of your partition fields. You can also specify an implementation for fully custom partition extraction based on the PartitionTimeExtractor interface.
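    To make the options concrete, here is a sketch of the two variants as WITH-clause fragments: the default extractor built from the partition fields ‘dt’ and ‘hour’, and, alternatively, a custom extractor (com.example.HourPartTimeExtractor is a hypothetical class name):

    ```sql
    -- default extractor: build the timestamp from partition fields 'dt' and 'hour'
    'partition.time-extractor.kind' = 'default',
    'partition.time-extractor.timestamp-pattern' = '$dt $hour:00:00'

    -- custom extractor: point at a class implementing PartitionTimeExtractor
    'partition.time-extractor.kind' = 'custom',
    'partition.time-extractor.class' = 'com.example.HourPartTimeExtractor'
    ```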

    Partition Commit Policy

    The partition commit policy defines what action is taken when partitions are committed.

    • The first is the metastore policy. Only Hive tables support it, since a plain file system manages partitions through its directory structure alone.
    • The second is the success-file policy, which writes an empty file into the directory corresponding to the partition.
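    For a Hive table, both policies can be combined in a comma-separated list; as a sketch:

    ```sql
    'sink.partition-commit.policy.kind' = 'metastore,success-file'
    ```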

    You can also extend commit policies with your own implementation. A custom commit policy implementation might look like this:

    public class AnalysisCommitPolicy implements PartitionCommitPolicy {
        private HiveShell hiveShell;

        @Override
        public void commit(Context context) throws Exception {
            if (hiveShell == null) {
                hiveShell = createHiveShell(context.catalogName());
            }

            hiveShell.execute(String.format(
                "ALTER TABLE %s ADD IF NOT EXISTS PARTITION (%s = '%s') location '%s'",
                context.tableName(),
                context.partitionKeys().get(0),
                context.partitionValues().get(0),
                context.partitionPath()));
            hiveShell.execute(String.format(
                "ANALYZE TABLE %s PARTITION (%s = '%s') COMPUTE STATISTICS FOR COLUMNS",
                context.tableName(),
                context.partitionKeys().get(0),
                context.partitionValues().get(0)));
        }
    }

    The parallelism of writing files into an external file system (including Hive) can be configured by the corresponding table option, in both streaming mode and batch mode. By default, the parallelism is the same as that of the last upstream chained operator. When a parallelism different from the upstream parallelism is configured, the file writing operator and the file compacting operator (if used) use the configured parallelism.

    • sink.parallelism (Integer, default: none): Parallelism of writing files into the external file system. The value must be greater than zero, otherwise an exception is thrown.

    NOTE: Currently, configuring sink parallelism is supported if and only if the changelog mode of the upstream is INSERT-ONLY. Otherwise, an exception will be thrown.
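    For illustration, the option could be set as a WITH-clause fragment; the value 4 is arbitrary:

    ```sql
    'sink.parallelism' = '4'  -- four parallel writer (and compactor, if enabled) subtasks
    ```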

    Full Example

    The example below shows how the file system connector can be used to write a streaming query that moves data from Kafka into a file system, and a batch query that reads that data back out.

    CREATE TABLE kafka_table (
      user_id STRING,
      order_amount DOUBLE,
      ts BIGINT, -- time in epoch milliseconds
      ts_ltz AS TO_TIMESTAMP_LTZ(ts, 3),
      WATERMARK FOR ts_ltz AS ts_ltz - INTERVAL '5' SECOND -- Define watermark on TIMESTAMP_LTZ column
    ) WITH (...);

    CREATE TABLE fs_table (
      user_id STRING,
      order_amount DOUBLE,
      dt STRING,
      `hour` STRING
    ) PARTITIONED BY (dt, `hour`) WITH (
      'connector'='filesystem',
      'path'='...',
      'format'='parquet',
      'partition.time-extractor.timestamp-pattern'='$dt $hour:00:00',
      'sink.partition-commit.delay'='1 h',
      'sink.partition-commit.trigger'='partition-time',
      'sink.partition-commit.watermark-time-zone'='Asia/Shanghai', -- Assume user configured time zone is 'Asia/Shanghai'
      'sink.partition-commit.policy.kind'='success-file'
    );

    -- streaming sql, insert into file system table
    INSERT INTO fs_table
    SELECT
      user_id,
      order_amount,
      DATE_FORMAT(ts_ltz, 'yyyy-MM-dd'),
      DATE_FORMAT(ts_ltz, 'HH')
    FROM kafka_table;

    -- batch sql, select with partition pruning
    SELECT * FROM fs_table WHERE dt='2020-05-20' and `hour`='12';