Hadoop formats

    Add the following dependency to your pom.xml to use Hadoop:
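    The wrapper classes ship in the flink-hadoop-compatibility module; the following is a sketch of the dependency, where the Scala suffix and the version are placeholders to match your setup:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-hadoop-compatibility_2.12</artifactId>
        <version>${flink.version}</version>
    </dependency>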

    If you want to run your Flink application locally (e.g. from your IDE), you also need to add a hadoop-client dependency such as:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.8.3</version>
        <scope>provided</scope>
    </dependency>

    To use Hadoop InputFormats with Flink, the format must first be wrapped using either readHadoopFile or createHadoopInput of the HadoopInputs utility class. The former is used for input formats derived from FileInputFormat, while the latter has to be used for general-purpose input formats (see the sketch after the first example below). The resulting InputFormat can be used to create a data source by using StreamExecutionEnvironment#createInput.

    The following example shows how to use Hadoop’s TextInputFormat.

    Scala

    import org.apache.flink.hadoopcompatibility.scala.HadoopInputs
    import org.apache.flink.streaming.api.scala._
    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapred.TextInputFormat

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Records arrive as (byte offset, line of text) pairs.
    val input: DataStream[(LongWritable, Text)] =
      env.createInput(HadoopInputs.readHadoopFile(
        new TextInputFormat, classOf[LongWritable], classOf[Text], textPath))
    // Do something with the data.
    [...]
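    For an input format that is not derived from FileInputFormat, use createHadoopInput and configure the format through a JobConf instead of a path. A minimal sketch, where MyCustomInputFormat is a hypothetical stand-in for any org.apache.hadoop.mapred.InputFormat that is not file based:

    import org.apache.flink.hadoopcompatibility.scala.HadoopInputs
    import org.apache.flink.streaming.api.scala._
    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapred.JobConf

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Format-specific settings (connection info, table names, ...) go into the JobConf.
    val jobConf = new JobConf()
    val input: DataStream[(LongWritable, Text)] =
      env.createInput(HadoopInputs.createHadoopInput(
        // MyCustomInputFormat is a hypothetical InputFormat[LongWritable, Text].
        new MyCustomInputFormat, classOf[LongWritable], classOf[Text], jobConf))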

    The following example shows how to use Hadoop’s TextOutputFormat.

    Scala

    import org.apache.flink.api.scala.hadoop.mapred.HadoopOutputFormat
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapred.{FileOutputFormat, JobConf, TextOutputFormat}

    val hadoopResult: DataStream[(Text, IntWritable)] = [...]
    val hadoopOF = new HadoopOutputFormat[Text, IntWritable](
      new TextOutputFormat[Text, IntWritable],
      new JobConf)
    // Separate key and value with a space in the output files.
    hadoopOF.getJobConf.set("mapred.textoutputformat.separator", " ")
    FileOutputFormat.setOutputPath(hadoopOF.getJobConf, new Path(resultPath))
    hadoopResult.writeUsingOutputFormat(hadoopOF)
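    As with any DataStream program, the sink only writes once StreamExecutionEnvironment#execute is called.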