Data Prepper

    Data Prepper lets users build custom pipelines to improve the operational view of applications. Two common uses for Data Prepper are trace and log analytics. Trace analytics can help you visualize the flow of events and identify performance problems, while log analytics can improve searching and analysis and provide insights into your application.

    Data Prepper includes one or more pipelines that collect and filter data based on the components set within the pipeline. Each component is pluggable, enabling you to use your own custom implementation of each component. These components include the following:

    • One source
    • (Optional) One buffer
    • (Optional) One or more processors
    • One or more sinks

    A single instance of Data Prepper can have one or more pipelines.

    Source

    Source is the input component that defines the mechanism through which a Data Prepper pipeline consumes events. A pipeline can have only one source. The source can consume events either by receiving them over HTTP or HTTPS or by reading from external endpoints such as the OTel Collector (for traces and metrics) and Amazon Simple Storage Service (Amazon S3). Sources have their own configuration options based on the format of the events (such as string, JSON, Amazon CloudWatch logs, or OpenTelemetry trace). The source component consumes events and writes them to the buffer component.
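    For example, a pipeline that receives spans from an OTel Collector might declare its source as in the following sketch. The otel_trace_source plugin and the port and ssl settings shown here are illustrative assumptions; check the plugin reference for your Data Prepper version.

    entry-pipeline:
      source:
        otel_trace_source:
          port: 21890   # port on which the pipeline listens for OTLP trace data
          ssl: false    # disable TLS only for local testing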

    Buffer

    The buffer component acts as the layer between the source and the sink. The buffer can be either in-memory or disk based. The default buffer uses an in-memory queue called bounded_blocking that is bounded by the number of events. If the buffer component is not explicitly mentioned in the pipeline configuration, Data Prepper uses the default bounded_blocking.
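    To override the defaults, you can declare the buffer explicitly, as in the following sketch. It assumes the bounded_blocking buffer and its buffer_size and batch_size options, which also appear in the full example later in this section.

      buffer:
        bounded_blocking:
          buffer_size: 512   # max number of events the buffer will hold
          batch_size: 128    # max number of events drained per read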

    Sink

    Sink is the output component that defines the destination(s) to which a Data Prepper pipeline publishes events. A sink destination can be a service, such as OpenSearch or Amazon S3, or another Data Prepper pipeline. When using another Data Prepper pipeline as the sink, you can chain multiple pipelines together based on the needs of the data. Each sink has its own configuration options based on the destination type.
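    As an illustration, the following sketch publishes events to an OpenSearch cluster and also hands them to another pipeline. The opensearch sink options (hosts, username, password, index) and the downstream pipeline name are assumptions for illustration; verify them against the sink documentation for your Data Prepper version.

      sink:
        - opensearch:                           # publish to an OpenSearch cluster
            hosts: ["https://localhost:9200"]
            username: admin
            password: admin
            index: application-logs
        - pipeline:                             # also hand events to another Data Prepper pipeline
            name: "downstream-pipeline"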

    Processor

    The processor component is an intermediary processing unit that can filter, transform, and enrich events into a desired format before they are published to the sink. The processor is optional; if none is defined, events are published in the format defined by the source. A pipeline can have more than one processor, and processors run in the order in which they are defined in the pipeline configuration.
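    For example, a log pipeline might parse raw lines into structured fields before they reach the sink. The grok processor and its match option in this sketch are assumptions drawn from the Data Prepper plugin set; confirm the exact syntax for your version.

      processor:
        - grok:
            match:
              log: ["%{COMMONAPACHELOG}"]   # parse Apache-style access log lines into fields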

    Sample pipeline configurations

    To understand how all pipeline components function within a Data Prepper configuration, see the following examples. Each pipeline configuration uses a YAML file format.

    Minimal components

    This pipeline configuration reads from the file source and writes to another file in the same path. It uses the default options for the buffer and processor.
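    A configuration along these lines could look like the following sketch, using the file source and file sink plugins (the paths are placeholders):

    sample-pipeline:
      source:
        file:
          path: <path/to/input-file>
      sink:
        - file:
            path: <path/to/output-file>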

    All components

    The following pipeline uses a source that reads string events from the input-file. The source then pushes the data to the buffer, which is bounded at a maximum size of 1024 events. The pipeline is configured with 4 workers, each of which reads up to 256 events from the buffer every 100 milliseconds. Each worker runs the string_converter processor and writes the processor's output to the output-file.

    sample-pipeline:
      workers: 4 # Number of workers
      delay: 100 # In milliseconds, how often the workers should run
      source:
        file:
          path: <path/to/input-file>
      buffer:
        bounded_blocking:
          buffer_size: 1024 # Max number of events the buffer will accept
          batch_size: 256 # Max number of events the buffer will drain for each read
      processor:
        - string_converter:
            upper_case: true
      sink:
        - file:
            path: <path/to/output-file>