Kafka Connector Tutorial

    Installation

    This tutorial assumes familiarity with Presto and a working local Presto installation (see the Presto installation documentation). It focuses on setting up Apache Kafka and integrating it with Presto.

    Step 1: Install Apache Kafka

    Download and extract Apache Kafka.

    Note

    This tutorial was tested with Apache Kafka 0.8.1. It should work with any 0.8.x version of Apache Kafka.
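
    A download sketch, assuming the Scala 2.9.2 build of Kafka 0.8.1 from the Apache archive (adjust the URL for the build or mirror you prefer):

        $ curl -O https://archive.apache.org/dist/kafka/0.8.1/kafka_2.9.2-0.8.1.tgz
        $ tar -xzf kafka_2.9.2-0.8.1.tgz
        $ cd kafka_2.9.2-0.8.1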

    Start ZooKeeper and the Kafka server, each in its own terminal:

        $ bin/zookeeper-server-start.sh config/zookeeper.properties
        ...
        $ bin/kafka-server-start.sh config/server.properties
        [2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
        [2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
        ...

    This will start ZooKeeper on port 2181 and Kafka on port 9092.

    Step 2: Load data

    Download the tpch-kafka loader from Maven central:

        $ curl -o kafka-tpch https://repo1.maven.org/maven2/de/softwareforge/kafka_tpch_0811/1.0/kafka_tpch_0811-1.0.sh
        $ chmod 755 kafka-tpch

    Now run the kafka-tpch program to preload a number of topics with tpch data:

        $ ./kafka-tpch load --brokers localhost:9092 --prefix tpch. --tpch-type tiny
        2014-07-28T17:17:07.594-0700 INFO main com.facebook.airlift.log.Logging Logging to stderr
        2014-07-28T17:17:07.623-0700 INFO main de.softwareforge.kafka.LoadCommand Processing tables: [customer, orders, lineitem, part, partsupp, supplier, nation, region]
        2014-07-28T17:17:07.981-0700 INFO pool-1-thread-1 de.softwareforge.kafka.LoadCommand Loading table 'customer' into topic 'tpch.customer'...
        2014-07-28T17:17:07.981-0700 INFO pool-1-thread-2 de.softwareforge.kafka.LoadCommand Loading table 'orders' into topic 'tpch.orders'...
        2014-07-28T17:17:07.981-0700 INFO pool-1-thread-3 de.softwareforge.kafka.LoadCommand Loading table 'lineitem' into topic 'tpch.lineitem'...
        2014-07-28T17:17:07.982-0700 INFO pool-1-thread-4 de.softwareforge.kafka.LoadCommand Loading table 'part' into topic 'tpch.part'...
        2014-07-28T17:17:07.982-0700 INFO pool-1-thread-5 de.softwareforge.kafka.LoadCommand Loading table 'partsupp' into topic 'tpch.partsupp'...
        2014-07-28T17:17:07.982-0700 INFO pool-1-thread-6 de.softwareforge.kafka.LoadCommand Loading table 'supplier' into topic 'tpch.supplier'...
        2014-07-28T17:17:07.982-0700 INFO pool-1-thread-7 de.softwareforge.kafka.LoadCommand Loading table 'nation' into topic 'tpch.nation'...
        2014-07-28T17:17:07.982-0700 INFO pool-1-thread-8 de.softwareforge.kafka.LoadCommand Loading table 'region' into topic 'tpch.region'...
        2014-07-28T17:17:10.612-0700 ERROR pool-1-thread-8 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.region
        2014-07-28T17:17:10.781-0700 INFO pool-1-thread-8 de.softwareforge.kafka.LoadCommand Generated 5 rows for table 'region'.
        2014-07-28T17:17:10.797-0700 ERROR pool-1-thread-3 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.lineitem
        2014-07-28T17:17:10.932-0700 ERROR pool-1-thread-1 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.customer
        2014-07-28T17:17:11.068-0700 ERROR pool-1-thread-2 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.orders
        2014-07-28T17:17:11.200-0700 ERROR pool-1-thread-6 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.supplier
        2014-07-28T17:17:11.319-0700 INFO pool-1-thread-6 de.softwareforge.kafka.LoadCommand Generated 100 rows for table 'supplier'.
        2014-07-28T17:17:11.333-0700 ERROR pool-1-thread-4 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.part
        2014-07-28T17:17:11.466-0700 ERROR pool-1-thread-5 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.partsupp
        2014-07-28T17:17:11.597-0700 ERROR pool-1-thread-7 kafka.producer.async.DefaultEventHandler Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: tpch.nation
        2014-07-28T17:17:11.706-0700 INFO pool-1-thread-7 de.softwareforge.kafka.LoadCommand Generated 25 rows for table 'nation'.
        2014-07-28T17:17:12.180-0700 INFO pool-1-thread-1 de.softwareforge.kafka.LoadCommand Generated 1500 rows for table 'customer'.
        2014-07-28T17:17:12.251-0700 INFO pool-1-thread-4 de.softwareforge.kafka.LoadCommand Generated 2000 rows for table 'part'.
        2014-07-28T17:17:12.905-0700 INFO pool-1-thread-2 de.softwareforge.kafka.LoadCommand Generated 15000 rows for table 'orders'.
        2014-07-28T17:17:12.919-0700 INFO pool-1-thread-5 de.softwareforge.kafka.LoadCommand Generated 8000 rows for table 'partsupp'.
        2014-07-28T17:17:13.877-0700 INFO pool-1-thread-3 de.softwareforge.kafka.LoadCommand Generated 60175 rows for table 'lineitem'.

    Kafka now has a number of topics that are preloaded with data to query.
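
    To double-check, the topics can also be listed with Kafka's own tooling; a sketch, assuming it is run from the Kafka distribution directory:

        $ bin/kafka-topics.sh --zookeeper localhost:2181 --list
        tpch.customer
        tpch.lineitem
        tpch.nation
        tpch.orders
        tpch.part
        tpch.partsupp
        tpch.region
        tpch.supplier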

    Step 3: Make the Kafka topics known to Presto

    In your Presto installation, add a catalog properties file etc/catalog/kafka.properties for the Kafka connector. This file lists the Kafka nodes and topics:

        connector.name=kafka
        kafka.nodes=localhost:9092
        kafka.table-names=tpch.customer,tpch.orders,tpch.lineitem,tpch.part,tpch.partsupp,tpch.supplier,tpch.nation,tpch.region
        kafka.hide-internal-columns=false

    Now start Presto:

        $ bin/launcher start

    Start the Presto CLI:

        $ ./presto --catalog kafka --schema tpch

    List the tables to verify that things are working:
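
    A sketch of the expected result; the eight tpch topics configured in the catalog file should appear as tables:

        presto:tpch> SHOW TABLES;
          Table
        ----------
         customer
         lineitem
         nation
         orders
         part
         partsupp
         region
         supplier
        (8 rows)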

    Step 4: Basic data querying

    Kafka data is unstructured and it has no metadata to describe the format of the messages. Without further configuration, the Kafka connector can access the data and map it in raw form but there are no actual columns besides the built-in ones:

        presto:tpch> DESCRIBE customer;
        Column | Type | Extra | Comment
        -------------------+---------+-------+---------------------------------------------
        _partition_id | bigint | | Partition Id
        _partition_offset | bigint | | Offset for the message within the partition
        _segment_start | bigint | | Segment start offset
        _segment_end | bigint | | Segment end offset
        _segment_count | bigint | | Running message count per segment
        _key | varchar | | Key text
        _key_corrupt | boolean | | Key data is corrupt
        _key_length | bigint | | Total number of key bytes
        _message | varchar | | Message text
        _message_corrupt | boolean | | Message data is corrupt
        _message_length | bigint | | Total number of message bytes
        (11 rows)

        presto:tpch> SELECT count(*) FROM customer;
        _col0
        -------
        1500
        (1 row)

        presto:tpch> SELECT _message FROM customer LIMIT 5;
        _message
        -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        {"rowNumber":1,"customerKey":1,"name":"Customer#000000001","address":"IVhzIApeRb ot,c,E","nationKey":15,"phone":"25-989-741-2988","accountBalance":711.56,"marketSegment":"BUILDING","comment":"to the even, regular platelets. regular, ironic epitaphs nag e"}
        {"rowNumber":3,"customerKey":3,"name":"Customer#000000003","address":"MG9kdTD2WBHm","nationKey":1,"phone":"11-719-748-3364","accountBalance":7498.12,"marketSegment":"AUTOMOBILE","comment":" deposits eat slyly ironic, even instructions. express foxes detect slyly. blithel
        {"rowNumber":5,"customerKey":5,"name":"Customer#000000005","address":"KvpyuHCplrB84WgAiGV6sYpZq7Tj","nationKey":3,"phone":"13-750-942-6364","accountBalance":794.47,"marketSegment":"HOUSEHOLD","comment":"n accounts will have to unwind. foxes cajole accor"}
        {"rowNumber":7,"customerKey":7,"name":"Customer#000000007","address":"TcGe5gaZNgVePxU5kRrvXBfkasDTea","nationKey":18,"phone":"28-190-982-9759","accountBalance":9561.95,"marketSegment":"AUTOMOBILE","comment":"ainst the ironic, express theodolites. express, even pinto bean
        {"rowNumber":9,"customerKey":9,"name":"Customer#000000009","address":"xKiAFTjUsCuxfeleNqefumTrjS","nationKey":8,"phone":"18-338-906-3675","accountBalance":8324.07,"marketSegment":"FURNITURE","comment":"r theodolites according to the requests wake thinly excuses: pending
        (5 rows)

        presto:tpch> SELECT sum(cast(json_extract_scalar(_message, '$.accountBalance') AS double)) FROM customer LIMIT 10;
        _col0
        ------------
        6681865.59
        (1 row)

    The data from Kafka can be queried using Presto but it is not yet in actual table shape. The raw data is available through the _message and _key columns but it is not decoded into columns. As the sample data is in JSON format, the JSON Functions and Operators built into Presto can be used to slice the data.
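
    For example, individual fields can be pulled out of the raw JSON; a small sketch (json_extract_scalar returns varchar, so numeric fields need a cast):

        presto:tpch> SELECT json_extract_scalar(_message, '$.name') AS name,
                  ->        cast(json_extract_scalar(_message, '$.accountBalance') AS double) AS account_balance
                  -> FROM customer LIMIT 3;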

    Step 5: Add a topic description file

    The Kafka connector supports topic description files to turn raw data into table format. These files are located in the etc/kafka folder in the Presto installation and must end with .json. It is recommended that the file name matches the table name but this is not necessary.

    Add the following file as etc/kafka/tpch.customer.json and restart Presto:

        {
            "tableName": "customer",
            "schemaName": "tpch",
            "topicName": "tpch.customer",
            "key": {
                "dataFormat": "raw",
                "fields": [
                    {
                        "name": "kafka_key",
                        "dataFormat": "LONG",
                        "type": "BIGINT",
                        "hidden": "false"
                    }
                ]
            }
        }

    The customer table now has an additional column: kafka_key.

        presto:tpch> DESCRIBE customer;
        Column | Type | Extra | Comment
        -------------------+---------+-------+---------------------------------------------
        kafka_key | bigint | |
        _partition_id | bigint | | Partition Id
        _partition_offset | bigint | | Offset for the message within the partition
        _segment_start | bigint | | Segment start offset
        _segment_end | bigint | | Segment end offset
        _segment_count | bigint | | Running message count per segment
        _key | varchar | | Key text
        _key_corrupt | boolean | | Key data is corrupt
        _key_length | bigint | | Total number of key bytes
        _message | varchar | | Message text
        _message_corrupt | boolean | | Message data is corrupt
        _message_length | bigint | | Total number of message bytes
        (12 rows)

        presto:tpch> SELECT kafka_key FROM customer ORDER BY kafka_key LIMIT 10;
        kafka_key
        -----------
        0
        1
        2
        3
        4
        5
        6
        7
        8
        9
        (10 rows)

    The topic definition file maps the internal Kafka key (which is a raw long in eight bytes) onto a Presto BIGINT column.
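
    A quick way to see this is the _key_length internal column, which should report eight bytes for every key; for example:

        presto:tpch> SELECT DISTINCT _key_length FROM customer;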

    Step 6: Map all the values from the topic message onto columns

    Update the etc/kafka/tpch.customer.json file to add fields for the message, and restart Presto. Because the message is JSON, it uses the json data format. This is an example where different data formats are used for the key and the message.

        {
            "tableName": "customer",
            "schemaName": "tpch",
            "topicName": "tpch.customer",
            "key": {
                "dataFormat": "raw",
                "fields": [
                    {
                        "name": "kafka_key",
                        "dataFormat": "LONG",
                        "type": "BIGINT",
                        "hidden": "false"
                    }
                ]
            },
            "message": {
                "dataFormat": "json",
                "fields": [
                    {
                        "name": "row_number",
                        "mapping": "rowNumber",
                        "type": "BIGINT"
                    },
                    {
                        "name": "customer_key",
                        "mapping": "customerKey",
                        "type": "BIGINT"
                    },
                    {
                        "name": "name",
                        "mapping": "name",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "address",
                        "mapping": "address",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "nation_key",
                        "mapping": "nationKey",
                        "type": "BIGINT"
                    },
                    {
                        "name": "phone",
                        "mapping": "phone",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "account_balance",
                        "mapping": "accountBalance",
                        "type": "DOUBLE"
                    },
                    {
                        "name": "market_segment",
                        "mapping": "marketSegment",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "comment",
                        "mapping": "comment",
                        "type": "VARCHAR"
                    }
                ]
            }
        }

    Columns are now defined for all the fields in the message JSON, and the sum query from earlier can operate directly on the account_balance column:

        presto:tpch> DESCRIBE customer;
        Column | Type | Extra | Comment
        -------------------+---------+-------+---------------------------------------------
        kafka_key | bigint | |
        row_number | bigint | |
        customer_key | bigint | |
        name | varchar | |
        address | varchar | |
        nation_key | bigint | |
        phone | varchar | |
        account_balance | double | |
        market_segment | varchar | |
        comment | varchar | |
        _partition_id | bigint | | Partition Id
        _partition_offset | bigint | | Offset for the message within the partition
        _segment_start | bigint | | Segment start offset
        _segment_end | bigint | | Segment end offset
        _segment_count | bigint | | Running message count per segment
        _key | varchar | | Key text
        _key_corrupt | boolean | | Key data is corrupt
        _key_length | bigint | | Total number of key bytes
        _message | varchar | | Message text
        _message_corrupt | boolean | | Message data is corrupt
        _message_length | bigint | | Total number of message bytes
        (21 rows)

        presto:tpch> SELECT * FROM customer LIMIT 5;
        kafka_key | row_number | customer_key | name | address | nation_key | phone | account_balance | market_segment | comment
        -----------+------------+--------------+--------------------+---------------------------------------+------------+-----------------+-----------------+----------------+---------------------------------------------------------------------------------------------------------
        1 | 2 | 2 | Customer#000000002 | XSTf4,NCwDVaWNe6tEgvwfmRchLXak | 13 | 23-768-687-3665 | 121.65 | AUTOMOBILE | l accounts. blithely ironic theodolites integrate boldly: caref
        3 | 4 | 4 | Customer#000000004 | XxVSJsLAGtn | 4 | 14-128-190-5944 | 2866.83 | MACHINERY | requests. final, regular ideas sleep final accou
        5 | 6 | 6 | Customer#000000006 | sKZz0CsnMD7mp4Xd0YrBvx,LREYKUWAh yVn | 20 | 30-114-968-4951 | 7638.57 | AUTOMOBILE | tions. even deposits boost according to the slyly bold packages. final accounts cajole requests. furious
        7 | 8 | 8 | Customer#000000008 | I0B10bB0AymmC, 0PrRYBCP1yGJ8xcBPmWhl5 | 17 | 27-147-574-9335 | 6819.74 | BUILDING | among the slyly regular theodolites kindle blithely courts. carefully even theodolites haggle slyly alon
        9 | 10 | 10 | Customer#000000010 | 6LrEaV6KR6PLVcgl2ArL Q3rqzLzcT1 v2 | 5 | 15-741-346-9870 | 2753.54 | HOUSEHOLD | es regular deposits haggle. fur
        (5 rows)

        presto:tpch> SELECT sum(account_balance) FROM customer LIMIT 10;
        _col0
        ------------
        6681865.59
        (1 row)

    Now all the fields from the customer topic messages are available as Presto table columns.
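
    Since the topic now behaves like an ordinary table, regular SQL applies; for example, a hypothetical aggregation over the decoded columns (the figures depend on the generated tiny data set):

        presto:tpch> SELECT market_segment, count(*) AS customers, round(avg(account_balance), 2) AS avg_balance
                  -> FROM customer
                  -> GROUP BY market_segment
                  -> ORDER BY customers DESC;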

    Step 7: Use live data

    Set up a live Twitter feed

    • Download the twistr tool:

        $ curl -o twistr https://repo1.maven.org/maven2/de/softwareforge/twistr_kafka_0811/1.2/twistr_kafka_0811-1.2.sh
        $ chmod 755 twistr
    • Create a Twitter developer account and set up an access and consumer token.

    • Create a twistr.properties file and put the access and consumer key and secrets into it:
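
    A minimal sketch of that file; the property names below are assumptions about the twistr tool's configuration keys, so check the twistr documentation for the exact names:

        # NOTE: key names below are assumed, not verified against twistr
        twistr.access-token-key=...
        twistr.access-token-secret=...
        twistr.consumer-key=...
        twistr.consumer-secret=...
        twistr.kafka.brokers=localhost:9092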

    Create a tweets table in Presto

    Add the tweets table to the etc/catalog/kafka.properties file:

        connector.name=kafka
        kafka.nodes=localhost:9092
        kafka.table-names=tpch.customer,tpch.orders,tpch.lineitem,tpch.part,tpch.partsupp,tpch.supplier,tpch.nation,tpch.region,tweets
        kafka.hide-internal-columns=false

    Add a topic definition file for the Twitter feed as etc/kafka/tweets.json:

        {
            "tableName": "tweets",
            "topicName": "twitter_feed",
            "dataFormat": "json",
            "key": {
                "dataFormat": "raw",
                "fields": [
                    {
                        "name": "kafka_key",
                        "dataFormat": "LONG",
                        "type": "BIGINT",
                        "hidden": "false"
                    }
                ]
            },
            "message": {
                "dataFormat": "json",
                "fields": [
                    {
                        "name": "text",
                        "mapping": "text",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "user_name",
                        "mapping": "user/screen_name",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "lang",
                        "mapping": "lang",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "created_at",
                        "mapping": "created_at",
                        "type": "TIMESTAMP",
                        "dataFormat": "rfc2822"
                    },
                    {
                        "name": "favorite_count",
                        "mapping": "favorite_count",
                        "type": "BIGINT"
                    },
                    {
                        "name": "retweet_count",
                        "mapping": "retweet_count",
                        "type": "BIGINT"
                    },
                    {
                        "name": "favorited",
                        "mapping": "favorited",
                        "type": "BOOLEAN"
                    },
                    {
                        "name": "id",
                        "mapping": "id_str",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "in_reply_to_screen_name",
                        "mapping": "in_reply_to_screen_name",
                        "type": "VARCHAR"
                    },
                    {
                        "name": "place_name",
                        "mapping": "place/full_name",
                        "type": "VARCHAR"
                    }
                ]
            }
        }

    As this table does not have an explicit schema name, it will be placed into the default schema.
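
    If the CLI session is using a different default schema, the table can still be addressed by its fully qualified name, for example:

        presto> SELECT count(*) FROM kafka.default.tweets;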

    Feed live data

    Start the twistr tool:

        $ java -Dness.config.location=file:$(pwd) -Dness.config=twistr -jar ./twistr

    twistr connects to the Twitter API and feeds the “sample tweet” feed into a Kafka topic called twitter_feed.
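
    To verify that messages are arriving, Kafka's console consumer can tail the topic (a quick check; stop it with Ctrl-C):

        $ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic twitter_feed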

    Now run queries against live data:

        $ ./presto-cli --catalog kafka --schema default
        presto:default> SELECT count(*) FROM tweets;
        _col0
        -------
        4467
        (1 row)

        presto:default> SELECT count(*) FROM tweets;
        _col0
        -------
        4517
        (1 row)

        presto:default> SELECT count(*) FROM tweets;
        _col0
        -------
        4572
        (1 row)

        presto:default> SELECT kafka_key, user_name, lang, created_at FROM tweets LIMIT 10;
        kafka_key | user_name | lang | created_at
        --------------------+-----------------+------+-------------------------
        494227746231685121 | burncaniff | en | 2014-07-29 14:07:31.000
        494227746214535169 | gu8tn | ja | 2014-07-29 14:07:31.000
        494227746219126785 | pequitamedicen | es | 2014-07-29 14:07:31.000
        494227746201931777 | josnyS | ht | 2014-07-29 14:07:31.000
        494227746219110401 | Cafe510 | en | 2014-07-29 14:07:31.000
        494227746210332673 | Da_JuanAnd_Only | en | 2014-07-29 14:07:31.000
        494227746193956865 | Smile_Kidrauhl6 | pt | 2014-07-29 14:07:31.000
        494227750426017793 | CashforeverCD | en | 2014-07-29 14:07:32.000
        494227750396653569 | FilmArsivimiz | tr | 2014-07-29 14:07:32.000
        494227750388256769 | jmolas | es | 2014-07-29 14:07:32.000
        (10 rows)

    There is now a live feed into Kafka which can be queried using Presto.

    Epilogue: Time stamps

    The tweets feed that was set up in the last step contains a time stamp in RFC 2822 format as the created_at attribute of each tweet.

        presto:default> SELECT DISTINCT json_extract_scalar(_message, '$.created_at') AS raw_date
                     -> FROM tweets LIMIT 5;
        raw_date
        --------------------------------
        Tue Jul 29 21:07:31 +0000 2014
        Tue Jul 29 21:07:32 +0000 2014
        Tue Jul 29 21:07:33 +0000 2014
        Tue Jul 29 21:07:34 +0000 2014
        Tue Jul 29 21:07:35 +0000 2014
        (5 rows)

    The topic definition file for the tweets table contains a mapping onto a timestamp using the rfc2822 converter:

        ...
        {
            "name": "created_at",
            "mapping": "created_at",
            "type": "TIMESTAMP",
            "dataFormat": "rfc2822"
        },
        ...

    The Kafka connector contains converters for the ISO 8601 and RFC 2822 text formats, and for number-based time stamps given as seconds or milliseconds since the epoch. There is also a generic, text-based formatter which uses Joda-Time format strings to parse text columns.
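
    As an illustration, the generic formatter could also handle the Twitter date layout directly; a sketch of the corresponding field definition, assuming the custom-date-time data format with a Joda-Time pattern (the rfc2822 converter used above is the simpler choice here):

        {
            "name": "created_at",
            "mapping": "created_at",
            "type": "TIMESTAMP",
            "dataFormat": "custom-date-time",
            "formatHint": "EEE MMM dd HH:mm:ss Z yyyy"
        }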