Hive External Table of Doris
- Support for Doris to access Hive data sources
- Support for joint queries between Doris and Hive data sources to perform more complex analysis operations
- Support for accessing Kerberos-enabled Hive data sources
This document describes how to use this feature and the related considerations.
- FE: Frontend, the front-end node of Doris, responsible for metadata management and request access.
- BE: Backend, the backend node of Doris, responsible for query execution and data storage.
Parameter Description
- External Table Columns
  - The column order must be the same as in the Hive table.
  - The columns must include all columns of the Hive table.
  - Hive partition columns do not need to be specified specially; they can be defined as normal columns.
- ENGINE must be specified as HIVE.
- PROPERTIES attributes:
  - hive.metastore.uris: the Hive Metastore service address.
  - database: the name of the Hive database to mount.
  - table: the name of the Hive table to mount.
  - hadoop.username: the username used to access HDFS (required when the authentication type is simple).
  - dfs.nameservices: the logical name of the nameservice. See hdfs-site.xml.
  - dfs.ha.namenodes.[nameservice ID]: unique identifiers for each NameNode in the nameservice. See hdfs-site.xml.
  - dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the fully-qualified RPC address for each NameNode to listen on. See hdfs-site.xml.
  - dfs.client.failover.proxy.provider.[nameservice ID]: the Java class that HDFS clients use to contact the Active NameNode, usually org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
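Putting the parameters above together, a Hive external table definition might look like the following sketch. The table name, column list, metastore address, and database/table names are all placeholder values:

```sql
-- Sketch: create a Hive external table in Doris.
-- `t_hive`, the columns, and the thrift address are placeholders.
CREATE EXTERNAL TABLE `t_hive` (
  `k1` INT NOT NULL COMMENT "",
  `k2` VARCHAR(64) NULL COMMENT "",
  `k3` DATE NULL COMMENT ""
) ENGINE = HIVE
PROPERTIES (
  'hive.metastore.uris' = 'thrift://192.168.0.1:9083',
  'database' = 'hive_db',
  'table' = 'hive_table'
);
```

For an HDFS cluster with NameNode HA, the dfs.* properties listed above would be added to the same PROPERTIES clause.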
- To enable Doris to access a Hadoop cluster with Kerberos authentication enabled, you need to deploy the Kerberos client (kinit) on all Doris FE and BE nodes, configure krb5.conf, and fill in the KDC service information.
- The PROPERTIES attribute hadoop.kerberos.keytab must specify the absolute path of a local keytab file, and the Doris process must be allowed to access that file.
- The HDFS cluster configuration can be written into the hdfs-site.xml file, located in the conf directory of both FE and BE. Users then do not need to fill in the HDFS cluster configuration when creating a Hive table.
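As a sketch, a Kerberos-enabled table definition would carry properties like the following. The principal and keytab path below are placeholder values; hadoop.security.authentication selects the authentication type:

```sql
-- Sketch: PROPERTIES for a Kerberos-secured Hive data source.
-- The principal and keytab path are placeholders.
PROPERTIES (
  'hive.metastore.uris' = 'thrift://192.168.0.1:9083',
  'database' = 'hive_db',
  'table' = 'hive_table',
  'hadoop.security.authentication' = 'kerberos',
  'hadoop.kerberos.principal' = 'doris@YOUR.REALM.COM',
  'hadoop.kerberos.keytab' = '/etc/security/keytabs/doris.keytab'
);
```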
The supported Hive column types and their corresponding Doris types are shown in the following table.
- Schema changes to a Hive table are not automatically synchronized; the Hive external table must be rebuilt in Doris.
- Currently, only the Text, Parquet, and ORC Hive storage formats are supported.
Once the Hive external table is created in Doris, it can be used like a normal Doris OLAP table, except that the Doris data models (rollup, pre-aggregation, materialized views, etc.) cannot be used on it.
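For example, a joint query between a local Doris OLAP table and a Hive external table is written as an ordinary join. The table and column names below are placeholders:

```sql
-- Sketch: join a Doris OLAP table with a Hive external table.
-- `doris_orders`, `t_hive`, and their columns are placeholders.
SELECT o.user_id, o.order_total, h.city
FROM doris_orders o
JOIN t_hive h ON o.user_id = h.user_id
WHERE h.dt = '2023-01-01';
```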