The default Hive data model is defined in org.apache.atlas.hive.model.HiveDataModelGenerator. It defines the following types:
hive_db(ClassType) - super types [Referenceable] - attributes [name, clusterName, description, locationUri, parameters, ownerName, ownerType]
hive_storagedesc(ClassType) - super types [Referenceable] - attributes [cols, location, inputFormat, outputFormat, compressed, numBuckets, serdeInfo, bucketCols, sortCols, parameters, storedAsSubDirectories]
hive_column(ClassType) - super types [Referenceable] - attributes [name, type, comment, table]
hive_table(ClassType) - super types [DataSet] - attributes [name, db, owner, createTime, lastAccessTime, comment, retention, sd, partitionKeys, columns, aliases, parameters, viewOriginalText, viewExpandedText, tableType, temporary]
hive_process(ClassType) - super types [Process] - attributes [name, startTime, endTime, userName, operationType, queryText, queryPlan, queryId]
hive_principal_type(EnumType) - values [USER, ROLE, GROUP]
hive_order(StructType) - attributes [col, order]
hive_serde(StructType) - attributes [name, serializationLib, parameters]
The entities are created and de-duplicated using a unique qualified name. The qualified name provides a namespace and can also be used for querying and lineage, as illustrated below. Note that dbName, tableName and columnName should be in lower case. clusterName is explained below.
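For illustration, the qualified names generally follow patterns like the ones below; the database, table and column names used here are hypothetical, and the exact format may vary by Atlas version:
hive_db     : <dbName>@<clusterName>                           e.g. default@primary
hive_table  : <dbName>.<tableName>@<clusterName>               e.g. default.customers@primary
hive_column : <dbName>.<tableName>.<columnName>@<clusterName>  e.g. default.customers.customer_id@primary
Such names can then be used when searching, for example with a DSL query like hive_table where name = "customers" (assuming the DSL search feature available in your Atlas version).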
org.apache.atlas.hive.bridge.HiveMetaStoreBridge imports the Hive metadata into Atlas using the model defined in org.apache.atlas.hive.model.HiveDataModelGenerator. The import-hive.sh command can be used to facilitate this. The script needs the Hadoop and Hive classpath jars:
* For Hadoop jars, make sure that the environment variable HADOOP_CLASSPATH is set. Alternatively, set HADOOP_HOME to point to the root directory of your Hadoop installation.
* Similarly, for Hive jars, set HIVE_HOME to the root of your Hive installation.
* Set the environment variable HIVE_CONF_DIR to the Hive configuration directory.
* Copy <atlas-conf>/atlas-application.properties to the Hive configuration directory.
A sample setup and run is sketched after the usage and log notes below.
Usage: <atlas package>/bin/import-hive.sh
The logs are in <atlas package>/logs/import-hive.log
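Putting the setup steps together, a typical run might look like the following; the Hadoop and Hive install paths are illustrative assumptions, not fixed values:
# point the script at the Hadoop and Hive installations (illustrative paths)
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
# make the Atlas client configuration visible to the script
cp <atlas-conf>/atlas-application.properties $HIVE_CONF_DIR/
# run the import and check the log for progress and errors
<atlas package>/bin/import-hive.sh
tail <atlas package>/logs/import-hive.log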
If you are importing metadata in a kerberized cluster, you need to run the command like this:
<atlas package>/bin/import-hive.sh -Dsun.security.jgss.debug=true -Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.krb5.conf=[krb5.conf location] -Djava.security.auth.login.config=[jaas.conf location]
Hive supports listeners on command execution through Hive hooks. Atlas uses this mechanism to add/update/remove entities, based on the model defined in org.apache.atlas.hive.model.HiveDataModelGenerator. The hook submits the request to a thread pool executor so that command execution is not blocked. The thread then submits the entities as a message to the notification server, and the Atlas server reads these messages and registers the entities. To add the Hive hook for Atlas, set the following in hive-site.xml of your Hive configuration:
<property>
  <name>hive.exec.post.hooks</name>
  <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>
Also set the Atlas cluster name:
<property>
  <name>atlas.cluster.name</name>
  <value>primary</value>
</property>
The value of atlas.cluster.name is used as the clusterName component of the qualified names described above, so the hook and import-hive.sh should be configured with the same cluster name.
The following properties in <atlas-conf>/atlas-application.properties control the thread pool and notification details:
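For example, the hook properties typically look like the following; the property names and default values shown here are based on the Atlas documentation for this hook and may differ between Atlas versions, so verify them against your release:
# run the hook synchronously (blocks the Hive command) or asynchronously via the thread pool
atlas.hook.hive.synchronous=false
# number of retries when submitting the notification message fails
atlas.hook.hive.numRetries=3
# thread pool used by the asynchronous hook
atlas.hook.hive.minThreads=5
atlas.hook.hive.maxThreads=5
# keep-alive time (milliseconds) for idle threads, and the size of the work queue
atlas.hook.hive.keepAliveTime=10
atlas.hook.hive.queueSize=10000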
Refer to the Configuration documentation for notification-related configuration.