Design Recommendations and Guidelines for Adopting Unity Catalog in Your Existing Azure Data Lakehouse + FAQ

Nnaemezue Obi-Eyisi
6 min read · Nov 5

In this blog post, my aim is to guide you on how to effectively use Unity Catalog within your current data lakehouse and to share my recommended best practices. These practices are designed to help you take full advantage of Unity Catalog's new features without disrupting your existing data architecture. I will also outline the recommended design patterns for new Data Lakehouse implementations using Managed Tables, and include an FAQ section for clarity.

Description of Existing Data Lakehouse Architecture: Legacy Method Using Mount Points

Over the past few years, I have worked with various clients in the Azure space who have embraced and implemented the Medallion architecture for their enterprise Data Lake. This architecture typically involves at least three Azure Data Lake Storage containers (bronze, silver, gold) that hold data at different stages of refinement. Databricks is commonly used alongside the data lake to ingest, process, and write data back to these containers or zones. In this architecture, it is common to create Databricks mount points against the different data lake zones.
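To make the flow concrete, here is a minimal sketch of the typical bronze-to-silver hop over mount points. The mount paths, file format, and column names are hypothetical placeholders, not taken from any specific implementation:

```python
from pyspark.sql import functions as F

# Read raw files from the bronze zone via its mount point (hypothetical path)
bronze_df = (
    spark.read.format("json")
    .load("/mnt/bronzecontainername/sales/")
)

# Light refinement: cast the timestamp column and drop obvious duplicates
silver_df = (
    bronze_df
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["order_id"])
)

# Write the refined data back to the silver zone as Delta (hypothetical path)
(
    silver_df.write.format("delta")
    .mode("overwrite")
    .save("/mnt/silvercontainername/sales/")
)
```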

To be specific, when creating mount points on Azure Data Lake Storage Gen2, the following steps are required:

  1. Create a Service Principal in Azure Active Directory, which acts as an application identity used to facilitate authentication between the Databricks application and Azure Data Lake Storage.
  2. Once the service principal is correctly configured, it is used to set up the mount points within the Databricks workspace. These mount points have a file path format such as ‘/mnt/bronzecontainername’ (a configuration sketch follows this list).
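As an illustration, the following is a minimal sketch of creating such a mount with a service principal. The storage account, container, secret scope, secret names, and tenant ID are placeholders you would replace with your own values:

```python
# Hypothetical placeholders: replace with your own values
storage_account = "mydatalake"
container = "bronzecontainername"
secret_scope = "my-secret-scope"

configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    # Service principal (application) ID and secret pulled from a secret scope
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get(secret_scope, "sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(secret_scope, "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# Mount the bronze container so it is reachable at /mnt/bronzecontainername
dbutils.fs.mount(
    source=f"abfss://{container}@{storage_account}.dfs.core.windows.net/",
    mount_point=f"/mnt/{container}",
    extra_configs=configs,
)
```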

With this configuration in place, you can easily read and write transformed data to the data lake from Databricks. If you need to define a schema over files that already exist in your data lake, you can create external tables and query them like regular SQL tables. It’s important to note that the table metadata will be stored in the Hive Metastore catalog within your workspace. Additionally, please keep in…
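For example, registering an external table over files already sitting under a mount point might look like the sketch below (the database, table, and location are hypothetical placeholders). Because the table is defined in the workspace-local metastore, its metadata lands in the hive_metastore catalog rather than Unity Catalog:

```python
# Create a database and register an external (unmanaged) table over existing
# Delta files in the lake. Names and paths are hypothetical placeholders.
spark.sql("CREATE DATABASE IF NOT EXISTS bronze_db")

spark.sql("""
    CREATE TABLE IF NOT EXISTS bronze_db.customer_raw
    USING DELTA
    LOCATION '/mnt/bronzecontainername/customer_raw/'
""")

# The files can now be queried like any regular SQL table
display(spark.sql("SELECT * FROM bronze_db.customer_raw LIMIT 10"))
```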
