Coding Tutorials Blog

Building a Data Lakehouse on Your Laptop with Spark, Minio, Iceberg, Nessie, and Dremio

August 23, 2023

Data is becoming the cornerstone of modern businesses. As businesses scale, so does their data, and this leads to the need for efficient data storage, retrieval, and processing systems. This is where Data Lakehouses come into the picture.

What is a Data Lakehouse?

A Data Lakehouse combines the best of both data lakes and data warehouses. It provides data lakes’ raw storage capabilities and data warehouses’ structured querying capabilities. This hybrid model ensures flexibility in storing vast amounts of unstructured data, while also enabling SQL-based structured queries, ensuring that businesses can derive meaningful insights from their data.

A Data Lake vs a Data Lakehouse

We will be implementing a basic lakehouse locally on your laptop with the following components:

  • Apache Spark, which we can use to ingest streaming or batch data into our lakehouse
  • Minio, which acts as our data lake repository and S3-compatible storage layer
  • Apache Iceberg, our table format, which allows query engines to plan smarter, faster queries on our lakehouse data
  • Nessie, our data catalog, which makes our datasets discoverable and accessible across our tools. Nessie also provides Git-like features for data quality, experimentation, and disaster recovery. (Get even more features, like an intuitive UI and automated table optimization, when you use a Dremio Arctic cloud-managed Nessie-based catalog.)
  • Dremio, our query engine, which we can use not only to performantly query our data but also to document, organize, and deliver our data to consumers, such as data analysts doing ad hoc analytics or building BI dashboards
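As a quick reference, the services and the local ports they expose can be sketched in Python (the port numbers match the docker-compose.yml defined in the next section):

```python
# Services in our lakehouse and the host ports they expose.
# These port numbers come from the docker-compose.yml below.
services = {
    "dremio":         {"role": "query engine",            "ports": [9047, 31010, 32010]},
    "minioserver":    {"role": "S3-compatible data lake", "ports": [9000, 9001]},
    "spark_notebook": {"role": "Spark + Jupyter notebook","ports": [8888]},
    "nessie":         {"role": "data catalog",            "ports": [19120]},
}

for name, info in services.items():
    print(f"{name:15} {info['role']:25} ports: {info['ports']}")
```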

Creating a Docker Compose File and Running It

Building a Data Lakehouse on your local machine requires orchestrating multiple services, and Docker Compose is a perfect tool for this task. In this section, we will create a docker-compose.yml file that defines and runs the multi-container Docker application for our Data Lakehouse environment.


Install Docker: Ensure that you have Docker installed on your laptop. If not, you can download it from the official Docker website.

Create Docker Compose File: Navigate to a directory where you want to create your project. Here, create a file named docker-compose.yml. Populate this file with the following content:

version: "3.9"

services:

  dremio:
    platform: linux/x86_64
    image: dremio/dremio-oss:latest
    ports:
      - 9047:9047
      - 31010:31010
      - 32010:32010
    container_name: dremio

  minioserver:
    image: minio/minio
    ports:
      - 9000:9000
      - 9001:9001
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    container_name: minio
    command: server /data --console-address ":9001"

  spark_notebook:
    image: alexmerced/spark33-notebook
    ports:
      - 8888:8888
    env_file: .env
    container_name: notebook

  nessie:
    image: projectnessie/nessie
    container_name: nessie
    ports:
      - "19120:19120"

networks:
  default:
    name: iceberg_env
    driver: bridge

Create Environment File:

Alongside your docker-compose.yml file, create an .env file to manage your environment variables. Start with the given template and fill in the details as you progress.

# Fill in Details

# AWS_REGION is used by Spark
AWS_REGION=us-east-1
# This must match if using minio
MINIO_REGION=us-east-1
# AWS Credentials (this can use minio credentials, to be filled in later)
AWS_ACCESS_KEY=XXXX
AWS_SECRET_KEY=XXXX
# If using Minio, this should be the API address of Minio Server
AWS_S3_ENDPOINT=http://minioserver:9000
# Location where files will be written when creating new tables
WAREHOUSE=s3a://warehouse/
# URI of Nessie Catalog
NESSIE_URI=http://nessie:19120/api/v1
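The Spark notebook container loads these variables automatically via the env_file setting in docker-compose.yml. If you want to load the same .env file in a plain local Python session, here is a minimal loader sketch (load_env is a hypothetical helper, a simplified stand-in for packages like python-dotenv):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: puts KEY=VALUE lines into os.environ,
    skipping blank lines and # comments. (A simplified stand-in for
    python-dotenv; load_env is a hypothetical helper.)"""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# After load_env(), code can read the values the same way the
# notebook does, e.g. os.environ.get("NESSIE_URI")
```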

Run the Docker Compose:

For clearer log visibility, open four terminals, one per service. Ensure you are in the directory containing the docker-compose.yml file, and then run:

In the first terminal:

docker-compose up minioserver

Head over to localhost:9001 in your browser and log in with the credentials minioadmin/minioadmin. Once you're logged in, do the following:

  • Create a bucket called “warehouse”
  • Create an access key, and copy the access key and secret key into your .env file

In the second terminal:

docker-compose up nessie

In the third terminal:

docker-compose up spark_notebook 

Once this container is up, look in its logs for output like the following and copy and paste the URL into your browser (Jupyter prints it with an access token):

notebook  |  http://127.0.0.1:8888/?token=...

In the fourth terminal:

docker-compose up dremio 

After completing these steps, you will have Dremio, Minio, and a Spark Notebook running in separate Docker containers on your laptop, ready for further Data Lakehouse operations!
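Before moving on, you can optionally check that each service is listening. A minimal sketch using only the Python standard library (port_open is a hypothetical helper; the ports are the ones mapped in our docker-compose.yml):

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports mapped to the host in our docker-compose.yml
for name, port in [("minio", 9000), ("nessie", 19120),
                   ("notebook", 8888), ("dremio", 9047)]:
    status = "up" if port_open("localhost", port) else "not reachable"
    print(f"{name:10} port {port}: {status}")
```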

Creating a Table in Spark

From the Jupyter Notebook window you opened in the browser earlier, create a new notebook with the following code.

import pyspark
from pyspark.sql import SparkSession
import os

NESSIE_URI = os.environ.get("NESSIE_URI")           ## Nessie Server URI
WAREHOUSE = os.environ.get("WAREHOUSE")             ## s3a path of our warehouse bucket
AWS_ACCESS_KEY = os.environ.get("AWS_ACCESS_KEY")   ## Minio access key
AWS_SECRET_KEY = os.environ.get("AWS_SECRET_KEY")   ## Minio secret key
AWS_S3_ENDPOINT = os.environ.get("AWS_S3_ENDPOINT") ## Minio API endpoint


conf = (
    pyspark.SparkConf()
        .set('spark.jars.packages', 'org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.3.1,org.projectnessie.nessie-integrations:nessie-spark-extensions-3.3_2.12:0.67.0,software.amazon.awssdk:bundle:2.17.178,software.amazon.awssdk:url-connection-client:2.17.178')
        .set('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,org.projectnessie.spark.extensions.NessieSparkSessionExtensions')
        .set('spark.sql.catalog.nessie', 'org.apache.iceberg.spark.SparkCatalog')
        .set('spark.sql.catalog.nessie.uri', NESSIE_URI)
        .set('spark.sql.catalog.nessie.ref', 'main')
        .set('spark.sql.catalog.nessie.authentication.type', 'NONE')
        .set('spark.sql.catalog.nessie.catalog-impl', 'org.apache.iceberg.nessie.NessieCatalog')
        .set('spark.sql.catalog.nessie.s3.endpoint', AWS_S3_ENDPOINT)
        .set('spark.sql.catalog.nessie.warehouse', WAREHOUSE)
        .set('spark.sql.catalog.nessie.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO')
        .set('spark.hadoop.fs.s3a.access.key', AWS_ACCESS_KEY)
        .set('spark.hadoop.fs.s3a.secret.key', AWS_SECRET_KEY)
)

## Start Spark Session
spark = SparkSession.builder.config(conf=conf).getOrCreate()
print("Spark Running")

## Create a Table
spark.sql("CREATE TABLE nessie.names (name STRING) USING iceberg;").show()

## Insert Some Data
spark.sql("INSERT INTO nessie.names VALUES ('Alex Merced'), ('Dipankar Mazumdar'), ('Jason Hughes')").show()

## Query the Data
spark.sql("SELECT * FROM nessie.names;").show()

Run the code; you can confirm that the table was created by opening the Minio dashboard.

The Minio Dashboard

Querying the Data in Dremio

Now that the data has been created, a catalog like Nessie makes your Apache Iceberg tables easily visible from tool to tool. We can open Dremio and immediately query the data in our Nessie catalog.

  • Head over to localhost:9047, where the Dremio web application resides
  • Create your admin user account
  • Once on the dashboard, click “add source” in the bottom left
  • Select “Nessie”
  • Fill out the first two tabs as shown in the images below:

Nessie Connector Main Tab Nessie Connected Storage Tab

  • We use the docker network URL for our Nessie server, since it’s on the same docker network as our Dremio container
  • Set authentication to None unless you set up your Nessie server to use tokens (the default is none)
  • The second tab lets us provide credentials for the storage the catalog tracks; here we use the keys we got from Minio
  • We set the fs.s3a.endpoint and dremio.s3.compat settings in order to use S3-compatible storage solutions like Minio
  • Since we are working on a local, non-SSL network, turn off encryption on the connection

Now you can see our names table on the Dremio dashboard under our newly connected source.

Seeing Your Data in Dremio

You can now click on it and run a query:

Querying the Data in Dremio

Keep in mind that Dremio also has full DML support for Apache Iceberg, so you can run creates and inserts directly from Dremio, as in the screenshot below.

Creating and Inserting in Dremio

So now you get the performance of Dremio and Apache Iceberg along with the Git-like capabilities that the Nessie catalog brings, allowing you to isolate ingestion on branches, create zero-copy clones for experimentation, and roll back your catalog for disaster recovery, all on your data lakehouse.
