---
title: "Ingest data from Azure DB"
description: "How to stream data from Azure DB for PostgreSQL to Materialize"
aliases:
---
This page shows you how to stream data from Azure DB for PostgreSQL to Materialize using the PostgreSQL source.
{{< tip >}} {{< guided-tour-blurb-for-ingest-data >}} {{< /tip >}}
{{% postgres-direct/before-you-begin %}}
Materialize uses PostgreSQL's logical replication protocol to track changes in your database and propagate them to Materialize.
To enable logical replication in Azure DB, see the Azure documentation.
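For example, if you are running Azure Database for PostgreSQL Flexible Server, one way to enable logical replication is to set the `wal_level` server parameter to `logical` with the Azure CLI and then restart the server. The resource group and server names below are placeholders, so treat this as a rough sketch and defer to the Azure documentation for your setup:

```sh
# Hypothetical resource group and server names; substitute your own.
# Changing wal_level requires a server restart to take effect.
az postgres flexible-server parameter set \
  --resource-group my-resource-group \
  --server-name my-pg-server \
  --name wal_level \
  --value logical

az postgres flexible-server restart \
  --resource-group my-resource-group \
  --name my-pg-server
```

You can confirm the setting afterwards from a SQL session with `SHOW wal_level;`.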
{{% postgres-direct/create-a-publication-other %}}
{{< note >}} If you are prototyping and your Azure DB instance is publicly accessible, you can skip this step. For production scenarios, we recommend configuring one of the network security options below. {{< /note >}}
There are various ways to configure your database's network to allow Materialize to connect:
- **Allow Materialize IPs:** If your database is publicly accessible, you can configure your database's firewall to allow connections from a set of static Materialize IP addresses.
- **Use an SSH tunnel:** If your database is running in a private network, you can use an SSH tunnel to connect Materialize to the database.
Select the option that works best for you.
{{< tabs >}}
{{< tab "Allow Materialize IPs">}}
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, find the static egress IP addresses for the Materialize region you are running in:
```sql
SELECT * FROM mz_egress_ips;
```
Update your Azure DB firewall rules to allow traffic from each IP address from the previous step.
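If your instance is an Azure Database for PostgreSQL Flexible Server with public access, one way to do this is with the Azure CLI; the resource group, server name, rule name, and IP address below are placeholders:

```sh
# Hypothetical names and address; repeat for each egress IP returned by mz_egress_ips.
az postgres flexible-server firewall-rule create \
  --resource-group my-resource-group \
  --name my-pg-server \
  --rule-name allow-materialize-1 \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```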
{{< /tab >}}
{{< tab "Use an SSH tunnel">}}
To create an SSH tunnel from Materialize to your database, you launch an instance to serve as an SSH bastion host, configure the bastion host to allow traffic only from Materialize, and then configure your database's private network to allow traffic from the bastion host.
Launch an Azure VM with a static public IP address to serve as your SSH bastion host.
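As a rough sketch, you could create a small Ubuntu VM with the Azure CLI; a Standard-SKU public IP uses static allocation. The resource group, VM name, and admin username are placeholders, so adjust them to your environment:

```sh
# Hypothetical names; adjust the image, size, and region to your environment.
az vm create \
  --resource-group my-resource-group \
  --name mz-ssh-bastion \
  --image Ubuntu2204 \
  --size Standard_B1s \
  --public-ip-sku Standard \
  --admin-username azureuser \
  --generate-ssh-keys
```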
Configure the SSH bastion host to allow traffic only from Materialize.
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, get the static egress IP addresses for the Materialize region you are running in:
```sql
SELECT * FROM mz_egress_ips;
```
Update your SSH bastion host's firewall rules to allow traffic from each IP address from the previous step.
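If the bastion VM's network interface sits behind a network security group, one way to restrict SSH access is a rule like the following; the NSG name, priority, and source addresses are placeholders:

```sh
# Hypothetical NSG name and addresses; list each Materialize egress IP as a source.
az network nsg rule create \
  --resource-group my-resource-group \
  --nsg-name mz-ssh-bastion-nsg \
  --name AllowMaterializeSSH \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22 \
  --source-address-prefixes 203.0.113.10 203.0.113.11
```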
Update your Azure DB firewall rules to allow traffic from the SSH bastion host.
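How you open this path depends on how the database is networked, so treat the following as a rough sketch. If the server uses private (VNet) access, traffic from the bastion is typically governed by network security group rules on the database subnet rather than server firewall rules; if it uses public access, you can add a firewall rule for the bastion's public IP:

```sh
# Hypothetical names; 198.51.100.7 stands in for the bastion host's public IP.
az postgres flexible-server firewall-rule create \
  --resource-group my-resource-group \
  --name my-pg-server \
  --rule-name allow-ssh-bastion \
  --start-ip-address 198.51.100.7 \
  --end-ip-address 198.51.100.7
```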
{{< /tab >}}
{{< /tabs >}}
{{< note >}}
If you are prototyping and already have a cluster to host your PostgreSQL
source (e.g. `quickstart`), you can skip this step. For production
scenarios, we recommend separating your workloads into multiple clusters for
resource isolation.
{{< /note >}}
{{% postgres-direct/create-a-cluster %}}
Now that you've configured your database network and created an ingestion cluster, you can connect Materialize to your PostgreSQL database and start ingesting data. The exact steps depend on your networking configuration, so start by selecting the relevant option.
{{< tabs >}}
{{< tab "Allow Materialize IPs">}}
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE SECRET` command to securely store the password for the `materialize` PostgreSQL user you created earlier:
```sql
CREATE SECRET pgpass AS '<PASSWORD>';
```
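If the database password changes later, you can rotate the stored value in place rather than recreating the secret; `pgpass` here is just the name chosen above, and you should check the `ALTER SECRET` documentation for how quickly running objects pick up the new value:

```sql
-- Replace the stored password for the pgpass secret.
ALTER SECRET pgpass AS '<NEW_PASSWORD>';
```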
Use the `CREATE CONNECTION` command to create a connection object with access and authentication details for Materialize to use:
```sql
CREATE CONNECTION pg_connection TO POSTGRES (
  HOST '<host>',
  PORT 5432,
  USER 'materialize',
  PASSWORD SECRET pgpass,
  SSL MODE 'require',
  DATABASE '<database>'
);
```
- Replace `<host>` with your Azure DB endpoint.
- Replace `<database>` with the name of the database containing the tables you want to replicate to Materialize.
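Optionally, before moving on, you can check that Materialize can reach and authenticate to the database using the same `VALIDATE CONNECTION` command this guide uses for SSH tunnel connections:

```sql
-- Returns without error if the connection details work end to end.
VALIDATE CONNECTION pg_connection;
```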
Use the `CREATE SOURCE` command to connect Materialize to your Azure instance and start ingesting data from the publication you created earlier:
```sql
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```
By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause. To ingest data from specific schemas or tables in your publication, use `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` instead of `FOR ALL TABLES`.
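For instance, a narrower version of the statement above might replicate only two tables; the schema and table names below are placeholders:

```sql
-- Hypothetical table names; only tables included in the publication can be listed.
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR TABLES (public.orders, public.customers AS customers);
```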
After source creation, you can handle upstream schema changes for specific replicated tables using the `ALTER SOURCE...ADD SUBSOURCE` and `DROP SOURCE` syntax.
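As a sketch with hypothetical table names, adding a newly published table and removing a replicated table look like this:

```sql
-- Start replicating a table that was added to the publication upstream.
ALTER SOURCE mz_source ADD SUBSOURCE public.new_table;

-- Stop replicating a table by dropping its subsource.
DROP SOURCE new_table;
```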
{{< /tab >}}
{{< tab "Use an SSH tunnel">}}
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CONNECTION` command to create an SSH tunnel connection:
```sql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
  HOST '<SSH_BASTION_HOST>',
  PORT <SSH_BASTION_PORT>,
  USER '<SSH_BASTION_USER>'
);
```
- Replace `<SSH_BASTION_HOST>` and `<SSH_BASTION_PORT>` with the public IP address and port of the SSH bastion host you created earlier.
- Replace `<SSH_BASTION_USER>` with the username for the key pair you created for your SSH bastion host.
Get Materialize's public keys for the SSH tunnel connection you just created:
```sql
SELECT
    mz_connections.name,
    mz_ssh_tunnel_connections.*
FROM
    mz_connections JOIN
    mz_ssh_tunnel_connections USING(id)
WHERE
    mz_connections.name = 'ssh_connection';
```
Log in to your SSH bastion host and add Materialize's public keys to the `authorized_keys` file, for example:
```sh
# Command for Linux
echo "ssh-ed25519 AAAA...76RH materialize" >> ~/.ssh/authorized_keys
echo "ssh-ed25519 AAAA...hLYV materialize" >> ~/.ssh/authorized_keys
```
Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the `VALIDATE CONNECTION` command:
```sql
VALIDATE CONNECTION ssh_connection;
```
If no validation error is returned, move to the next step.
Use the `CREATE SECRET` command to securely store the password for the `materialize` PostgreSQL user you created earlier:
```sql
CREATE SECRET pgpass AS '<PASSWORD>';
```
Use the `CREATE CONNECTION` command to create another connection object, this time with database access and authentication details for Materialize to use:
```sql
CREATE CONNECTION pg_connection TO POSTGRES (
  HOST '<host>',
  PORT 5432,
  USER 'materialize',
  PASSWORD SECRET pgpass,
  DATABASE '<database>',
  SSH TUNNEL ssh_connection
);
```
- Replace `<host>` with your Azure DB endpoint.
- Replace `<database>` with the name of the database containing the tables you want to replicate to Materialize.
Use the `CREATE SOURCE` command to connect Materialize to your Azure instance and start ingesting data from the publication you created earlier:
```sql
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```
By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause. To ingest data from specific schemas or tables in your publication, use `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` instead of `FOR ALL TABLES`.
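For instance, a variant that replicates everything in a single schema (here the hypothetical `sales` schema) looks like this:

```sql
-- Hypothetical schema name; the schema's tables must be covered by the publication.
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR SCHEMAS (sales);
```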
{{< /tab >}}
{{< /tabs >}}
{{% postgres-direct/check-the-ingestion-status %}}
{{% postgres-direct/right-size-the-cluster %}}
{{% postgres-direct/next-steps %}}
{{% include-md file="shared-content/postgres-considerations.md" %}}