---
title: "Ingest data from self-hosted PostgreSQL"
description: "How to stream data from a self-hosted PostgreSQL database to Materialize"
aliases:
---
This page shows you how to stream data from a self-hosted PostgreSQL database to Materialize using the PostgreSQL source.
{{< tip >}} {{< guided-tour-blurb-for-ingest-data >}} {{< /tip >}}
{{% postgres-direct/before-you-begin %}}
Materialize uses PostgreSQL's logical replication protocol to track changes in your database and propagate them to Materialize. Before creating a source, enable logical replication in your PostgreSQL instance.
As a superuser, use `psql` (or your preferred SQL client) to connect to your PostgreSQL database.

Check if logical replication is enabled; that is, check if the `wal_level` parameter is set to `logical`:

```sql
SHOW wal_level;
```

If `wal_level` is not set to `logical`:
In the database configuration file (`postgresql.conf`), set `wal_level` to `logical`.

Restart the database for the new `wal_level` to take effect. Note that restarting can affect database performance.
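As an alternative to editing `postgresql.conf` by hand, a sketch using `ALTER SYSTEM` (available in PostgreSQL 9.4+), assuming superuser access:

```sql
-- Persists the setting to postgresql.auto.conf without editing
-- postgresql.conf directly. A restart is still required, because
-- wal_level can only change at server start.
ALTER SYSTEM SET wal_level = logical;
```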
In the SQL client connected to PostgreSQL, verify that replication is now enabled (i.e., that `wal_level` is set to `logical`):

```sql
SHOW wal_level;
```
{{% postgres-direct/create-a-publication-other %}}
{{< note >}} If you are prototyping and your PostgreSQL instance is publicly accessible, you can skip this step. For production scenarios, we recommend configuring one of the network security options below. {{</ note >}}
There are various ways to configure your database's network to allow Materialize to connect:
- **Allow Materialize IPs:** If your database is publicly accessible, you can configure your database's firewall to allow connections from a set of static Materialize IP addresses.
- **Use AWS PrivateLink:** If your database is running on AWS in a private network, you can use an AWS PrivateLink service to connect Materialize to the database.
- **Use an SSH tunnel:** If your database is running in a private network, you can use an SSH tunnel to connect Materialize to the database.
Select the option that works best for you.
{{< tabs >}}
{{< tab "Allow Materialize IPs">}}
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, find the static egress IP addresses for the Materialize region you are running in:
```sql
SELECT * FROM mz_egress_ips;
```
Update your database firewall rules to allow traffic from each IP address from the previous step.
{{< /tab >}}
{{< tab "Use AWS PrivateLink">}}
Materialize can connect to a PostgreSQL database through an AWS PrivateLink service. Your PostgreSQL database must be running on AWS in order to use this option.
Create a dedicated target group for your Postgres instance with the following details:
a. Target type as IP address.
b. Protocol as TCP.
c. Port as 5432, or your custom port if you are not using the default.
d. Make sure that the target group is in the same VPC as the PostgreSQL instance.
e. Click next, and register the respective PostgreSQL instance to the target group using its IP address.
Create a Network Load Balancer that is enabled for the same subnets that the PostgreSQL instance is in.
Create a TCP listener for your PostgreSQL instance that forwards to the corresponding target group you created.
Once the TCP listener has been created, make sure that the health checks are passing and that the target is reported as healthy.
If you have set up a security group for your PostgreSQL instance, you must ensure that it allows traffic on the health check port.
Remarks:
a. Network Load Balancers do not have associated security groups. Therefore, the security groups for your targets must use IP addresses to allow traffic.
b. You can't use the security groups for the clients as a source in the security groups for the targets. Therefore, the security groups for your targets must use the IP addresses of the clients to allow traffic. For more details, check the AWS documentation.
Create a VPC endpoint service and associate it with the Network Load Balancer that you’ve just created.
Note the service name that is generated for the endpoint service.
Remarks:
By disabling **Acceptance Required**, while still strictly managing who can view your endpoint via IAM, you allow Materialize to seamlessly recreate and migrate endpoints as we work to stabilize this feature.
In Materialize, create an `AWS PRIVATELINK` connection that references the endpoint service that you created in the previous step:

```sql
CREATE CONNECTION privatelink_svc TO AWS PRIVATELINK (
    SERVICE NAME 'com.amazonaws.vpce.<region_id>.vpce-svc-<endpoint_service_id>',
    AVAILABILITY ZONES ('use1-az1', 'use1-az2', 'use1-az3')
);
```
Update the list of the availability zones to match the ones that you are using in your AWS account.
Retrieve the AWS principal for the AWS PrivateLink connection you just created:
```sql
SELECT principal
FROM mz_aws_privatelink_connections plc
JOIN mz_connections c ON plc.id = c.id
WHERE c.name = 'privatelink_svc';
```

```
                                 principal
---------------------------------------------------------------------------
 arn:aws:iam::664411391173:role/mz_20273b7c-2bbe-42b8-8c36-8cc179e9bbc3_u1
```
Follow the instructions in the AWS PrivateLink documentation to configure your VPC endpoint service to accept connections from the provided AWS principal.
If your AWS PrivateLink service is configured to require acceptance of connection requests, you must manually approve the connection request from Materialize after executing the `CREATE CONNECTION` statement. For more details, check the AWS PrivateLink documentation.

**Note:** It might take some time for the endpoint service connection to show up, so wait for it to be ready before you create a source.
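One way to check readiness from Materialize is to validate the connection; this sketch reuses the `privatelink_svc` connection created above:

```sql
-- Returns successfully once Materialize can reach the endpoint service;
-- otherwise, it returns a validation error.
VALIDATE CONNECTION privatelink_svc;
```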
{{< /tab >}}
{{< tab "Use an SSH tunnel">}}
To create an SSH tunnel from Materialize to your database, you launch a VM to serve as an SSH bastion host, configure the bastion host to allow traffic only from Materialize, and then configure your database's private network to allow traffic from the bastion host.
Launch a VM to serve as your SSH bastion host.
Configure the SSH bastion host to allow traffic only from Materialize.
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, get the static egress IP addresses for the Materialize region you are running in:
```sql
SELECT * FROM mz_egress_ips;
```
Update your SSH bastion host's firewall rules to allow traffic from each IP address from the previous step.
Update your database firewall rules to allow traffic from the SSH bastion host.
{{< /tab >}}
{{< /tabs >}}
{{< note >}}
If you are prototyping and already have a cluster to host your PostgreSQL source (e.g. `quickstart`), you can skip this step. For production scenarios, we recommend separating your workloads into multiple clusters for resource isolation.
{{< /note >}}
{{% postgres-direct/create-a-cluster %}}
Now that you've configured your database network and created an ingestion cluster, you can connect Materialize to your PostgreSQL database and start ingesting data. The exact steps depend on your networking configuration, so start by selecting the relevant option.
{{< tabs >}}
{{< tab "Allow Materialize IPs">}}
In the SQL client connected to Materialize, use the `CREATE SECRET` command to securely store the password for the `materialize` PostgreSQL user you created earlier:

```sql
CREATE SECRET pgpass AS '<PASSWORD>';
```
Use the `CREATE CONNECTION` command to create a connection object with access and authentication details for Materialize to use:

```sql
CREATE CONNECTION pg_connection TO POSTGRES (
    HOST '<host>',
    PORT 5432,
    USER 'materialize',
    PASSWORD SECRET pgpass,
    SSL MODE 'require',
    DATABASE '<database>'
);
```
- Replace `<host>` with your database endpoint.
- Replace `<database>` with the name of the database containing the tables you want to replicate to Materialize.
Use the `CREATE SOURCE` command to connect Materialize to your database and start ingesting data from the publication you created earlier:

```sql
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```
By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause. To ingest data from specific schemas or tables in your publication, use `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` instead of `FOR ALL TABLES`.
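For example, a sketch that replicates only two hypothetical tables (`public.orders` and `public.customers`) from the publication:

```sql
-- Hypothetical table names; replace with tables in your publication.
CREATE SOURCE mz_source_subset
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR TABLES (public.orders, public.customers);
```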
{{< /tab >}}
{{< tab "Use an SSH tunnel">}}
In the Materialize console's SQL Shell, or your preferred SQL client connected to Materialize, use the `CREATE CONNECTION` command to create an SSH tunnel connection:

```sql
CREATE CONNECTION ssh_connection TO SSH TUNNEL (
    HOST '<SSH_BASTION_HOST>',
    PORT <SSH_BASTION_PORT>,
    USER '<SSH_BASTION_USER>'
);
```
- Replace `<SSH_BASTION_HOST>` and `<SSH_BASTION_PORT>` with the public IP address and port of the SSH bastion host you created earlier.
- Replace `<SSH_BASTION_USER>` with the username for the key pair you created for your SSH bastion host.
Get Materialize's public keys for the SSH tunnel connection you just created:
```sql
SELECT
    mz_connections.name,
    mz_ssh_tunnel_connections.*
FROM
    mz_connections JOIN
    mz_ssh_tunnel_connections USING(id)
WHERE
    mz_connections.name = 'ssh_connection';
```
Log in to your SSH bastion host and add Materialize's public keys to the `authorized_keys` file, for example:

```shell
# Command for Linux
echo "ssh-ed25519 AAAA...76RH materialize" >> ~/.ssh/authorized_keys
echo "ssh-ed25519 AAAA...hLYV materialize" >> ~/.ssh/authorized_keys
```
Back in the SQL client connected to Materialize, validate the SSH tunnel connection you created using the `VALIDATE CONNECTION` command:

```sql
VALIDATE CONNECTION ssh_connection;
```

If no validation error is returned, move to the next step.
Use the `CREATE SECRET` command to securely store the password for the `materialize` PostgreSQL user you created earlier:

```sql
CREATE SECRET pgpass AS '<PASSWORD>';
```
Use the `CREATE CONNECTION` command to create another connection object, this time with database access and authentication details for Materialize to use:

```sql
CREATE CONNECTION pg_connection TO POSTGRES (
    HOST '<host>',
    PORT 5432,
    USER 'materialize',
    PASSWORD SECRET pgpass,
    DATABASE '<database>',
    SSH TUNNEL ssh_connection
);
```
- Replace `<host>` with your database endpoint.
- Replace `<database>` with the name of the database containing the tables you want to replicate to Materialize.
Use the `CREATE SOURCE` command to connect Materialize to your database and start ingesting data from the publication you created earlier:

```sql
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```
By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause. To ingest data from specific schemas or tables in your publication, use `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` instead of `FOR ALL TABLES`.
After source creation, you can handle upstream schema changes for specific replicated tables using the `ALTER SOURCE...ADD SUBSOURCE` and `DROP SOURCE` syntax.
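As a sketch with hypothetical table names, picking up a newly created upstream table and then later removing it might look like:

```sql
-- Hypothetical upstream table; start replicating it as a new subsource.
ALTER SOURCE mz_source ADD SUBSOURCE public.new_table;

-- Stop replicating a table by dropping its subsource.
DROP SOURCE new_table;
```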
{{< /tab >}}
{{< tab "AWS PrivateLink">}}
Back in the SQL client connected to Materialize, use the `CREATE SECRET` command to securely store the password for the `materialize` PostgreSQL user you created earlier:

```sql
CREATE SECRET pgpass AS '<PASSWORD>';
```
Use the `CREATE CONNECTION` command to create another connection object, this time with database access and authentication details for Materialize to use:

```sql
CREATE CONNECTION pg_connection TO POSTGRES (
    HOST '<host>',
    PORT 5432,
    USER 'materialize',
    PASSWORD SECRET pgpass,
    DATABASE '<database>',
    AWS PRIVATELINK privatelink_svc
);
```
- Replace `<host>` with your database endpoint.
- Replace `<database>` with the name of the database containing the tables you want to replicate to Materialize.
Use the `CREATE SOURCE` command to connect Materialize to your database and start ingesting data from the publication you created earlier:

```sql
CREATE SOURCE mz_source
  IN CLUSTER ingest_postgres
  FROM POSTGRES CONNECTION pg_connection (PUBLICATION 'mz_source')
  FOR ALL TABLES;
```
By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause. To ingest data from specific schemas or tables in your publication, use `FOR SCHEMAS (<schema1>, <schema2>)` or `FOR TABLES (<table1>, <table2>)` instead of `FOR ALL TABLES`.
{{< /tab >}}
{{< /tabs >}}
{{% postgres-direct/check-the-ingestion-status %}}
{{% postgres-direct/right-size-the-cluster %}}
{{% postgres-direct/next-steps %}}
{{% include-md file="shared-content/postgres-considerations.md" %}}