title: "Amazon Managed Streaming for Apache Kafka (Amazon MSK)" description: "How to securely connect an Amazon MSK cluster as a source to Materialize." aliases:
[//]: # "TODO(morsapaes) The Kafka guides need to be rewritten for consistency with the PostgreSQL ones. We should add information about using AWS IAM authentication then."
This guide goes through the required steps to connect Materialize to an Amazon MSK cluster.
{{< tip >}} {{< guided-tour-blurb-for-ingest-data >}} {{< /tip >}}
Before you begin, you must have:
There are various ways to configure your Kafka network to allow Materialize to connect:
- **Allow Materialize IPs:** If your Kafka cluster is publicly accessible, you can configure your firewall to allow connections from a set of static Materialize IP addresses.

- **Use AWS PrivateLink:** If your Kafka cluster is running in a private network, you can use AWS PrivateLink to connect Materialize to the cluster. For details, see AWS PrivateLink.

- **Use an SSH tunnel:** If your Kafka cluster is running in a private network, you can use an SSH tunnel to connect Materialize to the cluster.
{{< tabs tabID="1" >}}
{{< tab "PrivateLink">}}
{{< note >}} Materialize provides a Terraform module that automates the creation and configuration of AWS resources for a PrivateLink connection. For more details, see the Terraform module repositories for Amazon MSK and self-managed Kafka clusters. {{</ note >}}
{{% network-security/privatelink-kafka %}}
{{< /tab >}}
{{< tab "SSH Tunnel">}}
{{% network-security/ssh-tunnel %}}
In Materialize, create a source connection that uses the SSH tunnel connection you configured in the previous section:
```sql
CREATE CONNECTION kafka_connection TO KAFKA (
    BROKER 'broker1:9092',
    SSH TUNNEL ssh_connection
);
```
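Optionally, before creating any sources over this connection, you can ask Materialize to test connectivity through the tunnel with `VALIDATE CONNECTION`. The sketch below assumes you connect to your Materialize region with `psql`; the host and user are placeholders.

```bash
# Hypothetical connection details; replace with your Materialize region's host
# and your user. VALIDATE CONNECTION checks broker reachability through the
# SSH tunnel without creating a source.
psql -h <materialize-host> -p 6875 -U <materialize-user> -d materialize \
  -c "VALIDATE CONNECTION kafka_connection;"
```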
{{< /tab >}}
{{< tab "Public cluster">}}
This section goes through the required steps to connect Materialize to an Amazon MSK cluster, including some of the more complicated bits around configuring security settings in Amazon MSK.
If you already have an Amazon MSK cluster, you can skip step 1 and directly move on to Make the cluster public and enable SASL. You can also skip steps 3 and 4 if you already have Apache Kafka installed and running, and have created a topic that you want to create a source for.
The process to connect Materialize to Amazon MSK consists of the following steps:

1. Create an Amazon MSK cluster
2. Make the cluster public and enable SASL
3. Create a client machine
4. Install Apache Kafka and create a topic
5. Connect Materialize to the cluster
If you already have an Amazon MSK cluster set up, then you can skip this step.
a. Sign in to the AWS Management Console and open the Amazon MSK console
b. Choose Create cluster
c. Enter a cluster name, and leave all other settings unchanged
d. From the table under All cluster settings, copy the values of the following settings and save them because you need them later in this tutorial: VPC, Subnets, Security groups associated with VPC
e. Choose Create cluster
Note: This creation can take about 15 minutes.
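If you prefer to script this step instead of clicking through the console, the AWS CLI exposes an equivalent call. The sketch below is illustrative only: the cluster name, Kafka version, instance type, and the subnet and security group IDs are placeholders, and it enables SASL/SCRAM at creation time rather than afterwards.

```bash
# Illustrative sketch: create an MSK cluster with SASL/SCRAM enabled.
# All names and IDs below are placeholders.
aws kafka create-cluster \
  --cluster-name "my-msk-cluster" \
  --kafka-version "2.6.2" \
  --number-of-broker-nodes 3 \
  --broker-node-group-info '{
    "InstanceType": "kafka.m5.large",
    "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
    "SecurityGroups": ["sg-0123456789abcdef0"]
  }' \
  --client-authentication '{"Sasl": {"Scram": {"Enabled": true}}}'
```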
a. Navigate to the Amazon MSK console
b. Choose the MSK cluster you just created in Step 1
c. Click on the Properties tab
d. In the Security settings section, choose Edit
e. Check the checkbox next to SASL/SCRAM authentication
f. Click Save changes
You can find more details about updating a cluster's security configurations here.
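If you created the cluster without SASL/SCRAM, you can also enable it from the CLI. This is a sketch, not a required step: the cluster ARN and current version are placeholders, and you can look up the current version with `aws kafka describe-cluster`.

```bash
# Illustrative sketch: enable SASL/SCRAM on an existing cluster.
# Both values below are placeholders.
aws kafka update-security \
  --cluster-arn "<cluster-arn>" \
  --current-version "<current-version>" \
  --client-authentication '{"Sasl": {"Scram": {"Enabled": true}}}'
```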
a. Now go to the AWS Key Management Service (AWS KMS) console
b. Click Create Key
c. Choose Symmetric and click Next
d. Give the key an alias and click Next
e. Under Administrative permissions, check the checkbox next to the AWSServiceRoleForKafka and click Next
f. Under Key usage permissions, again check the checkbox next to the AWSServiceRoleForKafka and click Next
g. Review the key policy and details
h. Click Finish to create the key
You can find more details about creating a symmetric key here.
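The same key can be created from the CLI. In the sketch below, the description and alias are placeholders; note that MSK requires a customer-managed KMS key (not the default AWS-managed key) for SASL/SCRAM secrets.

```bash
# Illustrative sketch: create a symmetric KMS key and give it an alias.
# The description and alias are placeholders.
KEY_ID=$(aws kms create-key \
  --description "Key for MSK SASL/SCRAM credentials" \
  --query KeyMetadata.KeyId --output text)

aws kms create-alias \
  --alias-name "alias/msk-scram-key" \
  --target-key-id "$KEY_ID"
```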
a. Go to the AWS Secrets Manager console
b. Click Store a new secret
c. Choose Other type of secret (e.g. API key) for the secret type
d. Under Key/value pairs click on Plaintext
e. Paste the following in the space below it and replace `<your-username>` and `<your-password>` with the username and password you want to set for the cluster:

```json
{
  "username": "<your-username>",
  "password": "<your-password>"
}
```
f. On the next page, give a Secret name that starts with `AmazonMSK_`
g. Under Encryption Key, select the symmetric key you just created in the previous sub-section from the dropdown
h. Go forward to the next steps and finish creating the secret. Once created, record the ARN (Amazon Resource Name) value for your secret
You can find more details about creating a secret using AWS Secrets Manager here.
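For reference, the equivalent Secrets Manager CLI call might look like the sketch below, reusing the `KEY_ID` from the previous sketch; the secret name and credentials are placeholders. The command's output includes the secret's ARN.

```bash
# Illustrative sketch: store the SCRAM credentials in Secrets Manager.
# The secret name must start with AmazonMSK_ and must be encrypted with the
# customer-managed key created above; username and password are placeholders.
aws secretsmanager create-secret \
  --name "AmazonMSK_materialize" \
  --kms-key-id "$KEY_ID" \
  --secret-string '{"username": "<your-username>", "password": "<your-password>"}'
```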
a. Navigate back to the Amazon MSK console and click on the cluster you created in Step 1
b. Click on the Properties tab
c. In the Security settings section, under SASL/SCRAM authentication, click on Associate secrets
d. Paste the ARN you recorded in the previous subsection and click Associate secrets
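The association can also be done from the CLI; in the sketch below, both ARNs are placeholders.

```bash
# Illustrative sketch: associate the secret with the MSK cluster.
# The cluster ARN and the secret ARN recorded above are placeholders.
aws kafka batch-associate-scram-secret \
  --cluster-arn "<cluster-arn>" \
  --secret-arn-list "<secret-arn>"
```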
a. Go to the Amazon CloudShell console
b. Create a file (e.g. `msk-config.txt`) with the following line:

```text
allow.everyone.if.no.acl.found = false
```

c. Run the following AWS CLI command, replacing `<config-file-path>` with the path to the file where you saved your configuration in the previous step:

```bash
aws kafka create-configuration --name "MakePublic" \
  --description "Set allow.everyone.if.no.acl.found = false" \
  --kafka-versions "2.6.2" \
  --server-properties fileb://<config-file-path>/msk-config.txt
```
You can find more information about making your cluster public here.
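Creating the configuration does not by itself apply it to the cluster or expose the brokers publicly. If you are scripting these steps, one possible sequence is sketched below; the ARNs, revision, and versions are placeholders, and you can look up the cluster's current version with `aws kafka describe-cluster`.

```bash
# Illustrative sketch: apply the new configuration, then turn on public access.
# All ARNs, revisions, and versions below are placeholders.
aws kafka update-cluster-configuration \
  --cluster-arn "<cluster-arn>" \
  --configuration-info '{"Arn": "<configuration-arn>", "Revision": 1}' \
  --current-version "<current-version>"

aws kafka update-connectivity \
  --cluster-arn "<cluster-arn>" \
  --connectivity-info '{"PublicAccess": {"Type": "SERVICE_PROVIDED_EIPS"}}' \
  --current-version "<new-current-version>"
```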
If you already have a client machine set up that can interact with your cluster, then you can skip this step.
If not, you can create an EC2 client machine and then add the security group of the client to the inbound rules of the cluster's security group from the VPC console. You can find more details about how to do that here.
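As a rough illustration of that inbound rule, the CLI call below allows the client machine's security group to reach the brokers on the private SASL/SCRAM port (9096); both group IDs are placeholders, and the port differs if you connect over the public endpoint (9196).

```bash
# Illustrative sketch: allow the client machine's security group to reach the
# cluster's brokers on the private SASL/SCRAM port. Group IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id "<cluster-security-group-id>" \
  --protocol tcp \
  --port 9096 \
  --source-group "<client-security-group-id>"
```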
To start using Materialize with Apache Kafka, you need to create a Materialize source over an Apache Kafka topic. If you already have Apache Kafka installed and a topic created, you can skip this step.
Otherwise, you can install Apache Kafka on your client machine from the previous step and create a topic. You can find more information about how to do that here.
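For reference, creating a topic from the client machine looks roughly like the sketch below. It assumes a `client.properties` file containing your SASL_SSL / SCRAM-SHA-512 settings; the broker URL and topic name are placeholders.

```bash
# Illustrative sketch: create a topic from the client machine.
# client.properties should contain your SASL_SSL / SCRAM-SHA-512 settings;
# the broker URL and topic name are placeholders.
bin/kafka-topics.sh --create \
  --bootstrap-server "<broker-url>" \
  --command-config client.properties \
  --topic "<topic-name>" \
  --partitions 1 \
  --replication-factor 3
```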
As `allow.everyone.if.no.acl.found` is set to `false`, you must create ACLs for the cluster and topics configured in the previous step to set appropriate access permissions. For more information, see the Amazon MSK documentation.
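A minimal set of ACLs for the Materialize user might look like the following sketch; the username, broker URL, and topic name are placeholders, and the exact permissions you need may differ.

```bash
# Illustrative sketch: grant the Materialize user read access to the topic.
# Username, broker URL, and topic name are placeholders.
bin/kafka-acls.sh \
  --bootstrap-server "<broker-url>" \
  --command-config client.properties \
  --add \
  --allow-principal "User:<your-username>" \
  --operation Read --operation Describe \
  --topic "<topic-name>"
```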
a. Open the Amazon MSK console and select your cluster
b. Click on View client information
c. Copy the URL listed under Private endpoint for SASL/SCRAM. This will be your `<broker-url>` going forward.
d. Connect to Materialize using the SQL Shell, or your preferred SQL client.
e. Create a connection using the commands below. The `<broker-url>` is what you copied in step c of this subsection, and `<your-username>` and `<your-password>` are from Store a new secret under Step 2. You will use the `<topic-name>` of the topic you created in Step 4 when you create a source over this connection.
```sql
CREATE SECRET msk_password AS '<your-password>';

CREATE CONNECTION kafka_connection TO KAFKA (
    BROKER '<broker-url>',
    SASL MECHANISMS = 'SCRAM-SHA-512',
    SASL USERNAME = '<your-username>',
    SASL PASSWORD = SECRET msk_password
);
```
f. If the commands execute without an error and the last one outputs CREATE CONNECTION, it means that you have successfully connected Materialize to your cluster.
Note: The example above created a connection to Amazon MSK using SASL authentication, with the credentials securely stored as secrets in Materialize's secret management system. A connection can then be reused to create sources, which is how Materialize ingests data from external systems. Materialize supports various input formats: for example, you can ingest messages formatted as JSON, Avro, Protobuf, or plain text. You can find more details about the supported formats and possible configurations here.
{{< /tab >}} {{< /tabs >}}
The Kafka connection created in the previous section can then be reused across multiple `CREATE SOURCE` statements. By default, the source will be created in the active cluster; to use a different cluster, use the `IN CLUSTER` clause.
```sql
CREATE SOURCE json_source
  FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
  FORMAT JSON;
```
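If you want to place the source in a specific cluster and quickly confirm that data is flowing, a rough sketch using `psql` is shown below; the host, user, and the `ingest_cluster` name are placeholders, and the statement is the `IN CLUSTER` variant of the one above.

```bash
# A rough sketch, not a required step. Host, user, and the `ingest_cluster`
# name are placeholders; IN CLUSTER pins the source to a specific cluster.
psql -h <materialize-host> -p 6875 -U <materialize-user> -d materialize <<'SQL'
CREATE SOURCE json_source
  IN CLUSTER ingest_cluster
  FROM KAFKA CONNECTION kafka_connection (TOPIC 'test_topic')
  FORMAT JSON;

-- Peek at a few records to confirm data is flowing.
SELECT * FROM json_source LIMIT 5;
SQL
```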