# Materialize Kubernetes Operator Helm Chart
This Helm chart deploys the Materialize operator on a Kubernetes cluster. The operator manages Materialize environments within your Kubernetes infrastructure.
## Storage Requirements

Materialize requires fast, locally-attached NVMe storage for optimal performance. Network-attached storage (like EBS volumes) is not supported.

We recommend using OpenEBS with LVM Local PV for managing local volumes. While other storage solutions may work, OpenEBS is the solution we have tested and recommend.
```bash
# Install the OpenEBS operator
helm repo add openebs https://openebs.github.io/openebs
helm repo update

# Install only the Local PV storage engines
helm install openebs --namespace openebs openebs/openebs \
  --set engines.replicated.mayastor.enabled=false \
  --create-namespace
```
Verify the installation:
```bash
kubectl get pods -n openebs -l role=openebs-lvm
```
LVM setup varies by environment. Below is our tested and recommended configuration.

**Tested configurations:** AWS instance types with local NVMe instance storage, such as the `r6gd` and `r7gd` families recommended later in this document.

**Setup process:** create an LVM volume group named `instance-store-vg` on the local NVMe device(s); this is the volume group the storage class below refers to. A minimal sketch follows the note.

Note: While LVM setup may work on other instance types with local storage (like `i3.xlarge`, `i4i.xlarge`, `r5d.xlarge`), we have not extensively tested these configurations.
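As a hedged illustration of the setup process (device names vary by instance type; `/dev/nvme1n1` is hypothetical), the volume group can be created with standard LVM tooling:

```bash
# Create an LVM physical volume on the local NVMe device,
# then a volume group named instance-store-vg on top of it.
# The name must match storageClass.parameters.volgroup below.
pvcreate /dev/nvme1n1
vgcreate instance-store-vg /dev/nvme1n1
```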
Once LVM is configured, set up the storage class (for example, in `misc/helm-charts/operator/values.yaml`):

```yaml
storage:
  storageClass:
    create: true
    name: "openebs-lvm-instance-store-ext4"
    provisioner: "local.csi.openebs.io"
    parameters:
      storage: "lvm"
      fsType: "ext4"
      volgroup: "instance-store-vg"
```
While OpenEBS is our recommended solution, you can use any storage provisioner that meets your performance requirements by overriding the `provisioner` and `parameters` values. For example, to use a different storage provider:

```yaml
storage:
  storageClass:
    create: true
    name: "your-storage-class"
    provisioner: "your.storage.provisioner"
    parameters:
      # Parameters specific to your chosen storage provisioner
```
## Installing the Chart

To install the chart with the release name `my-materialize-operator`:

```bash
helm install my-materialize-operator misc/helm-charts/operator --namespace materialize --create-namespace
```
This command deploys the Materialize operator on the Kubernetes cluster with default configuration. The Parameters section lists the parameters that can be configured during installation.
## Uninstalling the Chart

To uninstall/delete the `my-materialize-operator` deployment:

```bash
helm delete my-materialize-operator
```
This command removes all the Kubernetes components associated with the chart and deletes the release.
## Parameters

The following table lists the configurable parameters of the Materialize operator chart and their default values.
| Parameter | Description | Default |
|---|---|---|
| `balancerd.affinity` | Affinity to use for balancerd pods spawned by the operator | `nil` |
| `balancerd.enabled` | Flag to indicate whether to create balancerd pods for the environments | `true` |
| `balancerd.nodeSelector` | Node selector to use for balancerd pods spawned by the operator | `nil` |
| `balancerd.tolerations` | Tolerations to use for balancerd pods spawned by the operator | `nil` |
| `clusterd.affinity` | Affinity to use for clusterd pods spawned by the operator | `nil` |
| `clusterd.nodeSelector` | Node selector to use for clusterd pods spawned by the operator | `nil` |
| `clusterd.tolerations` | Tolerations to use for clusterd pods spawned by the operator | `nil` |
| `console.affinity` | Affinity to use for console pods spawned by the operator | `nil` |
| `console.enabled` | Flag to indicate whether to create console pods for the environments | `true` |
| `console.imageTagMapOverride` | Override the mapping of environmentd versions to console versions | `{}` |
| `console.nodeSelector` | Node selector to use for console pods spawned by the operator | `nil` |
| `console.tolerations` | Tolerations to use for console pods spawned by the operator | `nil` |
| `environmentd.affinity` | Affinity to use for environmentd pods spawned by the operator | `nil` |
| `environmentd.nodeSelector` | Node selector to use for environmentd pods spawned by the operator | `nil` |
| `environmentd.tolerations` | Tolerations to use for environmentd pods spawned by the operator | `nil` |
| `networkPolicies.egress` | Egress from Materialize pods to sources and sinks | `{"cidrs":["0.0.0.0/0"],"enabled":false}` |
| `networkPolicies.enabled` | Whether to enable network policies for securing communication between pods | `false` |
| `networkPolicies.ingress` | Whether to enable ingress to the SQL and HTTP interfaces on environmentd or balancerd | `{"cidrs":["0.0.0.0/0"],"enabled":false}` |
| `networkPolicies.internal` | Whether to enable internal communication between Materialize pods | `{"enabled":false}` |
| `observability.enabled` | Whether to enable observability features | `true` |
| `observability.podMetrics.enabled` | Whether to enable the pod metrics scraper which populates the Environment Overview Monitoring tab in the web console (requires metrics-server to be installed) | `false` |
| `observability.prometheus.scrapeAnnotations.enabled` | Whether to annotate pods with common keys used for Prometheus scraping | `true` |
| `operator.additionalMaterializeCRDColumns` | Additional columns to display when printing the Materialize CRD in table format | `nil` |
| `operator.affinity` | Affinity to use for the operator pod | `nil` |
| `operator.args.enableInternalStatementLogging` | | `true` |
| `operator.args.startupLogFilter` | Log filtering settings for startup logs | `"INFO,mz_orchestratord=TRACE"` |
| `operator.cloudProvider.providers.aws.accountID` | When using AWS, accountID is required | `""` |
| `operator.cloudProvider.providers.aws.enabled` | | `false` |
| `operator.cloudProvider.providers.aws.iam.roles.connection` | ARN for the CREATE CONNECTION feature | `""` |
| `operator.cloudProvider.providers.aws.iam.roles.environment` | ARN of the IAM role for environmentd | `""` |
| `operator.cloudProvider.providers.gcp` | GCP configuration (placeholder for future use) | `{"enabled":false}` |
| `operator.cloudProvider.region` | Common cloud provider settings | `"kind"` |
| `operator.cloudProvider.type` | Specifies the cloud provider. Valid values are 'aws', 'gcp', 'azure', 'generic', or 'local' | `"local"` |
| `operator.clusters.defaultReplicationFactor.analytics` | | `0` |
| `operator.clusters.defaultReplicationFactor.probe` | | `0` |
| `operator.clusters.defaultReplicationFactor.support` | | `0` |
| `operator.clusters.defaultReplicationFactor.system` | | `0` |
| `operator.clusters.defaultSizes.analytics` | | `"25cc"` |
| `operator.clusters.defaultSizes.catalogServer` | | `"25cc"` |
| `operator.clusters.defaultSizes.default` | | `"25cc"` |
| `operator.clusters.defaultSizes.probe` | | `"mz_probe"` |
| `operator.clusters.defaultSizes.support` | | `"25cc"` |
| `operator.clusters.defaultSizes.system` | | `"25cc"` |
| `operator.image.pullPolicy` | Policy for pulling the image: "IfNotPresent" avoids unnecessary re-pulling of images | `"IfNotPresent"` |
| `operator.image.repository` | The Docker repository for the operator image | `"materialize/orchestratord"` |
| `operator.image.tag` | The tag/version of the operator image to be used | `"v0.152.0"` |
| `operator.nodeSelector` | Node selector to use for the operator pod | `nil` |
| `operator.resources.limits` | Resource limits for the operator's CPU and memory | `{"memory":"512Mi"}` |
| `operator.resources.requests` | Resources requested by the operator for CPU and memory | `{"cpu":"100m","memory":"512Mi"}` |
| `operator.secretsController` | Which secrets controller to use for storing secrets. Valid values are 'kubernetes' and 'aws-secrets-manager'. Setting 'aws-secrets-manager' requires a configured AWS cloud provider and an IAM role for the environment with Secrets Manager permissions | `"kubernetes"` |
| `operator.tolerations` | Tolerations to use for the operator pod | `nil` |
| `rbac.create` | Whether to create the necessary RBAC roles and bindings | `true` |
| `schedulerName` | Optionally use a non-default Kubernetes scheduler | `nil` |
| `serviceAccount.create` | Whether to create a new service account for the operator | `true` |
| `serviceAccount.name` | The name of the service account to be created | `"orchestratord"` |
| `storage.storageClass.allowVolumeExpansion` | | `false` |
| `storage.storageClass.create` | Set to false to use an existing StorageClass instead. Refer to the Kubernetes StorageClass documentation | `false` |
| `storage.storageClass.name` | Name of the StorageClass to create/use, e.g. "openebs-lvm-instance-store-ext4" | `""` |
| `storage.storageClass.parameters` | Parameters for the CSI driver | `{"fsType":"ext4","storage":"lvm","volgroup":"instance-store-vg"}` |
| `storage.storageClass.provisioner` | CSI driver to use, e.g. "local.csi.openebs.io" | `""` |
| `storage.storageClass.reclaimPolicy` | | `"Delete"` |
| `storage.storageClass.volumeBindingMode` | | `"WaitForFirstConsumer"` |
| `telemetry.enabled` | | `true` |
| `telemetry.segmentApiKey` | | `"hMWi3sZ17KFMjn2sPWo9UJGpOQqiba4A"` |
| `telemetry.segmentClientSide` | | `true` |
| `tls.defaultCertificateSpecs` | | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
```bash
helm install my-materialize-operator \
  --set operator.image.tag=v0.153.0-dev.0 \
  materialize/materialize-operator
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:
```bash
helm install my-materialize-operator -f values.yaml materialize/materialize-operator
```
## Deploying Materialize Environments

To deploy a Materialize environment, create a `Materialize` custom resource with the desired configuration:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: materialize-environment
---
apiVersion: v1
kind: Secret
metadata:
  name: materialize-backend
  namespace: materialize-environment
stringData:
  metadata_backend_url: "postgres://materialize_user:materialize_pass@postgres.materialize.svc.cluster.local:5432/materialize_db?sslmode=disable"
  persist_backend_url: "s3://minio:minio123@bucket/12345678-1234-1234-1234-123456789012?endpoint=http%3A%2F%2Fminio.materialize.svc.cluster.local%3A9000&region=minio"
---
apiVersion: materialize.cloud/v1alpha1
kind: Materialize
metadata:
  name: 12345678-1234-1234-1234-123456789012
  namespace: materialize-environment
spec:
  environmentdImageRef: materialize/environmentd:v0.153.0-dev.0
  backendSecretName: materialize-backend
  environmentdResourceRequirements:
    limits:
      memory: 16Gi
    requests:
      cpu: "2"
      memory: 16Gi
  balancerdResourceRequirements:
    limits:
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 256Mi
```
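Assuming the manifests above are saved to a file (the name `materialize-environment.yaml` is just an example), apply them with:

```bash
kubectl apply -f materialize-environment.yaml
```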
The chart creates a `ClusterRole` and `ClusterRoleBinding` by default. To use an existing `ClusterRole`, set `rbac.create=false` and specify the name of the existing `ClusterRole` using the `rbac.clusterRole` parameter.
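For example, a minimal values sketch (the `ClusterRole` name shown is hypothetical):

```yaml
rbac:
  create: false
  # Name of an existing ClusterRole to bind instead of creating one
  clusterRole: my-existing-cluster-role
```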
To enable observability features, set `observability.enabled=true`. This will create the necessary resources for monitoring the operator. If you want to use Prometheus, also set `observability.prometheus.enabled=true`.
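A values sketch using the observability keys from the parameters table above:

```yaml
observability:
  enabled: true
  podMetrics:
    enabled: true  # requires metrics-server to be installed
  prometheus:
    scrapeAnnotations:
      enabled: true  # annotate pods with common Prometheus scrape keys
```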
Network policies can be enabled by setting `networkPolicies.enabled=true`. By default, the chart uses native Kubernetes network policies. To use Cilium network policies instead, set `networkPolicies.useNativeKubernetesPolicy=false`.
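A values sketch using the network policy keys from the parameters table above (the `0.0.0.0/0` CIDRs mirror the chart defaults; narrow them for production):

```yaml
networkPolicies:
  enabled: true
  internal:
    enabled: true  # traffic between Materialize pods
  ingress:
    enabled: true  # SQL/HTTP access to environmentd or balancerd
    cidrs: ["0.0.0.0/0"]
  egress:
    enabled: true  # traffic from Materialize pods to sources and sinks
    cidrs: ["0.0.0.0/0"]
```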
## Troubleshooting

If you encounter issues with the Materialize operator, check the operator logs:

```bash
kubectl logs -l app.kubernetes.io/name=materialize-operator -n materialize
```
For more detailed information on using and troubleshooting the Materialize operator, refer to the Materialize documentation.
## Upgrading

Once you have the Materialize operator installed and managing your Materialize instances, you can upgrade both components. While the operator and instances can be upgraded independently, you should ensure version compatibility between them. The operator can typically manage instances within a certain version range; upgrading the operator too far ahead of your instances may cause compatibility issues.

We recommend upgrading the operator first, then upgrading your Materialize instances to the chart's matching `appVersion` (see below).
To upgrade the Materialize operator to a new version:

```bash
helm upgrade my-materialize-operator materialize/misc/helm-charts/operator
```
If you have custom values, make sure to include your values file:

```bash
helm upgrade my-materialize-operator materialize/misc/helm-charts/operator -f my-values.yaml
```
To upgrade your Materialize instances, you'll need to update the Materialize custom resource and trigger a rollout.
By default, the operator performs rolling upgrades (`inPlaceRollout: false`), which minimize downtime but require additional Kubernetes cluster resources during the transition. However, keep in mind that rolling upgrades typically take longer to complete due to the sequential rollout process. For environments where downtime is acceptable, you can opt for in-place upgrades (`inPlaceRollout: true`).
The compatible version for your Materialize instances is specified in the Helm chart's `appVersion`. For the installed chart version, you can run:

```bash
helm list -n materialize
```

Or check the `Chart.yaml` file in the `misc/helm-charts/operator` directory:

```yaml
apiVersion: v2
name: materialize-operator
# ...
version: v25.3.0-beta-1
appVersion: v0.147.0 # Use this version for your Materialize instances
```
Use the `appVersion` (`v0.147.0` in this case) when updating your Materialize instances to ensure compatibility.
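You can also read the chart metadata directly with `helm show chart` (this assumes the chart is available under the `materialize` repo alias used above; adjust the reference if you are installing from the local `misc/helm-charts/operator` path):

```bash
# Print the chart's metadata and filter for the appVersion field
helm show chart materialize/materialize-operator | grep appVersion
```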
### Using `kubectl patch`

For standard upgrades such as image updates:
```bash
# For version updates, first update the image reference
kubectl patch materialize <instance-name> \
  -n <materialize-instance-namespace> \
  --type='merge' \
  -p "{\"spec\": {\"environmentdImageRef\": \"materialize/environmentd:v0.147.0\"}}"

# Then trigger the rollout with a new UUID
kubectl patch materialize <instance-name> \
  -n <materialize-instance-namespace> \
  --type='merge' \
  -p "{\"spec\": {\"requestRollout\": \"$(uuidgen)\"}}"
```
You can combine both operations in a single command if preferred:
```bash
kubectl patch materialize 12345678-1234-1234-1234-123456789012 \
  -n materialize-environment \
  --type='merge' \
  -p "{\"spec\": {\"environmentdImageRef\": \"materialize/environmentd:v0.147.0\", \"requestRollout\": \"$(uuidgen)\"}}"
```
Alternatively, you can update your `Materialize` custom resource directly:
```yaml
apiVersion: materialize.cloud/v1alpha1
kind: Materialize
metadata:
  name: 12345678-1234-1234-1234-123456789012
  namespace: materialize-environment
spec:
  environmentdImageRef: materialize/environmentd:v0.147.0 # Update version as needed
  requestRollout: 22222222-2222-2222-2222-222222222222 # Generate a new UUID
  forceRollout: 33333333-3333-3333-3333-333333333333 # Optional: for forced rollouts
  inPlaceRollout: false # When false, performs a rolling upgrade rather than in-place
  backendSecretName: materialize-backend
```
Apply the updated definition:
```bash
kubectl apply -f materialize.yaml
```
If you need to force a rollout even when there are no changes to the instance:
```bash
kubectl patch materialize <instance-name> \
  -n materialize-environment \
  --type='merge' \
  -p "{\"spec\": {\"requestRollout\": \"$(uuidgen)\", \"forceRollout\": \"$(uuidgen)\"}}"
```
The behavior of a forced rollout follows your `inPlaceRollout` setting:

- `inPlaceRollout: false` (default): Creates new instances before terminating the old ones, temporarily requiring twice the resources during the transition
- `inPlaceRollout: true`: Directly replaces the instances, causing downtime but without requiring additional resources

After initiating the rollout, you can monitor the status:
```bash
# Watch the status of your Materialize environment
kubectl get materialize -n materialize-environment -w

# Check the logs of the operator
kubectl logs -l app.kubernetes.io/name=materialize-operator -n materialize
```
Keep in mind:

- `requestRollout` triggers a rollout only if there are actual changes to the instance (like image updates)
- `forceRollout` triggers a rollout regardless of whether there are changes, which can be useful for debugging or when you need to force a rollout for other reasons
- `inPlaceRollout`:
  - `false` (default): Performs a rolling upgrade by spawning new instances before terminating old ones. While this minimizes downtime, there may still be a brief interruption during the transition.
  - `true`: Directly replaces existing instances, which will cause downtime.

Beyond the Helm configuration, there are other important knobs to tune to get the best out of Materialize within a Kubernetes environment.
Materialize has been vetted to work on instances with the following properties:

- ARM-based (Graviton) CPUs
- A 1:8 ratio of vCPU to GiB of memory
- Locally-attached NVMe storage, when running with local disk

When operating in AWS, we recommend using the `r7gd` and `r6gd` families of instances (and `r8gd` once available) when running with local disk, and the `r8g`, `r7g`, and `r6g` families when running without local disk.
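If your cluster mixes node types, you can steer Materialize pods onto the recommended instances with the chart's node selector parameters. A hedged sketch using the well-known instance-type node label (the specific instance size is illustrative):

```yaml
# Pin environmentd and clusterd pods to a recommended instance family
environmentd:
  nodeSelector:
    node.kubernetes.io/instance-type: r7gd.2xlarge
clusterd:
  nodeSelector:
    node.kubernetes.io/instance-type: r7gd.2xlarge
```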
It is strongly recommended to enable the Kubernetes `static` CPU management policy. This ensures that each worker thread of Materialize is given exclusive access to a vCPU. Our benchmarks have shown this to substantially improve the performance of compute-bound workloads.
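For reference, a minimal sketch of the relevant kubelet setting (how you apply a `KubeletConfiguration` depends on your node provisioning tooling; note that the static policy only grants exclusive CPUs to containers with integer CPU requests equal to their limits):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pin exclusive CPUs to Guaranteed-QoS containers with integer CPU requests
cpuManagerPolicy: static
```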
Autogenerated from chart metadata using helm-docs v1.14.2