title: "Materialize v0.37" date: 2022-12-21 released: true
Add support for connecting to Confluent Schema Registry using an SSH tunnel connection to an SSH bastion server.
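A minimal sketch of what this looks like in DDL, assuming the `CREATE CONNECTION` syntax for SSH tunnels (the connection names, hosts, and ports below are hypothetical):

```sql
-- Hypothetical names and endpoints, for illustration only.
-- A connection through an SSH bastion server.
CREATE CONNECTION ssh_bastion TO SSH TUNNEL (
    HOST 'bastion.example.com',
    PORT 22,
    USER 'materialize'
);

-- A Confluent Schema Registry connection that tunnels through the bastion.
CREATE CONNECTION csr_via_ssh TO CONFLUENT SCHEMA REGISTRY (
    URL 'http://schema-registry.internal:8081',
    SSH TUNNEL ssh_bastion
);
```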
Add `mz_internal.mz_cluster_replica_utilization` to the system catalog. This view allows you to monitor the resource utilization for all extant cluster replicas as a percentage of the total resource allocation:
```sql
SELECT * FROM mz_internal.mz_cluster_replica_utilization;

 replica_id | process_id |     cpu_percent     |   memory_percent
------------+------------+---------------------+--------------------
          1 |          0 |           9.6629961 | 0.6772994995117188
          2 |          0 | 0.10735560000000001 | 0.3876686096191406
          3 |          0 |          46.8730398 |    0.7110595703125
```
It's important to note that these tables are part of an unstable interface of Materialize (`mz_internal`), which means that their values may change at any time, and you should not rely on them for tasks like capacity planning for the time being.
Add `mz_internal.mz_storage_host_metrics` and `mz_internal.mz_storage_host_sizes` to the system catalog. These objects respectively expose the last known CPU and RAM utilization statistics for all processes of all extant storage hosts, and a mapping of logical sizes (e.g. `xlarge`) to the number of processes, as well as the CPU and memory allocations for each process.

Note that the concept of a storage host is not user-facing, and is intentionally undocumented. It refers to the physical resource allocation on which Materialize can schedule multiple sources and sinks behind the scenes.
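Both objects can be inspected directly; the queries below are a sketch (the exact column sets are not described in this note, and are part of the same unstable `mz_internal` interface):

```sql
-- Inspect the last known utilization for all storage host processes.
SELECT * FROM mz_internal.mz_storage_host_metrics;

-- Inspect the per-size process counts and resource allocations.
SELECT * FROM mz_internal.mz_storage_host_sizes;
```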
Rename the following objects in the system catalog:

| Schema        | Old name           | New name             |
| ------------- | ------------------ | -------------------- |
| `mz_internal` | `mz_source_status` | `mz_source_statuses` |
| `mz_internal` | `mz_sink_status`   | `mz_sink_statuses`   |
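Queries that referenced the old names must use the new names going forward; for example:

```sql
-- Monitor source health using the new object name.
SELECT * FROM mz_internal.mz_source_statuses;

-- Monitor sink health using the new object name.
SELECT * FROM mz_internal.mz_sink_statuses;
```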
Support `JSON` as an output format of `EXPLAIN TIMESTAMP`.
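A minimal sketch of the syntax, assuming the `AS JSON` output modifier (the table name `t` is hypothetical):

```sql
-- Emit the timestamp determination as JSON rather than text.
EXPLAIN TIMESTAMP AS JSON FOR SELECT * FROM t;
```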
Fix a bug where the `to_timestamp` function would truncate the fractional part of negative timestamps (i.e. prior to the Unix epoch: January 1st, 1970 at 00:00:00 UTC) {{% gh 16610 %}}, and return an error instead of `NULL` when the timestamp is out of range. The new behavior matches PostgreSQL.
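As an illustration of the fractional-part fix (the result shown is PostgreSQL's rendering; Materialize's display formatting may differ slightly):

```sql
-- Half a second before the Unix epoch; the fractional part is now preserved.
SELECT to_timestamp(-0.5);
-- 1969-12-31 23:59:59.5+00
```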
Private preview. Add a WebSocket API endpoint that supports interactive SQL queries.
Change the JSON serialization for rows emitted by the HTTP API endpoint to exactly match the JSON serialization used by `FORMAT JSON` Kafka sinks. Previously, the HTTP SQL endpoint serialized datums using slightly different rules.