Following are the coding conventions that Materialize developers should strive to adhere to.
The PostgreSQL manual adheres to a consistent style for its SQL statements, and we strive to do the same.
The basic idea is to capitalize keywords, but lowercase identifiers. What counts as a keyword and what counts as an identifier is often surprising, especially when the identifier refers to a built-in function or type.
For example, here is a properly formatted `CREATE TABLE` statement:
```sql
CREATE TABLE distributors (
    id integer PRIMARY KEY,
    name varchar(40)
)
```
`CREATE` and `TABLE` are obviously both keywords, and are capitalized as such. Perhaps surprisingly, `integer` is a type name (an identifier) and therefore is not capitalized. The same is true for `varchar`. But `PRIMARY` and `KEY` are keywords, and are again capitalized.
The formatting of SQL statements is generally straightforward. For example:
```sql
SELECT
    sum(l_extendedprice * l_discount) AS revenue,
    EXTRACT (DAY FROM l_shipdate)
FROM
    lineitem
WHERE
    l_quantity < 24
    AND l_shipdate >= DATE '1994-01-01'
    AND l_shipdate < DATE '1994-01-01' + INTERVAL '1' YEAR
    AND l_discount BETWEEN 0.06 - 0.01 AND 0.07
```
A few tokens require special attention. Aggregate functions, like `sum`, are not capitalized. In general, functions are never capitalized. `EXTRACT`, however, is capitalized, because `EXTRACT` is a function-like operator that has special syntax: notice how its argument is `DAY FROM l_shipdate`, where `DAY` and `FROM` are keywords. If it were a normal function, it would be formatted as `extract('day', l_shipdate)`.
Type constructors like `DATE '1994-01-01'` and `INTERVAL '1' YEAR` do capitalize the type name, because in this case the type name is a keyword. These date-time related keywords are a strange corner of SQL's syntax. There is no `INTEGER '7'` syntax, for example.
The keywords `TRUE`, `FALSE`, and `NULL` are always capitalized, even though they are often rendered in output as lowercase.
Function calls, including type constructors that take arguments, should not have
a space between the name of the function and the open parenthesis, but
keywords followed by an open parenthesis should have a space in between.
Symbolic operators, like `=` and `+`, should always be surrounded by spaces, and commas should always be followed by a space. Following are several examples of queries that obey the spacing rules:
```sql
CREATE TABLE t (a varchar(40));
CREATE CONNECTION kafka_conn TO KAFKA (BROKER 'localhost:9092');
CREATE SOURCE test FROM KAFKA CONNECTION kafka_conn (TOPIC 'top') WITH (config_param = 'yes') FORMAT BYTES;
SELECT coalesce(1, NULL, 2);
SELECT CAST (1 AS text); -- note the space after CAST
SELECT (1 + 2) - (7 * 4);
```
and several queries that don't:
```sql
CREATE TABLE t (a varchar (40));
CREATE SOURCE test FROM KAFKA CONNECTION kafka_conn (TOPIC 'top') WITH (tail=true);
SELECT coalesce (1, NULL,2);
SELECT CAST(1 AS text);
SELECT 1+2;
```
There are no rules about how to break a SQL query across multiple lines or how to indent the resulting pieces, so just do whatever you think looks best.
These rules are admittedly somewhat obtuse, and even the PostgreSQL manual is not entirely consistent on the finer points, so don't worry about being exact. But please at least capitalize the obvious keywords, like `SELECT`, `INSERT`, `FROM`, `WHERE`, `AS`, `AND`, etc.
Portions of this section are copied from the PostgreSQL documentation and subject to the PostgreSQL license, a copy of which is available in the LICENSE file at the root of this repository.
For error message style, we strive to adhere to the guidelines in the PostgreSQL manual, and use a format that consists of a required primary message followed by optional detail and hint messages.

* Primary message: short, factual, and avoids reference to implementation details. Lowercase the first letter and do not end with any punctuation.
* Detail message: factual, use if needed to keep the primary message short or to mention implementation details. Use complete sentences, each of which starts with a capitalized letter and ends with a period. Separate sentences with two spaces.
* Hint message: suggestions about what to do to fix the problem. This should be actionable, and can provide a shortlink to direct the user to more information. Use complete sentences, each of which starts with a capitalized letter and ends with a period. Separate sentences with two spaces.
For example:

```
Primary: could not parse column "bar.foo"
Detail: Column "bar.foo" contains an unsupported type.
Hint: Try using CAST in the CREATE SOURCE statement or excluding this column from replication.
```
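When surfaced over the PostgreSQL wire protocol, these three pieces render in a `psql` session roughly as follows (the prefixes come from the protocol's severity and field names):

```
ERROR:  could not parse column "bar.foo"
DETAIL:  Column "bar.foo" contains an unsupported type.
HINT:  Try using CAST in the CREATE SOURCE statement or excluding this column from replication.
```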
To use a shortlink in a hint, add the shortlink to this file in the PR that adds or modifies the hint message. Be mindful that we are committed to the upkeep of the referenced link.
We adhere to standards for our system catalog relations (tables, views), which includes both the stable `mz_catalog` relations and the unstable `mz_internal` relations. Some key principles to highlight are:

Modeling standards:

* Avoid duplicating information across relations. E.g., the `mz_kafka_sources` table does not include the name of the source, since that information is available in the `mz_sources` table.
* Consider adding a `SHOW` command or a view to spare users from typing out the joins for common queries.
* Use the `interval` type to represent durations. Do not use integers where the unit is indicated as a suffix on the column name. E.g., use `startup_time interval`, not `startup_time_ms integer`. The only exception is durations with nanosecond precision. Since the `interval` type only has microsecond precision, it is acceptable to use `<$name>_ns integer` when necessary (e.g., `delay_ns`).

Naming standards:

* Use full words rather than abbreviations. Say `position` instead of `pos`. Say `return` instead of `ret`. Say `definition` instead of `def`.
  * Exception: extremely common abbreviations, like `id`, `oid`, `ip`, and `url`.
* Use `kebab-case` for enum values. E.g., the `type` of a Confluent Schema Registry connection is `confluent-schema-registry` and the `type` of a materialized view is `materialized-view`. Only use hyphens to separate multiple words. Don't introduce hyphens for CamelCased proper nouns. For example, the "AWS PrivateLink" connection is represented as `aws-privatelink`.
* Name timestamp columns with an `_at` suffix, e.g., `occurred_at`.
* Name boolean columns without an `is_` prefix. E.g., say `indexed`, not `is_indexed`.
* Pluralize the names of relations that contain one row per object, e.g., `mz_sources` and `mz_connections`.
  * Exception: suffixes that describe the contents of a relation, like `_usage`, `_history`, and `_utilization`, are not pluralized.
  * For example: `mz_sources`, `mz_source_statuses`, and `mz_source_status_history`.
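As an illustration, here is a hypothetical relation (the name and columns are invented for this sketch) that follows the standards above:

```sql
-- Hypothetical example only: pluralized relation name, full words, an `_at`
-- timestamp, a boolean without an `is_` prefix, and an `interval` duration
-- instead of an integer with a unit suffix.
CREATE TABLE mz_widget_replicas (
    id text,
    created_at timestamptz,
    indexed boolean,
    startup_time interval
);
```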
We adhere to standards for our SQL functions.
Naming standards:

* Materialize-specific functions are prefixed with `mz_`. For example, `mz_now()`.
* A function that is not Materialize-specific is not prefixed with `mz_`. However, it should not collide with an existing PostgreSQL function.

We use the `tracing` crate's log macros. Observe the following guidelines:
Use sentence fragment style (no initial capitalization and no terminating period). Say "beginning operation", not "Beginning operation."
Prefer structured fields to string formatting when it may be valuable to search by the value of said field, e.g.:
```rust
fn handle_persist_thingy(shard_id: ShardId) {
    // Preferred.
    info!(shard_id = ?shard_id, "handling persist thingy");

    // Acceptable.
    info!("handling persist thingy for shard {}", shard_id);
}
```
(We're still working on deploying the log aggregator that will allow for searching by structured fields.)
Messages at each level must meet the indicated standard:
* `ERROR`: Reports an error that certainly indicates data corruption, data loss, unavailability, or an invariant violation that results in undefined behavior.

  Use judiciously. These errors are reported to Sentry. The goal is to eventually have every occurrence of an `error!` log message in production automatically start an incident and page the on-call engineer.

  Consider instead using `panic!` or `unreachable!`. The scenarios in which a correctness invariant is violated but it is safe to proceed without crashing are quite rare.

  Examples:

  * A source ingested the deletion of a record that does not exist, causing a "negative multiplicity."
* `WARN`: Reports a condition that meets the standard of `ERROR`, but where there is some uncertainty about the error condition.

  These warnings are not reported to Sentry and do not require urgent attention on our end, but they should be the first events looked at when a problem has been identified. Any changes to the occurrence of `WARN`-level log events that are noticed during release qualification should be investigated before proceeding with the release.

  Examples:

  * A Kafka source cannot connect to any of its brokers.
* `INFO`: Reports normal system status changes. Messages at this level are likely to be of interest to the support or engineering team during an investigation of a problem, but do not otherwise require attention.

  Examples:

  * A view was dropped.
* `DEBUG`: Like `INFO`, but for events where the likelihood of usefulness is too low for the event to be emitted by default.

  The idea is that when you are debugging a problem in a particular crate or module, you can temporarily enable `DEBUG`-level logs for that specific crate or module in the affected environment (see the sketch after this list).

  Examples:

  * An outgoing HTTP request used the proxy specified in the `http_proxy` environment variable.
  * An S3 object downloaded by an S3 source had an invalid `Content-Encoding` header that was ignored, but the object was nonetheless decoded successfully.
* `TRACE`: Like `DEBUG`, but for events that meet an even lower standard of relevance or importance.

  Enabling `TRACE` logs can generate multiple gigabytes of log messages per hour. Use extreme caution when enabling this log level. You should generally only enable this log level when developing locally, and only for a single module at a time.
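As a sketch of the per-module filtering mentioned under `DEBUG` (Materialize's own log-filter plumbing may differ; this uses the generic `tracing_subscriber` crate's `EnvFilter`, and the module path is illustrative):

```rust
use tracing::debug;
use tracing_subscriber::EnvFilter;

fn main() {
    // A filter such as `RUST_LOG=info,mz_storage::source=debug` raises the
    // `mz_storage::source` module to DEBUG while the default stays at INFO.
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    debug!("emitted only when this module's level is DEBUG or lower");
}
```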
Clippy and rustfmt are enforced via CI, and they are quite opinionated on many matters of style.
Consider also familiarizing yourself with the Rust API guidelines, which summarize many lessons learned by the Rust team as they designed the standard library. Not all of these guidelines generalize well to application code, so use your judgement.[^1]
[^1]: For example, the `C-EXAMPLE` guideline suggests that all public items (modules, traits, structs, enums, functions, methods, macros, and type definitions) should have rustdoc examples. This is a great standard for libraries, but total overkill for a fast-moving startup like Materialize.
You should use the `spawn` and `spawn_blocking` functions in the `ore::task` module as opposed to the native tokio versions. There is also a `RuntimeExt` trait that adds `spawn_named` and `spawn_blocking_named` to `Arc<Runtime>` and `Handle` from `tokio`. They are named differently so as to avoid clashing with the inherent methods. These functions require a closure that produces a name for the task, to improve the use of `tokio-console`.
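For instance, a minimal sketch, assuming the name-closure-first signature described above (the task name and body are placeholders):

```rust
use mz_ore::task;

async fn start_background_work() {
    // The closure's result names the task, which `tokio-console` displays.
    let handle = task::spawn(|| "fetch_metadata", async move {
        // ...do the work...
    });
    // Wait for the task to finish, if desired.
    let _ = handle.await;
}
```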
You should prefer to panic in tests rather than returning an error. Tests that return errors do not produce useful backtraces and instead just point to the line in libtest that asserts that the test didn't return an error. Panics will produce useful backtraces that include the line where the panic occurred. This is especially useful for debugging flaky tests in CI.
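For example, a minimal sketch of the preferred style, using the `mz_ore::test` macro covered later in this guide:

```rust
#[mz_ore::test]
fn parses_number() {
    // `expect` panics with a backtrace pointing at this line, rather than at
    // libtest's assertion that the test returned `Ok`.
    let n: i64 = "42".parse().expect("should parse");
    assert_eq!(n, 42);
}
```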
Common case: Use `thiserror` to define structured errors instead of using strings as errors. Meaning, you should not use `anyhow!("this is my error")`. When your error wraps other errors or contains other errors as the underlying cause or source, you should make sure to tag them with `#[from]` or `#[source]`.
If you write your own error enums, make sure to implement the standard `Error` trait and to properly implement `source()`. This makes it so that we can get the chain of causing/underlying errors so that they can be reported correctly when needed. With `thiserror` this will happen automatically when you use the `#[from]` attribute.
The `Display` impl of your error type should not print the chain of errors but only print itself, relying on callers to print the error chain when needed. Again, this is the behaviour you will get by just using `thiserror`. There could be exceptions when your error type is wrapping another error type or is compositionally including one or multiple other errors, but those should be very rare!
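For example, a sketch of a hypothetical error type that follows these guidelines:

```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum ThingyError {
    // `#[from]` derives a `From<std::io::Error>` impl and wires up
    // `source()`; the message describes only this error, not its cause.
    #[error("failed to read thingy file")]
    Io(#[from] std::io::Error),

    // `#[source]` records the underlying cause for `source()` without
    // generating a `From` impl.
    #[error("invalid thingy count")]
    BadCount(#[source] std::num::ParseIntError),
}
```

With this shape, walking `source()` recovers the chain from `ThingyError` down to the underlying I/O or parse error, while `Display` prints only the outermost message.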
As mentioned above, the `Display` impl of an error should not print the chain of source errors. Whenever you do need to print an error with its chain of errors, say when tracing/logging or surfacing an error to users, you should make that explicit. We have the `ResultExt` and `ErrorExt` traits that provide `map_err_to_string_with_causes`/`err_to_string_with_causes` (for results) and `display_with_causes`/`to_string_with_causes` (for errors), that do this for you.
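For instance, a hedged sketch (the exact import path for `ErrorExt` within `mz_ore` is assumed here):

```rust
use mz_ore::error::ErrorExt;

fn report<E: std::error::Error>(err: &E) {
    // Explicitly render the full chain of causes when surfacing the error.
    tracing::warn!("operation failed: {}", err.display_with_causes());
}
```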
There are unfortunately no good ways of enforcing a consistent import style in CI. In particular, most of the import formatting features of rustfmt are unstable and will presumably remain so for some time.
In the absence of opinionated auto-formatting, we instead use an import style that is compatible with rust-analyzer's auto import feature. The intention is to reduce the friction experienced during normal development as much as possible. A match between the code editor's auto import style and our preferred code style also makes it less likely that incorrectly formatted imports end up in PRs.
You should follow these rules for formatting imports in Rust:

* Group imports in the following order:

  ```rust
  // `std`/`core` imports
  use std::collections::{BTreeMap, BTreeSet};
  use std::iter::repeat;

  // imports from external crates (including `mz_` crates)
  use itertools::Itertools;
  use mz_ore::collections::CollectionExt;
  use tracing::info;

  // `crate` imports
  use crate::util::{index_sql, ResultExt};
  use crate::{AdapterError, ExecuteResponse};
  ```
* Merge imports from the same module into a single `use` statement.
* Do not merge imports from different modules.
* Use absolute paths for crate-level imports, i.e. `crate::` rather than `super::` or `self::`.
  * Exception: it is fine to write `use super::*;` to make the code under test available inside a `tests` submodule.

rust-analyzer's auto import feature can be configured to adhere to the above rules by setting the following options:
* `imports.granularity.group` = `module`
* `imports.prefix` = `crate`
Use the `mz_ore::test` macro instead of `test`, since it automatically initializes logging. For tokio tests you can use `#[mz_ore::test(tokio::test)]`.
Before:

```rust
#[test]
fn test_auth_base() {
    mz_ore::test::init_logging();
    // Test body here
}
```

After:

```rust
#[mz_ore::test]
fn test_auth_base() {
    // Test body here
}
```

For tokio tests with parameters:

```rust
#[mz_ore::test(tokio::test(flavor = "multi_thread"))]
async fn state_cache_concurrency() {
    // Test body here
}
```
By default the `"info"` logging level is used. You can still set a different logging level inside your test manually if you wish:

```rust
mz_ore::test::init_logging_default("warn");
```