
Commit

Merge pull request #2982 from rockwotj/pikachu-is-evolving
rockwotj authored Nov 7, 2024
2 parents d42fb13 + f12ab28 commit 44d711a
Showing 10 changed files with 521 additions and 83 deletions.
62 changes: 55 additions & 7 deletions docs/modules/components/pages/outputs/snowflake_streaming.adoc
@@ -39,7 +39,7 @@ Common::
output:
  label: ""
  snowflake_streaming:
    account: AAAAAAA-AAAAAAA # No default (required)
    account: ORG-ACCOUNT # No default (required)
    user: "" # No default (required)
    role: ACCOUNTADMIN # No default (required)
    database: "" # No default (required)
@@ -51,6 +51,17 @@ output:
    mapping: "" # No default (optional)
    init_statement: | # No default (optional)
      CREATE TABLE IF NOT EXISTS mytable (amount NUMBER);
    schema_evolution:
      enabled: false # No default (required)
      new_column_type_mapping: |-
        root = match this.value.type() {
          this == "string" => "STRING"
          this == "bytes" => "BINARY"
          this == "number" => "DOUBLE"
          this == "bool" => "BOOLEAN"
          this == "timestamp" => "TIMESTAMP"
          _ => "VARIANT"
        }
    batching:
      count: 0
      byte_size: 0
@@ -69,7 +80,7 @@ Advanced::
output:
  label: ""
  snowflake_streaming:
    account: AAAAAAA-AAAAAAA # No default (required)
    account: ORG-ACCOUNT # No default (required)
    user: "" # No default (required)
    role: ACCOUNTADMIN # No default (required)
    database: "" # No default (required)
@@ -81,6 +92,17 @@ output:
    mapping: "" # No default (optional)
    init_statement: | # No default (optional)
      CREATE TABLE IF NOT EXISTS mytable (amount NUMBER);
    schema_evolution:
      enabled: false # No default (required)
      new_column_type_mapping: |-
        root = match this.value.type() {
          this == "string" => "STRING"
          this == "bytes" => "BINARY"
          this == "number" => "DOUBLE"
          this == "bool" => "BOOLEAN"
          this == "timestamp" => "TIMESTAMP"
          _ => "VARIANT"
        }
    build_parallelism: 1
    batching:
      count: 0
@@ -170,6 +192,8 @@ output:
    schema: "PUBLIC"
    table: "MYTABLE"
    private_key_file: "my/private/key.p8"
    schema_evolution:
      enabled: true
```
--
@@ -214,10 +238,7 @@ output:
=== `account`
Account name, which is the same as the https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#where-are-account-identifiers-used[Account Identifier^].
However, when using an https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#using-an-account-locator-as-an-identifier[Account Locator^],
the Account Identifier is formatted as `<account_locator>.<region_id>.<cloud>` and this field needs to be
populated using the `<account_locator>` part.
The Snowflake https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#using-an-account-locator-as-an-identifier[Account name^], which should be formatted as `<orgname>-<account_name>`, where `<orgname>` is the name of your Snowflake organization and `<account_name>` is the unique name of your account within your organization.
*Type*: `string`
@@ -226,7 +247,7 @@ Account name, which is the same as the https://docs.snowflake.com/en/user-guide/
```yml
# Examples
account: AAAAAAA-AAAAAAA
account: ORG-ACCOUNT
```
=== `user`
@@ -336,6 +357,33 @@ init_statement: |2
ALTER TABLE t1 ADD COLUMN a2 NUMBER;
```
=== `schema_evolution`
Options to control schema evolution within the pipeline as new columns are added to the table.
*Type*: `object`
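For example, a minimal sketch that switches schema evolution on; the connection fields are placeholders here and are shown in full in the example above:
```yml
output:
  snowflake_streaming:
    # ...connection fields (account, user, role, database, schema, table)...
    schema_evolution:
      enabled: true
```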
=== `schema_evolution.enabled`
Whether schema evolution is enabled.
*Type*: `bool`
=== `schema_evolution.new_column_type_mapping`
The mapping function from Redpanda Connect type to column type in Snowflake. Override this to customize the data type when you have specific knowledge of the data types in use. The mapping must assign `root` a string containing the Snowflake data type for the new column.
The input to this mapping is an object with the value and the name of the new column, for example: `{"value": 42.3, "name":"new_data_field"}`
*Type*: `string`
*Default*: `"root = match this.value.type() {\n this == \"string\" => \"STRING\"\n this == \"bytes\" => \"BINARY\"\n this == \"number\" => \"DOUBLE\"\n this == \"bool\" => \"BOOLEAN\"\n this == \"timestamp\" => \"TIMESTAMP\"\n _ => \"VARIANT\"\n}"`
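As a sketch of a custom mapping (the `_at` name suffix and the type choices below are illustrative assumptions, not connector defaults), the following stores numeric fields as `NUMBER` and treats columns whose names end in `_at` as timestamps:
```yml
schema_evolution:
  enabled: true
  new_column_type_mapping: |-
    # Illustrative override: route by column name first, then by value type.
    root = match {
      this.name.has_suffix("_at") => "TIMESTAMP"
      this.value.type() == "number" => "NUMBER"
      this.value.type() == "string" => "STRING"
      this.value.type() == "bool" => "BOOLEAN"
      _ => "VARIANT"
    }
```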
=== `build_parallelism`
The maximum amount of parallelism to use when building the output for Snowflake. The metric to watch to see if you need to change this is `snowflake_build_output_latency_ns`.
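As a rough sketch, if `snowflake_build_output_latency_ns` stays high on a host with spare CPU, the parallelism can be raised; the value below is illustrative, not a recommendation:
```yml
output:
  snowflake_streaming:
    # ...connection fields as above...
    build_parallelism: 4
```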