Releases: redpanda-data/connect
v4.8.0
For installation instructions check out the getting started guide.
Added
- All `sql` components now support Oracle DB.
Fixed
- All SQL components now accept an empty or unspecified `args_mapping` as an alias for no arguments.
- Field `unsafe_dynamic_query` added to the `sql_raw` output (see the sketch after this list).
- Fixed a regression in 4.7.0 where HTTP client components were sending duplicate request headers.
The full change log can be found here.
v4.7.0
For installation instructions check out the getting started guide.
Added
- Field `avro_raw_json` added to the `schema_registry_decode` processor (a config sketch follows this list).
- Field `priority` added to the `gcp_bigquery_select` input.
- The `hash` bloblang method now supports `crc32`.
- New `tracing_span` bloblang function.
- All `sql` components now support SQLite.
- New `beanstalkd` input and output.
- Field `json_marshal_mode` added to the `mongodb` input.
- The `schema_registry_encode` and `schema_registry_decode` processors now support Basic, OAuth and JWT authentication.
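A minimal sketch of the new `avro_raw_json` field on the `schema_registry_decode` processor, assuming it is a boolean and that the processor's existing `url` field is used as usual; the registry address is hypothetical.

```yaml
pipeline:
  processors:
    - schema_registry_decode:
        url: http://localhost:8081   # hypothetical schema registry address
        # Assumed behaviour: decode Avro values using Avro's raw JSON
        # encoding rather than the default standard JSON form.
        avro_raw_json: true
```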
Fixed
- The streams mode `/ready` endpoint no longer returns status `503` for streams that gracefully finished.
- The performance of the bloblang `.explode` method now scales linearly with the target size.
- The `influxdb` and `logger` metrics outputs should no longer mix up tag names.
- Fixed a potential race condition in the `read_until` connect check on terminated input.
- The `parse_parquet` bloblang method and `parquet_decode` processor now automatically parse `BYTE_ARRAY` values as strings when the logical type is UTF8.
- The `gcp_cloud_storage` output now correctly cleans up temporary files on error conditions when the collision mode is set to append.
The full change log can be found here.
v4.6.0
For installation instructions check out the getting started guide.
Added
- New `squash` bloblang method.
- New top-level config field `shutdown_delay` for delaying graceful termination (see the sketch after this list).
- New `snowflake_id` bloblang function.
- Field `wait_time_seconds` added to the `aws_sqs` input.
- New `json_path` bloblang method.
- New `file_json_contains` predicate for unit tests.
- The `parquet_encode` processor now supports the `UTF8` logical type for columns.
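A sketch combining the new top-level `shutdown_delay` field with the new `wait_time_seconds` field of the `aws_sqs` input; the queue URL is hypothetical and the exact field placement is assumed from the usual config layout.

```yaml
# Keep the service running for a short window after a shutdown signal so
# in-flight messages can finish before termination.
shutdown_delay: 5s

input:
  aws_sqs:
    url: https://sqs.us-east-1.amazonaws.com/123456789012/example-queue   # hypothetical
    # Enable SQS long polling by waiting up to 20 seconds for messages.
    wait_time_seconds: 20
```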
Fixed
- The `schema_registry_encode` processor now correctly assumes Avro JSON encoded documents by default.
- The `redis` processor `retry_period` no longer shows linting errors for duration strings.
- The `/inputs` and `/outputs` endpoints for dynamic inputs and outputs now correctly render configs, both structured within the JSON response and as the raw config string.
- Go API: The stream builder no longer ignores `http` configuration. Instead, the value of `http.enabled` is set to `false` by default.
The full change log can be found here.
v4.5.1
For installation instructions check out the getting started guide.
Fixed
- Reverted the `kafka_franz` dependency back to `1.3.1` due to a regression in TLS/SASL commit retention.
- Fixed an unintentional linting error when using interpolation functions in the `elasticsearch` output's `action` field.
The full change log can be found here.
v4.5.0
For installation instructions check out the getting started guide.
Added
- Field `batch_size` added to the `generate` input.
- The `amqp_0_9` output now supports setting the `timeout` of publish.
- New experimental input codec `avro-ocf:marshaler=x`.
- New `mapping` and `mutation` processors (see the sketch after this list).
- New `parse_form_url_encoded` bloblang method.
- The `amqp_0_9` input now supports setting the `auto-delete` bit during queue declaration.
- New `open_telemetry_collector` tracer.
- The `kafka_franz` input and output now support no-op SASL options with the mechanism `none`.
- Field `content_type` added to the `gcp_cloud_storage` cache.
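A sketch pairing the new `mutation` processor with the new `parse_form_url_encoded` bloblang method; the target field name `form` is illustrative only.

```yaml
pipeline:
  processors:
    # mutation behaves like mapping but modifies the message in place
    # instead of constructing a brand new one.
    - mutation: |
        # Parse a URL-encoded body such as "name=foo&tags=a&tags=b" into a
        # structured object under a new field.
        root.form = content().string().parse_form_url_encoded()
```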
Fixed
- The `mongodb` processor and output no longer raise configuration issues when the `write_concern.w_timeout` field is left at its default empty value.
- Field `message_name` added to the logger config.
- The `amqp_1` input and output should no longer spam logs with timeout errors during graceful termination.
- Fixed a potential crash when the `contains` bloblang method was used to compare complex types.
- Fixed an issue where the `kafka_franz` input or output wouldn't use TLS connections without custom certificate configuration.
- Fixed a structural cycle in the CUE representation of the `retry` output.
- Tracing headers from HTTP requests to the `http_server` input are now correctly extracted.
Changed
- The `broker` input no longer applies processors before batching, as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their pre-batching processors at the level of the child inputs of the broker.
- The `broker` output no longer applies processors after batching, as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their post-batching processors at the level of the child outputs of the broker.
The full change log can be found here.
v4.5.0-rc1
For installation instructions check out the getting started guide.
Added
- Field `batch_size` added to the `generate` input.
- The `amqp_0_9` output now supports setting the `timeout` of publish.
- New experimental input codec `avro-ocf:marshaler=x`.
- New `mapping` and `mutation` processors.
- New `parse_form_url_encoded` bloblang method.
- The `amqp_0_9` input now supports setting the `auto-delete` bit during queue declaration.
- New `open_telemetry_collector` tracer.
Fixed
- The `mongodb` processor and output no longer raise configuration issues when the `write_concern.w_timeout` field is left at its default empty value.
- Field `message_name` added to the logger config.
- The `amqp_1` input and output should no longer spam logs with timeout errors during graceful termination.
- Fixed a potential crash when the `contains` bloblang method was used to compare complex types.
- Fixed an issue where the `kafka_franz` input or output wouldn't use TLS connections without custom certificate configuration.
- Fixed a structural cycle in the CUE representation of the `retry` output.
Changed
- The `broker` input no longer applies processors before batching, as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their pre-batching processors at the level of the child inputs of the broker.
- The `broker` output no longer applies processors after batching, as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their post-batching processors at the level of the child outputs of the broker.
The full change log can be found here.
v4.4.1
For installation instructions check out the getting started guide.
Fixed
- Fixed an issue where an `http_server` input or output would fail to register Prometheus metrics when combined with other inputs/outputs.
- Fixed an issue where the `jaeger` tracer was incapable of sending traces to agents outside of the default port.
The full change log can be found here.
v4.4.0
For installation instructions check out the getting started guide.
Added
- The service-wide `http` config now supports basic authentication.
- The `elasticsearch` output now supports upsert operations.
- New `fake` bloblang function.
- New `parquet_encode` and `parquet_decode` processors.
- New `parse_parquet` bloblang method (see the sketch after this list).
- CLI flag `--prefix-stream-endpoints` added for disabling streams mode API prefixing.
- Field `timestamp_name` added to the logger config.
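A sketch of the new `parse_parquet` bloblang method inside a `bloblang` processor; the surrounding config is illustrative rather than taken from the release note.

```yaml
pipeline:
  processors:
    - bloblang: |
        # Parse a raw parquet payload into an array of row objects.
        root = content().parse_parquet()
```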
The full change log can be found here.
v4.3.0
For installation instructions check out the getting started guide.
Added
- Timestamp Bloblang methods are now able to emit and process `time.Time` values.
- New `ts_tz` method for switching the timezone of timestamp values.
- The `elasticsearch` output field `type` now supports interpolation functions.
- The `redis` processor has been reworked to be more generally useful; the old `operator` and `key` fields are now deprecated in favour of new `command` and `args_mapping` fields (see the sketch after this list).
- Go API: Added component bundle `./public/components/aws` for all AWS components, including a `RunLambda` function.
- New `cached` processor.
- Go API: New APIs for registering both metrics exporters and open telemetry tracer plugins.
- Go API: The stream builder API now supports configuring a tracer, and tracer configuration is now isolated to the stream being executed.
- Go API: Plugin components can now access input and output resources.
- The `redis_streams` output field `stream` now supports interpolation functions.
- The `kafka_franz` input and output now support `AWS_MSK_IAM` as a SASL mechanism.
- New `pusher` output.
- Field `input_batches` added to config unit tests for injecting a series of message batches.
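A sketch of the reworked `redis` processor using the new `command` and `args_mapping` fields; the address, the chosen command, and the key format are hypothetical.

```yaml
pipeline:
  processors:
    - redis:
        url: redis://localhost:6379   # hypothetical address
        command: incrby
        # args_mapping must resolve to an array of arguments for the command,
        # here a per-user counter key followed by the increment amount.
        args_mapping: 'root = [ "views:%v".format(this.user_id), 1 ]'
```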
Fixed
- Corrected an issue where Prometheus metrics from batching at the buffer level would be skipped when combined with input/output level batching.
- Go API: Fixed an issue where running the CLI API without importing a component package would result in template init crashing.
- The `http` processor and `http_client` input and output no longer have default headers as part of their configuration. A `Content-Type` header will be added to requests with a default value of `application/octet-stream` when a message body is being sent and the configuration has not added one explicitly.
- Logging in `logfmt` mode with `add_timestamp` enabled now works.
The full change log can be found here.
v4.2.0
For installation instructions check out the getting started guide.
Added
- Field `credentials.from_ec2_role` added to all AWS based components.
- The `mongodb` input now supports aggregation filters by setting the new `operation` field.
- New `gcp_cloudtrace` tracer.
- New `slug` bloblang string method.
- The `elasticsearch` output now supports the `create` action.
- Field `tls.root_cas_file` added to the `pulsar` input and output.
- The `fallback` output now adds a metadata field `fallback_error` to messages when shifted.
- New bloblang methods `ts_round`, `ts_parse`, `ts_format`, `ts_strptime`, `ts_strftime`, `ts_unix` and `ts_unix_nano`. Most are aliases of (now deprecated) time methods with `timestamp_` prefixes (see the sketch after this list).
- Ability to write logs to a file (with optional rotation) instead of stdout.
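A sketch of the new `ts_parse` and `ts_format` methods inside a `bloblang` processor; the input field and layout strings are illustrative, with layouts assumed to follow Go's reference time as the older `timestamp_`-prefixed methods did.

```yaml
pipeline:
  processors:
    - bloblang: |
        # Parse an RFC 3339 timestamp string and re-format it for display.
        root.day = this.created_at.ts_parse("2006-01-02T15:04:05Z07:00").ts_format("Mon, 02 Jan 2006")
```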
Fixed
- The default docker image no longer throws configuration errors when running streams mode without an explicit general config.
- The field `metrics.mapping` now allows environment functions such as `hostname` and `env`.
- Fixed a lock-up in the `amqp_0_9` output that occurred when messages sent with the `immediate` or `mandatory` flags were rejected.
- Fixed a race condition upon creating dynamic streams that self-terminate, which was causing panics in cases where the stream finished immediately.
The full change log can be found here.