feat(ourlogs): Adjust 'log' protocol per sdk feedback #4592
Conversation
Just some tiny nits
Now that we've mostly finalized the logs protocol with SDKs (see the develop doc for more info), we want to update Relay to allow the 'log' item type to be sent in this format. Also in this PR is some shimming between the updated protocol/event schema and the kafka consumers. This shimming is temporary until the generic EAP items consumers are ready, at which point we'll have to transform the event schema Relay accepts into the generic EAP item kafka format.
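The temporary shim described above could look roughly like the following: a minimal, self-contained sketch (not Relay's actual types) in which the new protocol's `level` is mapped back onto the deprecated OTel severity fields that the existing kafka consumer still expects. The names `LogLevel`, `KafkaLog`, and `shim_for_kafka` are illustrative assumptions; the severity numbers are the OTel defaults for each level.

```rust
// Illustrative shim: the new protocol accepts a simple `level`, but the
// existing kafka consumer still reads OTel severity fields, so Relay fills
// the deprecated fields back in before producing. Types are stand-ins.

#[derive(Debug, PartialEq)]
enum LogLevel {
    Trace,
    Debug,
    Info,
    Warn,
    Error,
    Fatal,
}

struct KafkaLog {
    severity_text: String,
    severity_number: i32,
}

fn shim_for_kafka(level: &LogLevel) -> KafkaLog {
    // OTel default severity numbers for each named level.
    let (text, number) = match level {
        LogLevel::Trace => ("trace", 1),
        LogLevel::Debug => ("debug", 5),
        LogLevel::Info => ("info", 9),
        LogLevel::Warn => ("warn", 13),
        LogLevel::Error => ("error", 17),
        LogLevel::Fatal => ("fatal", 21),
    };
    KafkaLog {
        severity_text: text.to_owned(),
        severity_number: number,
    }
}

fn main() {
    let shimmed = shim_for_kafka(&LogLevel::Warn);
    println!("{} {}", shimmed.severity_text, shimmed.severity_number);
}
```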
The implementation differs between `FromValue` and `ToValue`, with the implicit assumption that the only serialization happens for Kafka. But an event can flow through multiple Relay instances, so this will not work: `FromValue` and `ToValue` must agree on the format. I recommend updating the consumers first.

On that note, the `Attribute` implementation is not forward compatible, but it is also very hard to implement. I'll work on this in a separate PR which you can then depend on.
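The round-trip requirement above can be illustrated with a toy example: because an event may be serialized by one Relay and re-parsed by the next, `deserialize(serialize(x))` must return `x` at every hop. The `Attribute`, `to_value`, and `from_value` names below are simplified stand-ins for Relay's metastructure traits, not the real implementation.

```rust
// Toy illustration of why FromValue and ToValue must agree on the format:
// an event may cross several Relay instances, and each hop serializes and
// re-parses it, so the round trip must be lossless.

#[derive(Debug, Clone, PartialEq)]
struct Attribute {
    ty: String,
    value: String,
}

// Stand-in for ToValue: serialize to a simple "type:value" string.
fn to_value(a: &Attribute) -> String {
    format!("{}:{}", a.ty, a.value)
}

// Stand-in for FromValue: parse the same format back.
fn from_value(s: &str) -> Option<Attribute> {
    let (ty, value) = s.split_once(':')?;
    Some(Attribute {
        ty: ty.to_owned(),
        value: value.to_owned(),
    })
}

fn main() {
    let attr = Attribute { ty: "string".into(), value: "hello".into() };
    // Hop 1: serialized by the first Relay, parsed by the second.
    let hop1 = from_value(&to_value(&attr)).unwrap();
    // Hop 2: if the two sides disagreed on the format, data would corrupt here.
    let hop2 = from_value(&to_value(&hop1)).unwrap();
    assert_eq!(attr, hop2);
    println!("round-trip ok");
}
```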
#4626 implements support for:

```rust
#[derive(Debug, Empty, FromValue, IntoValue, ProcessValue)]
pub struct Attribute {
    #[metastructure(flatten)]
    pub value: AttributeValue,

    /// Additional arbitrary fields for forwards compatibility.
    #[metastructure(additional_properties)]
    pub other: Object<Value>,
}

#[derive(Debug, Empty, FromValue, IntoValue, ProcessValue)]
pub struct AttributeValue {
    #[metastructure(field = "type", required = true, trim = "false")]
    ty: Annotated<String>,

    #[metastructure(required = true, pii = "true")]
    value: Annotated<Value>,
}
```

I experimented with a strongly typed variant in #4627, which 'validates' when converting via […]. Instead we can invest into not making […]. We will need an additional normalization and validation step during processing. In any Relay we can validate the validity of the […]. A lot of that logic I already started implementing (also in #4627), which may be of use:

```rust
match (value.ty, value.value) {
    (Annotated(Some(ty), ty_meta), Annotated(Some(value), mut value_meta)) => {
        match (ty, value) {
            (AttributeType::Bool, Value::Bool(v)) => Self::Bool(v),
            (AttributeType::Int, Value::I64(v)) => Self::I64(v),
            (AttributeType::Int, Value::U64(v)) => Self::U64(v),
            (AttributeType::Float, Value::F64(v)) => Self::F64(v),
            (AttributeType::Float, Value::I64(v)) => Self::F64(v as f64),
            (AttributeType::Float, Value::U64(v)) => Self::F64(v as f64),
            (AttributeType::String, Value::String(v)) => Self::String(v),
            (ty @ AttributeType::Unknown(_), value) => Self::Other(AttributeValueRaw {
                ty: Annotated(Some(ty), ty_meta),
                value: Annotated(Some(value), value_meta),
            }),
            (ty, value) => {
                value_meta.add_error(ErrorKind::InvalidData);
                value_meta.set_original_value(Some(value));
                Self::Other(AttributeValueRaw {
                    ty: Annotated(Some(ty), ty_meta),
                    value: Annotated(None, value_meta),
                })
            }
        }
    }
    (mut ty, mut value) => {
        if ty.is_empty() {
            ty.meta_mut().add_error(ErrorKind::MissingAttribute);
        }
        if value.is_empty() {
            value.meta_mut().add_error(ErrorKind::MissingAttribute);
        }
        Self::Other(AttributeValueRaw { ty, value })
    }
}
```

Then as an additional step we need a processing step, which removes unknown type/value combinations and annotates the value as […]. It is important to note that only a processing Relay can make the decision whether a provided type is unknown or not.
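The core of that validation logic can be distilled into a self-contained sketch. The `AttrType`, `AttrValue`, and `Validated` types below are simplified stand-ins for Relay's annotated types; the shape of the logic follows the snippet above: matching types pass through, integers coerce to a declared float, unknown types are kept verbatim for a later processing Relay to judge, and mismatches are rejected.

```rust
// Simplified, self-contained version of the type/value validation sketched
// above. Stand-in types; not Relay's real metastructure machinery.

#[derive(Debug, PartialEq)]
enum AttrType {
    Bool,
    Int,
    Float,
    Str,
    Unknown(String),
}

#[derive(Debug, PartialEq)]
enum AttrValue {
    Bool(bool),
    I64(i64),
    F64(f64),
    Str(String),
}

#[derive(Debug, PartialEq)]
enum Validated {
    // Declared type matches the value.
    Ok(AttrValue),
    // Unknown type: keep as-is; only a processing Relay can decide.
    Passthrough(AttrType, AttrValue),
    // Type/value mismatch: drop the value and record an error.
    Invalid,
}

fn validate(ty: AttrType, value: AttrValue) -> Validated {
    match (ty, value) {
        (AttrType::Bool, v @ AttrValue::Bool(_)) => Validated::Ok(v),
        (AttrType::Int, v @ AttrValue::I64(_)) => Validated::Ok(v),
        (AttrType::Float, v @ AttrValue::F64(_)) => Validated::Ok(v),
        // Integers are acceptable where a float was declared.
        (AttrType::Float, AttrValue::I64(v)) => Validated::Ok(AttrValue::F64(v as f64)),
        (AttrType::Str, v @ AttrValue::Str(_)) => Validated::Ok(v),
        (ty @ AttrType::Unknown(_), v) => Validated::Passthrough(ty, v),
        _ => Validated::Invalid,
    }
}

fn main() {
    assert_eq!(
        validate(AttrType::Float, AttrValue::I64(3)),
        Validated::Ok(AttrValue::F64(3.0))
    );
    assert_eq!(validate(AttrType::Int, AttrValue::Str("x".into())), Validated::Invalid);
    println!("ok");
}
```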
Just some naming updates
Co-authored-by: Michi Hoffmann <[email protected]>
…bility (allow other to be sent but not emit it to kafka)
Protocol in tests looks good to me 👍
relay-ourlogs/src/ourlog.rs (outdated):

```rust
Some(OurLogLevel::Warn) => 13,
Some(OurLogLevel::Error) => 17,
Some(OurLogLevel::Fatal) => 21,
_ => 25,
```
When parsing, we default `other` to info. Is 25 correct here, or should it rather be treated as info?
You're right, actually 0 is the default and is essentially 'unmapped'.
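The corrected mapping could then look like the sketch below: known levels map to the OTel default severity numbers (the Warn/Error/Fatal arms match the diff above), while a missing level maps to 0, OTel's "unspecified" severity, instead of 25. `OurLogLevel` and `severity_number` here are local stand-ins, not the exact code in the PR.

```rust
// Sketch of the corrected level-to-severity mapping: 0 means "unspecified"
// in OTel, so a missing level no longer falls through to 25. Stand-in types.

#[derive(Debug)]
enum OurLogLevel {
    Trace,
    Debug,
    Info,
    Warn,
    Error,
    Fatal,
}

fn severity_number(level: Option<&OurLogLevel>) -> i32 {
    match level {
        Some(OurLogLevel::Trace) => 1,
        Some(OurLogLevel::Debug) => 5,
        Some(OurLogLevel::Info) => 9,
        Some(OurLogLevel::Warn) => 13,
        Some(OurLogLevel::Error) => 17,
        Some(OurLogLevel::Fatal) => 21,
        // 0 is the OTel "unspecified" severity number.
        None => 0,
    }
}

fn main() {
    assert_eq!(severity_number(Some(&OurLogLevel::Warn)), 13);
    assert_eq!(severity_number(None), 0);
    println!("ok");
}
```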
As per discussion, this should still not be used by SDKs; there are more changes planned to the envelope protocol, just not to the log item.
https://www.notion.so/sentry/Performance-Envelope-Containers-1d28b10e4b5d80b3b1c3e661f5eda6b4
Summary

Now that we've mostly finalized the logs protocol with SDKs (see develop doc for more info), we want to update Relay to allow the 'log' item type to be sent in this format.

SDKs will likely primarily use the `log` ItemType instead of `otel_log`, since we don't need to change timestamp conventions and can send a simplified `level` (see `OurLogLevel` added in this PR).

Schema changes

We've deprecated some fields in the protocol that only exist for OTel:

- `severity_number` and `severity_text`: we're coercing these to `level`, but we're keeping the original severity text and number as attributes, as OTel allows custom severity text.
- `observed_timestamp_nanos` is always set by Relay regardless of what is sent, because Relay is the 'collector'. We have to leave this as an attribute as well, since it's being used by the existing consumer for the origin timestamp.
- `timestamp_nanos` becomes `timestamp: Timestamp`.
- `trace_flags` is unused, and the consumer doesn't even store it in the table. We'll decide what to do with this later.

Future work

- The `ourlog_merge_otel` function can be trimmed down, since we won't need to fill in deprecated fields to send the same data to the kafka consumer.
- Transform the `OurLog` protocol from JSON received from SDKs into a generic EAP "trace items" kafka message that is essentially a couple of fields (e.g. trace id) plus a KVMap for `attributes`.
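The "trace items" message described in the last bullet could be sketched as follows. This is a hypothetical shape, not the final schema: the field names (`trace_id`, `timestamp_nanos`, `attributes`) and the attribute keys are illustrative guesses, and the deprecated protocol fields survive only as entries in the KV map.

```rust
use std::collections::HashMap;

// Hypothetical sketch of a generic EAP "trace item" kafka message: a couple
// of top-level fields plus a key/value map for attributes. All names here
// are illustrative, not the final schema.
#[derive(Debug)]
struct TraceItem {
    trace_id: String,
    timestamp_nanos: u64,
    attributes: HashMap<String, String>,
}

// Deprecated protocol fields would be carried only as attribute entries.
fn with_deprecated_fields(mut item: TraceItem, severity_text: &str, severity_number: i32) -> TraceItem {
    item.attributes
        .insert("sentry.severity_text".to_owned(), severity_text.to_owned());
    item.attributes
        .insert("sentry.severity_number".to_owned(), severity_number.to_string());
    item
}

fn main() {
    let item = TraceItem {
        trace_id: "4c79f60c11214eb38604f4ae0781bfb2".to_owned(),
        timestamp_nanos: 1_700_000_000_000_000_000,
        attributes: HashMap::new(),
    };
    let item = with_deprecated_fields(item, "warn", 13);
    println!("{} has {} attributes", item.trace_id, item.attributes.len());
}
```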