feat: add optional unique tracer provider for batch per-query operations #57
base: main
Conversation
obitech left a comment:
LGTM!
@jahough could you rebase on
Thanks for the contribution! 🚀 Just to be sure we don't try to kill a fly with a bazooka: how about something like #59 to solve the specific issue of batches vs. individual queries? I'm a bit wary of too much flexibility, since that can also make using the API more difficult.
To keep some of the flexibility, we could extend #59 and export a helper function to determine if we're in a batch context, so the consumer can use the existing. That would keep this somewhat advanced use-case behind a bit more code, but an existing API. WDYT?
Apologies for the delay - got tied up with things yesterday. I'm personally fine if we want to go with a more binary on/off style approach. Conversely, to play the other side: if a user wants to see specific attribute values used within some of the queries, and/or some of the individual per-query durations, then having a means to sample would be helpful. All that to say, if we can offer both options (through what you're suggesting below) with the same implementation, then I think all of the above is best left for the user, who can decide what they want to do and which caveats they want to accept.
I'd be open to this. I think I like the sound of keeping a single
Context
This PR is intended to resolve issue #54.
Applications with lengthy batch operations can generate noisy span creation. This can bloat the overall trace and potentially even cause traces to be dropped at ingestion time due to their length/size, though the latter is the extreme case and depends on the user's infrastructure configuration.
Change
Add a new option that allows users to specify a distinct `TracerProvider` that will be used specifically for these per-query span creations. This offers users the flexibility to sample these spans however they wish using the OpenTelemetry SDK's `Sampler` type. For example, users can choose to `NeverSample`, `AlwaysSample`, or even sample a percentage of these spans; it is entirely up to them. In the event users do not provide this new option, backwards compatibility with `WithTracerProvider` is maintained: the default is to use the `TracerProvider` assigned there, or otherwise the OpenTelemetry SDK's global `TracerProvider`.

Option not provided
Option provided with a `TracerProvider` configured to never sample
Note that there are no longer any `query INSERT` spans, yet the batch size attribute remains at `10`.
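The result shown above comes down to a per-span sampler decision: each per-query span is either kept or dropped, while the batch span and its size attribute survive. The following self-contained sketch illustrates that decision; `Sampler`, `neverSample`, and `ratioSample` here are simplified stand-ins, not the OpenTelemetry SDK's actual types (the real `sdktrace.TraceIDRatioBased`, for instance, hashes the trace ID rather than counting spans).

```go
package main

import "fmt"

// Sampler mirrors the per-span decision the SDK's sdktrace.Sampler
// makes; simplified stand-in, not the real interface.
type Sampler interface {
	ShouldSample(spanName string) bool
}

type alwaysSample struct{}

func (alwaysSample) ShouldSample(string) bool { return true }

type neverSample struct{}

func (neverSample) ShouldSample(string) bool { return false }

// ratioSample keeps roughly one in n spans by counting; the SDK's
// ratio-based sampler hashes the trace ID instead.
type ratioSample struct{ n, seen int }

func (r *ratioSample) ShouldSample(string) bool {
	r.seen++
	return r.seen%r.n == 0
}

// countRecorded runs every per-query span of a batch through the
// sampler and reports how many would actually be recorded.
func countRecorded(s Sampler, queries []string) int {
	recorded := 0
	for _, q := range queries {
		if s.ShouldSample(q) {
			recorded++
		}
	}
	return recorded
}

func main() {
	// A batch of 10 inserts, as in the screenshots above.
	batch := make([]string, 10)
	for i := range batch {
		batch[i] = "query INSERT"
	}
	fmt.Println(countRecorded(neverSample{}, batch))      // 0: no per-query spans recorded
	fmt.Println(countRecorded(alwaysSample{}, batch))     // 10: every span kept
	fmt.Println(countRecorded(&ratioSample{n: 2}, batch)) // 5: half sampled
}
```

Whichever real `Sampler` the user wires into the distinct `TracerProvider`, the trade-off is the same: fewer per-query spans versus visibility into individual query attributes and durations.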