
wip(openai): trying to convert openai to use orchestrion #5762


Draft — wants to merge 1 commit into master

Conversation

sabrenner (Collaborator)

What does this PR do?

Motivation

Plugin Checklist

Additional Notes


Overall package size

Self size: 9.44 MB
Deduped: 103.46 MB
No deduping: 103.98 MB

Dependency sizes

| name | version | self size | total size |
|------|---------|-----------|------------|
| @datadog/libdatadog | 0.5.1 | 29.73 MB | 29.73 MB |
| @datadog/native-appsec | 8.5.2 | 19.33 MB | 19.34 MB |
| @datadog/pprof | 5.8.0 | 12.55 MB | 12.92 MB |
| @datadog/native-iast-taint-tracking | 4.0.0 | 11.72 MB | 11.73 MB |
| @opentelemetry/core | 1.30.1 | 908.66 kB | 7.16 MB |
| protobufjs | 7.4.0 | 2.77 MB | 5.42 MB |
| @datadog/wasm-js-rewriter | 4.0.1 | 2.85 MB | 3.58 MB |
| @datadog/native-metrics | 3.1.1 | 1.02 MB | 1.43 MB |
| @opentelemetry/api | 1.8.0 | 1.21 MB | 1.21 MB |
| import-in-the-middle | 1.13.1 | 117.64 kB | 839.26 kB |
| source-map | 0.7.4 | 226 kB | 226 kB |
| opentracing | 0.14.7 | 194.81 kB | 194.81 kB |
| lru-cache | 7.18.3 | 133.92 kB | 133.92 kB |
| pprof-format | 2.1.0 | 111.69 kB | 111.69 kB |
| @datadog/sketches-js | 2.1.1 | 109.9 kB | 109.9 kB |
| lodash.sortby | 4.7.0 | 75.76 kB | 75.76 kB |
| ignore | 5.3.2 | 53.63 kB | 53.63 kB |
| istanbul-lib-coverage | 3.2.0 | 29.34 kB | 29.34 kB |
| rfdc | 1.4.1 | 27.15 kB | 27.15 kB |
| @isaacs/ttlcache | 1.4.1 | 25.2 kB | 25.2 kB |
| dc-polyfill | 0.1.8 | 25.08 kB | 25.08 kB |
| tlhunter-sorted-set | 0.1.0 | 24.94 kB | 24.94 kB |
| shell-quote | 1.8.2 | 23.54 kB | 23.54 kB |
| limiter | 1.1.5 | 23.17 kB | 23.17 kB |
| retry | 0.13.1 | 18.85 kB | 18.85 kB |
| semifies | 1.0.0 | 15.84 kB | 15.84 kB |
| jest-docblock | 29.7.0 | 8.99 kB | 12.76 kB |
| crypto-randomuuid | 1.0.0 | 11.18 kB | 11.18 kB |
| ttl-set | 1.0.0 | 4.61 kB | 9.69 kB |
| mutexify | 1.4.0 | 5.71 kB | 8.74 kB |
| path-to-regexp | 0.1.12 | 6.6 kB | 6.6 kB |
| koalas | 1.0.2 | 6.47 kB | 6.47 kB |
| module-details-from-path | 1.0.3 | 4.47 kB | 4.47 kB |

🤖 This report was automatically generated by heaviest-objects-in-the-universe


codecov bot commented May 22, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78.06%. Comparing base (86f1f53) to head (c6474bb).

Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master    #5762      +/-   ##
==========================================
- Coverage   79.15%   78.06%   -1.10%
==========================================
  Files         520      505      -15
  Lines       23729    22845     -884
==========================================
- Hits        18783    17834     -949
- Misses       4946     5011      +65
```

☔ View full report in Codecov by Sentry.

```js
  }
}

const OPENAI_VERSION = tryRequire('openai/version')
```
sabrenner (Collaborator, Author)
Soooo janky, and not where we want to do this, but I'm not sure where else to do it — besides having a separate plugin for openai >=4.0 that sets this differently 😕. But then we'd have something like 46 plugins (9 different traced functions + 5 that are already versioned + 9 more from splitting into v3 as well, all doubled for the LLMObs plugins).
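For illustration, a minimal sketch of what a guarded version lookup like this could look like. This is not the PR's actual code: `tryRequire`'s body, the `VERSION` export, the `'0.0.0'` fallback, and `majorVersion` are assumptions here.

```javascript
// Hypothetical sketch: resolve a module's version export without throwing
// when the module (or that file path) isn't installed.
function tryRequire (id) {
  try {
    return require(id)
  } catch (e) {
    return undefined // module not present; caller decides the fallback
  }
}

// The openai v4 SDK ships a version file; the '0.0.0' fallback is an
// assumption for when the lookup fails (e.g. openai v3, or not installed).
const OPENAI_VERSION = tryRequire('openai/version')?.VERSION ?? '0.0.0'

// A simple semver-major gate could then branch v3 vs v4 behavior inside one
// plugin, instead of registering a separate plugin per major version.
function majorVersion (version) {
  return Number(version.split('.')[0])
}
```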


```js
// also this is messy - just copied from the existing instrumentation
// it needs to be cleaned up
shimmer.wrap(result, 'parse', parse => function () {
```
sabrenner (Collaborator, Author)
The comment explains it, but this is one pain point so far: how do we properly wrap this stream? We don't want to consume it ourselves, because we want to finish the span once the user has finished consuming it (assuming they do some work between chunks). In the current approach in datadog-instrumentations/src/openai.js we do what we're doing here and wrap it directly. I'm not sure how that translates here if we don't want to be doing patching (although I'd argue it's not so much patching as instance wrapping, if that distinction really matters).
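One possible shape for that instance wrapping — a sketch, not the PR's implementation: intercept the stream's async iterator so a callback fires when the consumer's iteration completes, without the tracer consuming any chunks itself. `onChunk` and `onDone` are hypothetical callbacks standing in for span tagging and `span.finish()`.

```javascript
// Hedged sketch: wrap an async-iterable stream so the tracer observes each
// chunk as the *user* consumes it, and is notified when iteration ends.
function wrapAsyncIterable (stream, onChunk, onDone) {
  const original = stream[Symbol.asyncIterator].bind(stream)
  stream[Symbol.asyncIterator] = function () {
    const it = original()
    return {
      async next (...args) {
        const result = await it.next(...args)
        if (result.done) {
          onDone() // consumer finished the stream; finish the span here
        } else {
          onChunk(result.value) // e.g. accumulate tokens for tagging
        }
        return result
      },
      // preserve early-exit semantics (break / throw inside for-await)
      return: it.return && it.return.bind(it),
      throw: it.throw && it.throw.bind(it)
    }
  }
  return stream
}
```

The user's `for await (const chunk of stream)` loop is untouched; the wrapper only observes results as they pass through, which is the "instance wrapping rather than patching" framing above.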

@datadog-datadog-prod-us1

Datadog Report

Branch report: sabrenner/openai-orchestrion
Commit report: 0cf81bf
Test service: dd-trace-js-integration-tests

✅ 0 Failed, 983 Passed, 0 Skipped, 15m 1.88s Total Time

```js
// instead of wrapping the result, queue up a separate promise to handle when the response resolves
// since we want the response headers as well, call `withResponse()` to get that
// while this makes it easier to manage on our side as opposed to wrapping, it does queue up another promise
result.withResponse().then(({ data, response }) => {
```
sabrenner (Collaborator, Author)
As the comment explains — not ideal, since we queue up a separate promise, but this way we don't have to do instance patching on the custom promise type.

The custom promise type is also the whole reason we can't use tracePromise: it returns an object with a plain Promise prototype, which isn't compatible with the OpenAI Node.js SDK's API promise (ref).
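A minimal sketch of the approach described above. `traceApiPromise` and `finishSpan` are hypothetical names for illustration; the `{ data, response }` shape follows the snippet, and the fake `withResponse()` in the usage below stands in for the SDK's APIPromise.

```javascript
// Hedged sketch: rather than patching the SDK's custom APIPromise, queue an
// independent continuation off withResponse() and return the original,
// untouched promise to the user.
function traceApiPromise (result, finishSpan) {
  result.withResponse()
    .then(({ data, response }) => finishSpan(null, data, response.headers))
    .catch(err => finishSpan(err))
  // the caller still gets the original APIPromise, prototype intact
  return result
}
```

The trade-off is exactly the one the comment names: one extra queued promise per call, in exchange for never touching the APIPromise instance.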
