
Add test coverage around OpenAI(BaseRepresentation) #2051

Open
steven-solomon opened this issue Jun 14, 2024 · 3 comments


steven-solomon commented Jun 14, 2024

Goal

I would like to make it easier to verify the behavior of the OpenAI class by adding test coverage.

Proposal

Test coverage will target the following scenarios, covering both their happy paths and edge cases. Additional tests may be added during the implementation phase.

Cases to cover:

  • Chat
  • Chat with backoff
  • Completions
  • Completions with backoff

Tests will be divided into happy-path and edge-case scenarios to make nominal and exceptional behaviors clear to the reader.
In order to hold the design of the OpenAI class static, I would like to pursue a mocking strategy for the openai.OpenAI.client. I would like to create two drafts: one using the OpenAI-Responses package and the other using unittest.mock.MagicMock. I will defer to your preference on the readability of the two libraries.
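To make the MagicMock approach concrete, here is a minimal sketch of how the client could be stubbed. It assumes the openai-python v1 response shape (`response.choices[0].message.content`); the exact attributes the OpenAI representation class reads may differ, and `make_mock_client` is a hypothetical helper, not part of either codebase:

```python
from unittest.mock import MagicMock

def make_mock_client(reply="topic label"):
    """Build a MagicMock standing in for an openai.OpenAI client.

    The nested attribute path (chat.completions.create) mirrors the
    openai-python v1 client; adjust if the class under test calls a
    different endpoint.
    """
    client = MagicMock()
    # Fake response object matching response.choices[0].message.content
    completion = MagicMock()
    completion.choices = [MagicMock(message=MagicMock(content=reply))]
    client.chat.completions.create.return_value = completion
    return client

client = make_mock_client("Sports news")
response = client.chat.completions.create(model="gpt-4o-mini", messages=[])
assert response.choices[0].message.content == "Sports news"
client.chat.completions.create.assert_called_once()
```

This illustrates the verbosity tradeoff discussed below: the nested `MagicMock` construction must mirror the client's attribute structure, which is exactly the coupling a builder helper like `make_mock_client` tries to contain.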

Considerations

Here are some tradeoffs that I considered when drafting this proposal:

The inherent risk of mock-based testing is that the underlying API can diverge from the mock's behavior. However, it gives us confidence in the integration at the edges of our system without calling the real API.

When selecting a mocking strategy, I considered using either a built-in Python package or a specialized package that provides a simpler interface. The OpenAI-Responses package makes it easy to mock OpenAI behavior, but if the API's functionality drifts, the update loop may be longer or blocked entirely by that package's maintainer. With the built-in MagicMock class, mocking will be verbose and harder to reason about, coupling the codebase to the nested attributes of the API client. Comprehension can be improved with construction patterns, but that adds to the maintenance cost.

Despite the downsides, having an additional automated quality check outside of manual testing can make both contributions and reviews easier.

@MaartenGr (Owner)

Thank you for the extensive description! Indeed, test coverage is quite tricky at the moment especially with a key-based external provider.

With respect to your suggestions, the openai-responses package looks great, but it is not maintained by OpenAI itself, nor does it have the popularity/usage that would make me confident it will be maintained in the upcoming years. Since we would become dependent on that package, it is important that we can be relatively sure it will continue to be developed. Moreover, it introduces another package with its own dependencies, which might result in dependency conflicts.

I don't have much experience with MagicMock, but I can imagine the difficulty of setting something up from scratch.

@steven-solomon (Author)

I'd be happy to create a draft using MagicMock once I wrap up my current PR if you are interested in pursuing this. I'd also be interested in redirecting my contributions towards other projects if you would find that more valuable.

@MaartenGr (Owner)

> I'd be happy to create a draft using MagicMock once I wrap up my current PR if you are interested in pursuing this. I'd also be interested in redirecting my contributions towards other projects if you would find that more valuable.

Although testing is important, it's not the highest priority at the moment since I would rather fix bugs or introduce new features at this point. Having said that, any and all contributions are highly appreciated and you should definitely work (if you want) on things that you find enjoyable! I'm just happy with anything anyone wants to work on.

In other words, up to you 😄
