This repository was archived by the owner on Mar 6, 2024. It is now read-only.

Feature - Additional model support #406

Closed
nlubock opened this issue Aug 1, 2023 · 0 comments · Fixed by #424

Comments

nlubock commented Aug 1, 2023

It would be great to have additional models supported, in particular gpt-3.5-turbo-16k.

harjotgill pushed a commit that referenced this issue Aug 11, 2023
`TokenLimits` is the only place that needed to be modified. The token
limits have been set accordingly. Closes #406
<!-- This is an auto-generated comment: release notes by OSS CodeRabbit
-->
### Summary by CodeRabbit

**New Feature:**
- Added support for the "gpt-3.5-turbo-16k" model in the `TokenLimits`
class.
- Set the `maxTokens` limit to 16300 and the `responseTokens` limit to
3000 for the new model.

> 🎉 With tokens aplenty, we set the stage,
> For the "gpt-3.5-turbo-16k" to engage.
> More power, more wisdom, in every page,
> A new chapter begins, let's turn the page! 🚀
<!-- end of auto-generated comment: release notes by OSS CodeRabbit -->
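To illustrate the shape of the change, here is a minimal sketch of a `TokenLimits`-style class that recognizes the new model. The class structure, field names, and the limits for the non-16k models are assumptions for illustration; only the 16300/3000 figures for "gpt-3.5-turbo-16k" come from the release notes above.

```typescript
// Hypothetical sketch of a TokenLimits class; names and the limits for
// the other models are assumptions, not the actual source.
class TokenLimits {
  maxTokens: number;      // total context window budget
  responseTokens: number; // tokens reserved for the model's reply
  requestTokens: number;  // tokens left for the prompt/request

  constructor(model: string = "gpt-3.5-turbo") {
    if (model === "gpt-3.5-turbo-16k") {
      // Limits set by the fix for #406 (per the release notes).
      this.maxTokens = 16300;
      this.responseTokens = 3000;
    } else if (model === "gpt-4") {
      // Assumed values for illustration only.
      this.maxTokens = 8000;
      this.responseTokens = 2000;
    } else {
      // Assumed default for gpt-3.5-turbo.
      this.maxTokens = 4000;
      this.responseTokens = 1000;
    }
    // Whatever is not reserved for the response is available for the request.
    this.requestTokens = this.maxTokens - this.responseTokens;
  }
}
```

Adding a model then reduces to adding one branch with its context-window and response budgets; callers derive the request budget rather than hard-coding it.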