This repository was archived by the owner on Mar 6, 2024. It is now read-only.

Commit 4c02adf

Support gpt 3.5 turbo 16k model (#424)
`TokenLimits` is the only place that needed to be modified; the token limits have been set accordingly. Closes #406

### Summary by CodeRabbit

**New Feature:**
- Added support for the "gpt-3.5-turbo-16k" model in the `TokenLimits` class.
- Set the `maxTokens` limit to 16300 and the `responseTokens` limit to 3000 for the new model.

> 🎉 With tokens aplenty, we set the stage,
> For the "gpt-3.5-turbo-16k" to engage.
> More power, more wisdom, in every page,
> A new chapter begins, let's turn the page! 🚀
1 parent 500adcb commit 4c02adf

File tree

1 file changed: +3 −0 lines changed


src/limits.ts (+3)

```diff
@@ -9,6 +9,9 @@ export class TokenLimits {
     if (model === 'gpt-4-32k') {
       this.maxTokens = 32600
       this.responseTokens = 4000
+    } else if (model === 'gpt-3.5-turbo-16k') {
+      this.maxTokens = 16300
+      this.responseTokens = 3000
     } else if (model === 'gpt-4') {
       this.maxTokens = 8000
       this.responseTokens = 2000
```
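For context, the surrounding class might look roughly like the sketch below, reconstructed from the patch context. Only the three added lines and the adjacent `gpt-4-32k`/`gpt-4` branches appear in the diff; the constructor signature, the default model, and the fallback limits are assumptions for illustration.

```typescript
// Hypothetical reconstruction of src/limits.ts around the patched lines.
// The gpt-4-32k / gpt-3.5-turbo-16k / gpt-4 branches come from the diff;
// everything else (defaults, fallback values) is assumed, not confirmed.
export class TokenLimits {
  maxTokens: number
  responseTokens: number

  constructor(model = 'gpt-3.5-turbo') {
    if (model === 'gpt-4-32k') {
      this.maxTokens = 32600
      this.responseTokens = 4000
    } else if (model === 'gpt-3.5-turbo-16k') {
      // New in this commit: 16k-context model, with headroom kept
      // below the nominal 16384-token window.
      this.maxTokens = 16300
      this.responseTokens = 3000
    } else if (model === 'gpt-4') {
      this.maxTokens = 8000
      this.responseTokens = 2000
    } else {
      // Assumed fallback for the base gpt-3.5-turbo model.
      this.maxTokens = 4000
      this.responseTokens = 1000
    }
  }
}
```

Note that `maxTokens` is set slightly below the model's nominal context window (16300 vs. 16384), a common pattern that leaves a safety margin for tokenizer count discrepancies.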

0 commit comments