
feat: Update tongyi models (#9552)

AAEE86, 6 months ago
Parent commit 9b32bfb3db

+ 1 - 0
api/core/model_runtime/model_providers/tongyi/llm/qwen-max-longcontext.yaml

@@ -76,3 +76,4 @@ pricing:
   output: '0.12'
   unit: '0.001'
   currency: RMB
+deprecated: true

+ 2 - 2
api/core/model_runtime/model_providers/tongyi/llm/qwen-max.yaml

@@ -10,7 +10,7 @@ features:
   - stream-tool-call
 model_properties:
   mode: chat
-  context_size: 8000
+  context_size: 32000
 parameter_rules:
   - name: temperature
     use_template: temperature
@@ -26,7 +26,7 @@ parameter_rules:
     type: int
     default: 2000
     min: 1
-    max: 2000
+    max: 8192
     help:
       zh_Hans: 用于指定模型在生成内容时token的最大数量,它定义了生成的上限,但不保证每次都会生成到这个数量。
       en_US: It is used to specify the maximum number of tokens when the model generates content. It defines the upper limit of generation, but does not guarantee that this number will be generated every time.

+ 1 - 1
api/core/model_runtime/model_providers/tongyi/llm/qwen-plus.yaml

@@ -10,7 +10,7 @@ features:
   - stream-tool-call
 model_properties:
   mode: chat
-  context_size: 131072
+  context_size: 128000
 parameter_rules:
   - name: temperature
     use_template: temperature

+ 2 - 2
api/core/model_runtime/model_providers/tongyi/llm/qwen-turbo.yaml

@@ -10,7 +10,7 @@ features:
   - stream-tool-call
 model_properties:
   mode: chat
-  context_size: 8000
+  context_size: 128000
 parameter_rules:
   - name: temperature
     use_template: temperature
@@ -26,7 +26,7 @@ parameter_rules:
     type: int
     default: 2000
     min: 1
-    max: 2000
+    max: 8192
     help:
       zh_Hans: 用于指定模型在生成内容时token的最大数量,它定义了生成的上限,但不保证每次都会生成到这个数量。
       en_US: It is used to specify the maximum number of tokens when the model generates content. It defines the upper limit of generation, but does not guarantee that this number will be generated every time.
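The `max_tokens` rules above declare a `default`, `min`, and `max` for each model; a consumer of these YAML files would typically fall back to the default when no value is given and clamp user input into the declared range. The sketch below illustrates that idea; it is not Dify's actual runtime code, and the `MAX_TOKENS_RULE` dict is simply a hand-copied mirror of the updated qwen-turbo parameter rule from this diff.

```python
# Illustrative sketch only: apply a parameter rule like the ones in the
# tongyi YAML files (default, then clamp into [min, max]).

# Mirrors the updated qwen-turbo max_tokens rule (max raised 2000 -> 8192).
MAX_TOKENS_RULE = {
    "name": "max_tokens",
    "type": "int",
    "default": 2000,
    "min": 1,
    "max": 8192,
}

def clamp_max_tokens(requested, rule=MAX_TOKENS_RULE):
    """Return the default when no value is requested, else clamp to [min, max]."""
    value = rule["default"] if requested is None else requested
    return max(rule["min"], min(rule["max"], value))
```

With the old `max: 2000`, a request for 8000 completion tokens would have been clamped down to 2000; under the updated rule it passes through unchanged.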