Codeunit "AOAI Chat Completion Params"
Represents the Chat Completion parameters used by the API. See more details at https://aka.ms/AAlrz36.
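For orientation, a minimal usage sketch (assuming the companion codeunits "Azure OpenAI", "AOAI Chat Messages", and "AOAI Operation Response" from the same module; authorization and capability setup are omitted for brevity):

```al
procedure GenerateAnswer(Prompt: Text): Text
var
    AzureOpenAI: Codeunit "Azure OpenAI";
    AOAIChatMessages: Codeunit "AOAI Chat Messages";
    AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params";
    AOAIOperationResponse: Codeunit "AOAI Operation Response";
begin
    // Tune the completion parameters before sending the request.
    AOAIChatCompletionParams.SetTemperature(0); // deterministic output
    AOAIChatCompletionParams.SetMaxTokens(500);

    AOAIChatMessages.AddUserMessage(Prompt);
    AzureOpenAI.GenerateChatCompletion(AOAIChatMessages, AOAIChatCompletionParams, AOAIOperationResponse);

    if AOAIOperationResponse.IsSuccess() then
        exit(AOAIChatMessages.GetLastMessage());
end;
```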
Properties
Name | Value |
---|---|
Access | Public |
InherentEntitlements | X |
InherentPermissions | X |
Methods
GetTemperature
Gets the sampling temperature in use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.
procedure GetTemperature(): Decimal
Returns
Type | Description |
---|---|
Decimal | The sampling temperature being used. |
GetMaxTokens
Gets the maximum number of tokens allowed for the generated answer.
procedure GetMaxTokens(): Integer
Returns
Type | Description |
---|---|
Integer | The maximum number of tokens allowed for the generated answer. |
Remarks
0 or less uses the API default.
GetMaxHistory
Gets the maximum number of messages to send back as the message history.
procedure GetMaxHistory(): Integer
Returns
Type | Description |
---|---|
Integer | The maximum number of messages to send. |
GetPresencePenalty
Gets the presence penalty value.
procedure GetPresencePenalty(): Decimal
Returns
Type | Description |
---|---|
Decimal | The presence penalty value. |
GetFrequencyPenalty
Gets the frequency penalty value.
procedure GetFrequencyPenalty(): Decimal
Returns
Type | Description |
---|---|
Decimal | The frequency penalty value. |
SetTemperature
Sets the sampling temperature to use, between 0 and 2. A higher temperature increases the likelihood that the next most probable token will not be selected. When requesting structured data, set the temperature to 0. For human-sounding speech, 0.7 is a typical value.
procedure SetTemperature(NewTemperature: Decimal)
Parameters
Name | Type | Description |
---|---|---|
NewTemperature | Decimal | The new sampling temperature to use. |
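A short sketch of the two typical settings described above (the procedure name is illustrative):

```al
procedure TuneTemperature(var AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params"; WantStructuredData: Boolean)
begin
    if WantStructuredData then
        // 0: always pick the most probable token, so output is reproducible.
        AOAIChatCompletionParams.SetTemperature(0)
    else
        // 0.7: a typical value for natural, human-sounding text.
        AOAIChatCompletionParams.SetTemperature(0.7);
end;
```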
SetMaxTokens
Sets the maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is (4096 - prompt tokens).
procedure SetMaxTokens(NewMaxTokens: Integer)
Parameters
Name | Type | Description |
---|---|---|
NewMaxTokens | Integer | The new maximum number of tokens allowed for the generated answer. |
Remarks
If the prompt's token count plus max_tokens exceeds the model's context length, the generation request returns an error.
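For example (the 1000-token cap is an arbitrary illustration; choose a value that leaves room for your prompt within the model's context length):

```al
procedure CapAnswerLength(var AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params")
begin
    // Allow at most 1000 tokens in the generated answer. The prompt's
    // tokens plus this cap must fit within the model's context length.
    // Passing 0 or less would revert to the API default instead.
    AOAIChatCompletionParams.SetMaxTokens(1000);
end;
```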
SetMaxHistory
Sets the maximum number of messages to send back as the message history.
procedure SetMaxHistory(NewMaxHistory: Integer)
Parameters
Name | Type | Description |
---|---|---|
NewMaxHistory | Integer | The new maximum number of messages to send. |
Remarks
The default is 10 messages including the primary System Message.
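For example, to trim the history below the default of 10 (the value 5 is illustrative):

```al
procedure LimitHistory(var AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params")
begin
    // Send back only the 5 most recent messages as conversation history;
    // fewer messages means fewer prompt tokens per request.
    AOAIChatCompletionParams.SetMaxHistory(5);
end;
```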
SetPresencePenalty
Sets the presence penalty, a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they have appeared in the text so far, increasing the model's likelihood to talk about new topics.
procedure SetPresencePenalty(NewPresencePenalty: Decimal)
Parameters
Name | Type | Description |
---|---|---|
NewPresencePenalty | Decimal | The new presence penalty value. |
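An illustrative setting (the 0.6 value is an assumption, not a module recommendation):

```al
procedure EncourageNewTopics(var AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params")
begin
    // A positive presence penalty discourages tokens that have already
    // appeared, nudging the model toward new topics.
    AOAIChatCompletionParams.SetPresencePenalty(0.6);
end;
```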
SetFrequencyPenalty
Sets the frequency penalty, a number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
procedure SetFrequencyPenalty(NewFrequencyPenalty: Decimal)
Parameters
Name | Type | Description |
---|---|---|
NewFrequencyPenalty | Decimal | The new frequency penalty value. |
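An illustrative setting (the 0.8 value is an assumption, not a module recommendation):

```al
procedure ReduceRepetition(var AOAIChatCompletionParams: Codeunit "AOAI Chat Completion Params")
begin
    // A positive frequency penalty grows with how often a token has
    // already been used, discouraging verbatim repetition.
    AOAIChatCompletionParams.SetFrequencyPenalty(0.8);
end;
```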