Workflow Application: OPENAICHAT usage

Overview

The OPENAICHAT workflow application lets you interact with an OpenAI chat model.

Required parameters

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MODEL | TEXT | IN | ID of the model to use |

You can find the available models at https://platform.openai.com/docs/models/model-endpoint-compatibility; the endpoint used by default is /v1/chat/completions.

You can use any one of the following configurations: with system/user messages, with a message number, or with a JSON message array.

With system/user messages

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| SYSTEM_MESSAGE | TEXT | IN | The system message content |
| USER_MESSAGE | TEXT | IN | The user message content |
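
For illustration, a minimal sketch, assuming these two parameters map onto the standard chat completions `messages` array (the mapping is an assumption, not documented behavior):

```python
# Sketch: how SYSTEM_MESSAGE and USER_MESSAGE presumably map onto the
# standard chat completions "messages" array (assumed behavior).
system_message = "You are a helpful assistant."            # SYSTEM_MESSAGE
user_message = "Summarize this ticket in one sentence."    # USER_MESSAGE

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message},
]
```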

With a message number

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MESSAGE_ROLEx | TEXT | IN | The role of the message, where x corresponds to the message number; the value should be assistant, system, or user |
| MESSAGE_CONTENTx | TEXT | IN | The message content, where x corresponds to the message number |
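
For illustration, a minimal sketch of how the numbered pairs are presumably assembled, in order, into the `messages` array (assumed behavior):

```python
# Sketch: numbered MESSAGE_ROLEx / MESSAGE_CONTENTx pairs presumably
# build the "messages" array in ascending order of x (assumed behavior).
parameters = {
    "MESSAGE_ROLE1": "system",
    "MESSAGE_CONTENT1": "You are a helpful assistant.",
    "MESSAGE_ROLE2": "user",
    "MESSAGE_CONTENT2": "What is the capital of France?",
}

messages = []
i = 1
while f"MESSAGE_ROLE{i}" in parameters:
    messages.append({
        "role": parameters[f"MESSAGE_ROLE{i}"],
        "content": parameters[f"MESSAGE_CONTENT{i}"],
    })
    i += 1
```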

With a JSON message array

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| MESSAGE_JSON | TEXT | IN | The JSON array of message objects |
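
For illustration, a plausible MESSAGE_JSON value, built here with Python's json module; the exact expected shape is an assumption based on the chat completions message format:

```python
import json

# Sketch: the chat messages encoded as a single JSON array string,
# suitable as a MESSAGE_JSON value (assumed shape).
message_json = json.dumps([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(message_json)
```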

Optional parameters

| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| API_KEY | TEXT | IN | OpenAI API key. By default, this value comes from the OpenAiApiKey parameter in the web.config file. |
| URL | TEXT | IN | API endpoint. Default: https://api.openai.com/v1/chat/completions |
| TEMPERATURE | NUMERIC | IN | Sampling temperature. Default: 1. Higher values (e.g. 0.8) make the output more random, while lower values (e.g. 0.2) make it more focused and deterministic. |
| TOP_P | NUMERIC | IN | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass; 0.1 means only the tokens comprising the top 10% probability mass are considered. Default: 1 |
| MAX_TOKENS | NUMERIC | IN | Maximum number of tokens that can be generated in the chat completion. Default: 256 |
| FREQUENCY_PENALTY | NUMERIC | IN | Number between -2.0 and 2.0. Default: 0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| PRESENCE_PENALTY | NUMERIC | IN | Number between -2.0 and 2.0. Default: 0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| RESPONSE_FORMAT | TEXT | IN | Format of the response: json_object or text. Default: text. When the value is json_object, the system prompt should contain the JSON keyword (see the sketch after this table). |
| APP_RESPONSE_IGNORE_ERROR | TEXT | IN | Specifies (Y or N) whether errors should be ignored. Default: N. If set to Y, errors are ignored and the defined OUT parameters (APP_RESPONSE_STATUS and APP_RESPONSE_CONTENT) are mapped; otherwise, an exception is thrown. |
| RESULT | TEXT | OUT | Result of the chat call |
| RESULT_CONTENT | TEXT | OUT | Content of the assistant message |
| RESULT_TOTAL_TOKENS | NUMERIC | OUT | Total number of tokens used for the generation (prompt + completion) |
| RESULT_PROMPT_TOKENS | NUMERIC | OUT | Number of tokens used for the prompt |
| RESULT_COMPLETION_TOKENS | NUMERIC | OUT | Number of tokens used for the completion |
| APP_RESPONSE_STATUS | TEXT | OUT | Response status code |
| APP_RESPONSE_CONTENT | TEXT | OUT | Response payload or error message |
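
To tie the parameters together, here is a minimal sketch of the kind of request the application presumably sends to the default endpoint. The model name and values are illustrative, not defaults of OPENAICHAT, and the workflow engine, not this code, performs the actual call. Note the system prompt mentioning JSON, as required when RESPONSE_FORMAT is json_object:

```python
import os
import requests

# Sketch: an illustrative chat completions request mirroring the parameters
# above. The internal behavior of OPENAICHAT is an assumption.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",    # URL (default endpoint)
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},  # API_KEY
    json={
        "model": "gpt-4o-mini",                      # MODEL (illustrative)
        "messages": [
            # With response_format json_object, the prompt must mention JSON.
            {"role": "system", "content": "Answer in JSON."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "temperature": 1,                            # TEMPERATURE
        "top_p": 1,                                  # TOP_P
        "max_tokens": 256,                           # MAX_TOKENS
        "frequency_penalty": 0,                      # FREQUENCY_PENALTY
        "presence_penalty": 0,                       # PRESENCE_PENALTY
        "response_format": {"type": "json_object"},  # RESPONSE_FORMAT
    },
    timeout=60,
)
data = response.json()
print(data["choices"][0]["message"]["content"])      # -> RESULT_CONTENT
print(data["usage"]["total_tokens"])                 # -> RESULT_TOTAL_TOKENS
```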