Set an OpenAI model
Usage
model_openai(
model = "gpt-4o-mini",
api = NULL,
seed = NA,
format = "none",
max_tokens = 100,
temperature = 0,
top_p = 0.1,
n = 1,
stop = NULL,
presence_penalty = 0.2,
frequency_penalty = 1
)
Arguments
- model
The name of the OpenAI model to use, e.g. "gpt-4o-mini".
- api
The OpenAI API key to use.
- seed
The seed to use for the model.
- format
The format of the model's output. Defaults to "none".
- max_tokens
The maximum number of tokens to generate. This limits the length of the response. A token is a unit of text the model reads and can range from one character to several words, depending on the model. As a rule of thumb, 1 token is approximately 4 characters or 0.75 words of English text.
- temperature
The sampling temperature of the model. Higher values produce more creative and varied responses; lower values produce more deterministic ones.
- top_p
The nucleus sampling parameter. It limits the cumulative probability of the most likely tokens considered at each step. Higher values allow more tokens and more diverse responses, while lower values produce more focused and constrained answers.
- n
The number of completions/responses to generate.
- stop
A list of stop sequences to use.
- presence_penalty
How strongly the model avoids repeating topics already present in the conversation. Higher values encourage the model to move on to new topics; a lower value makes the model less concerned about repeating topics.
- frequency_penalty
The frequency penalty to use. It discourages the model from repeating the same text; a lower value makes the model more likely to repeat information.
See also
Other large-language-models:
model_mistral(), model_ollama(), model_vendor()
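Examples
A minimal sketch of configuring a model with the arguments documented above; reading the key from the OPENAI_API_KEY environment variable is an assumption about how you store credentials, not a requirement of the function.

# Deterministic, short responses: fixed seed, low temperature, tight
# nucleus sampling. The API key is read from an environment variable here
# (an assumption; supply it however your setup requires).
m <- model_openai(
  model = "gpt-4o-mini",
  api = Sys.getenv("OPENAI_API_KEY"),
  seed = 123,
  max_tokens = 100,
  temperature = 0,
  top_p = 0.1
)

# More varied output: higher temperature and top_p, an explicit stop
# sequence, and a frequency penalty to discourage repeated text.
m_creative <- model_openai(
  model = "gpt-4o-mini",
  api = Sys.getenv("OPENAI_API_KEY"),
  temperature = 0.9,
  top_p = 0.9,
  stop = list("\n\n"),
  frequency_penalty = 1
)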