Set a model from Mistral AI

Usage

model_mistral(
  model = "open-mistral-7b",
  temperature = 0,
  top_p = 0.1,
  max_tokens = 100,
  min_tokens = 0,
  stop = NULL,
  seed = NA,
  format = "none",
  safe_prompt = FALSE,
  tools = NULL,
  tool_choice = "auto"
)
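Every argument above has a default, so a call only needs to override the settings of interest. A minimal sketch, assuming the package exporting `model_mistral()` is attached and a Mistral API key is already configured:

```r
# Sketch: configure a deterministic Mistral model.
# Argument names come from the Usage block above; everything else is assumed.
m <- model_mistral(
  model = "open-mistral-7b",
  temperature = 0,     # deterministic sampling
  seed = 42,           # reproducible generations
  max_tokens = 200,    # cap the response length
  stop = list("\n\n")  # halt generation at a blank line
)
```

The returned model object can then be passed to whatever function in the package accepts a configured model.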

Arguments

temperature

The sampling temperature of the model. Higher values produce more creative, varied output; lower values make the output more deterministic.

top_p

The nucleus sampling parameter. It limits sampling to the smallest set of most likely tokens whose cumulative probability reaches this value. Higher values allow more tokens and more diverse responses, while lower values produce more focused and constrained answers.

max_tokens

The maximum number of tokens to generate, which limits the length of the response. A token is a unit of text the model reads; its size varies by model, from a single character to several words. As a rule of thumb, 1 token is approximately 4 characters or 0.75 words of English text.
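The rule of thumb above can be sketched in a few lines of base R. This is an illustration only; real token counts depend on the model's tokenizer:

```r
# Approximate token count for English text using the ~4 characters/token heuristic
estimate_tokens <- function(text) ceiling(nchar(text) / 4)

estimate_tokens("Translate the following sentence into French.")
# nchar = 45, so ceiling(45 / 4) = 12
```

A budget like `max_tokens = 100` therefore corresponds to roughly 75 English words of output.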

min_tokens

The minimum number of tokens to generate.

stop

A list of stop sequences to use.

seed

The random seed used for sampling, so that repeated calls can produce reproducible output.

format

The format of the model's response.

safe_prompt

Whether to inject a safety prompt before the conversation.

tools

A list of tools available to the model. Not implemented.

tool_choice

The tool-selection strategy. Not implemented.

See also

Other large-language-models: model_ollama(), model_openai(), model_vendor()