The prefix spring.ai.watsonx.ai is used as the property prefix that lets you connect to watsonx.ai.

| Property | Description | Default |
| --- | --- | --- |
| spring.ai.watsonx.ai.base-url | The URL to connect to | us-south.ml.cloud.ibm.com |
| spring.ai.watsonx.ai.stream-endpoint | The streaming endpoint | generation/stream?version=2023-05-29 |
| spring.ai.watsonx.ai.text-endpoint | The text endpoint | generation/text?version=2023-05-29 |
| spring.ai.watsonx.ai.project-id | The project ID | - |
| spring.ai.watsonx.ai.iam-token | The IBM Cloud account IAM token | - |
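As a minimal application.properties sketch using these connection properties; the project ID and IAM token values below are placeholders, not real credentials:

```properties
# watsonx.ai connection settings (placeholder values - replace with your own)
spring.ai.watsonx.ai.base-url=us-south.ml.cloud.ibm.com
spring.ai.watsonx.ai.project-id=<your-project-id>
spring.ai.watsonx.ai.iam-token=<your-iam-token>
```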

The prefix spring.ai.watsonx.ai.chat is the property prefix that lets you configure the chat model implementation for Watsonx.AI.

| Property | Description | Default |
| --- | --- | --- |
| spring.ai.watsonx.ai.chat.enabled | Enable the Watsonx.AI chat model. | true |
| spring.ai.watsonx.ai.chat.options.temperature | The temperature of the model. Increasing the temperature makes the model answer more creatively. | 0.7 |
| spring.ai.watsonx.ai.chat.options.top-p | Works together with top-k. A higher value (e.g., 0.95) leads to more diverse text, while a lower value (e.g., 0.2) generates more focused and conservative text. | 1.0 |
| spring.ai.watsonx.ai.chat.options.top-k | Reduces the probability of generating nonsense. A higher value (e.g., 100) gives more diverse answers, while a lower value (e.g., 10) is more conservative. | 50 |
| spring.ai.watsonx.ai.chat.options.decoding-method | Decoding is the process the model uses to choose the tokens in the generated output. | greedy |
| spring.ai.watsonx.ai.chat.options.max-new-tokens | Sets the maximum number of tokens the LLM may generate. | 20 |
| spring.ai.watsonx.ai.chat.options.min-new-tokens | Sets the minimum number of tokens the LLM must generate. | 0 |
| spring.ai.watsonx.ai.chat.options.stop-sequences | Sets the sequences at which the LLM should stop. For example, with ["\n\n\n"], generation terminates once the LLM produces three consecutive line breaks. Stop sequences are ignored until the minimum number of tokens set by min-new-tokens has been generated. | - |
| spring.ai.watsonx.ai.chat.options.repetition-penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.8) penalizes repetitions more strongly, while a lower value (e.g., 1.1) is more lenient. | 1.0 |
| spring.ai.watsonx.ai.chat.options.random-seed | To produce repeatable results, set the same random seed value every time. | randomly generated |
| spring.ai.watsonx.ai.chat.options.model | The identifier of the LLM model to use. | google/flan-ul2 |
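These defaults can be overridden at startup through application.properties; a short sketch with illustrative values only:

```properties
# Chat model defaults (illustrative values)
spring.ai.watsonx.ai.chat.options.model=google/flan-ul2
spring.ai.watsonx.ai.chat.options.decoding-method=greedy
spring.ai.watsonx.ai.chat.options.temperature=0.7
spring.ai.watsonx.ai.chat.options.max-new-tokens=200
spring.ai.watsonx.ai.chat.options.repetition-penalty=1.1
```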

In addition to the model-specific WatsonxAiChatOptions.java, you can use a portable ChatOptions instance, created with ChatOptionsBuilder#builder().
For more information, go to watsonx-parameters-info
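A sketch of both approaches, assuming the with*-style builder methods of the pre-1.0 Spring AI API that this page documents, an injected ChatModel bean, and import paths from the 1.0 milestone line; the prompt text and option values are illustrative:

```java
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.ChatOptions;
import org.springframework.ai.chat.prompt.ChatOptionsBuilder;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.watsonx.WatsonxAiChatOptions;

class WatsonxOptionsExample {

    // Model-specific options: override the configured defaults for one request.
    ChatResponse callWithWatsonxOptions(ChatModel chatModel) {
        return chatModel.call(new Prompt(
                "Generate the names of 5 famous pirates.",
                WatsonxAiChatOptions.builder()
                        .withModel("google/flan-ul2")  // spring.ai.watsonx.ai.chat.options.model
                        .withDecodingMethod("sample")  // spring.ai.watsonx.ai.chat.options.decoding-method
                        .withRandomSeed(1)             // spring.ai.watsonx.ai.chat.options.random-seed
                        .build()));
    }

    // Portable options: provider-neutral, covering only the common subset of settings.
    ChatResponse callWithPortableOptions(ChatModel chatModel) {
        ChatOptions options = ChatOptionsBuilder.builder()
                .withTemperature(0.7f)
                .build();
        return chatModel.call(new Prompt("Tell me a joke.", options));
    }
}
```

The portable variant keeps your code independent of watsonx.ai, at the cost of losing access to provider-specific settings such as decoding-method.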