File-Based Prompt Functions in Semantic Kernel
In previous articles, we covered the basics of using Semantic Kernel and how to create and execute inline prompts. Today we'll focus on a more advanced approach—file-based prompt functions. By moving your prompts to dedicated files, you can reuse them, making management easier and improving project readability. In this article, you'll learn how to implement this approach in practice and why it's worth considering.
Benefits of File-Based Prompt Functions
Using file-based prompt functions offers many advantages:
- Modularity and Clarity: Storing prompts in separate files allows for better separation of logic and data, leading to increased clarity and improved code organization.
- Ease of Updates: Updating prompts in a central location means that changes are automatically applied wherever the prompt is used, eliminating the need to modify multiple sections of the code.
- Reusability: Prompt files can be reused across different projects, reducing code duplication, minimizing errors, and ensuring consistency.
- Better Collaboration: By storing prompts in files, the team can easily share them within a repository, ensuring uniform standards and facilitating team collaboration.
Components of File-Based Prompt Functions
Storing prompt functions in the file system involves creating two key files:
- skprompt.txt: A simple text file where we define the prompt, i.e., what we want the model to do. It can include both specific instructions and variables.
- config.json: A configuration file that defines the model parameters and the variables used in the prompts, making our function more flexible.
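On disk, both files sit together in a folder named after the prompt function, inside a parent directory that groups all prompt functions into a plugin. For the translation example used later in this article, the layout looks like this:

Prompts/
    Translate/
        skprompt.txt
        config.json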
The skprompt.txt file contains the prompt definition, which may include dynamic variables. This allows you to easily adjust its behavior to specific needs.
For example, you could create a prompt instructing the model to translate text from one language to another:
Translate the following text from {{$source_language}} to {{$target_language}}: "{{$text_to_translate}}"
In the example above, we used variables like {{$source_language}}, {{$target_language}}, and {{$text_to_translate}}. These variables are dynamic and can be replaced with the appropriate values when calling the function, allowing for a broad range of applications without editing the prompt each time.
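For instance, calling the function with source_language set to polish, target_language set to english, and text_to_translate set to "Dzień dobry" would render the template roughly as:

Translate the following text from polish to english: "Dzień dobry"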
The config.json file defines how the model should behave when the prompt runs. It contains parameters such as the maximum token count and temperature, which control the response generation process. Additionally, this file declares the variables used in skprompt.txt, documenting them so they can be supplied dynamically at call time.
Below is a sample config.json file:
{
  "schema": 1,
  "description": "Translate text.",
  "default_services": [
    "gpt-4o-mini"
  ],
  "execution_settings": {
    "default": {
      "max_tokens": 1000,
      "temperature": 0.5
    }
  },
  "input_variables": [
    {
      "name": "source_language",
      "description": "Language from which the text will be translated.",
      "is_required": true
    },
    {
      "name": "target_language",
      "description": "Language into which the text will be translated.",
      "is_required": true
    },
    {
      "name": "text_to_translate",
      "description": "The text that needs to be translated.",
      "is_required": true
    }
  ]
}
In the example above, execution_settings defines parameters such as the maximum response length (max_tokens) and the sampling temperature (temperature): the higher the temperature, the more creative and less predictable the results. The input_variables section declares the variables used earlier in skprompt.txt; note that their names are listed without the $ prefix, which belongs only to the template syntax. Each variable has a short description, making it easier for other developers to understand and use the function.
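The values in config.json act as defaults. If needed, they can also be overridden at invocation time by attaching execution settings to the KernelArguments. Below is a minimal sketch assuming the OpenAI connector (the same one used in the full example later in this article); the specific values are only illustrative, and the arguments built here can be passed to kernel.InvokeAsync exactly as shown there:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Settings attached to the arguments are used for this invocation
// instead of the defaults defined in config.json.
var settings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 500,
    Temperature = 0.2 // lower temperature = more literal, repeatable translations
};

var arguments = new KernelArguments(settings)
{
    ["source_language"] = "polish",
    ["target_language"] = "english",
    ["text_to_translate"] = "Dzień dobry"
};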
Now, let's combine all the elements and run our prompt. The C# code below demonstrates how to use file-based prompt functions.
using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;

// Read the OpenAI API key from user secrets.
var builder = new ConfigurationBuilder()
    .AddUserSecrets<Program>();
var configuration = builder.Build();

// Create the kernel with the OpenAI chat completion service.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", configuration["apiKey"]!)
    .Build();

// Load all prompt functions from the Prompts directory as a plugin.
var path = Path.Combine(Directory.GetCurrentDirectory(), "..", "..", "..", "Prompts");
var prompts = kernel.CreatePluginFromPromptDirectory(path);

// Invoke the Translate prompt function with its input variables.
var result = await kernel.InvokeAsync(prompts["Translate"], new KernelArguments
{
    ["source_language"] = "polish",
    ["target_language"] = "english",
    ["text_to_translate"] = "Work as hard as possible; it increases the chances of success. If other people work 40 hours a week and you work 100, you will achieve in four months what would take them a year."
});

Console.WriteLine(result);
Console.ReadLine();
At the beginning of the code, a ConfigurationBuilder is created to build the application's configuration, including reading the API key from user secrets, which keeps authentication data out of the source code. Next, Kernel.CreateBuilder() is used to create a kernel instance with the OpenAI chat completion service, passing the API key (configuration["apiKey"]!). This prepares the gpt-4o-mini model for response generation.
The next step is to load the prompts from the file system. The path to the prompt directory is built with Path.Combine(), and the prompts are loaded with kernel.CreatePluginFromPromptDirectory(path). This way, the prompt files become part of the Semantic Kernel environment as a plugin whose functions can be invoked from code. The prompt function itself lives in the Prompts/Translate source directory, where Translate is the name of the prompt function.
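If you want to verify what was loaded, the returned plugin can be enumerated; each entry is a KernelFunction. A small, optional sketch placed right after the CreatePluginFromPromptDirectory call:

// List every prompt function discovered in the Prompts directory.
foreach (var function in prompts)
{
    // The function name comes from the folder name (e.g. Translate);
    // the description comes from config.json.
    Console.WriteLine($"{function.Name}: {function.Description}");
}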
In the next part of the code, we use kernel.InvokeAsync() to run the Translate prompt. Variables such as source_language (source language), target_language (target language), and text_to_translate (text to translate) are passed in. In this case, we want to translate a sentence from Polish to English. The provided variables dynamically replace those used in the skprompt.txt prompt definition, allowing for flexible use in various contexts.
Finally, the translation result is displayed on the console using Console.WriteLine(result). Console.ReadLine() keeps the program running so the user can examine the result before the application terminates. This code demonstrates how to seamlessly integrate configuration, prompts, and function invocation in a modular way, making code management more efficient and comprehensible.
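The object returned by InvokeAsync is a FunctionResult; printing it works because its string form contains the generated text, but the value can also be extracted explicitly, which is handy when the translation needs further processing. A minimal sketch:

// Extract the translated text as a string instead of relying on the result's string form.
string? translation = result.GetValue<string>();
Console.WriteLine(translation);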
Conclusion
Storing prompt functions in files is a highly practical approach that enhances modularity, flexibility, and efficiency when working with Semantic Kernel. By separating prompt content from the code, projects become easier to manage, teamwork improves, and consistency is ensured across applications. This solution enables dynamic and scalable interactions with AI models, which is crucial in the rapidly evolving field of artificial intelligence. I encourage you to try this approach in your projects and experience its advantages firsthand.
See you in future posts!