How to Start Programming with Microsoft Semantic Kernel?
The previous post introduced the world of Microsoft Semantic Kernel, covering its key features and the capabilities it offers for integrating with language models. Today we move on to the practical side: a step-by-step guide to getting started with Semantic Kernel.
Installing Microsoft Semantic Kernel
The first step is to add the Microsoft Semantic Kernel package to your .NET project. You can do this using the following command:
dotnet add package Microsoft.SemanticKernel
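If you prefer the Package Manager Console in Visual Studio, the equivalent command is:
Install-Package Microsoft.SemanticKernel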
Connecting to the Language Model (LLM)
Once the package is installed, you can connect to a language model, such as one provided by OpenAI, like gpt-4o-mini. This example demonstrates how to configure the connection to the gpt-4o-mini model, using the Microsoft.Extensions.Configuration library for secure API key management.
Here's an example code snippet illustrating how to do this:
using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;

// Load the OpenAI API key from User Secrets.
var builder = new ConfigurationBuilder()
    .AddUserSecrets<Program>();
var configuration = builder.Build();

// Build a kernel with the OpenAI chat completion connector.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", configuration["apiKey"]!)
    .Build();
In the example above, we configure the ConfigurationBuilder to load the API key from User Secrets, which allows authentication data to be stored securely outside the source code. Then we create an instance of the Semantic Kernel using Kernel.CreateBuilder() and add support for the OpenAI model with AddOpenAIChatCompletion. Once the kernel is built, it can be used to generate responses from the gpt-4o-mini model (or any other model you configure).
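For this code to work, the API key must first be stored in User Secrets under the same key name the snippet reads (apiKey). From the project directory, you can set it with the .NET CLI (the value below is a placeholder for your own OpenAI API key):
dotnet user-secrets init
dotnet user-secrets set "apiKey" "<your-openai-api-key>"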
Microsoft Semantic Kernel integrates not only with language models but also with other AI services, such as image generation or audio-to-text conversion. To learn more about the available services and how to integrate them with Semantic Kernel, see the official Semantic Kernel documentation.
What is a Prompt?
In the context of large language models (LLMs) like GPT, a prompt is a textual query or instruction that the user provides to the model in order to generate a response or content. A prompt can be a question, a command, or even a fragment of text that the model is supposed to complete or expand upon. The quality and precision of the prompt directly affect the response: the more detailed and clear the prompt, the better the results you can get from the model. Prompting is therefore a key skill in using language models effectively.
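To illustrate the difference precision makes, here is a hypothetical pair of prompts for the same topic; the second one pins down the audience, format, and length, so the model has far less room to guess:
// A vague prompt: the model must guess the scope, depth, and format of the answer.
const string vaguePrompt = "Tell me about dependency injection.";

// A detailed prompt: explicit audience, format, and length constraints.
const string detailedPrompt =
    "Explain dependency injection in .NET to a junior developer " +
    "in three short bullet points of one sentence each.";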
Example Usage
Below is a simple example of how to use Semantic Kernel to generate a response from the gpt-4o-mini model based on a prompt:
using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;

// Load the API key from User Secrets and build the kernel.
var builder = new ConfigurationBuilder()
    .AddUserSecrets<Program>();
var configuration = builder.Build();

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", configuration["apiKey"]!)
    .Build();

// Send the prompt to the model and print the response.
const string prompt = "What is Semantic Kernel? Describe it in two sentences.";
var result = await kernel.InvokePromptAsync(prompt);

Console.WriteLine(result);
Console.ReadLine();
In the example above, we first create and configure the kernel and then define a prompt, i.e., a textual instruction sent to the language model. The prompt is a crucial element of interacting with AI models, as it determines what response we expect. In this case, we ask the model to describe Semantic Kernel in two sentences. After awaiting InvokePromptAsync, we receive the model's response and print it to the console.
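For conversational scenarios, you can also talk to the model through the chat completion service directly rather than via InvokePromptAsync. A minimal sketch, assuming the same kernel configuration as above:
using Microsoft.SemanticKernel.ChatCompletion;

// Retrieve the chat completion service registered by AddOpenAIChatCompletion.
var chatService = kernel.GetRequiredService<IChatCompletionService>();

// Build a conversation: the system message sets the assistant's behavior.
var history = new ChatHistory();
history.AddSystemMessage("You are a concise technical assistant.");
history.AddUserMessage("What is Semantic Kernel? Describe it in two sentences.");

var reply = await chatService.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);

// Appending the reply to the history preserves context for follow-up questions.
history.AddAssistantMessage(reply.Content ?? string.Empty);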
Defining Functions (Prompt Function)
Prompt functions, known as semantic functions before the 1.0 release, are functions that connect to AI services, typically language models (LLMs), to perform specific tasks. They allow an external model to complete a task based on a defined prompt.
The example below shows how to define a prompt function that translates text from one language to another:
using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;

var builder = new ConfigurationBuilder()
    .AddUserSecrets<Program>();
var configuration = builder.Build();

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", configuration["apiKey"]!)
    .Build();

// Prompt template with variables that are bound at invocation time.
const string prompt = """
    Translate the following text from {{$source_language}} to {{$target_language}}: "{{$text_to_translate}}"
    """;

var translateFunction = kernel.CreateFunctionFromPrompt(prompt);

var result = await kernel.InvokeAsync(translateFunction, new KernelArguments
{
    ["source_language"] = "polish",
    ["target_language"] = "english",
    ["text_to_translate"] = "Pracuj tak ciężko jak to możliwe, to zwiększa szanse na sukces. Jeśli inni ludzie pracują 40 godzin w tygodniu, a ty 100 godzin, to ty w 4 miesiące osiągniesz to, co innym zajmie rok."
});

Console.WriteLine(result);
Console.ReadLine();
In the above code, we define a prompt that includes variables such as {{$source_language}}, {{$target_language}}, and {{$text_to_translate}}. These variables can be dynamically replaced with values passed during the function call. The prompt function is created using CreateFunctionFromPrompt and then invoked using InvokeAsync, where we pass specific values to the variables via KernelArguments. In this example, we translate text from Polish to English.
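Because the variables are bound only when the function is invoked, the same prompt function can be reused with different arguments. For example, the translateFunction defined above can handle another language pair without any changes:
// Reuse the same prompt function for a different language pair.
var germanResult = await kernel.InvokeAsync(translateFunction, new KernelArguments
{
    ["source_language"] = "english",
    ["target_language"] = "german",
    ["text_to_translate"] = "Work as hard as possible; it increases the chances of success."
});
Console.WriteLine(germanResult);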
It is also possible to pass various parameters that determine how the model processes the prompt, which can be achieved using OpenAIPromptExecutionSettings. This allows control over parameters such as the maximum number of tokens, model temperature, and more. The example below shows how to define a prompt function with custom settings:
using Microsoft.SemanticKernel.Connectors.OpenAI;

var translateFunction = kernel.CreateFunctionFromPrompt(prompt, new OpenAIPromptExecutionSettings
{
    MaxTokens = 1000,
    Temperature = 0.5,
});
In the example above, we set the maximum number of tokens to 1000 and the temperature to 0.5. Temperature controls the randomness of the generated response: lower values make the output more deterministic, while higher values make it more varied. Note that OpenAIPromptExecutionSettings requires the Microsoft.SemanticKernel.Connectors.OpenAI namespace, as shown in the snippet.
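Execution settings do not have to be baked into the function itself. A KernelArguments instance can also carry them, so the same function can run with different settings on each call. A minimal sketch, reusing the translateFunction from above:
// Override execution settings for a single invocation.
var arguments = new KernelArguments(new OpenAIPromptExecutionSettings
{
    MaxTokens = 500,
    Temperature = 0.9, // higher temperature, more varied phrasing
})
{
    ["source_language"] = "polish",
    ["target_language"] = "english",
    ["text_to_translate"] = "Przykładowy tekst do przetłumaczenia." // "A sample text to translate."
};

var creativeResult = await kernel.InvokeAsync(translateFunction, arguments);
Console.WriteLine(creativeResult);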
Summary
In this post, I discussed the step-by-step process of installing and configuring Microsoft Semantic Kernel, including connecting to a language model (LLM) and defining prompt functions (previously known as semantic functions).
In future posts, I will cover increasingly advanced topics related to Semantic Kernel.
Stay tuned for the next posts!