Explore the power of integrating OpenAI's advanced AI models into .NET applications. This guide covers the benefits, features, and practical implementation steps so developers can harness AI capabilities with ease, and it also explains how the client package that makes this possible is developed.
Introduction
The ConnectingApps.Refit.OpenAI NuGet package offers a seamless way to integrate OpenAI's API into .NET applications. This article provides insights into the package, its features, and how developers can harness the power of OpenAI.
Background
OpenAI's API provides access to advanced AI models like ChatGPT, capable of understanding and generating text. This is invaluable for developers creating applications that require conversational agents, simulated characters for games, or any feature that involves dynamic text generation and understanding. The API offers a simple interface for prompt-response interactions.
Using the Code
To utilize the ConnectingApps.Refit.OpenAI NuGet package, start by setting up your OpenAI API key:
var apiKey = Environment.GetEnvironmentVariable("OPENAI_KEY");
This line fetches the OpenAI API key from the environment variables, which is a secure way to store and retrieve sensitive information like API keys.
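If the variable is missing, any subsequent API call will fail with an authentication error, so it can help to fail fast. A minimal guard, not part of the package, could look like this:
// Minimal guard (not part of the package): fail fast if the key is missing.
if (string.IsNullOrEmpty(apiKey))
{
    throw new InvalidOperationException("The OPENAI_KEY environment variable is not set.");
}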
Next, instantiate the ICompletion (or any other) Refit interface:
using ConnectingApps.Refit.OpenAI;
using ConnectingApps.Refit.OpenAI.Completions;
var completionApi = RestService.For<ICompletion>(new HttpClient
{
    BaseAddress = new Uri("https://api.openai.com")
}, OpenAiRefitSettings.RefitSettings);
The RestService.For method is part of the Refit library. It creates an instance of the specified interface, in this case ICompletion, which provides methods to interact with OpenAI's API. The method takes an HttpClient with a base address pointing to OpenAI's API endpoint, plus the settings for Refit. This setup allows for straightforward, user-friendly REST API calls through the created interface instance.
Interface Description
The ICompletion interface is a crucial component of the ConnectingApps.Refit.OpenAI package. It defines the methods that allow for interaction with OpenAI's API and contains the following method:
CreateCompletionAsync: This method sends a completion request to OpenAI's API. It takes a ChatRequest object, which specifies the model to use, the temperature setting for randomness, and the message to send. The method returns an ApiResponse<ChatResponse> containing the API's response.
Here is the content of the ICompletion interface:
using System.Threading.Tasks;
using ConnectingApps.Refit.OpenAI.Completions.Request;
using ConnectingApps.Refit.OpenAI.Completions.Response;
using Refit;

namespace ConnectingApps.Refit.OpenAI.Completions
{
    public interface ICompletion
    {
        [Post("/v1/chat/completions")]
        Task<ApiResponse<ChatResponse>> CreateCompletionAsync(
            [Body] ChatRequest chatRequest,
            [Header("Authorization")] string authorization);
    }
}
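With the interface instantiated as shown earlier, a call is made by building a ChatRequest and passing a bearer token. The sketch below follows the pattern of the integration tests shown later in this article; the property names (Model, Temperature, Messages) and the response navigation are taken from those tests:
var request = new ChatRequest
{
    Model = "gpt-3.5-turbo",
    Temperature = 0.7,
    Messages = new List<Message>
    {
        new() { Role = "user", Content = "What is the capital of France?" }
    }
};
// The Authorization header expects a bearer token built from your API key.
var response = await completionApi.CreateCompletionAsync(request, $"Bearer {apiKey}");
Console.WriteLine(response.Content!.Choices!.First().Message!.Content);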
In addition to ICompletion, there are several other interfaces you can instantiate, depending on the functionality you require. Here is an overview:
- ICompletion for completions
- IVariation for image variants
- IAudioTranslation for audio translation
- ITrancription for audio transcription
- IModeration for moderating the content of posts
- ImageCreation for creating images based on your requirements
- IFiles for submitting files (for example, for fine-tuning)
- IEmbedding for transforming the text of a post into a vector so posts can be compared mathematically
- IFineTune for fine-tuning, the process of tailoring a pre-trained model to better suit specific tasks; this is explained later in more detail
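All of these interfaces are instantiated with the same RestService.For pattern shown above for ICompletion; only the request and response types differ per interface. As a sketch (check the package for the exact method signatures of each interface):
var moderationApi = RestService.For<IModeration>(new HttpClient
{
    BaseAddress = new Uri("https://api.openai.com")
}, OpenAiRefitSettings.RefitSettings);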
If you want to make a client package for another REST API yourself, I recommend the following:
- Also use Refit. If you are not familiar with it, read the documentation. You can still use the HttpClient you configured yourself, but you don't need to write the implementation of your calls because Refit generates it from the attributes (see the sketch after this list).
- Use .NET Standard 2.0. That way, you can support both .NET (Core) and .NET Framework (and even more frameworks).
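To illustrate that advice, here is a sketch of a Refit interface for a fictitious REST API; every name in it is made up purely for illustration. Refit builds the implementation from the route and header attributes, so the interface is all you write:
using System.Threading.Tasks;
using Refit;

// Hypothetical interface for a fictitious weather API (illustration only).
public interface IWeatherApi
{
    [Get("/v1/forecast/{city}")]
    Task<ApiResponse<string>> GetForecastAsync(string city, [Header("Authorization")] string authorization);
}

// Usage: Refit generates the implementation for you.
// var weatherApi = RestService.For<IWeatherApi>("https://api.example.com");
// var forecast = await weatherApi.GetForecastAsync("Paris", "Bearer <token>");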
Testing the Code
Ensuring the reliability and correctness of the code is crucial. The ConnectingApps.Refit.OpenAI package includes integration tests to verify its functionality. One such test is the CompletionTest class, which tests the completion functionality of the package.
The CompletionTest class contains two test methods:
- CapitalOfFrance: This test checks the response when asking for the capital of France. It expects the response to contain the word "Paris".
- CapitalOfFranceTopP: Similar to the first test, but with a different parameter setting for the completion request. It also expects the response to contain "Paris".
Both tests utilize the CompletionCaller delegate, which is responsible for making the completion request to OpenAI's API. The tests ensure that the response is successful (HTTP status code 200) and that the content of the response matches the expected output.
Here is the content of the CompletionTest.cs file:
using System.Net;
using ConnectingApps.Refit.OpenAI.Completions;
using ConnectingApps.Refit.OpenAI.Completions.Request;
using ConnectingApps.Refit.OpenAI.Completions.Response;
using FluentAssertions;
using Refit;

namespace ConnectingApps.Refit.OpenAI.IntegrationTest
{
    public class CompletionTest
    {
        private static readonly Func<ChatRequest, Task<ApiResponse<ChatResponse>>> CompletionCaller;

        static CompletionTest()
        {
            var apiKey = Environment.GetEnvironmentVariable("OPENAI_KEY");
            apiKey.Should().NotBeNullOrEmpty("OPENAI_KEY environment variable must be set");
            var completionApi = RestService.For<ICompletion>(
                "https://api.openai.com", OpenAiRefitSettings.RefitSettings);
            CompletionCaller = chatRequest =>
                completionApi.CreateCompletionAsync(chatRequest, $"Bearer {apiKey}");
        }

        [Fact]
        public async Task CapitalOfFrance()
        {
            var response = await CompletionCaller(new ChatRequest
            {
                Model = "gpt-3.5-turbo",
                Temperature = 0.7,
                Messages = new List<Message>
                {
                    new()
                    {
                        Role = "user",
                        Content = "What is the capital of the France?",
                    }
                }
            });
            (response.Error?.Content, response.StatusCode).Should().Be((null, HttpStatusCode.OK));
            response.Content!.Choices!.First().Message!.Content.Should().Contain("Paris");
        }

        [Fact]
        public async Task CapitalOfFranceTopP()
        {
            var response = await CompletionCaller(new ChatRequest
            {
                Model = "gpt-3.5-turbo",
                TopP = 1,
                Messages = new List<Message>
                {
                    new()
                    {
                        Role = "user",
                        Content = "What is the capital of the France?",
                    }
                }
            });
            (response.Error?.Content, response.StatusCode).Should().Be((null, HttpStatusCode.OK));
            response.Content!.Choices!.First().Message!.Content.Should().Contain("Paris");
        }
    }
}
This test class provides a clear example of how to use the package's functionality and verify its correctness. The assertions are made with Fluent Assertions.
Fine-tuning
As mentioned, fine-tuning is a typical AI feature supported by OpenAI and therefore by the NuGet package. Its usage is a bit more involved, so it warrants a more detailed explanation. Fine-tuning is the process of tailoring a pre-trained model to better suit specific tasks. By fine-tuning an OpenAI model, you can improve the model's performance on your specific use case. OpenAI offers an API to facilitate this, allowing you to upload a training file and start a fine-tuning job based on that data.
Firstly, the IFineTune interface is initialized and a bearer token is built from the OpenAI API key. This interface will be used to interact with OpenAI's fine-tuning feature.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_KEY");
var fineTuneApi = RestService.For<IFineTune>(new HttpClient
{
    BaseAddress = new Uri("https://api.openai.com")
}, OpenAiRefitSettings.RefitSettings);
var token = $"Bearer {apiKey}";
After that, the GetJobsAsync method retrieves the current fine-tuning jobs. The example limits the retrieval to 200 jobs.
var jobs = await fineTuneApi.GetJobsAsync(token, limit: 200);
Now the training data file (mydata.jsonl) is uploaded to OpenAI's server using the PostFileAsync method.
string newTrainingFile;
await using (var fineTuneDataStream =
    new FileStream("mydata.jsonl", FileMode.Open, FileAccess.Read))
{
    var openAiApi = RestService.For<IFiles>(
        "https://api.openai.com", OpenAiRefitSettings.RefitSettings);
    var streamPart = new StreamPart(fineTuneDataStream, "mydata.jsonl");
    var postFileResponse = await openAiApi.PostFileAsync(token, streamPart, "fine-tune");
    newTrainingFile = postFileResponse.Content!.Id;
}
After the upload, a new fine-tuning job is initiated using the PostJobAsync method, which requires the ID of the uploaded training file and the model to be fine-tuned.
var newJobResponse = await fineTuneApi.PostJobAsync(new FineTuneRequest
{
    TrainingFile = newTrainingFile,
    Model = "gpt-3.5-turbo"
}, token);
Now that there is a new job, its status and details can be fetched with the GetJobAsync method, and the job can be cancelled with the CancelJobAsync method.
var newJob = await fineTuneApi.GetJobAsync(newJobResponse.Content!.Id, token);
var cancelResponse = await fineTuneApi.CancelJobAsync(newJobResponse.Content!.Id, token);
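Fine-tuning jobs run asynchronously on OpenAI's side, so in practice you may want to poll the job until it reaches a terminal state. Here is a minimal sketch, assuming the job response exposes a Status property that mirrors OpenAI's job status field; check the package's response types and the OpenAI documentation for the exact member names and status values:
// Poll the fine-tuning job until it is no longer in progress.
// Status and the status strings are assumptions; verify them against the package and OpenAI docs.
var jobId = newJobResponse.Content!.Id;
string status;
do
{
    await Task.Delay(TimeSpan.FromSeconds(30));
    var jobResponse = await fineTuneApi.GetJobAsync(jobId, token);
    status = jobResponse.Content!.Status;
    Console.WriteLine($"Fine-tuning job {jobId} is {status}");
} while (status is "running" or "queued" or "validating_files");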
This covers how the fine-tuning API calls are coded in C#. If you want to know more about the concept and the training data format, read the OpenAI documentation on this topic.
Build Pipeline Description
The build pipeline for the ConnectingApps.Refit.OpenAI NuGet package is defined in the dotnet-desktop.yml file. This pipeline is responsible for building, testing, packaging, and publishing the NuGet package to NuGet.org.
Here is the code:
name: .NET CI
on:
  push:
    branches:
      - main
      - release
      - develop
      - feature/**
      - bugfix/**
jobs:
  build_and_test:
    name: Build and Test
    runs-on: ubuntu-latest
    env: # Setting environment variable at the job level
      OPENAI_KEY: ${{ secrets.OPENAI_KEY }} # Accessing the secret and assigning it to an environment variable
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup .NET SDK
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '6.0.x' # Adjust the version as necessary
      - name: Run another one-line script
        run: echo Hello, ${{ vars.PUBLISH_NUGET }}!
      - name: Restore dependencies
        run: dotnet restore OpenAI.sln
      - name: Build Solution
        run: dotnet build OpenAI.sln --configuration Release --no-restore
      - name: Run Tests
        run: python -c "import os; os.system('dotnet test OpenAI.sln --configuration Release --no-build --verbosity normal --logger trx');"
      - name: Publish Test Results
        uses: dorny/test-reporter@v1
        with:
          name: 'Test Results'
          path: '**/TestResults/**/*.trx'
          reporter: 'dotnet-trx'
  package:
    name: Package
    needs: build_and_test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup .NET SDK
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '6.0.x' # Adjust the version as necessary
      - name: Restore dependencies
        run: dotnet restore OpenAI.sln
      - name: Build Solution
        run: dotnet build OpenAI.sln --configuration Release --no-restore
      - name: Pack
        run: dotnet pack ConnectingApps.Refit.OpenAI/ConnectingApps.Refit.OpenAI.csproj --configuration Release --no-build -o out
      - name: Find package file
        run: find | grep 'nupkg'
      - name: Publish Artifacts
        uses: actions/upload-artifact@v3
        with:
          name: nuget-package
          path: out/*.nupkg
      - name: Publish to NuGet
        if: ${{ vars.PUBLISH_NUGET == 'true' }} # Conditional execution based on the PUBLISH_NUGET variable
        run: dotnet nuget push "out/*.nupkg" --api-key ${{ secrets.NUGET_KEY }} --source https://api.nuget.org/v3/index.json
Here's a breakdown of the pipeline:
- Trigger: The pipeline is triggered on pushes to the main, release, develop, feature, and bugfix branches.
- Environment Variables: The pipeline uses the OPENAI_KEY environment variable, which is fetched from GitHub secrets.
- Build and Test: The pipeline runs on an ubuntu-latest machine. It checks out the code, sets up the .NET SDK, restores dependencies, builds the solution in Release configuration, and runs tests. Test results are then published.
- Package: The pipeline packages the project into a NuGet package and stores it in the out directory.
- Publish: If the PUBLISH_NUGET repository variable is set to true, the pipeline pushes the NuGet package to NuGet.org, using the NUGET_KEY secret for authentication.
This pipeline ensures that the NuGet package is built, tested, and published in a consistent and automated manner, which safeguards quality and eases distribution.
Points of Interest
While integrating the ConnectingApps.Refit.OpenAI
package, it's evident that the synergy between OpenAI's capabilities and Refit's user-friendly REST functionalities is powerful. The addition of new features like Image Variations, Audio Translations, and more makes the package even more versatile for developers looking to leverage advanced AI in their projects. Additionally, the build pipeline ensures that the package is always of high quality and is distributed efficiently.
Have a look at the source code to have a more detailed view of how this all works.
History
- 22nd October, 2023
- Initial version of this article
- Previously, some tip articles were published
- 25th October, 2023