
Blazor Simple AI Project (Part 6) Azure Open AI Image Generation

Welcome to the Blazor Simple AI Single Page App, Part 6 of the Microsoft AI services journey, which now includes Microsoft Azure Open AI image generation.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI

If you would like to download the project documentation, a PDF is available here.

Since part 5, the following changes to the project have been implemented.

Project Changes

  1. ImageGen.razor has been added to the project's Pages folder. This page hosts the image generation component and the necessary code.
  2. AzureOpenAIImageGeneration.razor has been added to the project's Components folder. This component handles the user prompt, then displays the image viewer dialogue with the Azure Open AI generated image.
  3. ImageViewer.razor has been added to the project's Components folder. This displays the image dialogue.
  4. The following configuration changes have been added to appsettings.json for the DALL-E deployment name, endpoint and key (a configuration-reading sketch follows this list).

"AzureAIConfig": {
    "OpenAIDALLEEndpoint": "[Your Azure Open AI endpoint which is hosting the DALL-E deployment]",
    "OpenAIKeyDALLECredential": "[Your Azure Open AI key]",
    "OpenAIDALLEDeploymentName": "[Your DALL-E deployment name]"
}

  5. The following base model was added to the Azure Open AI service.
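These settings can then be read with the standard .NET configuration binding. The snippet below is only a minimal sketch of that idea; the AzureAIConfig class and the Program.cs registration are my own assumptions and may differ from what the repository actually does.

// Hypothetical options class whose property names mirror the AzureAIConfig keys above.
public class AzureAIConfig
{
    public string OpenAIDALLEEndpoint { get; set; } = string.Empty;
    public string OpenAIKeyDALLECredential { get; set; } = string.Empty;
    public string OpenAIDALLEDeploymentName { get; set; } = string.Empty;
}

// In Program.cs: bind the section so components can inject IOptions<AzureAIConfig>.
builder.Services.Configure<AzureAIConfig>(builder.Configuration.GetSection("AzureAIConfig"));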

Components

ImageGen.razor (page)

The ImageGen.razor page hosts the prompt the user fills in to generate the image. It closely follows the pattern of the Open AI Chat index page: it accepts a text prompt or an audio recording, and the resulting text is passed to the child component, AzureOpenAIImageGeneration, which processes the text and generates the image from the Azure Open AI service.
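To illustrate the pattern, a hedged sketch of such a page is shown below; the markup and the Question parameter name are assumptions based on the description above, not the exact contents of ImageGen.razor in the repository.

@page "/imagegen"

@* Collect the prompt text and pass it down to the image generation child component. *@
<RadzenTextBox @bind-Value="Question" Placeholder="Describe the image to generate" />
<RadzenButton Text="Generate" Click="@(() => Submitted = true)" />

@if (Submitted)
{
    <AzureOpenAIImageGeneration Question="@Question" />
}

@code {
    private string Question = string.Empty;
    private bool Submitted;
}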

AzureOpenAIImageGeneration.razor (component)

A component which accepts the text from the prompt and then calls the Azure Open AI service to generate the image.
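For reference, a call like this can be made from inside an async method with the Azure OpenAI .NET SDK. The snippet below is a hedged sketch only, assuming the Azure.AI.OpenAI 2.x package and the configuration values shown earlier; the component in the repository may use a different SDK version or call shape. Here config and prompt stand in for the bound configuration object and the user's prompt text.

using Azure;
using Azure.AI.OpenAI;
using OpenAI.Images;

// Create a client against the endpoint hosting the DALL-E deployment.
var client = new AzureOpenAIClient(
    new Uri(config.OpenAIDALLEEndpoint),
    new AzureKeyCredential(config.OpenAIKeyDALLECredential));

// Get an image client for the DALL-E deployment and generate one image for the user's prompt.
ImageClient imageClient = client.GetImageClient(config.OpenAIDALLEDeploymentName);
GeneratedImage image = (await imageClient.GenerateImageAsync(
    prompt,
    new ImageGenerationOptions { Size = GeneratedImageSize.W1024xH1024 })).Value;

// The service returns a URI for the generated image, which the viewer dialogue can display.
string imageUrl = image.ImageUri.ToString();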

ImageViewer.razor (component)

This component is the template for the image dialogue box; it displays the output image generated from the Azure Open AI service. It is called from the image generation child component.
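A viewer like this can be as small as a dialogue template around an img tag. The sketch below is an assumption of mine (including the ImageUrl parameter name), not the exact component in the repository.

@* ImageViewer.razor - dialogue body that renders the generated image. *@
<div class="row">
    <div class="col">
        <img src="@ImageUrl" alt="Azure Open AI generated image" style="max-width:100%;" />
    </div>
</div>

@code {
    [Parameter]
    public string ImageUrl { get; set; } = string.Empty;
}

A component shaped like this can then be opened from the image generation component with Radzen's DialogService, for example DialogService.OpenAsync<ImageViewer>(...), passing the image URL in the parameter dictionary.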

The UI

I have added an Image Generation navigation link to the landing page.

Sample Questions and Responses

Question 1

“Draw a futuristic city”

Output for question 1:

The image generation takes a few seconds to complete, so I have displayed a spinning wheel and a prompt asking the user to wait for the result.
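One simple way to do this, sketched below under the assumption of an IsBusy flag (the repository may name and structure this differently), is to toggle a boolean around the awaited call and render a Radzen indeterminate progress circle while it is true.

@if (IsBusy)
{
    <RadzenProgressBarCircular Mode="ProgressBarMode.Indeterminate" />
    <p>Generating your image, please wait...</p>
}

@code {
    private bool IsBusy;

    private async Task GenerateImageAsync()
    {
        IsBusy = true;
        try
        {
            // Await the Azure Open AI image generation call here.
            await Task.Delay(1); // placeholder for the real service call
        }
        finally
        {
            IsBusy = false;
        }
    }
}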

The output is displayed as follows:

Question 2

“Origins of the universe by the James Webb telescope”

The output is displayed as follows:

Question 3

“exotic cars on a beach”

The output is displayed as follows:

That’s it!

This shows how simple it is to integrate a Blazor Web application with Azure Open AI image generation.

Blazor Simple AI Project (Part 5)

Azure Open AI Chat Audio Recording Button

Welcome to the Blazor Simple AI Single Page App, Part 5 of the Microsoft AI services journey, which now includes an audio recording button in the Open AI Chat component.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI

If you would like to download the whole series of posts for the Blazor Simple AI Project, you can download the PDF here.

Visual Changes

The audio button has been added to the index.razor page, as this is the main landing page. The button is the RadzenSpeechToTextButton component, which is part of Radzen Blazor and simple to interact with. Under the hood it uses JavaScript and the browser's API to get user media from the microphone.

For further information on the Radzen Blazor Speech To Text Button, see: Blazor SpeechToTextButton Component | Free UI Components by Radzen.

Landing Page

The new landing page has the audio button added next to the chat text box.

When you click the audio button for the first time, the browser will prompt you to grant the site access to the device's microphone; recording then starts, as shown below.

When you have finished speaking, you click the button again to stop recording. The Change event fires with the recognised text, the Question value is set, and the component's state is refreshed. Because Question is bound to the child component, AzureOpenAIChat, the child then executes its code to call the Microsoft Azure Open AI service with the bound text.
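To make that flow concrete, a rough sketch of the relevant index.razor wiring is shown below; it is an assumption based on the description above rather than the exact markup in the repository.

@* The speech button sets Question, which is bound to the chat child component. *@
<Radzen.Blazor.RadzenSpeechToTextButton Change="@(args => OnSpeechCaptured(args, "SpeechToTextButton"))" />
<AzureOpenAIChat Question="@Question" />

@code {
    private string Question = string.Empty;
    private string RecordedSpeech = string.Empty;
}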

An example of the recorded audio text and Azure Open AI response is shown below.

Code Changes

The following code changes were made in index.razor.

Added the audio recording button and spacing.

<Radzen.Blazor.RadzenSpeechToTextButton Style="padding-right:10px;" Change="@(args => OnSpeechCaptured(args, "SpeechToTextButton"))" />
<div style="padding-left:10px;" />

Added the OnSpeechCaptured method.

Note: I removed the question marks and periods from the string returned by the Radzen Speech To Text button, as these characters were automatically appended to the text value returned from the component.

private void OnSpeechCaptured(string speechValue, string name)
{
    // Remove any leading or trailing '.' and '?' characters appended by the speech recognition.
    speechValue = speechValue.Trim(new Char[] { '.', '?' });

    RecordedSpeech = speechValue;
    Question = RecordedSpeech;

    // Re-render so the bound Question value flows to the child component.
    this.StateHasChanged();
}

For my next post, I will be utilising the RadzenSpeechToTextButton for a different purpose in the Blazor Simple AI project.