
The next AI Project? or something else? What’s coming next…

Whilst I am not putting the Blazor Simple AI project aside completely, I have been working on a nominal search scenario. I have also recently developed a document summary solution using the Microsoft Azure Open AI service for abstractive and extractive analysis of documents; that solution will be added to the Blazor Simple AI project later this year and will be available in my GitHub repo with an associated blog post.

For the three different integration scenarios I have been developing, using Blazor, .NET Core APIs and background processors, I have been utilising the following architecture patterns:

Request/Response – with Microsoft Azure API Management and a backend API

Web-Queue-Worker – with Microsoft Azure Service Bus Relay

Bi-Directional Synchronisation – with Microsoft Azure Service Bus

All three architecture patterns provide different capabilities for enterprise applications. The last two utilise the same Microsoft Azure Service Bus service, but are referred to as different resource types by Microsoft. There are a number of different components utilised in the Blazor project, and I will be documenting these soon, now that the integration patterns have all been developed and tested.

A snippet of the architecture components is shown in the diagram below.

It is likely that I will integrate this solution with my Blazor Simple AI solution, perhaps using Azure AI Search.

Watch this space!

Blazor Simple AI Project (Part 6) Azure Open AI Image Generation

Welcome to the Blazor Simple AI Single Page App, Part 6 of the Microsoft AI services journey, which now includes Microsoft Azure Open AI image generation.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI

The project documentation is also available to download as a PDF here.

Since part 5, the following changes to the project have been implemented.

Project Changes

  1. ImageGen.razor page has been added to the project Pages folder. This is a page hosting the image generation component and necessary code
  2. AzureOpenAIImageGeneration.razor component has been added to the project components folder which handles the user prompt, then displays the image viewer dialogue with the Azure Open AI generated image
  3. ImageViewer.razor component has been added to the project components folder. This displays the image dialogue
  4. The following configuration changes have been added to appsettings.json for the DALL-E deployment (shown below)

"AzureAIConfig": {
    "OpenAIDALLEEndpoint": "[Your Azure Open AI endpoint which is hosting the DALL-E deployment]",
    "OpenAIKeyDALLECredential": "[Your Azure Open AI key]",
    "OpenAIDALLEDeploymentName": "[Your DALL-E deployment name]"
}

  5. The following base model was added to the Open AI Service.

Components

ImageGen.razor (page)

The ImageGen.razor page hosts the prompt for the user to generate the image. It is deliberately similar to the Open AI Chat index page: it accepts text prompts or audio recordings, and the text is then passed to the child component, AzureOpenAIImageGeneration, which processes the text and generates the image from the Azure Open AI service.

AzureOpenAIImageGeneration.razor

A component which accepts the text from the prompt and then calls the Azure Open AI service to generate the image.
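
As an illustration of what this component’s call to the service might look like, here is a minimal sketch using the Azure.AI.OpenAI client library (1.x preview). The exact type names vary between SDK versions, and the endpoint, key, deployment name and prompt variables are assumptions based on the AzureAIConfig section above and the bound user prompt.

using Azure;
using Azure.AI.OpenAI;

// Sketch only: type/property names follow the Azure.AI.OpenAI 1.x preview SDK and may differ in later releases.
OpenAIClient client = new OpenAIClient(
    new Uri(openAIDalleEndpoint),             // OpenAIDALLEEndpoint from AzureAIConfig
    new AzureKeyCredential(openAIDalleKey));  // OpenAIKeyDALLECredential from AzureAIConfig

Response<ImageGenerations> result = await client.GetImageGenerationsAsync(
    new ImageGenerationOptions
    {
        DeploymentName = openAIDalleDeploymentName, // OpenAIDALLEDeploymentName from AzureAIConfig
        Prompt = prompt,                            // the text captured from the user prompt
        Size = ImageSize.Size1024x1024,
        ImageCount = 1
    });

Uri generatedImageUri = result.Value.Data[0].Url;   // passed to the ImageViewer dialogue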

ImageViewer.razor

This component is the template for the image dialogue box; it displays the output image generated by the Azure Open AI service. It is called from the image generation child component.

The UI

I have added an Image Generation navigation link to the landing page.

Sample Questions and Responses

Question 1

“Draw a futuristic city”

Output for question 1:

The process takes a few seconds for the image generation to complete, so I have displayed a spinning wheel and a prompt for the user to wait for the result.

The output is displayed as follows:

Question 2

“Origins of the universe by the James Webb telescope”

The output is displayed as follows:

Question 3

“exotic cars on a beach”

The output is displayed as follows:

That’s it!

This shows how simple it is to integrate a Blazor Web application with Azure Open AI image generation.

Microsoft 365 Mailbox Attachment Processor

A .NET8 C# Application to Process Microsoft 365 Email Messages and Attachments

Introduction

Recently, I was asked how attachments from a Microsoft 365 mailbox could be automatically pushed into an Azure storage file share, so that the attachments can be made accessible to an onward process which needs to be executed on an Azure Virtual Machine. Whilst there are many ways this can be achieved, I decided to create a C# console application to process the messages and attachments from a Microsoft 365 mailbox inbox folder.

You can download this blog post as a PDF here.

Source Code

The source code for this solution can be found in my GitHub repo here.

Dependencies

There are several dependencies for this to work; these are described in the list below.

  • A Microsoft Entra ID registered application, with the required application permissions
  • An application secret (this can also be a certificate if needed)
  • Install the Microsoft ExchangeOnlineManagement PowerShell tools
  • Create a Microsoft Exchange Online application access policy to restrict the application’s access to the target mailbox

# Connect to Exchange Online

Connect-ExchangeOnline -UserPrincipalName [Your Exchange Online Admin UPN] [-ShowBanner:$false]

# Create the app policy

New-ApplicationAccessPolicy -AppId [Your application ID] -PolicyScopeGroupId [Full email address of the mailbox] -AccessRight RestrictAccess -Description "Restrict the Mailbox Processor app"

Reference: Limiting application permissions to specific Exchange Online mailboxes – Microsoft Graph | Microsoft Learn

  • Create an Azure storage account
  • Create an Azure storage account file share

NuGet Packages

The following NuGet packages are dependencies, as defined in the project file.

  <ItemGroup>
    <PackageReference Include="Azure.Core" Version="1.44.1" />
    <PackageReference Include="Azure.Identity" Version="1.13.1" />
    <PackageReference Include="Azure.Storage.Files.Shares" Version="12.21.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="9.0.0" />
    <PackageReference Include="Microsoft.Graph" Version="5.63.0" />
    <PackageReference Include="Microsoft.Graph.Core" Version="3.2.1" />
    <PackageReference Include="Microsoft.Identity.Client" Version="4.66.2" />
  </ItemGroup>

Mailbox Processor Application

The mailbox processor application consists of the following C# Classes and an appsettings.json file.

File Name – Purpose
AuthContext.cs – A C# class representing the authentication context for the application
JSONConfigurationBuilder.cs – A C# class building the configuration from appsettings.json into the application context
MSAzureStorageOperations.cs – A C# class with a method to stream the attachment to the Azure storage file share
MSGraphOperations.cs – A C# class with methods to work with the Microsoft Graph API, e.g. read/move messages, attachments and folders
Program.cs – The C# program, the core of the application
Reference.cs – A C# class to store the appsettings that are referenced by the application
appsettings.json – The configuration settings for the application

Application Settings

The application settings are described below.

{
  "AppSettings": {
    "MailFolderName": "[The mailbox folder to target to read the messages]",
    "MailEmailAddress": "[The mailbox email address]",
    "MailSubjectSearchString": "[The subject search string for each mail message]",
    "ProcessedMessagesFolderName": "[Process message mailbox folder name]",
    "AzureStorageConnectionString": "[The Azure storage connection string]",
    "AzureStorageFileShareName": "[Azure storage file share name]",
    "MSEntraApplicationClientId": "[Microsoft Entra ID Application Id]",
    "MSEntraApplicationSecret": "[Microsoft Entra ID Application Secret]",
    "MSEntraApplicationTenantId": "[Microsoft Entra ID Tenant Id]"
  }
}
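
As an illustration, JSONConfigurationBuilder.cs could load this section with the Microsoft.Extensions.Configuration packages listed above. This is a minimal sketch rather than the exact code in the repo; the AppSettings POCO is hypothetical and simply mirrors the keys shown.

using Microsoft.Extensions.Configuration;

// Build the configuration from appsettings.json and bind the section to a settings object.
IConfigurationRoot configuration = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: false)
    .Build();

AppSettings settings = configuration.GetSection("AppSettings").Get<AppSettings>()!;

// Hypothetical POCO mirroring the "AppSettings" section above.
public sealed class AppSettings
{
    public string MailFolderName { get; set; } = "";
    public string MailEmailAddress { get; set; } = "";
    public string MailSubjectSearchString { get; set; } = "";
    public string ProcessedMessagesFolderName { get; set; } = "";
    public string AzureStorageConnectionString { get; set; } = "";
    public string AzureStorageFileShareName { get; set; } = "";
    public string MSEntraApplicationClientId { get; set; } = "";
    public string MSEntraApplicationSecret { get; set; } = "";
    public string MSEntraApplicationTenantId { get; set; } = "";
}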

Application Runtime Process

The application process is described below.

  1. The configuration is initialised
  2. The messages are retrieved from the defined mailbox folder
  3. Each message in the collection is processed; only emails whose subject contains the search string are processed further
  4. A console output of the message ID, received date, received from, and subject is displayed
  5. Each attachment is processed; if it is a file attachment, it is uploaded to the Azure file share specified by the Azure storage connection string and file share name (see the sketch after this list)
  6. The number of messages processed and the number of attachments processed are displayed in the console output
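
A condensed sketch of steps 2–5 is shown below, using the Microsoft Graph SDK and Azure.Storage.Files.Shares packages listed earlier. It is illustrative only (error handling, paging and the move to the processed folder are omitted), and the variable names are assumptions based on the application settings above.

using Azure.Identity;
using Azure.Storage.Files.Shares;
using Microsoft.Graph;
using Microsoft.Graph.Models;

// Authenticate as the registered application (client credentials flow).
var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });

// 2. Retrieve the messages from the configured mailbox folder.
MessageCollectionResponse? messages = await graphClient
    .Users[mailEmailAddress]
    .MailFolders[mailFolderName]
    .Messages
    .GetAsync();

var share = new ShareClient(azureStorageConnectionString, azureStorageFileShareName);
ShareDirectoryClient rootDirectory = share.GetRootDirectoryClient();

foreach (Message message in messages?.Value ?? new List<Message>())
{
    // 3. Only process messages whose subject contains the search string.
    if (message.Subject?.Contains(mailSubjectSearchString, StringComparison.OrdinalIgnoreCase) != true)
        continue;

    // 4. Console output of the message details.
    Console.WriteLine($"{message.Id} | {message.ReceivedDateTime} | {message.From?.EmailAddress?.Address} | {message.Subject}");

    // 5. Upload each file attachment to the Azure storage file share.
    var attachments = await graphClient.Users[mailEmailAddress].Messages[message.Id].Attachments.GetAsync();
    foreach (Attachment attachment in attachments?.Value ?? new List<Attachment>())
    {
        if (attachment is not FileAttachment fileAttachment || fileAttachment.ContentBytes is null)
            continue;

        ShareFileClient fileClient = rootDirectory.GetFileClient(fileAttachment.Name);
        using var content = new MemoryStream(fileAttachment.ContentBytes);
        await fileClient.CreateAsync(content.Length);
        await fileClient.UploadAsync(content);
    }
}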

Sample Output

The mailbox has two messages with the subject containing the search string “course completions”.

When the mailbox attachment processor is executed, it displays the following output.

Although three messages were visible in the mailbox, only the two whose subjects contained the search string were matched and processed.

Three attachments in total were processed and uploaded to an Azure storage file share.

The email messages were moved to the ProcessedMessages folder, as defined in the application setting ProcessedMessagesFolderName.

When the application is executed again, the output is shown below as there are no longer any matched messages to process.

Closing Thoughts

From a development point of view, using this method provides a simple solution. Other considerations:

  • Store the storage account key in Azure Key Vault
  • Store the application secret (if used) in Azure Key Vault
  • The Azure resource hosting the application e.g. Function App, can have a managed identity and RBAC access can be provided to Azure Key Vault for the service principal (Azure Key Vault access policies are now deprecated)
  • Environment settings can be stored in the hosting environment configuration rather than in the appsettings.json file.

References

Limiting application permissions to specific Exchange Online mailboxes – Microsoft Graph | Microsoft Learn

Blazor Simple AI Project (Part 5)

Azure Open AI Chat Audio Recording Button

Welcome to the Blazor Simple AI Single Page App, Part 5 of the Microsoft AI services journey, which now includes an audio recording button in the Open AI Chat component.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI

If you would like to download the whole series of posts for the Blazor Simple AI Project, you can download the PDF here.

Visual Changes

The audio button has been added to the index.razor page, as this is the main landing page. The audio button is the RadzenSpeechToTextButton component, part of Radzen Blazor, and is simple to interact with. Under the hood it utilises JavaScript and the browser’s media API to capture the user’s speech.

For further information on the Radzen Blazor Speech To Text Button, see: Blazor SpeechToTextButton Component | Free UI Components by Radzen.

Landing Page

The new landing page has the audio button added next to the chat text box.

When you click on the audio button for the first time, the browser will prompt you to grant the site access to the device’s microphone; recording then starts, as shown below.

When you have finished speaking, you click the button again to stop recording. The recognised text is submitted to the Question string via the Change event, the Question value is set, and the component state is updated. Because the Question string is bound to the child component, AzureOpenAIChat, the child component then executes its code to call the Microsoft Azure Open AI service with the captured text.

An example of the recorded audio text and Azure Open AI response is shown below.

Code Changes

The following code changes were made in index.razor.

Added the audio recording button and spacing.

<Radzen.Blazor.RadzenSpeechToTextButton class="padding-right:10px;" Change="@(args => OnSpeechCaptured(args, "SpeechToTextButton"))" />
<div style="padding-left:10px;" />

Added the OnSpeechCaptured method.

Note: I removed the question marks and full stops from the string returned by the Radzen Speech To Text button, as these characters were automatically appended to the returned text value by the component.

private void OnSpeechCaptured(string speechValue, string name)
{
    speechValue = speechValue.Trim(new Char[] { '.', '?' });
    RecordedSpeech = speechValue;
    Question = RecordedSpeech;
    this.StateHasChanged();
}

For my next post, I will be utilising the RadzenSpeechToTextButton for a different purpose in the Blazor Simple AI project.

Blazor Simple AI Project (Part 4) Invoice Analysis with Microsoft Azure Document Intelligence

Welcome to the Blazor Simple AI Single Page App, Part 4 of the Microsoft AI services journey, which now includes invoice analysis utilising the Microsoft Azure AI Document Intelligence service. The Document Intelligence service is used to extract the text from an invoice using a pre-built model. A sample of some of the models is shown below in Document Intelligence Studio.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI.

If you would like to download the whole series of posts for the Blazor Simple AI Project, you can download the PDF here.

Since part 3, the following NuGet package has been added to the project.

Azure.AI.DocumentIntelligence (Pre-release)

New Components

Three components have been developed for the invoice analysis. These are as follows:

  1. InvoiceLoader.Razor – A component which includes the child component (InvoiceFileList.Razor) and uploads invoices to an Azure blob storage container
  2. InvoiceFileList.Razor – A component which lists the invoices that are present in the invoice upload container
  3. InvoiceViewer.Razor – A component which allows the user to view the uploaded invoice in a dialog

Provisioning a Microsoft AI Document Intelligence Service

To provision a Microsoft AI Document Intelligence Service resource, follow the instructions in the article below.

Create a Document Intelligence (formerly Form Recognizer) resource – Azure AI services | Microsoft Learn

Learn about Microsoft AI Document Intelligence

To learn more about the capabilities of Microsoft AI Document Intelligence, see Document Intelligence documentation – Quickstarts, Tutorials, API Reference – Azure AI services | Microsoft Learn. Microsoft Azure AI Document Intelligence includes more analysis capabilities, not just the invoice model.

Configuration Settings Changes

The following configuration settings were added to appsettings.json.

"AzureDocumentIntelligenceConfig": {
    "AzureDocumentIntelligenceKey": "[Your Azure Document Intelligence Key]",
    "AzureDocumentIntelligenceEndpoint": "[Your document intelligence endpoint, e.g. https://[Resource Name].cognitiveservices.azure.com/]"
},
"AzureStorageConfig": {
    "AzureStorageInvoiceContainer": "[Your Invoice Upload Container Name]",
    "AzureStorageInvoiceProcessedContainer": "[Your Invoice Processed Container Name]"
},

The invoice processed container holds the output interface file that is generated from the text extracted from the original invoice file. The output is JSON which utilises the InvoiceAnalysisData data type.

Note: Whilst this project utilises the service key, in an enterprise environment you should consider using token-based access to the service secured by Microsoft Entra ID. If you do need to use the service key for any reason, utilise Azure Key Vault to protect the key, with a managed identity for the application to access the service key stored in Azure Key Vault.
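
For example, with a managed identity and the Azure.Security.KeyVault.Secrets package, retrieving the key could look like the following sketch; the vault URI and the secret name "DocumentIntelligenceKey" are placeholders.

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential uses the app's managed identity when running in Azure
// (and your developer credentials locally).
var secretClient = new SecretClient(
    new Uri("https://[your-key-vault-name].vault.azure.net/"),
    new DefaultAzureCredential());

// "DocumentIntelligenceKey" is a placeholder secret name.
KeyVaultSecret secret = await secretClient.GetSecretAsync("DocumentIntelligenceKey");
string documentIntelligenceKey = secret.Value;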

Components

Invoice Loader Component (InvoiceLoader.Razor)

The invoice upload component utilises Blazor InputFile for the user to select the file to upload in the application. The component reads the Azure Storage connection string from the configuration, including the container, then uploads the file to the container and also adds a blob http header for the file content type taken from the file properties. The Radzen notification service is used to notify the user of the application activities. I also included a basic spinner as part of the interaction for the upload process.

Invoice File List Component (InvoiceFileList.Razor)

This component reads the Azure Storage connection string from the configuration, including the container, then displays the invoice blob file names in a Radzen DataGrid. A button is added to view the invoice, or process the invoice, which then calls the Radzen notification service to display the activities being taken by the application.
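
For reference, the processing behind the Process button maps to the prebuilt-invoice model; a minimal sketch using the pre-release Azure.AI.DocumentIntelligence client is shown below. Request type names differ between preview and GA releases of this SDK, and the endpoint, key and invoiceBlobSasUrl variables are assumptions based on the configuration and the SAS generation described for the viewer.

using Azure;
using Azure.AI.DocumentIntelligence;

// Sketch only: request type names vary between preview versions of the SDK.
var documentClient = new DocumentIntelligenceClient(
    new Uri(azureDocumentIntelligenceEndpoint),
    new AzureKeyCredential(azureDocumentIntelligenceKey));

// Analyse the uploaded invoice blob (via an assumed SAS URL) with the prebuilt-invoice model.
Operation<AnalyzeResult> operation = await documentClient.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    "prebuilt-invoice",
    new AnalyzeDocumentContent { UrlSource = new Uri(invoiceBlobSasUrl) });

AnalyzedDocument invoice = operation.Value.Documents[0];

// Field values are read from the Fields dictionary, e.g. the vendor name and invoice total.
string? vendorName = invoice.Fields.TryGetValue("VendorName", out var vendorField) ? vendorField.Content : null;
string? invoiceTotal = invoice.Fields.TryGetValue("InvoiceTotal", out var totalField) ? totalField.Content : null;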

Invoice Viewer Component (InvoiceViewer.Razor)

This component is a child component displayed in a Radzen dialog box which displays the original uploaded invoice directly from the Azure blob storage invoice upload container. A storage SAS key is generated which provides time limited access to the user in order for the invoice to be displayed in the dialog.

Data Classes

InvoiceAnalysisData.cs – The class for the invoice.

InvoiceItem.cs –  The class for the invoice items.
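
A plausible shape for these two classes, inferred from the sample JSON output shown later in this post (the actual classes in the repo may differ slightly):

// Inferred from the processed output file contents shown below.
public class InvoiceItem
{
    public double? Quantity { get; set; }
    public string? ItemDescription { get; set; }
    public string? Amount { get; set; }
}

public class InvoiceAnalysisData
{
    public string? VendorName { get; set; }
    public string? VendorAddress { get; set; }
    public string? CustomerName { get; set; }
    public string? CustomerAddress { get; set; }
    public List<InvoiceItem> InvoiceItems { get; set; } = new();
    public string? Tax { get; set; }
    public string? InvoiceTotal { get; set; }
}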

Invoice Sample

I have created an invoice sample to test the pre-built invoice model from Microsoft Azure Document Intelligence.

Supplier 1 Invoice (PDF)

I created two additional sample invoices, both of which were tested and successfully processed. I have not covered the upload of these in my blog post.

Supplier 2 – Jpeg image

(Missing Quantity)

Invoice 3 – Handwritten Invoice – jpeg image

The UI

The UI for invoice analysis is as follows.

Invoice Analysis

Upload File

View Button – Opens the PDF in a dialog box

Process Button – Interactive Dialog box

Processing Completed – Invoice details – text extracted into InvoiceAnalysis object.

Submit Button – Create an output interface file in JSON format.

Azure Storage – (invoiceanalysisupload container)

Processed Output file

File Contents

{
"VendorName":"Car Parts UK Ltd",
"VendorAddress":"15 Street Gardens\nPoole, Dorset, DS11 333",
"CustomerName":"Car Shop",
"CustomerAddress":null,
"InvoiceItems":[
{"Quantity":10.0,"ItemDescription":"Ferrari 360 Spider Hood","Amount":"50000"},
{"Quantity":5.0,"ItemDescription":"Ferrari 360 Spider Gearbox","Amount":"12500"}
],
"Tax":"12500",
"InvoiceTotal":"75000"
}

Note: The code does not extract the customer address, but this is in fact possible.

The handwritten jpeg image, the second invoice as a jpeg image and the PDF all proved to have 100% extraction using the Microsoft AI Document Intelligence service. That’s just amazing!

It is as simple as that!

The reason for creating an interactive SPA as a sample app is to demonstrate the features. The same code can be used in event-driven architectures or with scheduled triggers. That will be something I will post next.

Invoice 2 – jpeg output

Invoice 2- viewer

Invoice 2 – Processed

Invoice 3 – Handwritten jpeg

Blazor Simple AI Project (Part 3) with a GPT-4 Model Change

Many models in the Azure Open AI service are being deprecated on June 14th 2024. All Microsoft Azure Open AI service model retirement dates can be found on Microsoft Learn here. It’s time to deploy GPT-4 to Blazor Simple AI and make the minor changes in appsettings.json to utilise a deployment based on GPT-4. Follow the steps below.

To download all parts of this post, you can download the PDF here.

Deploy a new Model with Azure AI Studio

  1. Launch and authenticate to AI Studio https://oai.azure.com/
  2. Click Deployments
  3. Click Deploy a new model
  4. Set the model version and deployment type (I have chosen Standard), enter the name of the deployment and the required tokens per minute, then click Create.

Your model will be deployed.

Update the configuration settings in the application

In the configuration section below, update the Open AI deployment name setting, in my case the deployment name I had chosen is “GPT-4”.

"AzureAIConfig": {
    "OpenAIDeploymentName": "GPT-4",
}

That’s all you need to do; Blazor Simple AI Chat is now using GPT-4 from the Microsoft Azure Open AI service. No other code changes are necessary at this stage.

BLAZOR JARVIS AI – Document Redaction Tool

Welcome to JARVIS, the AI document redaction processor. At the time of publishing this version of this document, I was still developing Jarvis as a fully working product. The current version supports Word documents and PDFs. With the development of Blazor Simple AI, I can also utilise the image analysis component to redact PII information from images.

If you would like to download a PDF of this post, you can download it here.

Jarvis is made up of the following technologies:

  • Microsoft .NET Blazor (.NET 6.0 LTS release)
  • Microsoft Azure Cognitive Services (Text Analytics Service)
  • Microsoft Azure Cosmos DB (for maintaining document and redaction processor metadata)
  • Azure Web App (hosting the JARVIS AI Web App)
  • Azure Storage (source document upload and redaction storage)
  • Microsoft Azure Function App (for APIs that process PII data and perform redaction processing)
  • Radzen Blazor components (for an amazing UI experience)

A document named “IPAddressandNamesx10Pages.docx” contains the following information, repeated within 10 pages.

——————————————————————————————————————————-

The IP Address of the legacy system is 10.254.1.1.

The company that owns the legacy system is Microsoft.

The original founders of the company Microsoft are Bill Gates and Paul Allen.

——————————————————————————————————————————-

The document is uploaded to Jarvis, the AI Document redaction processor.

The user clicks “Process” to determine the PII and confidential data held in the document.

A notification is sent to the user to advise the document has been submitted for processing.

About 3 seconds later Jarvis has identified the PII and confidential data in the document and provides a notification to the user.
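
Behind the scenes, this identification step corresponds to the PII entity recognition capability of the Text Analytics service. Jarvis’s actual API code is not shown in this post, but a minimal sketch with the Azure.AI.TextAnalytics client would look something like the following; the endpoint and key variables are placeholders.

using Azure;
using Azure.AI.TextAnalytics;

var textClient = new TextAnalyticsClient(
    new Uri(textAnalyticsEndpoint),
    new AzureKeyCredential(textAnalyticsKey));

string documentText = "The IP Address of the legacy system is 10.254.1.1. The company that owns the legacy system is Microsoft.";

// Detect PII and confidential entities in the extracted document text.
Response<PiiEntityCollection> response = await textClient.RecognizePiiEntitiesAsync(documentText);

foreach (PiiEntity entity in response.Value)
{
    Console.WriteLine($"{entity.Category}: {entity.Text} (confidence {entity.ConfidenceScore:0.00})");
}

// The service also returns a redacted version of the input text.
Console.WriteLine(response.Value.RedactedText);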

The user can then click “View” and then select which data needs to be redacted from the document.

The user then clicks “Save choices”. This will save the collection of choices, the metadata, to Azure Cosmos DB.

The user then clicks “Redact” and the user is notified of the submission and completion.

The user clicks the download button which is available after the redaction process has completed. The document is displayed with the information redacted using Microsoft Office apps (this can be downloaded to the machine directly also).

The process is going to be made simpler by a set of walkthroughs in the UI – a series of steps with instructions, including a document preview component.

Look out for the next update soon.

Blazor Simple AI Project (Part 2) with Microsoft Azure AI Vision

Image Analysis with Azure AI Vision

Welcome to the Blazor Simple AI Single Page App, Part 2 of the Microsoft AI services journey, which now includes image analysis utilising Microsoft Azure AI Vision. The Vision Read API is used to extract the text from an image. This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI.

If you would like to download both part 1 and part 2 as a PDF document, you can download the PDF here.

Since part 1, the following nuget packages have been added to the project.

Azure AI Vision Image Analysis – for reading text and metadata from images.

Radzen Blazor – for providing an amazing UI experience.

Azure Storage Blob – for handling interactions with Azure Blob Storage.

Visual Changes

I have made some appealing improvements from the basic Blazor template and styled the UI based on a project from Martin Mogusu available here: GitHub – martinmogusu/blazor-top-navbar: A top navbar example created in blazor. This saved me a lot of time; all I had to do was apply my own visual styles after the top navigation was applied to the project in shared/NavMenu.razor. In addition, I have added a pre-built model for interactive invoice analysis and processing, the full explanation of which I will leave until Part 3 of this post.

Components

Three components have been developed for the image analysis. These are as follows:

  1. Vision.razor – The Image Analysis page
  2. VisionBlobLoader.razor – This includes the capability to upload files to Azure blob storage, which also sets the content type for the blob file.
  3. VisionBlobFileList.razor – This is a child component embedded into the VisionBlobLoader component, which lists the image files that have been uploaded to Azure blob storage.

Learn about Microsoft AI Vision

To learn more about the capabilities of Microsoft AI Vision, see What is Azure AI Vision? – Azure AI services | Microsoft Learn. Azure AI Vision includes more analysis capabilities, not just specifically image files.

Configuration Settings Changes

The following configuration settings were added to appsettings.json.

  "AzureVsionConfig": {
    "AzureAIVisionEndpoint": "https://[Your AI Vision Service].cognitiveservices.azure.com/",
    "AzureAIVisionKeyCredential": "[AI Vision Service Key]"
  },
  "AzureStorageConfig": {
    "AzureStorageConnectionString": "[Your Storage Account Connection String]",
    "AzureStorageContainer": "[Your Storage Account Container]",
    "AzureStorageAccountName": "[Your Storage Account Name]",
    "AzureStorageAccountKey": "[Your Storage Account Key]"
  },

Note: Whilst this project utilises the service key, in an enterprise environment you should consider using token-based access to the service secured by Microsoft Entra ID. If you do need to use the service key for any reason, utilise Azure Key Vault to protect the key, with a managed identity for the application to access the service key stored in Azure Key Vault.

Components

File Upload Component (VisionBlobLoader)

The file upload component utilises Blazor InputFile for the user to select the file to upload in the application. The component reads the Azure Storage connection string from the configuration, including the container, then uploads the file to the container and also adds a blob http header for the file content type taken from the file properties. The Radzen notification service is used to notify the user of the application activities. I also included a basic spinner as part of the interaction for the upload process.
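
The upload itself can be reduced to a few lines with the Azure Storage Blob package. The sketch below assumes the connection string and container name have already been read from AzureStorageConfig, and that browserFile is the IBrowserFile selected via the Blazor InputFile component.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Microsoft.AspNetCore.Components.Forms;

// Sketch of the upload step (no notification/spinner handling shown).
var containerClient = new BlobContainerClient(azureStorageConnectionString, azureStorageContainer);
BlobClient blobClient = containerClient.GetBlobClient(browserFile.Name);

await using Stream fileStream = browserFile.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024);
await blobClient.UploadAsync(fileStream, new BlobUploadOptions
{
    // Blob HTTP header for the content type, taken from the file properties.
    HttpHeaders = new BlobHttpHeaders { ContentType = browserFile.ContentType }
});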

Blob List Component (VisionBlobFileList.razor)

This component reads the Azure Storage connection string from the configuration, including the container, then displays the blob file names in a Radzen DataGrid. A button is added to Analyse the image, which then calls the Radzen notification service to display the activities being taken by the application.

Data Classes

Two data classes have been created as follows:

  • AzureBlobFile.cs – Azure blob file properties
  • ImageDetails.cs – Image details for extraction from the AI Vision Analysis

The UI

The UI is as follows. Notice the menu control has now changed since Part 1. Invoice Analysis will be covered in Part 3; at the time of writing this blog post, I had already uploaded the code to my GitHub repo.

Home page (Chat)

Image Analysis

Upload File Control

Upload Action Spinner

Radzen Blazor File Uploaded Notification

Process Button

The process button reads the application configuration for the Azure AI Vision endpoint and service key, then retrieves a SAS token from Azure for the blob being processed and generates a URL with it; this URL is then submitted to Azure AI Vision. The SAS token is generated by the async method CreateServiceSASBlob(string BlobName) in the component class. Whilst the method could be defined in a utility class, I have kept it in the component for easier reading of the code.
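
A sketch of this flow is shown below, combining what CreateServiceSASBlob might do with the call to the AI Vision Read feature. Type names follow the Azure.Storage.Blobs and Azure.AI.Vision.ImageAnalysis 1.x clients and may differ in earlier previews; the configuration variable names are assumptions based on the settings above.

using Azure;
using Azure.AI.Vision.ImageAnalysis;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Build a short-lived, read-only SAS URL for the blob being processed.
// GenerateSasUri requires the client to be created with the account key (e.g. from the connection string).
BlobClient blobClient = new BlobContainerClient(azureStorageConnectionString, azureStorageContainer)
    .GetBlobClient(blobName);

var sasBuilder = new BlobSasBuilder(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddMinutes(15))
{
    BlobContainerName = blobClient.BlobContainerName,
    BlobName = blobClient.Name
};
Uri blobSasUri = blobClient.GenerateSasUri(sasBuilder);

// Submit the SAS URL to Azure AI Vision and read the extracted text.
var visionClient = new ImageAnalysisClient(
    new Uri(azureAIVisionEndpoint),
    new AzureKeyCredential(azureAIVisionKeyCredential));

ImageAnalysisResult analysis = await visionClient.AnalyzeAsync(blobSasUri, VisualFeatures.Read);

foreach (var block in analysis.Read.Blocks)
    foreach (var line in block.Lines)
        Console.WriteLine(line.Text);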

Image Analysis Dialog

When the image processing has completed, a Radzen notification is displayed to the user, with a Radzen dialog popping up to show basic metadata (height and width) of the image, including the text the AI Vision service has extracted as well as the image itself.

That is AI Vision and Image Analysis wrapped up.

Part 3 will focus on processing invoices using the pre-built AI model “prebuilt-invoice” part of Microsoft Azure AI Document Intelligence and creating output files for further processing.

Blazor Simple AI Project – Chat with Microsoft Azure Open AI

Welcome to the Blazor Simple AI Single Page App, the AI App that responds to questions instantly using Microsoft Azure OpenAI Services. This document explains the .NET project I developed which I have pushed to my public Github repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI.

If you wish to download the PDF version of this blog post, it is available here.

Technologies

Blazor Simple AI is made up of the following technologies:

  • Microsoft .NET Blazor (.NET 6.0 LTS release)
  • Microsoft Azure.AI.OpenAI .NET Library
  • Microsoft Azure AI Services – OpenAI

It’s that simple!

Why Blazor?

Blazor is simply amazing; I have been developing Blazor projects for over four years. There has been great demand for Blazor over the past few years, and as a component framework that uses C#, it is exactly what I need to develop solutions and concepts super fast!

What Does Blazor Simple AI Do?

Blazor Simple AI is a Blazor Server single page app with a single page and a single component. The razor page has two basic user interface controls: a textbox and a submit button for a user to enter a question for Azure OpenAI. The component, AzureOpenAIChat.razor, has a single parameter which receives the question from the main index page. When the parameter is received by the child component, its OnParametersSetAsync() method retrieves the appsettings.json values for the Azure OpenAI endpoint, the Azure OpenAI key and the deployment name (with its associated model, deployed with Azure AI Studio), then sends the text to the Azure OpenAI service and retrieves and displays the response.
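
A simplified sketch of that pattern is shown below in Razor syntax. It is not the exact component in the repo: the configuration keys match the AzureAIConfig section shown later, but the Azure OpenAI call itself is left as a placeholder because the client API surface differs between Azure.AI.OpenAI versions.

@* Simplified AzureOpenAIChat-style component: a [Parameter] receives the question and
   OnParametersSetAsync reads the configuration and calls the Azure OpenAI service. *@
@using Microsoft.Extensions.Configuration
@inject IConfiguration Configuration

<p>@Response</p>

@code {
    [Parameter]
    public string? Question { get; set; }

    private string? Response { get; set; }

    protected override async Task OnParametersSetAsync()
    {
        if (string.IsNullOrWhiteSpace(Question)) return;

        string endpoint   = Configuration["AzureAIConfig:OpenAIEndpoint"]!;
        string key        = Configuration["AzureAIConfig:OpenAIKeyCredential"]!;
        string deployment = Configuration["AzureAIConfig:OpenAIDeploymentName"]!;

        // Placeholder: call the Azure OpenAI chat completions API here with the Question text
        // and assign the returned completion to Response.
        Response = await GetChatCompletionAsync(endpoint, key, deployment, Question);
    }

    // Hypothetical helper standing in for the actual Azure.AI.OpenAI call.
    private Task<string> GetChatCompletionAsync(string endpoint, string key, string deployment, string question)
        => Task.FromResult($"(Azure OpenAI response to: {question})");
}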

Core Blazor Template Changes

There have been some changes to the basic Blazor layout to accommodate the project. These are as follows:

  1. The sidebar has been removed from the MainLayout.razor page
  2. A new Index.razor.css style sheet has been added to centre the UI components on the page
  3. A new Components folder has been added to the project
  4. A new component named AzureOpenAIChat.razor has been added into the Components folder
  5. A new configuration section has been added to appsettings.json to include the configuration required for the project to interact with the Azure OpenAI service.
  6. The title and main element have had text changes to represent the project name and description

Steps to Deploy Azure Open AI

  1. Create an Azure Resource Group
  2. Deploy the Azure OpenAI service in the resource group, see: How-to: Create and deploy an Azure OpenAI Service resource – Azure OpenAI | Microsoft Learn
  3. Manage Deployments in Azure AI Studio and create a deployment using the gpt-35-turbo model
  4. Update the appsettings.json with the settings
"AzureAIConfig": {
    "OpenAIEndpoint": "https://[Your Azure OpenAI Service].openai.azure.com/",
    "OpenAIKeyCredential": "[Your Azure Open AI Key]",
    "OpenAIDeploymentName": "[Your Azure Open AI Deployment Name]",
    "RetroResponse": "true or false"
}
  5. Build the project and ask Azure OpenAI anything you like.


The UI

The landing page.

Sample Questions and Responses

Question 1

Who founded Microsoft?

Question 2

Who developed OpenAI?

Question 3

How can I develop a Blazor App?

Basic CSS

The AzureOpenAIChat.razor component has a basic CSS style sheet which allows the deployment to have either a retro-style response or a basic response text visualisation. If the app setting below is set to true, you will get the retro response as per the sample above. For a standard non-retro style response, set the value to false, as in the example below.

"AzureAIConfig": {
     "RetroResponse": "false"
}

Azure VM Managed Disks backup to Azure Blob Storage

I’ve recently had a requirement to back up managed disks to Azure blob storage. Rather than performing this manually, I scripted it with PowerShell. I created a structure within a .CSV file as an input file, which specifies the virtual machine names and the storage container and folder where the disks will be copied to. The script and a sample input file are in my repository below.

Copy Azure Virtual Machine Managed Disks to Azure Blob Storage PowerShell Script

Sample Input .CSV File

You will need to update the following variables in the PowerShell script and create your own input file.

$VMNames = Import-Csv -LiteralPath "FULL-PATH-TO-CSV-FILE"
$TargetStorageAccountName = "YOUR-STORAGE-ACCOUNT-NAME"
$TargetStorageAccountKey = "YOUR-STORAGE-ACCOUNT-KEY"