Tag Archives: .NET

Blazor Simple AI Project (Part 6) Azure Open AI Image Generation

Welcome to the Blazor Simple AI Single Page App, Part 6 of the Microsoft AI services journey, which now includes Microsoft Azure Open AI image generation.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI

The project documentation can also be downloaded as a PDF here.

Since Part 5, the following changes have been made to the project.

Project Changes

  1. ImageGen.razor has been added to the project Pages folder. This page hosts the image generation component and the necessary code.
  2. AzureOpenAIImageGeneration.razor has been added to the project Components folder. This component handles the user prompt, then displays the image viewer dialogue with the Azure Open AI generated image.
  3. ImageViewer.razor has been added to the project Components folder. This displays the image dialogue.
  4. The following configuration changes have been added to appsettings.json for the DALL-E deployment name.

"AzureAIConfig": {
    "OpenAIDALLEEndpoint": "[Your Azure Open AI endpoint which is hosting the DALL-E deployment]",
    "OpenAIKeyDALLECredential": "[Your Azure Open AI key]",
    "OpenAIDALLEDeploymentName": "[Your DALL-E deployment name]"
}

  5. The DALL-E base model was added to the Azure Open AI service.

Components

ImageGen.razor (page)

The ImageGen.razor page hosts the prompt for the user to generate the image. It is deliberately similar to the Open AI Chat index page, following the same pattern: accept a text prompt or audio recording, then pass the text to the child component, AzureOpenAIImageGeneration, which processes the text and generates the image from the Azure Open AI service.

AzureOpenAIImageGeneration.razor

A component which accepts the text from the prompt and then calls the Azure Open AI service to generate the image.
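The call the component makes can be sketched as follows. This is a hedged sketch rather than the repository's exact code: it assumes the Azure.AI.OpenAI library (1.0.0-beta series) and the configuration keys shown above, and the class and method names are my own illustration.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.OpenAI;

// Illustrative sketch of an image generation call with Azure.AI.OpenAI.
public class ImageGenerationSketch
{
    private readonly OpenAIClient _client;
    private readonly string _deploymentName;

    public ImageGenerationSketch(string endpoint, string key, string deploymentName)
    {
        _client = new OpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));
        _deploymentName = deploymentName; // your DALL-E deployment name
    }

    // Returns the URL of the generated image, ready for the image viewer dialogue.
    public async Task<Uri?> GenerateImageAsync(string prompt)
    {
        Response<ImageGenerations> response = await _client.GetImageGenerationsAsync(
            new ImageGenerationOptions
            {
                DeploymentName = _deploymentName,
                Prompt = prompt,
                Size = ImageSize.Size1024x1024
            });

        return response.Value.Data[0].Url;
    }
}
```

The returned URL can then be passed to the ImageViewer component while the spinner is shown during the few seconds the generation takes.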

ImageViewer.razor

This component displays the output image generated by the Azure Open AI service and provides the template for the image dialogue box. It is called from the image generation child component.

The UI

I have added an Image Generation navigation link to the landing page.

Sample Questions and Responses

Question 1

“Draw a futuristic city”

Output for question 1:

The process takes a few seconds for the image generation to complete, so I have displayed a spinning wheel and a prompt for the user to wait for the result.

The output is displayed as follows:

Question 2

“Origins of the universe by the James Webb telescope”

The output is displayed as follows:

Question 3

“exotic cars on a beach”

The output is displayed as follows:

That’s it!

This shows how simple it is to integrate a Blazor Web application with Azure Open AI image generation.

Microsoft 365 Mailbox Attachment Processor

A .NET 8 C# Application to Process Microsoft 365 Email Messages and Attachments

Introduction

Recently, I was asked how attachments from a Microsoft 365 mailbox could be automatically pushed into an Azure storage file share, so that the attachments can be made accessible to an onward process which needs to be executed on an Azure virtual machine. Whilst there are many ways this can be achieved, I decided to create a C# console application to process the messages and attachments from a Microsoft 365 mailbox inbox folder.

You can download this blog post as a PDF here.

Source Code

The source code for this solution can be found in my GitHub repo here.

Dependencies

There are several dependencies for this to work; they are described in the list below.

  • A Microsoft Entra ID registered application, with the following application permissions:
  • An application secret (this can also be a certificate if needed)
  • Install the Microsoft ExchangeOnlineManagement PowerShell tools
  • Create a Microsoft Exchange Online application policy to allow the application access to the mailbox

# Connect to Exchange Online

Connect-ExchangeOnline -UserPrincipalName [Your Exchange Online Admin UPN] [-ShowBanner:$false]

# Create the app policy

New-ApplicationAccessPolicy -AppId [Your application ID] -PolicyScopeGroupId [Full email address of the mailbox] -AccessRight RestrictAccess -Description "Restrict the Mailbox Processor app"

Reference: Limiting application permissions to specific Exchange Online mailboxes – Microsoft Graph | Microsoft Learn

  • Create an Azure storage account
  • Create an Azure storage account file share

NuGet Packages

The following NuGet packages are dependencies, as defined in the project file.

  <ItemGroup>
    <PackageReference Include="Azure.Core" Version="1.44.1" />
    <PackageReference Include="Azure.Identity" Version="1.13.1" />
    <PackageReference Include="Azure.Storage.Files.Shares" Version="12.21.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="9.0.0" />
    <PackageReference Include="Microsoft.Graph" Version="5.63.0" />
    <PackageReference Include="Microsoft.Graph.Core" Version="3.2.1" />
    <PackageReference Include="Microsoft.Identity.Client" Version="4.66.2" />
  </ItemGroup>

Mailbox Processor Application

The mailbox processor application consists of the following C# Classes and an appsettings.json file.

  • AuthContext.cs - A C# class representing the authentication context for the application
  • JSONConfigurationBuilder.cs - A C# class building the configuration from appsettings.json into the application context
  • MSAzureStorageOperations.cs - A C# class with a method to stream the attachment to an Azure storage file share
  • MSGraphOperations.cs - A C# class with methods to work with the Microsoft Graph API, e.g. read/move messages, attachments and folders
  • Program.cs - The C# program, the core of the application
  • Reference.cs - A C# class to store the appsettings that are referenced by the application
  • appsettings.json - The configuration settings for the application

Application Settings

The application settings are described below.

{
  "AppSettings": {
    "MailFolderName": "[The mailbox folder to target to read the messages]",
    "MailEmailAddress": "[The mailbox email address]",
    "MailSubjectSearchString": "[The subject search string for each mail message]",
    "ProcessedMessagesFolderName": "[Processed message mailbox folder name]",
    "AzureStorageConnectionString": "[The Azure storage connection string]",
    "AzureStorageFileShareName": "[Azure storage file share name]",
    "MSEntraApplicationClientId": "[Microsoft Entra ID Application Id]",
    "MSEntraApplicationSecret": "[Microsoft Entra ID Application Secret]",
    "MSEntraApplicationTenantId": "[Microsoft Entra ID Tenant Id]"
  }
}
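The settings above are loaded by JSONConfigurationBuilder.cs and referenced via Reference.cs. A minimal sketch of how that binding might look, using the Microsoft.Extensions.Configuration packages listed in the dependencies (the POCO class and its name are illustrative, not the repository's exact code):

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Illustrative POCO mirroring the "AppSettings" section shown above.
public class AppSettings
{
    public string? MailFolderName { get; set; }
    public string? MailEmailAddress { get; set; }
    public string? MailSubjectSearchString { get; set; }
    public string? ProcessedMessagesFolderName { get; set; }
    public string? AzureStorageConnectionString { get; set; }
    public string? AzureStorageFileShareName { get; set; }
    public string? MSEntraApplicationClientId { get; set; }
    public string? MSEntraApplicationSecret { get; set; }
    public string? MSEntraApplicationTenantId { get; set; }
}

public static class JsonConfigurationSketch
{
    // Builds the configuration from appsettings.json and binds the AppSettings section.
    public static AppSettings Load()
    {
        IConfigurationRoot config = new ConfigurationBuilder()
            .SetBasePath(AppContext.BaseDirectory)
            .AddJsonFile("appsettings.json", optional: false)
            .Build();

        return config.GetSection("AppSettings").Get<AppSettings>()
               ?? throw new InvalidOperationException("AppSettings section missing.");
    }
}
```

The bound object is then available to the rest of the application, which is the role Reference.cs plays in the project.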

Application Runtime Process

The application process is described below.

  1. The configuration is initialised
  2. The messages are retrieved from the defined mailbox folder name
  3. Each message in the collection is checked, and only the emails whose subject contains the matched string are processed
  4. A console output of the message ID, received date, received from, and subject is displayed
  5. Each attachment is processed; if it is a file attachment, it is uploaded to the Azure file share specified by the Azure storage account connection string and file share name
  6. The number of messages processed and the number of attachments processed are displayed in the console output
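The steps above can be sketched with Azure.Identity and the Microsoft.Graph v5 SDK as follows. This is a hedged sketch: the method and variable names are illustrative, not the repository's exact code, and the file share upload is indicated only by a comment.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Identity;
using Microsoft.Graph;
using Microsoft.Graph.Models;

public static class MailboxProcessorSketch
{
    public static async Task<(int messages, int attachments)> ProcessAsync(
        string tenantId, string clientId, string clientSecret,
        string mailbox, string folderName, string subjectSearchString)
    {
        // App-only authentication with the Entra ID application registration.
        var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
        var graph = new GraphServiceClient(credential);

        // Step 2: retrieve the messages from the configured mailbox folder.
        var page = await graph.Users[mailbox]
            .MailFolders[folderName].Messages.GetAsync();

        int messageCount = 0, attachmentCount = 0;
        foreach (Message msg in page?.Value ?? new List<Message>())
        {
            // Step 3: only emails whose subject contains the search string.
            if (msg.Subject?.Contains(subjectSearchString,
                    StringComparison.OrdinalIgnoreCase) != true)
                continue;

            messageCount++;

            // Step 4: console output of the key message properties.
            Console.WriteLine($"{msg.Id} | {msg.ReceivedDateTime} | " +
                $"{msg.From?.EmailAddress?.Address} | {msg.Subject}");

            // Step 5: upload each file attachment (item attachments are skipped).
            var attachments = await graph.Users[mailbox]
                .Messages[msg.Id].Attachments.GetAsync();
            foreach (Attachment a in attachments?.Value ?? new List<Attachment>())
            {
                if (a is FileAttachment { ContentBytes: not null } file)
                {
                    // e.g. stream file.ContentBytes to the Azure file share here
                    attachmentCount++;
                }
            }
        }

        return (messageCount, attachmentCount);
    }
}
```

The processed messages would then be moved to the folder named by ProcessedMessagesFolderName so a second run finds nothing to match.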

Sample Output

The mailbox has two messages with the subject containing the search string “course completions”.

When the mailbox attachment processor is executed, it displays the following output.

Although three messages were seen previously, the search string was not contained in the subject of one of them, so only the two matched messages were processed.

Three attachments in total were processed and uploaded to an Azure storage file share.

The email messages were moved to the ProcessedMessages folder, as defined in the application setting ProcessedMessagesFolderName.

When the application is executed again, the output is shown below as there are no longer any matched messages to process.

Closing Thoughts

From a development point of view, using this method provides a simple solution. Other considerations:

  • Store the storage account key in Azure Key Vault
  • Store the application secret (if used) in Azure Key Vault
  • The Azure resource hosting the application, e.g. a Function App, can use a managed identity, with RBAC access to Azure Key Vault granted to its service principal (Azure Key Vault access policies are now deprecated)
  • Environment settings can be stored in the hosting environment configuration rather than in the appsettings.json file.

References

Limiting application permissions to specific Exchange Online mailboxes – Microsoft Graph | Microsoft Learn

Blazor Simple AI Project (Part 5)

Azure Open AI Chat Audio Recording Button

Welcome to the Blazor Simple AI Single Page App, Part 5 of the Microsoft AI services journey, which now includes an audio recording button in the Open AI Chat component.

This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI

If you would like to download the whole series of posts for the Blazor Simple AI Project, you can download the PDF here.

Visual Changes

The audio button has been added to the index.razor page, as this is the main landing page. The button is the RadzenSpeechToTextButton, a Radzen Blazor component that is simple to interact with. It uses JavaScript under the hood, via the browser API for getting user media.

For further information on the Radzen Blazor Speech To Text Button, see: Blazor SpeechToTextButton Component | Free UI Components by Radzen.

Landing Page

The new landing page has the audio button added next to the chat text box.

When you click on the audio button for the first time, you will be prompted to grant the site access to the device's microphone; recording then starts, as shown below.

When you are finished speaking, you click the button to stop recording. The text is submitted to the Question string, an OnChange event occurs, the Question value is set, and the component state is changed. Since the Question string is bound to the child component, AzureOpenAIChat, the component code then executes and calls the Microsoft Azure Open AI service with the text that was bound to the Question string.

An example of the recorded audio text and Azure Open AI response is shown below.

Code Changes

The following code changes were made in index.razor.

Added the audio recording button and spacing.

<Radzen.Blazor.RadzenSpeechToTextButton class="padding-right:10px;" Change="@(args => OnSpeechCaptured(args, "SpeechToTextButton"))" />
<div style="padding-left:10px;" />

Added the OnSpeechCaptured method.

Note: I removed the question marks and periods from the string returned by the Radzen Speech To Text button, as the characters were automatically appended to the returned text string value by the component.

private void OnSpeechCaptured(string speechValue, string name)
{
    speechValue = speechValue.Trim(new Char[] { '.', '?' });
    RecordedSpeech = speechValue;
    Question = RecordedSpeech;
    this.StateHasChanged();
}

For my next post, I will be utilising the RadzenSpeechToTextButton for a different purpose in the Blazor Simple AI project.

BLAZOR JARVIS AI – Document Redaction Tool

Welcome to JARVIS, the AI document redaction processor. At the time of publishing this version of the document, I was still developing Jarvis as a fully working product. The current version supports Word documents and PDFs. With the development of Blazor Simple AI, I can also utilise the image analysis component to redact PII information from images.

If you would like to download a PDF of this post, you can download it here.

Jarvis is made up of the following technologies:

  • Microsoft .NET Blazor (.NET 6.0 LTS release)
  • Microsoft Azure Cognitive Services (Text Analytics Service)
  • Microsoft Azure Cosmos DB (for maintaining document and redaction processor metadata)
  • Azure Web App (hosting the JARVIS AI Web App)
  • Azure Storage (source document upload and redaction storage)
  • Microsoft Azure Function App (for APIs that process PII data and perform redaction processing)
  • Radzen Blazor components (for an amazing UI experience)

A document named “IPAddressandNamesx10Pages.docx” contains the following information, repeated within 10 pages.

——————————————————————————————————————————-

The IP Address of the legacy system is 10.254.1.1.

The company that owns the legacy system is Microsoft.

The original founders of the company Microsoft are Bill Gates and Paul Allen.

——————————————————————————————————————————-

The document is uploaded to Jarvis, the AI Document redaction processor.

The user clicks “Process” to determine the PII and confidential data held in the document.

A notification is sent to the user to advise the document has been submitted for processing.

About 3 seconds later Jarvis has identified the PII and confidential data in the document and provides a notification to the user.

The user can then click “View” and then select which data needs to be redacted from the document.

The user then clicks “Save choices”. This will save the collection of choices, the metadata, to Azure Cosmos DB.

The user then clicks “Redact” and the user is notified of the submission and completion.
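At its core, once the user's chosen values are saved, the redaction step amounts to masking each chosen string in the document content. A simplified, self-contained sketch of that idea (the real processor works on Word and PDF content via the Function App APIs, not plain strings):

```csharp
using System;
using System.Collections.Generic;

public static class RedactionSketch
{
    // Replaces every occurrence of each chosen value with a fixed-length mask,
    // so the redacted output does not reveal the length of the original value.
    public static string Redact(string text, IEnumerable<string> chosenValues)
    {
        foreach (string value in chosenValues)
        {
            if (!string.IsNullOrEmpty(value))
                text = text.Replace(value, new string('█', 8),
                                    StringComparison.OrdinalIgnoreCase);
        }
        return text;
    }
}
```

For example, redacting "10.254.1.1" from the sample sentence above masks the IP address wherever it appears across all 10 pages.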

The user clicks the download button which is available after the redaction process has completed. The document is displayed with the information redacted using Microsoft Office apps (this can be downloaded to the machine directly also).

The process will be made simpler by a set of walkthroughs in the UI, with step-by-step instructions, including a preview document component.

Look out for the next update soon.

Blazor Simple AI Project (Part 2) with Microsoft Azure AI Vision

Image Analysis with Azure AI Vision

Welcome to the Blazor Simple AI Single Page App, Part 2 of the Microsoft AI services journey, which now includes image analysis utilising Microsoft Azure AI Vision. The Vision Read API is used to extract the text from an image. This document explains the project in my GitHub repository which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI.

If you would like to download both part 1 and part 2 as a PDF document, you can download the PDF here.

Since part 1, the following NuGet packages have been added to the project.

Azure AI Vision Image Analysis – for reading text and metadata from images.

Radzen Blazor – for providing an amazing UI experience.

Azure Storage Blob – for handling interactions with Azure Blob Storage.

Visual Changes

I have made some appealing improvements over the basic Blazor template and styled the UI based on a project from Martin Mogusu available here: GitHub – martinmogusu/blazor-top-navbar: A top navbar example created in blazor. This saved me a lot of time; all I had to do was apply my own visual styles after the top navigation was applied to the project in shared/NavMenu.razor. In addition, I have added a pre-built model for interactive Invoice Analysis and processing, the full explanation of which I will leave until Part 3 of this post.

Components

Three components have been developed for the image analysis. These are as follows:

  1. Vision.razor – The Image Analysis page
  2. VisionBlobLoader.razor – This includes the capability to upload files to Azure blob storage, which also sets the content type for the blob file.
  3. VisionBlobFileList.razor – This is a child component embedded into the VisionBlobLoader component, which lists the image files that have been uploaded to Azure blob storage.

Learn about Microsoft AI Vision

To learn more about the capabilities of Microsoft AI Vision, see What is Azure AI Vision? – Azure AI services | Microsoft Learn. Azure AI Vision includes further analysis capabilities beyond image files.

Configuration Settings Changes

The following configuration settings were added to appsettings.json.

  "AzureVisionConfig": {
    "AzureAIVisionEndpoint": "https://[Your AI Vision Service].cognitiveservices.azure.com/",
    "AzureAIVisionKeyCredential": "[AI Vision Service Key]"
  },

  "AzureStorageConfig": {
    "AzureStorageConnectionString": "[Your Storage Account Connection String]",
    "AzureStorageContainer": "[Your Storage Account Container]",
    "AzureStorageAccountName": "[Your Storage Account Name]",
    "AzureStorageAccountKey": "[Your Storage Account Key]"
  },

Note: Whilst this project utilises the service key, in an enterprise environment you must consider using token-based access to the service secured by Microsoft Entra ID. If you wish to utilise the service key for any reason, protect it with Azure Key Vault, and give the application a managed identity so it can access the service key stored in Azure Key Vault.

Components

File Upload Component (VisionBlobLoader)

The file upload component utilises Blazor InputFile for the user to select the file to upload in the application. The component reads the Azure Storage connection string from the configuration, including the container, then uploads the file to the container and also adds a blob http header for the file content type taken from the file properties. The Radzen notification service is used to notify the user of the application activities. I also included a basic spinner as part of the interaction for the upload process.
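The upload path can be sketched with Azure.Storage.Blobs as follows. This is a hedged sketch, assuming the connection string and container from the configuration above; the class and method names are illustrative, not the repository's exact code.

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class VisionBlobUploadSketch
{
    // Streams the selected file to the container and records its content type
    // as a blob HTTP header, taken from the file properties.
    public static async Task UploadAsync(
        string connectionString, string containerName,
        string fileName, string contentType, Stream content)
    {
        var container = new BlobContainerClient(connectionString, containerName);
        BlobClient blob = container.GetBlobClient(fileName);

        await blob.UploadAsync(content, new BlobUploadOptions
        {
            HttpHeaders = new BlobHttpHeaders { ContentType = contentType }
        });
    }
}
```

Setting the content type at upload time is what later allows the image to be served and rendered correctly in the analysis dialog.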

Blob List Component (VisionBlobFileList.razor)

This component reads the Azure Storage connection string from the configuration, including the container, then displays the blob file names in a Radzen DataGrid. A button is added to Analyse the image, which then calls the Radzen notification service to display the activities being taken by the application.

Data Classes

Two data classes have been created as follows:

  • AzureBlobFile.cs – Azure blob file properties
  • ImageDetails.cs – Image details for extraction from the AI Vision Analysis

The UI

The UI is as follows. Notice the menu control has now changed since Part 1. Invoice Analysis will be covered in Part 3; at the time of writing this blog post, I had already uploaded the code to my GitHub repo.

Home page (Chat)

Image Analysis

Upload File Control

Upload Action Spinner

Radzen Blazor File Uploaded Notification

Process Button

The process button reads the application configuration for the Azure AI Vision endpoint and service key, then retrieves a SAS token from Azure for the blob being processed. A URL is generated with the SAS token and submitted to Azure AI Vision. The SAS token is generated by the async method CreateServiceSASBlob(string BlobName) in the component class. Whilst the method could be defined in a utility class, I have composed it here for easier reading of the code.
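A CreateServiceSASBlob-style method can be sketched as below, assuming the Azure.Storage.Blobs library and a shared key credential; the method name, parameters, and URL format are illustrative rather than the repository's exact code.

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

public static class SasSketch
{
    // Builds a short-lived, read-only SAS URL for a single blob so the
    // AI Vision service can fetch the image without the account key.
    public static Uri CreateReadOnlyBlobSasUri(
        string accountName, string accountKey,
        string containerName, string blobName)
    {
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = containerName,
            BlobName = blobName,
            Resource = "b",                               // "b" = single blob
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(10)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        string sas = sasBuilder
            .ToSasQueryParameters(new StorageSharedKeyCredential(accountName, accountKey))
            .ToString();

        return new Uri(
            $"https://{accountName}.blob.core.windows.net/{containerName}/{blobName}?{sas}");
    }
}
```

Keeping the expiry short limits how long the generated URL remains usable after the analysis completes.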

Image Analysis Dialog

When the image processing has completed, a Radzen notification is displayed to the user, with a Radzen dialog popping up to show basic metadata (height and width) of the image, including the text the AI Vision service has extracted as well as the image itself.
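The analysis call itself can be sketched with the Azure AI Vision Image Analysis library: submit the SAS URL of the blob and collect the extracted text plus the basic metadata for the dialog. This is a hedged sketch of the API shape, with illustrative names; the repository's code may differ.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure;
using Azure.AI.Vision.ImageAnalysis;

public static class VisionAnalysisSketch
{
    public static async Task<(int width, int height, string text)> AnalyzeAsync(
        string visionEndpoint, string visionKey, Uri imageSasUrl)
    {
        var client = new ImageAnalysisClient(
            new Uri(visionEndpoint), new AzureKeyCredential(visionKey));

        // VisualFeatures.Read runs the OCR (Read) capability over the image.
        Response<ImageAnalysisResult> response =
            await client.AnalyzeAsync(imageSasUrl, VisualFeatures.Read);
        ImageAnalysisResult result = response.Value;

        // Gather the recognised lines of text for display in the dialog.
        var lines = new List<string>();
        if (result.Read is not null)
            foreach (var block in result.Read.Blocks)
                foreach (var line in block.Lines)
                    lines.Add(line.Text);

        return (result.Metadata.Width, result.Metadata.Height,
                string.Join(Environment.NewLine, lines));
    }
}
```

The width, height and joined text map directly onto the metadata and extracted text shown in the Radzen dialog.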

That is AI Vision and Image Analysis wrapped up.

Part 3 will focus on processing invoices using the pre-built AI model “prebuilt-invoice” part of Microsoft Azure AI Document Intelligence and creating output files for further processing.

Blazor Simple AI Project – Chat with Microsoft Azure Open AI

Welcome to the Blazor Simple AI Single Page App, the AI app that responds to questions instantly using Microsoft Azure OpenAI Services. This document explains the .NET project I developed and pushed to my public GitHub repository, which is available here: https://github.com/tejinderrai/public/tree/main/BlazorSimpleAI.

If you wish to download the PDF version of this blog post, it is available here.

Technologies

Blazor Simple AI is made up of the following technologies:

  • Microsoft .NET Blazor (.NET 6.0 LTS release)
  • Microsoft Azure.AI.OpenAI .NET Library
  • Microsoft Azure AI Services – OpenAI

It’s that simple!

Why Blazor?

Blazor is simply amazing; I have been developing Blazor projects for over four years. There has been great demand for Blazor over the past few years, and as a component framework using C#, it is exactly what I need to develop solutions and concepts super fast!

What Blazor Simple AI Does

Blazor Simple AI is a Blazor server-side single page app with a single page and a single component. The razor page has two basic user interface controls: a textbox and a submit button for the user to enter a question for Azure OpenAI. The component AzureOpenAIChat.razor has a single parameter which receives the question from the main index page. When the parameter is received, the component's OnParametersSetAsync() method retrieves the appsettings.json values for the Azure OpenAI endpoint, the Azure OpenAI key, and the deployment name with its associated model (deployed with Azure AI Studio), then sends the text to the Azure OpenAI service and retrieves and displays the response.
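The flow described above can be sketched as follows, using the Azure.AI.OpenAI 1.0.0-beta API shape. This is a hedged sketch: the class and method names are my own, and the exact names in the repository may differ.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.OpenAI;

public static class AzureOpenAIChatSketch
{
    // Sends the question to the configured deployment and returns the answer text.
    public static async Task<string?> AskAsync(
        string endpoint, string key, string deploymentName, string question)
    {
        var client = new OpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));

        var options = new ChatCompletionsOptions
        {
            DeploymentName = deploymentName,   // e.g. the gpt-35-turbo deployment
            Messages = { new ChatRequestUserMessage(question) }
        };

        Response<ChatCompletions> response = await client.GetChatCompletionsAsync(options);
        return response.Value.Choices[0].Message.Content;
    }
}
```

In the component, the returned string is what gets rendered in the response area, retro-styled or not depending on the RetroResponse setting.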

Core Blazor Template Changes

There have been some basic changes to the default Blazor layout to accommodate the project. These are as follows:

  1. The sidebar has been removed from the MainLayout.razor page
  2. A new Index.razor.css style sheet has been added to centre the UI components on the page
  3. A new Components folder has been added to the project
  4. A new component named AzureOpenAIChat.razor has been added into the Components folder
  5. A new configuration section has been added to appsettings.json to include the configuration required for the project to interact with the Azure OpenAI service.
  6. The title and main element have had text changes to represent the project name and description

Steps to Deploy Azure Open AI

  1. Create an Azure Resource Group
  2. Deploy the Azure OpenAI service in the resource group, see: How-to: Create and deploy an Azure OpenAI Service resource – Azure OpenAI | Microsoft Learn
  3. Manage Deployments in Azure AI Studio and create a deployment using the gpt-35-turbo model
  4. Update the appsettings.json with the settings

"AzureAIConfig": {
    "OpenAIEndpoint": "https://[Your Azure OpenAI Service].openai.azure.com/",
    "OpenAIKeyCredential": "[Your Azure Open AI Key]",
    "OpenAIDeploymentName": "[Your Azure Open AI Deployment Name]",
    "RetroResponse": "true or false"
}

  5. Build the project and ask Azure OpenAI anything you like.


The UI

The landing page.

Sample Questions and Responses

Question 1

Who founded Microsoft?

Question 2

Who developed OpenAI?

Question 3

How can I develop a Blazor App?

Basic CSS

The AzureOpenAIChat.razor component has a basic CSS style sheet which allows the deployment to use either a retro-style response or a standard response text visualisation. If the app setting below is set to true, you will get the retro response as per the sample above. For a standard non-retro style response, set the value to false, as in the example below.

"AzureAIConfig": {
     "RetroResponse": "false"
}

CarShop .NET Core Blazor Project – Part 1

Over the past few months I have been working on a sample project, which will eventually be published to my GitHub repo. The project specifically focusses on Microsoft Entity Framework with an Azure SQL database and also utilises Blazor as the underlying UI and logic layer, including DevExpress for Blazor UI components. This is an initial post which describes the project and its capabilities.

The CarShop project was envisioned from wanting to build something new and then work on multiple articles for a project, rather than only a few, over part of this year. This will enable me to provide updates at various intervals, including, at some stage, publishing the code.

Why a CarShop?

Since I am a car fan and have been for many years, I thought this would be an exciting project to work on this year. With Blazor + .NET and DevExpress being some of my favourite development frameworks, I thought this would be an ideal project.

The database schema

Since I am using the Microsoft Entity Framework in the Blazor Visual Studio project, the schema was exactly where I wanted to start. As the iterations developed, I decided to go straight into Azure SQL to provision my tables, entities and relationships, primary keys and foreign keys etc. The project needs to store car details (at a basic level), customer details, car manufacturers, car models, fuel types, engine sizes etc. Whilst it is a simple model to start with, it’s relatively simple to expand the schema as I see fit, both from the SQL backend as well as the coded elements and data classes.

For this post I will show an example of the schema, which is below, produced by dbForgeStudio 2022 for SQL Server.

CarShop Schema – Developed using dbForgeStudio 2022 for SQL Server

Since this is a relatively simple sample project, the data is held in a single Azure SQL database. As you can see, the Vehicles table has the most relationships with car fuel types, vehicle status, engine size, colours, models and manufacturers. For the customers table, I’ve kept the design simple for now although I intend to expand this into a scenario where there may need to be some data quality checks and periodic checks around when the customer data was last updated, for reasons I will include in a future post.
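The schema maps naturally onto Entity Framework entity classes. A sketch of what the code side might look like for the Vehicles relationships described above; the class and property names are my own illustration, not the project's actual data classes.

```csharp
using System.Collections.Generic;

// Illustrative Entity Framework-style entities for the CarShop schema.
public class Manufacturer
{
    public int ManufacturerId { get; set; }
    public string Name { get; set; } = "";
    public List<Model> Models { get; set; } = new();   // one manufacturer, many models
}

public class Model
{
    public int ModelId { get; set; }
    public string Name { get; set; } = "";
    public int ManufacturerId { get; set; }            // foreign key
    public Manufacturer? Manufacturer { get; set; }    // navigation property
}

public class Vehicle
{
    public int VehicleId { get; set; }
    public int ModelId { get; set; }
    public Model? Model { get; set; }
    public string FuelType { get; set; } = "";
    public string Colour { get; set; } = "";
    public decimal EngineSizeLitres { get; set; }
    public string Status { get; set; } = "";
}
```

With EF, each ModelId/Model pair becomes the foreign key column and navigation property for the corresponding relationship in the Azure SQL schema.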

Part 2 will focus on the Transact-SQL, so that the schema can be provisioned.