Introduction and Background
Previously, I was thinking: since we can find the faces in an uploaded image, why not create a small module that automatically finds the faces and renders them when we load the images on our web pages? That turned out to be a pretty easy task, and I would love to share what I did and how. The entire procedure may look a bit complex, but trust me, it is really simple and straightforward. However, you may need to know a few frameworks beforehand, as I won’t be covering most of the in-depth background, such as the computer vision techniques used to perform face detection.
In this post, you will learn the basics of several things, ranging from:
- Performing computer vision operations, the most basic one being finding the faces in images
- Sending and receiving content from the web server based on the uploaded image data
- Using the `canvas` HTML element to render the results
I won’t be guiding you through each and every detail of the computer vision, or of the processes required to perform facial detection in images; for that, I would like to ask you to read this post of mine, Facial biometric authentication on your connected devices. In this post, I’ll cover the basics of how to detect faces in an ASP.NET web application, how to pass the characteristics of the detected faces to the client, and how to use those properties to render boxes over the faces on a canvas.
Making the Web App Face-Aware
There are two steps that we need to perform in order to make our web application face-aware and to determine whether there is a face in an uploaded image. There are many uses for this, and I will list a few of them in the final words section below. The first step is to configure our web application to consume the image and hand it over for processing. Our image-processing toolkit then finds the faces and their locations. The results are forwarded to the client side, where the client renders the face locations over the image.
In this sample, I am going to use a `canvas` element to draw the objects, although this could also be done with multiple `div` containers holding `span` elements, rendered over the actual image with their positions set to absolute, to show the face boxes.
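For reference, here is a minimal sketch of that alternative approach. The `faceBoxStyles` helper is a hypothetical name, and it assumes a `facePositions` array shaped like the server response described later in this post, with `X`, `Y`, `Width`, and `Height` properties:

```javascript
// Sketch of the div overlay alternative (not used in this post).
// Computes the CSS for each absolutely positioned face box; applying
// these styles to <div> elements inside a relatively positioned
// container would draw the boxes over the image.
function faceBoxStyles(facePositions) {
  return facePositions.map(function (face) {
    return {
      position: "absolute",
      left: face.X + "px",
      top: face.Y + "px",
      width: face.Width + "px",
      height: face.Height + "px",
      border: "2px solid red"
    };
  });
}
```

Each returned object maps directly onto an element’s `style`; the canvas approach used below avoids creating one element per face.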
First of all, let us program the ASP.NET web application to receive the image, process it, find the faces, and generate the response to be consumed on the client side.
Programming the File-Processing Part
On the server side, we will use the Emgu CV library, one of the most widely used C# wrappers for the OpenCV library. I will be using it to program the face detector in ASP.NET. The benefits are:
- It is a very lightweight library.
- The entire processing typically takes no more than a second or two, and the views are generated right after.
- It performs better than most other computer vision libraries, as it is based on OpenCV.
First of all, we need to create a new controller in our web application to handle the requests for this purpose; we will later add the `POST` handler to the controller action to upload and process the image. You can create any controller; I used the name “FindFacesController” in my own application. To create a new controller, right-click the Controllers folder → select Add → select Controller…, give it a name you like, and proceed. By default, this controller is given an Index action, and a folder with the same name is created in the Views folder. First, open the Views folder to add the HTML content for which we will later write the backend part. In this example project, we need an HTML form where users can upload files to the server for processing.
The following HTML snippet would do this:
<form method="post" enctype="multipart/form-data" id="form">
    <input type="file" name="image" id="image" onchange="this.form.submit()" />
</form>
You can see that this HTML form is enough in itself. An event handler is attached to the input element that submits the form automatically as soon as the user selects an image, because we only want to process one image at a time. I could have written a standalone function, but for a single call the inline handler is the simpler way to do this.
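For completeness, the standalone version would look something like the sketch below. `wireAutoSubmit` is a hypothetical helper name; the `image` and `form` ids come from the snippet above, and the document object is passed in as a parameter so the helper is easy to test:

```javascript
// Standalone equivalent of the inline onchange handler: submit the
// form as soon as the user picks a file.
function wireAutoSubmit(doc) {
  var input = doc.getElementById("image");
  input.addEventListener("change", function () {
    doc.getElementById("form").submit();
  });
}
```

In the post itself the inline attribute is used, since a single call does not justify the extra wiring.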
Now for the ASP.NET part, I will use the `HttpMethod` property of the `Request` to determine whether the request was an image upload or just a page load.
if (Request.HttpMethod == "POST")
{
    // The request is an image upload; process it here.
}
Now before I actually write the code, I want to show and explain what we want to do in this example. The steps to be performed are as below:
- We need to save the image that was uploaded in the request.
- We would then get the file that was uploaded, and process that image using Emgu CV.
- We would get the locations of the faces in the image and then serialize them to JSON string using Json.NET library.
- The remaining part is handled on the client side using JavaScript code.
Before I actually write the code, let me first show you the helper objects I created. I needed two helper objects: one for storing the location of the faces, and the other to perform the facial detection in the images.
public class Location
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
}

public class FaceDetector
{
    public static List<Rectangle> DetectFaces(Mat image)
    {
        List<Rectangle> faces = new List<Rectangle>();

        // Load the Haar cascade shipped in the application root.
        var facesCascade = HttpContext.Current.Server.MapPath("~/haarcascade_frontalface_default.xml");
        using (CascadeClassifier face = new CascadeClassifier(facesCascade))
        using (UMat ugray = new UMat())
        {
            // Convert to grayscale and normalize the contrast before detection.
            CvInvoke.CvtColor(image, ugray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
            CvInvoke.EqualizeHist(ugray, ugray);

            Rectangle[] facesDetected = face.DetectMultiScale(
                ugray,
                1.1,                // scale factor
                10,                 // minimum neighbors
                new Size(20, 20));  // minimum face size
            faces.AddRange(facesDetected);
        }
        return faces;
    }
}
These two objects would be used as follows: `FaceDetector` performs the processing, and `Location` carries the positions for the client-side code to render the boxes over the faces. The action code that I used is as below:
public ActionResult Index()
{
    if (Request.HttpMethod == "POST")
    {
        ViewBag.ImageProcessed = true;
        if (Request.Files.Count > 0)
        {
            // Save the uploaded file with a unique name.
            var file = Request.Files[0];
            var fileName = Guid.NewGuid().ToString() + ".jpg";
            file.SaveAs(Server.MapPath("~/Images/" + fileName));

            // Load the saved image and run the face detector over it.
            var bitmap = new Bitmap(Server.MapPath("~/Images/" + fileName));
            var faces = FaceDetector.DetectFaces(new Image<Bgr, byte>(bitmap).Mat);
            if (faces.Count > 0)
            {
                ViewBag.FacesDetected = true;
                ViewBag.FaceCount = faces.Count;

                // Project the rectangles into serializable Location objects.
                var positions = new List<Location>();
                foreach (var face in faces)
                {
                    positions.Add(new Location
                    {
                        X = face.X,
                        Y = face.Y,
                        Width = face.Width,
                        Height = face.Height
                    });
                }

                // Serialize the positions for the client-side script.
                ViewBag.FacePositions = JsonConvert.SerializeObject(positions);
            }
            ViewBag.ImageUrl = fileName;
        }
    }
    return View();
}
The code above does the entire processing of the images that we upload to the server. It is responsible for saving the image, finding and detecting the faces, and returning the results for the views to be rendered in HTML.
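To make the hand-off concrete: `ViewBag.FacePositions` ends up holding a plain JSON array, one object per face. The client side can consume it as sketched below; the coordinates here are made up for illustration:

```javascript
// Example payload as produced by JsonConvert.SerializeObject on the
// List<Location>; the numbers are illustrative only.
var json = '[{"X":64,"Y":52,"Width":120,"Height":120},' +
           '{"X":240,"Y":60,"Width":110,"Height":110}]';
var facePositions = JSON.parse(json);

// Each entry mirrors the C# Location class, so the drawing code can
// read X, Y, Width and Height directly.
var first = facePositions[0];
console.log(first.X, first.Y, first.Width, first.Height);
```

In the Razor view the same array is emitted inline via `@Html.Raw`, so this explicit parsing step only illustrates the shape of the data the client works with.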
Programming Client-Side Canvas Elements
You could open a modal popup to show the faces in the image; I used a `canvas` element on the page itself because I just wanted to demonstrate the technique. As we have seen, the controller action generates a few `ViewBag` properties that we can later use in the HTML content to render the results based on our previous actions.
The `View` content is as follows:
@if (ViewBag.ImageProcessed == true)
{
    if (ViewBag.FacesDetected == true)
    {
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image"
             id="imageElement" style="display: none; height: 0; width: 0;" />
        <p><b>@ViewBag.FaceCount</b>
            @if (ViewBag.FaceCount == 1) { <text><b>face</b> was</text> }
            else { <text><b>faces</b> were</text> }
            detected in the following image.</p>
        <p>A <code>canvas</code> element is being used to render the image
            and then rectangles are being drawn on the top of that canvas to
            highlight the faces in the image.</p>
        <canvas id="faceCanvas"></canvas>
        <!-- HTML content has been loaded, run the script now. -->
        <script>
            var canvas = document.getElementById("faceCanvas");
            var img = document.getElementById("imageElement");
            canvas.height = img.height;
            canvas.width = img.width;
            var myCanvas = canvas.getContext("2d");
            myCanvas.drawImage(img, 0, 0);

            @if (ViewBag.ImageProcessed == true && ViewBag.FacesDetected == true)
            {
                <text>
                img.style.display = "none";
                var facesFound = true;
                var facePositions = @Html.Raw(ViewBag.FacePositions);
                </text>
            }

            if (facesFound) {
                for (var face in facePositions) {
                    myCanvas.lineWidth = 2;
                    myCanvas.strokeStyle = selectColor(face);
                    myCanvas.strokeRect(
                        facePositions[face]["X"],
                        facePositions[face]["Y"],
                        facePositions[face]["Width"],
                        facePositions[face]["Height"]
                    );
                }
            }

            function selectColor(iteration) {
                // Math.floor(Math.random()) is always 0, so pick a random
                // non-zero iteration instead to avoid an all-black color.
                if (iteration == 0) { iteration = Math.floor(Math.random() * 5) + 1; }
                var step = 42.5;
                var randomNumber = Math.floor(Math.random() * 3);
                var red = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
                var green = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
                var blue = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
                // Zero out one channel at random for more contrast.
                switch (randomNumber) {
                    case 0: red = 0; break;
                    case 1: green = 0; break;
                    case 2: blue = 0; break;
                }
                return "rgb(" + red + ", " + green + ", " + blue + ")";
            }
        </script>
    }
    else
    {
        <p>No faces were found in the following image.</p>
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement" />
    }
}
This is the client-side code, and it is executed only if an image was uploaded previously. Now let us review what our application is capable of doing at the moment.
Running the Application for Testing
Since we have developed the application, it is time to run it and see whether it works as expected. The following are the results generated for multiple images that were passed to the server.
The above image shows the default HTML page presented to users when they visit the page for the first time. They then upload an image, and the application processes its content. The following images show the results.
I uploaded my own photo; the application found my face and, as shown above in bold text, reported “1 face was detected…”. It also rendered a box around the area where the face was detected.
This article would never have been complete without Eminem being a part of it! :) Love this guy!
Secondly, I wanted to show how this application handles multiple faces. At the top, you can see that it shows “5 faces were detected…”, and it renders 5 boxes around the areas where faces were detected. I also seem to like the photo, as I am a fan of Batman myself.
This image shows what happens if the image does not contain a detectable face (there are many reasons a face might not be detected, such as hair covering it or glasses being worn). In this image, I just used the logos of three companies, and the system told me there were no faces in the image. It still rendered the image, but no boxes were drawn since there were no faces in it.
Final Words
That was it for this post. This method is useful in many facial detection applications, in any area where you want users to upload a photo of their face rather than just some photo of scenery. This is an ASP.NET web application project, which means that you can use this code in your own web applications too. The library usage is also very simple and straightforward, as you have seen in the article above.
There are other uses too, such as cases where you want to analyze people’s faces to detect their emotions, locations and other parameters. You can perform this step first, to determine whether there are any faces in the images at all.