Alright, now to use the API in an application. I've created a new ASP.NET Core 2.0 web application and adjusted it slightly by changing the
Index page and removing the standard About and Contact pages.
I’ll highlight some pieces of code that show you how to work with the Emotion API. You can follow along by looking at the
GitHub project.
First, I use JavaScript to post an image file from a file upload to the UploadImage action of the HomeController:
function imageIsLoaded(e) {
    resetCanvas();
    // show the selected image on the page
    document.getElementById('myImgage').src = e.target.result;
};

$('#fileControl').on('change', function (e) {
    var files = e.target.files;
    if (files.length > 0) {
        var data = new FormData();
        data.append("file", files[0]);
        // read the file so imageIsLoaded can display it
        var reader = new FileReader();
        reader.onload = imageIsLoaded;
        reader.readAsDataURL(this.files[0]);
        // post the file to the UploadImage action
        $.ajax({
            type: "POST",
            url: '/Home/UploadImage',
            contentType: false,
            processData: false,
            data: data,
            success: function (result) {
                upload(result);
            },
            error: function (ex) {
                alert('Emotion recognition failed');
            }
        });
    }
});
The imageIsLoaded function puts the image on the screen so that you can see it.
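The resetCanvas helper that imageIsLoaded calls isn't shown in the snippet above. A minimal sketch, assuming the same myCanvas and myEmotion element ids that the rest of the page uses, could look like this:

```javascript
// Hypothetical sketch of the resetCanvas helper (not shown in the article's
// snippets): it clears the rectangle and label left over from a previous
// upload. The element ids are assumptions based on the other snippets.
function resetCanvasOn(ctx, width, height, emotionDiv) {
    ctx.clearRect(0, 0, width, height); // wipe the previously drawn rectangle
    ctx.beginPath();                    // drop the old path so it isn't re-stroked
    emotionDiv.innerHTML = '';          // clear the previous emotion label
}

function resetCanvas() {
    var canvas = document.getElementById('myCanvas');
    resetCanvasOn(canvas.getContext('2d'), canvas.width, canvas.height,
        document.getElementById('myEmotion'));
}
```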
Once the image is posted to the HomeController, I create an HttpClient object in the UploadImage action and put the request to the Emotion API together. The output that I get is similar to the JSON data that you saw earlier.
string output = string.Empty;
// get the image file from the request; I assume only one file
var fileContent = Request.Form.Files[0];
if (fileContent != null && fileContent.Length > 0)
{
    var client = new HttpClient();
    var queryString = HttpUtility.ParseQueryString(string.Empty);

    // Request headers, include your own subscription key
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{ enter your subscription key }");
    var uri = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize?" + queryString;

    HttpResponseMessage response;
    // copy the file into a stream and then into a byte array
    using (var stream = new MemoryStream())
    {
        fileContent.CopyTo(stream);
        byte[] byteData = stream.ToArray();
        using (var content = new ByteArrayContent(byteData))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            // post to the Emotion API
            response = await client.PostAsync(uri, content);
            output = await response.Content.ReadAsStringAsync();
        }
    }
}
To use this yourself, you need to replace the "{ enter your subscription key }" string with the key that you got when you signed up for the Emotion API.
Another thing to note: I first assumed that the MediaTypeHeaderValue should match the type of the image (e.g. image/png), but that results in an HTTP 415 (Unsupported Media Type) response. The value should be application/octet-stream, because I send the image as a byte array that I create from a stream.
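As a reminder, the Emotion API returns a JSON array with one entry per detected face, each containing a faceRectangle and a scores object. Abridged, and with illustrative score values, it looks roughly like this:

```json
[
  {
    "faceRectangle": { "height": 162, "left": 177, "top": 131, "width": 162 },
    "scores": {
      "anger": 0.001,
      "contempt": 0.002,
      "disgust": 0.001,
      "fear": 0.000,
      "happiness": 0.985,
      "neutral": 0.009,
      "sadness": 0.001,
      "surprise": 0.001
    }
  }
]
```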
Next, I filter the results so that I only send the emotion with the highest confidence value back to the client as a JSON string. I then use the faceRectangle values to paint a rectangle around the face in the picture, using the HTML canvas element, and populate a div with the text of that emotion, which I position under the rectangle with JavaScript.
//initialize canvas
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');

function upload(json) {
    var result = JSON.parse(json);
    //draw rectangle around face
    ctx.beginPath(); // start a fresh path so rectangles from earlier uploads aren't re-stroked
    ctx.lineWidth = "6";
    ctx.strokeStyle = "yellow";
    ctx.rect(result.faceRectangle.left, result.faceRectangle.top,
        result.faceRectangle.width, result.faceRectangle.height);
    ctx.stroke();
    //set emotion div and position it under the rectangle
    //(the fixed 50 and 300 offsets depend on where the canvas sits in the page layout)
    var d = document.getElementById('myEmotion');
    d.innerHTML = result.primaryEmotion;
    d.style.position = "absolute";
    d.style.left = 50 + result.faceRectangle.left + (result.faceRectangle.width / 4) + 'px';
    d.style.top = (300 + result.faceRectangle.top + result.faceRectangle.height + 10) + 'px';
}
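The server-side filtering step mentioned above isn't shown in the snippets; its logic amounts to taking the first face in the response and picking the key in its scores object with the highest value. A sketch, written in JavaScript for brevity (in the article this happens in C# in the UploadImage action, and the function name is an assumption), with the returned shape matching what the upload function expects:

```javascript
// Sketch of the highest-confidence selection (illustrative names; the article
// does this server-side in C#). Returns faceRectangle plus primaryEmotion,
// the two properties the upload function above reads.
function pickPrimaryEmotion(apiJson) {
    var faces = JSON.parse(apiJson);
    if (faces.length === 0) return null;    // no face detected
    var face = faces[0];                    // like the C# code, assume one face
    var best = null;
    for (var emotion in face.scores) {
        if (best === null || face.scores[emotion] > face.scores[best]) {
            best = emotion;
        }
    }
    return { faceRectangle: face.faceRectangle, primaryEmotion: best };
}
```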
When you run it and upload an image with a face in it, the application displays the image, paints a rectangle around the face, and shows the detected emotion. The result looks something like this: