Face Detection (Augmented Reality) in C# and Oracle APEX
- Introduction
- Technologies and Tools Used
- Steps
- Conclusion
1. Introduction
Face Detection Augmented Reality in C# allows you to overlay virtual objects, such as glasses, on detected faces in real time. This guide walks you through building an augmented reality application using C#, OpenCvSharp for face detection and image processing, and DlibDotNet for facial landmark detection.
2. Technologies and Tools Used
Below are the technologies and tools used to create Face Detection Augmented Reality in C#:
1. C# programming language
2. .NET Framework
3. DlibDotNet for facial landmark detection
4. OpenCvSharp for face detection and image processing
5. Windows Forms for GUI development
3. Steps
3.1. Setup Project and Dependencies
- Create a new Windows Forms Application project in Visual Studio.
- Add the DlibDotNet and OpenCvSharp libraries to your project (for example, via the DlibDotNet, OpenCvSharp4.Windows, and OpenCvSharp4.Extensions NuGet packages; the exact package names depend on your OpenCvSharp version).
- Set up the project structure and design the user interface.
3.2. Implement Face Detection
- Use OpenCvSharp's CascadeClassifier (Haar cascade) to perform face detection on webcam frames.
- Process the webcam frames to detect faces and retrieve their bounding rectangles (see the sketch after this list).
- Optionally, use DlibDotNet's ShapePredictor for facial landmark detection, which allows more accurate positioning of virtual objects.
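As a starting point, here is a minimal, standalone console sketch of such a detection loop. It assumes the Haar cascade XML file sits next to the executable; the file location and class name are placeholders to adapt to your project.
using System;
using OpenCvSharp;
internal static class FaceDetectionSketch {
  private static void Main() {
    // Assumption: haarcascade_frontalface_default.xml is in the working directory
    using (var cascade = new CascadeClassifier("haarcascade_frontalface_default.xml"))
    using (var capture = new VideoCapture(0))
    using (var frame = new Mat()) {
      if (!capture.IsOpened()) {
        Console.WriteLine("Failed to open the webcam.");
        return;
      }
      while (true) {
        capture.Read(frame);
        if (frame.Empty()) continue;
        // Haar cascades operate on grayscale images
        using (var gray = new Mat()) {
          Cv2.CvtColor(frame, gray, ColorConversionCodes.BGR2GRAY);
          // Each Rect is the bounding rectangle of one detected face
          Rect[] faces = cascade.DetectMultiScale(gray);
          foreach (var face in faces)
            Cv2.Rectangle(frame, face, new Scalar(0, 255, 0), 2);
        }
        Cv2.ImShow("Face Detection", frame);
        if (Cv2.WaitKey(1) >= 0) break; // press any key to stop
      }
    }
  }
}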
3.3. Implement Glasses Overlay
- Calculate the position and size of glasses based on the detected face region or facial landmarks.
- Overlay the glasses image onto the detected face region in real-time.
- Implement mouse event handlers to allow users to interactively position the glasses on the detected face.
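The complete Form1 implementation below ties these steps together: it detects faces with a Haar cascade, locates eye landmarks with dlib's 68-point shape predictor, sizes the selected glasses image from the eye distance, and blends it onto each detected face in the live frame: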
using DlibDotNet;
using OpenCvSharp;
using OpenCvSharp.Extensions;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Threading;
namespace ARFeature {
  using Point = System.Drawing.Point;
  public partial class Form1 : Form {
    // Variables to hold state and data
    private List<Rect> detectedFaces;
    private Point glassesPosition;
    private bool isDragging = false;
    private PictureBox selectedImage;
    private Array2D<byte> array2D;
    private VideoCapture capture;
    private bool isCapturing = false;
    public Form1() {
      InitializeComponent();
      // Initialize variables
      detectedFaces = new List<Rect>();
      array2D = new Array2D<byte>();
      capture = new VideoCapture(0);
    }
    private void button1_Click(object sender, EventArgs e) {
      // Start capturing frames from the webcam
      isCapturing = true;
      // Load the Haar cascade and the dlib 68-point shape predictor
      using (var faceCascade = new CascadeClassifier("D:/AR/AR-Final/ARFeature/ARFeature/haarcascade_frontalface_default.xml"))
      using (var shapePredictor = ShapePredictor.Deserialize("D:/AR/AR-Final/ARFeature/ARFeature/shape_predictor_68_face_landmarks.dat")) {
        // Reuse the capture device created in the constructor; reopen it if the
        // Stop button released it earlier
        if (!capture.IsOpened() && !capture.Open(0)) {
          Console.WriteLine("Failed to open the webcam.");
          return;
        }
        // Loop to continuously process frames
        using (var frame = new Mat())
        using (var grayFrame = new Mat()) {
          while (isCapturing) {
            capture.Read(frame);
            if (frame.Empty()) continue;
            // Convert the frame to grayscale for detection
            Cv2.CvtColor(frame, grayFrame, ColorConversionCodes.BGR2GRAY);
            detectedFaces.Clear();
            // Detect faces in the grayscale frame
            var faces = faceCascade.DetectMultiScale(grayFrame);
            foreach (var rect in faces) {
              // ToDlibRectangle() is a small helper extension converting an
              // OpenCvSharp Rect into a DlibDotNet Rectangle
              var dlibRect = rect.ToDlibRectangle();
              // Copy the grayscale pixels into a dlib Array2D<byte>
              byte[] imageData = new byte[grayFrame.Total() * grayFrame.ElemSize()];
              System.Runtime.InteropServices.Marshal.Copy(grayFrame.Data, imageData, 0, imageData.Length);
              uint stepSize = (uint)(grayFrame.Cols * grayFrame.Channels());
              array2D.Dispose(); // Dispose the previous image data
              array2D = Dlib.LoadImageData<byte>(imageData, (uint)grayFrame.Rows, (uint)grayFrame.Cols, stepSize);
              // Detect the facial landmarks for this face
              var landmarks = shapePredictor.Detect(array2D, dlibRect);
              detectedFaces.Add(rect);
              // Iterate through the landmarks (available here for drawing or debugging)
              for (uint i = 0; i < landmarks.Parts; i++) {
                var landmark = landmarks.GetPart(i);
                var point = new OpenCvSharp.Point(landmark.X, landmark.Y);
                // Cv2.Circle(frame, point, 1, new Scalar(0, 255, 0), -1); // uncomment to visualize
              }
            }
            // Overlay the selected glasses on each detected face
            foreach (var rect in detectedFaces) {
              if (isDragging && selectedImage != null) {
                var dlibRect = rect.ToDlibRectangle();
                var landmarks = shapePredictor.Detect(array2D, dlibRect);
                const uint leftEyeLandmarkIndex = 36;  // outer corner of the left eye in the 68-point model
                const uint rightEyeLandmarkIndex = 45; // outer corner of the right eye in the 68-point model
                var leftEyeLandmark = landmarks.GetPart(leftEyeLandmarkIndex);
                var rightEyeLandmark = landmarks.GetPart(rightEyeLandmarkIndex);
                // Center the glasses between the eyes and scale them to the eye distance
                var averageEyeLandmarkX = (leftEyeLandmark.X + rightEyeLandmark.X) / 2;
                var averageEyeLandmarkY = (leftEyeLandmark.Y + rightEyeLandmark.Y) / 2;
                var eyeDistance = Math.Abs(rightEyeLandmark.X - leftEyeLandmark.X);
                var glassesWidth = (int)(eyeDistance * 1.4);
                var glassesHeight = glassesWidth * selectedImage.Image.Height / selectedImage.Image.Width;
                var glassesX = averageEyeLandmarkX - glassesWidth / 2;
                var glassesY = averageEyeLandmarkY - glassesHeight / 2;
                // Resize the glasses bitmap to the computed size
                using (var resizedGlasses = new Bitmap(glassesWidth, glassesHeight)) {
                  using (var graphics = Graphics.FromImage(resizedGlasses)) {
                    graphics.DrawImage(selectedImage.Image, 0, 0, glassesWidth, glassesHeight);
                  }
                  using (var glassesMat = BitmapConverter.ToMat(resizedGlasses)) {
                    // Clamp the overlay so it stays inside the frame
                    var glassesXInFrame = Math.Max(0, Math.Min(glassesX, frame.Width - 1));
                    var glassesYInFrame = Math.Max(0, Math.Min(glassesY, frame.Height - 1));
                    var glassesWidthInFrame = Math.Min(glassesMat.Width, frame.Width - glassesXInFrame);
                    var glassesHeightInFrame = Math.Min(glassesMat.Height, frame.Height - glassesYInFrame);
                    var roi = new Rect(glassesXInFrame, glassesYInFrame, glassesWidthInFrame, glassesHeightInFrame);
                    // Match the channel count of the frame (a PNG with alpha is BGRA)
                    var glassesMatConverted = glassesMat;
                    if (glassesMat.Channels() != frame.Channels()) {
                      glassesMatConverted = new Mat();
                      Cv2.CvtColor(glassesMat, glassesMatConverted, ColorConversionCodes.BGRA2BGR);
                    }
                    // Build a grayscale mask so black (background) pixels are not copied
                    using (var mask = new Mat())
                    using (var resizedMask = new Mat()) {
                      Cv2.CvtColor(glassesMatConverted, mask, ColorConversionCodes.BGR2GRAY);
                      Cv2.Resize(mask, resizedMask, roi.Size);
                      // Crop the glasses to the visible portion and copy them onto the frame
                      var glassesRegion = glassesMatConverted[new Rect(0, 0, resizedMask.Width, resizedMask.Height)];
                      glassesRegion.CopyTo(frame[roi], resizedMask);
                    }
                    if (!ReferenceEquals(glassesMatConverted, glassesMat)) glassesMatConverted.Dispose();
                  }
                }
              }
            }
            // Show the processed frame, disposing the previous bitmap to avoid leaks
            var previousImage = pictureBoxGlasses.Image;
            pictureBoxGlasses.Image = BitmapConverter.ToBitmap(frame);
            previousImage?.Dispose();
            if (isDragging && selectedImage != null) {
              // Keep the selected glasses thumbnail under the cursor
              glassesPosition = pictureBoxGlasses.PointToClient(Cursor.Position);
              selectedImage.Location = new Point(glassesPosition.X - selectedImage.Width / 2, glassesPosition.Y - selectedImage.Height / 2);
            }
            // Pump the WinForms message loop so the UI stays responsive
            Application.DoEvents();
            if (Cv2.WaitKey(1) >= 0) break;
          }
        }
      }
    }
    // All six glasses thumbnails share these three handlers; in the designer,
    // wire each PictureBox's MouseDown, MouseUp, and MouseDoubleClick events
    // to the corresponding method below.
    private void pictureBox_MouseDown(object sender, MouseEventArgs e) {
      if (e.Button == MouseButtons.Left) {
        isDragging = true;
        selectedImage = (PictureBox)sender;
        selectedImage.BringToFront();
      }
    }
    private void pictureBox_MouseUp(object sender, MouseEventArgs e) {
      // Intentionally empty: the drag stays active after the button is released,
      // so the glasses remain attached to the face until the user double-clicks.
    }
    private void pictureBox_MouseDoubleClick(object sender, MouseEventArgs e) {
      if (e.Button == MouseButtons.Left) {
        isDragging = false;
        selectedImage = null;
      }
    }
    private void button2_Click(object sender, EventArgs e) {
      // Stop capturing and release the webcam and the dlib image buffer
      isCapturing = false;
      if (capture != null) {
        capture.Release();
      }
      if (array2D != null) {
        array2D.Dispose();
      }
      pictureBoxGlasses.Image?.Dispose();
      pictureBoxGlasses.Image = null;
    }
  }
}
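A note on the interaction model this code implements: pressing the left mouse button on a glasses thumbnail selects it and starts dragging, releasing the button deliberately keeps the drag active so the glasses stay attached to the detected face, and double-clicking drops them. button1 starts the capture loop, and button2 stops it and releases the webcam.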
3.4. Convert Application to Executable (exe)
- In Visual Studio, build the solution in Release mode to generate the executable file (exe).
- Navigate to the output directory and locate the generated exe file.
- Distribute the exe file along with any necessary dependencies (e.g., DLL files).
3.5. Integrate with Other Software (e.g., Oracle APEX)
3.5.1 Host Application in IIS
1. Compile the C# application into an executable file (ARFeature.exe).
2. Install and configure Internet Information Services (IIS) on your server.
3. Create a new website in IIS and point it to the directory where ARFeature.exe resides.
4. Ensure that the appropriate permissions are set to allow IIS to execute the application.
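Note that IIS here serves ARFeature.exe and its dependencies as downloadable content; a Windows Forms application runs on the client machine rather than inside IIS, which is why the custom URL scheme configured in the next step is used to launch it locally.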
3.5.2 Configure Custom URL Scheme
1. Decide on a custom URL scheme for your application, e.g., myapp://.
2. Update the C# application to register this custom URL scheme.
3. You can achieve this through registry settings or programmatically during application startup, as sketched below.
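As one possible approach, the following sketch registers the scheme under HKEY_CURRENT_USER, which avoids requiring administrator rights; the class name, scheme name, and executable path are placeholders to adapt to your deployment.
using Microsoft.Win32;
internal static class UrlSchemeRegistration {
  // Registers a custom URL scheme (e.g., "myapp") for the current user so that
  // links like myapp://... launch the given executable with the URL as argument.
  public static void Register(string scheme, string exePath) {
    using (var key = Registry.CurrentUser.CreateSubKey(@"Software\Classes\" + scheme)) {
      key.SetValue("", "URL:" + scheme + " Protocol");
      key.SetValue("URL Protocol", ""); // empty value marks the key as a protocol handler
      using (var command = key.CreateSubKey(@"shell\open\command")) {
        command.SetValue("", "\"" + exePath + "\" \"%1\"");
      }
    }
  }
}
Calling UrlSchemeRegistration.Register("myapp", Application.ExecutablePath) once during application startup (for example, in Form1's constructor) would make myapp:// links launch ARFeature.exe.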
3.5.3 Configure Oracle APEX to Execute the Application
1. In your Oracle APEX application, create a new page or modify an existing one where you want to trigger the execution of the C# application.
2. Add a dynamic action to the page load event.
3. Use JavaScript code to open the custom URL when the page loads. Here's an example:
var customUrl = 'myapp://execute/ARFeature.exe'; // Modify with your actual custom scheme and filename
window.open(customUrl, '_blank');
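When the page loads, the browser prompts the user to allow opening the application registered for the myapp:// scheme; the exact prompt varies by browser. Because the custom URL scheme resolves on the client machine, ARFeature.exe and its scheme registration must already be installed there.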
4. Conclusion
Creating Face Detection Augmented Reality in C# opens up exciting possibilities for interactive applications. By combining face detection, image processing, and GUI development, you can create engaging experiences that augment the real world with virtual objects. This project showcases the capabilities of C# and its libraries for developing augmented reality applications.