This post is a look at a new library I’m writing that’s intended to make life easier for WPF developers working with Intel RealSense devices. As many of you may know, I’ve been involved with the RealSense platform for a couple of years now (back from when it was called the Perceptual Computing SDK). When I develop samples with it, I tend to use WPF as my default development experience, and the idea of hooking up the Natural User Interface capabilities of RealSense devices with the NUI power of WPF in an easy-to-use package is just too good to resist. On top of this, I still strongly believe in WPF, and it will take a lot to move me away from developing desktop applications with it because it is just so powerful.
To this end, I have started developing a library called RealSenseLight that will enable WPF developers to easily leverage the power of RealSense without having to worry about the implementation details. While it’s primarily aimed at WPF developers, the functionality will be usable from other C# (Windows desktop) applications, so hooking into console applications will certainly be possible.
One of the many decisions I’ve taken is to allow features to be configured via a fluent interface, so it’s possible to do things like this:
RealSenseApplication.Uses(new EmotionDetectionConfiguration())
                    .Uses(SpeechRecognition.DefaultConfiguration().ChangePitch(0.8))
                    .Start();
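Because the library isn’t tied to WPF, the same fluent bootstrap could drive a console application too. Here’s a minimal sketch of what that might look like, reusing the (still provisional) RealSenseLight types from the snippet above:

using System;

// A sketch only: RealSenseApplication, EmotionDetectionConfiguration and
// SpeechRecognition are the provisional types shown above, not a final API.
internal static class Program
{
    private static void Main()
    {
        RealSenseApplication.Uses(new EmotionDetectionConfiguration())
                            .Uses(SpeechRecognition.DefaultConfiguration().ChangePitch(0.8))
                            .Start();

        Console.WriteLine("RealSense capabilities running; press Enter to quit.");
        Console.ReadLine();
    }
}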
ViewModels will be able to hook into RealSense using convenient interfaces that abstract the underlying implementations. There’s no need to call Enable… to enable a RealSense capability; simply injecting a concrete implementation means that the feature is automatically available. The following example demonstrates what an IoC-resolved implementation looks like:
public class EmotionViewModel : ViewModelBase
{
    private readonly IEmotion _emotion;

    public EmotionViewModel(IEmotion emotion)
    {
        _emotion = emotion;

        // Log a debug message whenever a user is detected as happy.
        _emotion.OnUserHappy(user => System.Diagnostics.Debug.WriteLine("{0} is happy", user.DetectedUser));
    }
}
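The precise shape of these abstractions is still settling down, but going by the ViewModel above, the IEmotion interface might look roughly like this. This is a guess rather than the final contract, and UserHappyArgs plus its DetectedUser property are placeholder names I’ve made up for illustration:

using System;

// Hypothetical sketch of the IEmotion abstraction implied by the example;
// only OnUserHappy appears in the ViewModel above, the rest is invented.
public interface IEmotion
{
    // Registers a callback that fires whenever a user is detected as happy.
    void OnUserHappy(Action<UserHappyArgs> callback);
}

public class UserHappyArgs
{
    // A label identifying which detected user the emotion belongs to.
    public string DetectedUser { get; set; }
}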
The library will provide the ability to pause and resume individual RealSense capabilities, and to identify and choose from compatible RealSense devices. Device selection does require declaring up front which features you’re interested in, because the library uses those to evaluate which devices support them.
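To make that concrete, here’s the rough sort of capability-control surface I have in mind; every name in this sketch is provisional and may well change before anything lands in the repo:

using System.Collections.Generic;

// Provisional sketch: none of these members exist yet, and the final
// names will almost certainly differ.
public interface IRealSenseCapability
{
    void Pause();   // temporarily stop processing frames for this feature
    void Resume();  // continue from where Pause() left off
}

public interface IRealSenseDevice { /* placeholder for a detected camera */ }

public interface IDeviceSelector
{
    // Returns only the attached devices that support every capability
    // requested up front via Uses(...).
    IEnumerable<IRealSenseDevice> CompatibleDevices();
}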
I’m still fleshing out what the whole interface will look like, so not all of the features have been determined yet, but I’ll keep posting my designs, and I’ll share a link to the repo once it’s in a state where it’s ready for an initial commit.