Demo 4: Real-Time Face Embedding Visualization

Released by Brandon Amos and Gabriele Farina on 2016-09-15.


We had a great opportunity (thanks to Jan Harkes, Alison Langmead, and Aaron Henderson) to present a short OpenFace demo in the Data (after)Lives art exhibit at the University of Pittsburgh, which investigates the relationship between the human notions of self and technical, alternative, externalized, and malleable representations of identity. The following video is just a quick example, and a real-time version is being shown live from Sept 8, 2016 to Oct 14, 2016. We have released the source code behind this demo in our main GitHub repository in demos/sphere.py. This exhibit also features two other art pieces by Sam Nosenzo, Alison Langmead, and Aaron Henderson that use OpenFace.

How this is implemented

This is a short description of our implementation in demos/sphere.py, which is only ~300 lines of code.

For a brief intro to OpenFace: we provide face recognition with a deep neural network that embeds faces onto a sphere. (See our tech report for a more detailed intro to how OpenFace works.) Our standard models embed faces onto a 128-dimensional sphere. For this demo, we re-trained a neural network to embed faces onto a 3-dimensional sphere, which we show in real time on top of a camera feed. The 3-dimensional embedding doesn't have the same accuracy as the 128-dimensional embedding, but it's sufficient to illustrate how the embedding space distinguishes between different people.
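As a concrete sketch (not the actual demos/sphere.py source), computing a 3-dimensional embedding for a single image with OpenFace's Python API looks roughly like this; the dlib predictor and 3D model paths are placeholders for wherever you put the downloaded files:

```python
# Minimal sketch: compute a 3-D OpenFace embedding for one image.
# The two model paths below are placeholders, not canonical filenames.
import cv2
import openface

align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")
net = openface.TorchNeuralNet("models/openface/sphere-3d-model.t7", imgDim=96)

bgr = cv2.imread("face.jpg")                # OpenCV loads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # OpenFace expects RGB

bb = align.getLargestFaceBoundingBox(rgb)   # detect the largest face (or None)
aligned = align.align(96, rgb, bb,
                      landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
if aligned is not None:
    embedding = net.forward(aligned)        # a 3-vector on the unit sphere
    print(embedding)
```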

In this demo:

  1. Faces are detected and aligned in each frame of the camera feed.
  2. The re-trained neural network maps each aligned face to a 3-dimensional embedding.
  3. The embeddings are plotted in real time on top of the camera feed.
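A rough sketch of that loop, assuming `align` and `net` are constructed as in the snippet above: the real demos/sphere.py also draws the sphere and embeddings over the camera feed, while this version just prints the 3-D point for the largest face in each frame.

```python
# Rough sketch of the real-time loop; `align` and `net` come from the
# previous snippet. demos/sphere.py additionally plots the embeddings
# on a sphere drawn over the video; here they are only printed.
import cv2
import openface

video = cv2.VideoCapture(0)                 # default webcam
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    bb = align.getLargestFaceBoundingBox(rgb)
    if bb is not None:
        aligned = align.align(96, rgb, bb,
                              landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
        if aligned is not None:
            print(net.forward(aligned))     # 3-D embedding for this frame
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
        break
video.release()
cv2.destroyAllWindows()
```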

Running on your computer

To run this on your computer:

  1. Set up OpenFace.
  2. Download the 3D model from here.
  3. Run demos/sphere.py with the --networkModel argument pointing to the 3D model.
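Putting the steps together, the invocation looks roughly like this (the model path is a placeholder for wherever you saved the downloaded 3D model):

```
./demos/sphere.py --networkModel <path to the downloaded 3D model>
```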