Paul has been at the bleeding edge of real-time AI and computer graphics since 2000, when he founded AI.implant, which used AI (flocking behaviours and pathfinding) to create and simulate huge crowds of interacting autonomous characters. Customers included Disney and Lucasfilm for visual effects; BioWare and EA for game development; and L3 and Lockheed Martin for military simulation. AI.implant was acquired in 2005 by Presagis, the world’s leading developer of software tools for military simulation and training.

In 2007, he founded GRIP, which used AI (behaviour trees) to create high-fidelity autonomous characters capable of rich and complex behaviours. Customers included BioWare, Disney, EA and Eidos. GRIP was acquired in 2011 by Autodesk, the world’s leading developer of software tools for digital entertainment. In 2014, he founded wrnch, which uses AI (deep learning and computer vision) to enable computers to read human body language.

A serial AI entrepreneur, Paul has been hustling and hacking since he was 12, when he leveraged a $250 livestock sale into a $1,000 TRS-80 Color Computer to program video games and calculate Pi to as many digits as possible. Paul went on to earn a Ph.D. in computer science from McGill University, during which time the book “The Algorithmic Beauty of Plants” hooked him on computer graphics and AI. He enjoys a crisp gin martini (stirred, not shaken) whilst reading New Scientist.
We will describe in detail how AI can be used to digitize human motion and behaviour from standard video cameras, without traditional motion-capture or depth-sensing hardware. We will also present a live “MagicMirror” demo in which an audience member is beamed from a mobile phone camera into an Augmented Reality application.