The field of Human-Computer Interaction grows in importance each year, driven by increasing demand from the medical, robotics, and technology industries for more streamlined methods of communicating with computer systems. While the field has produced a wide range of research and proposed solutions, this paper focuses on one specific aspect of the discipline: hand pose reconstruction. Many papers in this area rely on cameras, including depth and RGB sensors, to capture data and reconstruct specific hand poses. While this approach has shown a fair level of success, a few prominent issues persist, namely finger occlusion (where certain joints become hidden behind the hand) and the non-ergonomic design of fixed camera systems. In this paper we instead explore a hardware-oriented approach, using a minimal suite of glove-mounted flex sensors to gather data about the current joint angles at specific locations on the hand. Once the glove has been constructed and tested, the gathered data will be fed into a trained deep learning model with the goal of accurately reconstructing various hand poses from minimal hardware data. Finally, to validate the collected data and results, the reconstructed hand poses will be compared against data acquired from a high-precision motion capture system.
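As a rough illustration of the kind of preprocessing such a pipeline involves, the sketch below shows one common way to convert raw flex-sensor ADC readings into joint angles via a per-sensor linear calibration before passing them to a learned pose model. The sensor count, layout, and calibration values here are hypothetical assumptions for illustration, not details of the authors' hardware.

```python
# Hypothetical sketch: per-sensor linear calibration mapping raw flex-sensor
# ADC readings to joint angles in degrees. All values below are invented for
# illustration; a real glove would use measured calibration points.

def calibrate(raw, adc_flat, adc_bent, angle_bent=90.0):
    """Linearly map a raw ADC reading to a joint angle in degrees.

    adc_flat -- reading with the finger fully extended (0 degrees)
    adc_bent -- reading at a known reference bend of angle_bent degrees
    """
    return (raw - adc_flat) / (adc_bent - adc_flat) * angle_bent

# Assumed layout: five sensors, one per finger.
flat = [512, 500, 520, 505, 515]   # hypothetical flat-hand readings
bent = [812, 790, 830, 805, 820]   # hypothetical 90-degree readings
raw  = [662, 645, 675, 655, 700]   # one sample frame from the glove

angles = [calibrate(r, f, b) for r, f, b in zip(raw, flat, bent)]
```

The resulting angle vector is what would be fed to the deep learning model as its low-dimensional input; the model's job is then to infer the full hand pose from these few measurements.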
University / Institution: University of Utah
Format: In Person
SESSION D (3:30-5:00PM)
Area of Research: Engineering
Faculty Mentor: Edoardo Battaglia