The researchers tested their method in two scenarios: in a workplace setting, the camera was mounted on the target object; in an everyday setting, a user wore an on-body camera, giving it a first-person perspective. The result: because the method learns the necessary knowledge on its own, it remains robust even when the number of people involved, the lighting conditions, the camera position, and the types and sizes of target objects vary.
"We can in principle identify eye contact clusters on multiple target objects with only one camera, but the assignment of these clusters to the various objects is not yet possible,” said Bulling. “Our method currently assumes that the nearest cluster belongs to the target object, and ignores the other clusters. This limitation is what we will tackle next. This paves the way not only for new user interfaces that automatically recognize eye contact and react to it, but also for measurements of eye contact in everyday situations, such as outdoor advertising, that were previously impossible."
A demonstration video is available at https://www.youtube.com/watch?v=ccrS5XuhQpk