Event Overview

With the aim of natural interaction with machines, a framework must be developed that includes the adaptability humans display in understanding gestures from context, from a single observation, or from multiple observations. This is also referred to as adaptive shot learning: the ability to adapt the recognition mechanism to a barely seen gesture, whether well-known or entirely unknown. Zero-shot and one-shot learning are of particular interest to the community, given that most work to date has addressed the N-shot learning scenario.
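As a rough illustration (not part of the workshop text), one-shot recognition is often sketched as nearest-neighbor matching in an embedding space: a single observed example per class serves as the reference. The `embed` function and the toy gesture vectors below are purely hypothetical stand-ins; in practice the embedding would come from a trained model.

```python
import numpy as np

def embed(sequence):
    # Hypothetical embedding: in practice a trained network maps a
    # gesture sequence to a fixed-length vector; here we simply
    # average the frames as a stand-in.
    return np.mean(np.asarray(sequence, dtype=float), axis=0)

def one_shot_classify(query, support):
    """Classify `query` by its nearest neighbor among the single
    support examples, one per class (the one-shot setting)."""
    q = embed(query)
    dists = {label: np.linalg.norm(q - embed(ex))
             for label, ex in support.items()}
    return min(dists, key=dists.get)

# Toy gestures: one observed example ("shot") per class.
support = {
    "wave":  [[0.0, 1.0], [0.0, 2.0]],
    "point": [[3.0, 0.0], [4.0, 0.0]],
}
query = [[0.1, 1.4], [0.0, 1.6]]
print(one_shot_classify(query, support))  # -> wave
```

The same nearest-neighbor machinery extends to zero-shot settings when the class references come from side information (e.g. a textual description mapped into the same space) rather than from an observed example.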

Previous approaches to zero- and one-shot gesture recognition rely heavily on statistical and data-mining solutions and leave aside the mechanisms humans use to perceive and execute gestures, which can provide valuable contextual information. This gap leads to suboptimal solutions. This workshop aims to present and disseminate novel approaches that consider the process leading to the realization of a gesture, rather than the gesture itself. The workshop encourages work that focuses on the way in which humans produce gestures: their kinematic and biomechanical characteristics, and the cognitive processes involved in perceiving, remembering, and replicating them.

Benchmarks and performance metrics for such approaches are also of interest. For example, gesture recognition similar to that exhibited by humans may be preferable to perfect recognition; in this context, mimicry matters more than optimal recognition accuracy. Furthermore, how such approaches achieve efficiency compared with traditional N-shot learning is a non-trivial question.

We expect to bring together a diverse community of linguists, psychologists, computer scientists, roboticists, and engineers to address these questions and propose solutions to these challenges. Through the workshop we expect to gain new understanding of how humans generalize from single or unseen observations using context. The knowledge gained can shed light on how infants learn to gesture, and on how to identify spontaneous or uncontrolled gesturing (e.g. in Parkinson's disease); all of these are cases of zero- or one-shot gesture learning or production.

Call for Papers

Important Dates

2017-02-16: Draft submission deadline

Topics of Interest

  • One- and zero-shot recognition

  • Gesture production from context or from a single observation

  • EEG-based gesture recognition

  • Context modelling from gesture languages

  • Holistic approaches to gesture modelling

  • Human-like gesture production and recognition

  • Gesture-based robotic control and interfaces

Important Dates

  • May 30, 2017: Conference date

  • February 16, 2017: Draft submission deadline

  • May 30, 2017: Registration deadline
