Realtime Filtering of Noise Generated in VR Devices with CAE

2022.12.18(SICE 2022)
Authors: Naoki HASHIMOTO, Pin-Chu YANG, Tetsuya OGATA
Teleoperation through VR devices is a popular method for controlling the full-body movement of humanoid robots. However, motion data obtained from VR devices are usually estimated through full-body inverse kinematics (IK) methods, which are susceptible to noise and undesired generated character poses that can harm the robot or its environment. Therefore, we propose a convolutional autoencoder (CAE) based real-time motion generation method that generates natural-looking motion with de-noising capability. We evaluated our method by applying it to a humanoid robot and demonstrating 5 different real-time control scenarios using a VR device.
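To illustrate the core idea only (a hedged sketch, not the paper's actual CAE architecture or training setup), a strided averaging convolution can stand in for the learned encoder, and an upsample-plus-smooth pass for the decoder, when de-noising a jittery joint-angle trajectory:

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation) with a given stride."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(0, len(x) - k + 1, stride)])

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)                                    # idealized joint-angle trajectory
noisy = clean + rng.normal(0.0, 0.15, size=t.shape)  # IK-style jitter

# "Encoder": strided averaging conv compresses the sequence;
# "decoder": upsample then smooth to reconstruct a de-noised trajectory.
smooth = np.ones(4) / 4.0
latent = conv1d(noisy, smooth, stride=2)     # downsampled latent sequence
recon = conv1d(np.repeat(latent, 2), smooth) # reconstructed, de-noised trajectory

n = len(recon)
mse_noisy = np.mean((noisy[:n] - clean[:n]) ** 2)
mse_recon = np.mean((recon - clean[:n]) ** 2)
```

In a real CAE the kernels are learned from data rather than fixed averages, but the reconstruction error of the smoothed output against the clean signal is already well below that of the raw noisy input.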

Generating Humanoid Robot Motions based on a Procedural Animation IK Rig Method

2022.01.09(SII 2022)
Authors: Pin-Chu Yang, Satoshi Funabashi, Mohammed Al-Sada and Tetsuya Ogata
Japanese animation is becoming increasingly popular, and such animation-like expression is an important aspect of designing artificial characters that interact with humans. Humanoid robots, which can autonomously interact with people, are one option for promoting such interaction. This requires a robot capable of performing natural, interactive motions in the physical world; however, creating natural-looking motions for a humanoid robot is difficult, labor-intensive, tedious, and expensive. Currently, most studies focus either on how well an action is performed using the robot-animation retargeting method or on how to produce an interactive motion for specific applications. The result is a robot that either lacks interaction ability or exhibits unnatural motions, which is not well received in the culture. In this study, we propose a pipeline approach for creating new humanoid robot motions from existing character animation using deep motion style transfer and IK rig interactive motion generation, inspired by the procedural animation techniques used extensively in the game and animation industries. We train deep neural networks and demonstrate our deep motion style transfer approach for creating various motions using data from a free online human-motion database and our own collected motion capture data. In our experiments, we first demonstrate the IK rig procedural-motion control method using the collected motion data. Second, new motions are generated by combining different Content motions and Style motions. Finally, these newly generated motions are performed in simulation and in the physical world, with real-time adjustment of the IK rig procedural motion as an interactive motion, on the humanoid robot Hatsuki Mk.Ib.
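For readers unfamiliar with inverse kinematics, a minimal two-link planar example (purely illustrative; the paper's IK rig operates on a full humanoid skeleton with many more constraints) shows how joint angles are recovered from an end-effector target:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a planar two-link limb: returns (base, elbow) angles in radians."""
    d2 = x * x + y * y
    # Elbow angle from the law of cosines; clamp guards against
    # numerical error and unreachable targets.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def fk(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Running `fk(*two_link_ik(1.0, 1.0, 1.0, 1.0), 1.0, 1.0)` recovers the target point (1, 1), confirming the round trip. Procedural animation layers real-time adjustments (e.g. shifting the target) on top of such solvers.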

From Anime To Reality: Embodying An Anime Character As A Humanoid Robot

2021.06.01(CHI 2021)
Authors: Mohammed Al-Sada, Pin-Chu Yang, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Mhd Yamen Saraiji, Tetsuya Ogata and Tatsuo Nakajima
Otaku is a Japanese term commonly associated with fans of Japanese animation, comics or video games. Otaku culture has grown to be a global phenomenon spanning various hobbies and media. Despite its popularity, research efforts to contribute to otaku culture have been modest. Therefore, we present Hatsuki, a humanoid robot especially designed to embody anime characters. Hatsuki advances the state of the art as it: 1) realizes aesthetics resembling anime characters, 2) implements a 2D anime-like facial expression system, and 3) realizes anime-style behaviors and interactions. We explain Hatsuki's design specifics and its interaction domains as an autonomous robot and as a teleoperated humanoid avatar. We discuss our efforts under each interaction domain, followed by a discussion of its potential deployment venues and applications. We highlight opportunities for interplay between otaku culture and interactive systems, potentially enabling highly desirable interactions and familiar system designs for users exposed to otaku culture.

HATSUKI : An anime character like robot figure platform with anime-style expressions and imitation learning based action generation

2020.04.01(RO-MAN 2020)
Authors: Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu and Tetsuya Ogata
Japanese character figurines are popular and hold a pivotal position in otaku culture. Although numerous robots have been developed, few have focused on otaku culture or on embodying an anime character figurine. Therefore, we take the first steps to bridge this gap by developing Hatsuki, a humanoid robot platform with an anime-based design. Hatsuki's novelty lies in its aesthetic design, 2D facial expressions, and anime-style behaviors, which allow it to deliver rich interaction experiences resembling anime characters. We explain the design and implementation process of Hatsuki, followed by our evaluations. To explore user impressions of and opinions about Hatsuki, we conducted a questionnaire at the world's largest anime-figurine event. The results indicate that participants were generally very satisfied with Hatsuki's design and proposed various use-case scenarios and deployment contexts for it. The second evaluation focused on imitation learning, as such a method can provide better interaction ability in the real world and generate rich, context-adaptive behavior in different situations. We made Hatsuki learn 11 actions, combining voice, facial expressions and motions, through a neural network based policy model with our proposed interface. The results show that our approach successfully generated the actions through self-organized contexts, which shows the potential for generalizing our approach to further actions under different contexts. Lastly, we present our future research directions for Hatsuki and provide our conclusion.
Citation: Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu and Tetsuya Ogata, “HATSUKI : An anime character like robot figure platform with anime-style expressions and imitation learning based action generation,” RO-MAN 2020. arXiv: 2003.14121

Development of a Platform that Streamlines Robot Imitation Learning Using a Game Engine: Autonomous Humanoid Figure “Hatsuki” Mk.I

2020.02.27(ROBOMECH 2020)
Authors: 陽 品駒, 鈴木 彼方, 邱 章傑, Tito Pradhono Tomo, Nelson Yalta, Kevin Kuo, 舒 國豪, 尾形 哲也
This study proposes an effective imitation learning platform for humanoid robots based on a game engine, designed around the usual development environments of 3DCG animators and game creators. We verify the proposed platform with an actual imitation learning task in which our robot is trained to generate 10 different action patterns. Each action pattern contains time-series motor angle information, a facial animation command and a voice command. Finally, we evaluate the man-hour cost following the Japanese Industrial Standards (JIS Z 8141-1227) and show a 60% reduction in time cost for performing the same procedure on a similar setup.
Citation: 陽 品駒, 鈴木 彼方, 邱 章傑, Tito Pradhono TOMO, Nelson YALTA, Kevin KUO, 舒 國豪, 尾形 哲也: “Development of a Platform that Streamlines Robot Imitation Learning Using a Game Engine: Autonomous Humanoid Figure “Hatsuki” Mk.I,” JSME Robotics and Mechatronics Conference (ROBOMEC2020), Kanazawa, Ishikawa, Japan, May 27-30, 2020 (scheduled)