MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels

December 12, 2022 · Entered Twilight · 🏛 AAAI Conference on Artificial Intelligence

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, LICENSE, README.md, assets, data, envs, evaluate, generate.py, main, models, recursive, requirements.sh, test.py, train.py, utils

Authors: Taeryung Lee, Gyeongsik Moon, Kyoung Mu Lee
arXiv ID: 2212.05897
Category: cs.CV (Computer Vision)
Citations: 57
Venue: AAAI Conference on Artificial Intelligence
Repository: https://github.com/TaeryungLee/MultiAct_RELEASE ⭐ 62
Last Checked: 2 months ago
Abstract
We tackle the problem of generating long-term 3D human motion from multiple action labels. The two main previous approaches, action-conditioned and motion-conditioned methods, fall short of solving this problem. Action-conditioned methods generate a motion sequence from a single action label, so they cannot produce long-term motions composed of multiple actions and the transitions between them. Motion-conditioned methods generate future motion from an initial motion; the generated future depends only on the past, so it cannot be controlled by the user's desired actions. We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct accounts for both action and motion conditions with a unified recurrent generation system: it repeatedly takes the previous motion and the next action label, then generates a smooth transition followed by the motion of the given action. As a result, MultiAct produces realistic long-term motion controlled by the given sequence of action labels. Code is available at https://github.com/TaeryungLee/MultiAct_RELEASE.
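
The recurrent generation scheme described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering of that loop in Python, not the repository's actual API: the `model` object, its `generate_step` method, and the tensor shapes are assumptions made for illustration (the real entry point is `generate.py` in the linked repository).

```python
# Minimal sketch of a MultiAct-style recurrent generation loop.
# Hypothetical API: `model.generate_step` and the tensor shapes are
# assumptions; see generate.py in the repository for the real interface.
from typing import List

import torch


def generate_long_term_motion(
    model: torch.nn.Module,      # hypothetical wrapper around the generator
    action_labels: List[int],    # user-specified sequence of action labels
    init_motion: torch.Tensor,   # (T0, D) seed pose sequence
) -> torch.Tensor:
    """Recursively condition each step on the previous motion and the next
    action label, concatenating transition + action motion into one clip."""
    motions = [init_motion]
    prev_motion = init_motion
    for label in action_labels:
        with torch.no_grad():
            # hypothetical call: returns a smooth transition into the new
            # action, followed by the motion performing that action
            transition, action_motion = model.generate_step(prev_motion, label)
        motions.extend([transition, action_motion])
        # only the most recent motion conditions the next step
        prev_motion = action_motion
    return torch.cat(motions, dim=0)  # (T_total, D) long-term motion
```

In this sketch, feeding back only the most recent motion keeps the conditioning window bounded, which is what allows the loop to run for arbitrarily long label sequences.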
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computer Vision