No.118 November 2009

Research Presentations

  • Dynamic 3D Models Generated From Multiple Video
    Kensuke HISATOMI, Kimihiro TOMIYAMA, Miwa KATAYAMA and Yuichi IWADATE
    summary
    We propose a method for generating a dynamic 3D model for each frame of video captured by multiple cameras surrounding a human subject, together with a texture-mapping method that conveys the "feel" of the subject's material by mapping camera images onto the surface of the human model. To generate the dynamic 3D models, we apply stereo matching guided by local surface features of the visual hull, which reconstructs accurate models stably. For texture mapping, we blend the three camera images that are closest to the virtual viewpoint and from which the target polygon is visible, then map the blended texture onto the surface of the generated model to improve the quality of arbitrary-viewpoint images (a minimal sketch of this blending step appears after this list).
  • Soccer simulator using dynamic 3D video
    Kimihiro TOMIYAMA and Yuichi IWADATE
    summary
    To make soccer game commentary easier to understand through interactive operation, we developed a soccer simulation system that uses dynamic 3D video and augmented reality technology. The system synthesizes dynamic 3D video with video of a model of the soccer field, and the dynamic 3D video can show players running and passing the ball.
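
The following is a minimal sketch, not the authors' implementation, of the view-dependent texture blending described in the first item: for each polygon, select the three cameras whose viewing directions are closest to the virtual viewpoint among those that can see the polygon, and blend their colours with weights based on directional similarity. The function name, the camera representation (a position plus a colour-sampling callable), the visibility flags, and the cosine-based weighting are all assumptions made for illustration.

# Minimal sketch (assumed, not the authors' code) of view-dependent texture
# blending: for one polygon, choose the three cameras closest in direction to
# the virtual viewpoint among those that see the polygon, and blend their
# colours with weights proportional to directional similarity.

import numpy as np

def blend_polygon_texture(polygon_center, view_dir, cameras, visible):
    """Return a blended RGB colour for one polygon.

    polygon_center : (3,) world-space centre of the polygon
    view_dir       : (3,) unit vector from the polygon towards the virtual viewpoint
    cameras        : list of dicts with 'position' (3,) and 'sample', a callable
                     returning the RGB colour that camera observes at the polygon
    visible        : list of bools, True if the polygon is unoccluded in that camera
    """
    scored = []
    for cam, vis in zip(cameras, visible):
        if not vis:
            continue                                   # camera cannot see this polygon
        cam_dir = np.asarray(cam["position"], dtype=float) - polygon_center
        cam_dir /= np.linalg.norm(cam_dir)
        scored.append((float(np.dot(view_dir, cam_dir)), cam))

    if not scored:
        return np.zeros(3)                             # no camera sees the polygon

    # Keep the three cameras whose directions best match the virtual viewpoint.
    scored.sort(key=lambda s: s[0], reverse=True)
    chosen = scored[:3]

    # Weight by directional similarity (clamped at zero) and normalise.
    weights = np.array([max(score, 0.0) for score, _ in chosen])
    if weights.sum() == 0.0:
        weights = np.full(len(chosen), 1.0 / len(chosen))
    else:
        weights /= weights.sum()

    colours = np.stack([np.asarray(cam["sample"](polygon_center), dtype=float)
                        for _, cam in chosen])
    return (weights[:, None] * colours).sum(axis=0)

In this sketch, weighting by the cosine between the virtual view direction and each camera direction favours cameras aligned with the desired viewpoint, which is one common way to reduce visible seams as the virtual viewpoint moves between cameras.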