With the goal of developing a new form of broadcasting, we are pursuing the technology of a three-dimensional (3D) television that shows more natural 3D images to viewers without special glasses. We conducted comprehensive research on integral 3D imaging, covering image capture and display technologies, coding methods, the required system parameters, and the development of display devices for 3D images. In parallel, we made progress in our R&D on a new image representation technology using real-space sensing that is applicable to live sports broadcasting. We aim to utilize this technology during the 2020 Tokyo Olympic and Paralympic Games.
In our research on display technologies based on the integral 3D method, we are developing basic technologies to increase the number of pixels and expand the viewing zone. In FY 2016, we developed a direct-view display system that combines four images produced by four HD liquid crystal panels with a new optical setup; the system reduced noise and increased the resolution of displayed images. We also built a prototype system using five high-definition projectors, which increased the resolution to approximately 114,000 pixels and widened the viewing-zone angle (horizontal and vertical) to 40 degrees. Moreover, we fabricated a direct-view display system that uses a high-density 13.3-inch 8K OLED display (664 ppi).
The MPEG Free-Viewpoint Television (MPEG-FTV) ad hoc group started its activities in 2013 to standardize new coding technologies for 3D images, and we have been participating in this group. In FY 2016, we conducted coding experiments applying 3D High Efficiency Video Coding (3D-HEVC), a conventional multiview coding method, to integral 3D images. We successfully compressed integral images with 3D-HEVC by converting the elemental images into multi-viewpoint images, which can be handled by conventional multiview coding.
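The conversion from elemental images to multi-viewpoint images is, at its core, a pixel rearrangement: collecting the pixel at position (u, v) from every elemental image yields the viewpoint image for direction (u, v). A minimal sketch of that rearrangement, assuming a square lens array, square elemental images and a single-channel image (array sizes here are illustrative, not the experiment's actual parameters):

```python
import numpy as np

def elemental_to_multiview(integral, num_elem, pix_per_elem):
    """Rearrange an integral image into multi-viewpoint images.

    integral: 2D array of shape (num_elem*pix_per_elem, num_elem*pix_per_elem),
    i.e. a num_elem x num_elem grid of elemental images.
    Returns an array of shape (pix_per_elem, pix_per_elem, num_elem, num_elem)
    where views[u, v] is the viewpoint image assembled from pixel (u, v)
    of every elemental image.
    """
    h = integral.reshape(num_elem, pix_per_elem, num_elem, pix_per_elem)
    # axes (elem_row, pix_row, elem_col, pix_col)
    #   -> (pix_row, pix_col, elem_row, elem_col)
    return h.transpose(1, 3, 0, 2)
```

Each resulting viewpoint image is an ordinary 2D image that is highly correlated with its neighbouring viewpoints, which is exactly the redundancy that a multiview coder such as 3D-HEVC is designed to exploit.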
In our research on image capture technologies for the integral 3D method, we are studying technologies to obtain spatial information by using multiple cameras and lens arrays to create high-quality 3D images. In FY 2016, we developed an integral 3D model-based capture technology that generates 3D models of an object from multi-viewpoint images captured by multiple robotic cameras and converts them into elemental images. Using seven robotic cameras positioned in a hexagonal arrangement, we demonstrated how this technology generates a 3D point cloud model of a photographed object and converts it into elemental images.
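Converting a 3D model into elemental images amounts to projecting the scene through each elemental lens onto the display plane behind the lens array. The sketch below uses a deliberately simplified pinhole-array model with binary splatting and no occlusion handling; every geometry parameter is illustrative and not taken from the reported system:

```python
import numpy as np

def point_cloud_to_elemental(points, num_lens, lens_pitch, gap, pix_per_lens):
    """Project a 3D point cloud through a pinhole array into elemental images.

    points: (N, 3) array of (x, y, z), z > 0, in front of the lens array.
    Each lens is modeled as a pinhole at z = 0; the display sits at z = -gap.
    Returns a binary image of shape (num_lens*pix_per_lens, num_lens*pix_per_lens).
    """
    side = num_lens * pix_per_lens
    img = np.zeros((side, side))
    half = num_lens * lens_pitch / 2.0
    pix = lens_pitch / pix_per_lens
    for i in range(num_lens):
        for j in range(num_lens):
            # pinhole centre of lens (i, j)
            cy = -half + (i + 0.5) * lens_pitch
            cx = -half + (j + 0.5) * lens_pitch
            for x, y, z in points:
                # ray from the point through the pinhole hits z = -gap at (u, v)
                u = cx - gap * (x - cx) / z
                v = cy - gap * (y - cy) / z
                # keep only hits inside this lens's own elemental image
                if abs(u - cx) < lens_pitch / 2 and abs(v - cy) < lens_pitch / 2:
                    col = j * pix_per_lens + int((u - cx + lens_pitch / 2) / pix)
                    row = i * pix_per_lens + int((v - cy + lens_pitch / 2) / pix)
                    img[row, col] = 1.0
    return img
```

A production renderer would additionally handle depth ordering, colour, and the real lens optics; this sketch only shows how one scene point fans out into many elemental images.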
In our research on the system parameters of the integral 3D method, which we started in FY 2015, we are investigating the relation between the display parameters (such as the focal length of the lens array and the pixel pitch of the display device) and image quality (in terms of the depth-reconstruction range, resolution and viewing zone) through subjective evaluations of simulated integral 3D images. In FY 2016, we developed a nonlinear depth-compressed expression technology that compresses the depth of 3D scenes so that an integral 3D display with a limited depth-reconstruction range can present blur-free, high-quality 3D images; by exploiting the characteristics of human depth perception, the compression avoids inducing a sense of unnaturalness in viewers. The results of the subjective evaluations showed that the unnaturalness remained acceptable even when a 3D space extending beyond 100 m was compressed into a depth range of 1 m.
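The idea of nonlinear depth compression can be illustrated with a logarithmic mapping, which preserves depth differences near the viewer (where human depth perception is most sensitive) while strongly compressing the far field. The specific curve and parameter values below are assumptions for illustration; the report does not specify the mapping actually used:

```python
import math

def compress_depth(z, z_near=1.0, z_far=100.0, d_max=1.0):
    """Nonlinearly map scene depth z (metres from the viewer) into the
    display's reconstructable depth range [0, d_max].

    Logarithmic curve (illustrative choice): near-field depth steps keep
    most of their size, far-field steps are squeezed together.
    """
    z = min(max(z, z_near), z_far)          # clamp to the mapped range
    return d_max * math.log(z / z_near) / math.log(z_far / z_near)
```

With these defaults the 1 m to 100 m scene collapses into 1 m of display depth, and half of the display's depth budget is spent on the first 10 m of the scene.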
In our research on 3D display devices, we have been studying electronic holography devices and beam-steering devices. For electronic holography, we continued to study spatial light modulators driven by spin transfer switching (spin-SLMs). In FY 2016, we prototyped an active-matrix-driven spin-SLM (100×100 pixels) using tunnel magnetoresistance elements with a narrow 2-μm pixel pitch and successfully demonstrated the display of 2D images. With the aim of building an integral 3D display that uses beam-steering devices instead of a lens array, we studied an optical phased array that uses an electro-optic polymer. In FY 2016, we designed and prototyped a multichannel optical phased array and demonstrated one-dimensional beam steering.
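One-dimensional beam steering with an optical phased array follows from applying a linear phase ramp across the channels, φ_n = 2πnd·sin(θ)/λ, which tilts the emitted wavefront by the angle θ. A small sketch of that relation (channel count, pitch and wavelength below are illustrative, not the prototype's parameters):

```python
import math

def steering_phases(num_channels, pitch_um, wavelength_um, angle_deg):
    """Per-channel phase shifts (radians, wrapped to [0, 2*pi)) that steer
    a 1-D optical phased array to the given angle.

    phi_n = 2*pi * n * d * sin(theta) / lambda: a linear phase ramp
    across the channels tilts the combined wavefront by theta.
    """
    k = 2 * math.pi / wavelength_um            # free-space wavenumber
    ramp = k * pitch_um * math.sin(math.radians(angle_deg))
    return [(n * ramp) % (2 * math.pi) for n in range(num_channels)]
```

At 0 degrees all channels share the same phase; larger steering angles or a coarser pitch produce a steeper ramp, which in practice is wrapped modulo 2π before being applied by the electro-optic modulators.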
In our research on multidimensional image representation using real-space sensing, we are developing new image representation techniques such as: a high-speed object tracking system using a near-infrared camera, a method for estimating the head pose of a soccer player, and a studio robot that performs jointly with CG characters; four-dimensional spatial analysis and image representation of sports scenes that combines multi-viewpoint robotic cameras, object tracking technology and 3D information analysis; a 2.5D multi-motion representation technique that enables time-series image presentation; and a naturally enhanced image representation method that generates the trajectory of a flying object, such as a golf ball, in real time. In FY 2016, we verified the effectiveness of each basic system through prototyping and field experiments and identified issues to be resolved before practical use.