[Inter BEE Forum 2007] Video Symposium Report 1
2008.3.21 UP
New developments in digital video content production born from technological innovations in hardware, software and networking
Video symposium outline and discussion themes
Held at the Makuhari International Convention Center, Inter BEE 2007 featured an exhibition of the latest equipment aimed at professionals in the broadcast industry and others involved in content production, together with a symposium on the fundamental technologies behind content production.
According to the forecast for 2007 through 2010 announced by the Japan Electronics and Information Technology Industries Association (JEITA), as the switch to HDTV (high-definition TV) continues and we move towards a new era of fusion between broadcasting and telecommunications, demand for broadcast equipment will center on broadcast cameras, post-production systems and other related equipment.
At this year's Inter BEE Forum, an international symposium was once again held focusing on the three fields of the broadcast industry, video and audio.
In the "Video Session", presentations by experts working in the front line of production in Japan and US, including Hollywood, provided insight into the latest situation of digital video production resulting from technical innovations in hardware, software and networking based on HDTV as cross-media development continues. Coordinator, Mr. Seiji Kunishige of NHK, raised questions concerning the increasing importance of cooperation with the content creators who are the driving force behind technological advances, and problems facing digital technology in general. Each of the participating presenters on the panel provided their views on these problems from their own particular perspective. This is a summary of the panelists' presentations at the symposium. Due to magazine space limitations, parts of the presentations have been omitted or abbreviated. (Shuichi Tamegaya)
[Presenters]
- Seiji Kunishige (Director, General Affairs Division, Engineering Administration Department, NHK)
"New developments in digital video content production born from technological innovations in hardware, software and networking"
- Ryo Sakaguchi (Technical Director, Digital Domain, Inc. of the United States)
"CG video production for movies using fluid simulation"
- Walter Mundt-Blum (Vice President in charge of worldwide sales, NVIDIA Corporation of the United States)
"Evolution of the graphic processing unit (GPU)"
- Toshimasa Yamazaki (Marketing & Engineering Alliance & Technology Program Manager, Cisco Systems, Inc.)
"Production collaboration using broadband networks"
[Moderators]
- Seiji Kunishige (NHK)
- Shuichi Tamegaya (Professor, Joshibi University of Art and Design Graduate School)
Symposium
Just recently, live high-definition images were transmitted from a satellite orbiting the moon. There's no doubt that with technical innovations in video production systems, the scope of content production is continuing to expand. The most important consideration is how best to apply these technological advances to create new content. Technology evolves, but if it doesn't give rise to high-quality content, it never attains its true potential. What exactly do technicians and creators want for content production? Conversely, are creators utilizing the outcomes of technological evolution and incorporating them into their own ideas? The goal of this symposium is to provide a forum for debate between technicians and creators.
(Shuichi Tamegaya)
Summary of Mr. Seiji Kunishige's presentation
I'm sure you saw the tapeless cameras at the equipment exhibition. As someone who has played a key role in promoting this technology since 1983, I'm very happy to see that such systems and equipment are now available.
My presentation is entitled "New developments in digital video content production born from technological innovations in hardware, software and networking". I would like to examine whether the hardware, software and networks that have reaped the benefits of digital technology are really being used to the full, focusing on how best to realize their potential.
As technology continues to advance in the field of digital video content production, at first glance there appears, above all, to be a trade-off between improved quality and improved efficiency. Another question is how, as high-definition TV becomes the mainstream, the technology for generating real-time high-definition images can best be applied at production sites for a real-time medium such as TV. With advances in both technology and media, and changes in the audiovisual environment, the media market continues to expand. High-quality content is expected to be provided efficiently for multiple channels at low cost, but this requires the development of original technologies as well as changes to workflows and, indeed, to the way we think. Alongside original technologies, existing technologies must be used skillfully. To enable the provision of a broad range of services, I think copyright management and the recruitment and education of personnel will become important issues.
Now I'd like to talk about the current situation at production sites from the viewpoint of new developments in content production.
The importance of developing original technologies
I have paid particular attention to the development of new technologies that play a key role in producing content. The first step is to make full use of existing technologies. Creating richly expressive images of things both visible and invisible, from the extremely small to the macro world beyond our sight, as well as dramas and documentaries, has driven the onsite development of original technologies. In my experience, the development of image-processing and data-processing technologies has been particularly important, above all original technologies that strongly support the expressive power of artists and creators.
As a specific example, when a creator decides to make a new animated character, it is quite difficult to achieve with tools available on the market. This leads to the development of menus that creators can use easily and techniques that let them realize the image they had in mind in an animated film, which in turn boosts content quality.
Let's take a look at an example of original technology for TV studio motion-control cameras. Together with a manufacturer working on robot technology for automobile assembly, I was involved in developing a compact system that can be installed in small studios. Applying our video production know-how to the robot's development made real-time onscreen verification of shooting conditions possible. The camera is controlled with a joystick and can move freely between key frames, and when used in conjunction with CG application software, previous frames can be replayed on a computer. Delivering considerable gains in efficiency and quality, the system is ideal for dramas and other productions in small studios.
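The system's internal software is not described in the presentation, but "moving freely between key frames" typically comes down to interpolating camera parameters between stored poses. The sketch below is illustrative only, with entirely hypothetical axis names and a simple linear blend; real motion-control rigs use smoother (e.g. spline) interpolation and servo feedback.

```cuda
/* Hypothetical camera pose and a linear blend between two key frames;
 * not the NHK system's actual code. */
typedef struct {
    float pan, tilt, height, track, zoom;   /* illustrative axes */
} CameraPose;

/* t runs from 0.0 (key frame a) to 1.0 (key frame b). */
CameraPose interpolate_pose(CameraPose a, CameraPose b, float t)
{
    CameraPose p;
    p.pan    = a.pan    + t * (b.pan    - a.pan);
    p.tilt   = a.tilt   + t * (b.tilt   - a.tilt);
    p.height = a.height + t * (b.height - a.height);
    p.track  = a.track  + t * (b.track  - a.track);
    p.zoom   = a.zoom   + t * (b.zoom   - a.zoom);
    return p;
}
```

Because every intermediate pose is computed rather than recorded, the same move can be replayed exactly, which is what makes the onscreen verification and CG compositing described above possible.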
Real-time image generation technologies such as physical simulation and scientific visualization are used in CG video production to create expressive images that viewers can easily understand. While taking advantage of commercially available software, original technology is being developed to deliver the quality that high-definition productions require. This calls for the ability to develop new technologies, including hardware and development tools.
The skillful use of real-time technologies can shorten the production process, but at the same time it is vital that quality improves. Real-time techniques such as previsualization (pre-viz) are becoming mainstream. As part of this trend, if high-quality images could be reproduced and checked in real time on location, the final quality of the images could be improved further.
Conclusion
From now on, content, including broadcast content, will be developed for a wide range of media. The point to remember here is improving content quality. To achieve this, it is vital from a business point of view that the development of new technologies, the use of existing technologies and the use of network technologies are unified and treated as a total system.
I mentioned earlier the evolution of cameras to digital and tapeless models. The dinosaur scene you saw at the start of the presentation, however, was produced with an original system that uses data from many different kinds of cameras for CG compositing. Important issues in the future will be how to interrelate acquired data with images and how to manage them as a single piece of content. Here, networks, tapeless cameras and servers will play a leading role. I think digital production fundamentally requires a unified system that precisely interrelates different kinds of metadata and scenes. Other points to remember are the use of original images and their attached metadata to enable single-source multi-output, and the question of what kinds of services to provide. Using technologies such as XML, I think it's important to consider cross-media development from the moment images are shot on location.
Summary of Mr. Walter Mundt-Blum's presentation
The main themes of this presentation are real-time computing and the shader model. NVIDIA, founded 15 years ago, has its head office in Santa Clara, California. We have over 4,800 employees, and last year our sales reached three billion dollars. As a company that believes in aggressive investment in new technologies, we have invested two billion dollars in R&D over the last two years.

In the field of broadcast graphics we offer two products, the FX5600SDI and the FX4600SDI. Both are graphics boards, and they boast two unique features. One is a high-speed frame buffer of 1.5GB and 768MB respectively. The other is direct video compositing.

Usually, when compositing images, processing is performed after going back through the CPU's system bus. Because this path is extremely long, passing through the system CPU, memory architecture and I/O boards, time is required before the SDI signal can be output. Our new products, however, first capture the video stream, compute and render the 3D content on the GPU, and composite the two for direct SDI output. This eliminates the need to continually return to the system bus or system memory, enabling maximum performance and full image quality for SDI and HD output. This is an extremely important factor for real-time video streams.

Performance this high makes a difference with, for example, weather reports. Video can be composited directly on the board: rather than manipulating the video externally, it is simply captured and combined with 3D content on the spot. Three processes are involved: chroma keying, luma keying and alpha compositing. The great advantage is that video capture, 3D graphics generation, overall compositing and direct SDI output all happen in one place.
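The board's internal pipeline is proprietary and not described in the talk, but the steps named above are standard operations. As a rough sketch only, assuming 8-bit RGBA pixels and entirely hypothetical names, a GPU kernel combining chroma keying with alpha compositing might look like this:

```cuda
#include <cuda_runtime.h>

// Illustrative only: chroma keying + alpha compositing in one pass.
struct Pixel { unsigned char r, g, b, a; };

// One thread per pixel: derive a matte (alpha) from the foreground
// pixel's distance to the key colour, then blend foreground over
// background using the classic compositing equation
//   out = alpha * fg + (1 - alpha) * bg.
__global__ void chroma_key_composite(const Pixel* fg, const Pixel* bg,
                                     Pixel* out, int num_pixels,
                                     float key_r, float key_g, float key_b,
                                     float tolerance)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_pixels) return;

    // Distance of this pixel's colour from the key colour (e.g. green).
    float dr = (float)fg[i].r - key_r;
    float dg = (float)fg[i].g - key_g;
    float db = (float)fg[i].b - key_b;
    float dist = sqrtf(dr * dr + dg * dg + db * db);

    // Near the key colour -> transparent; past the tolerance -> opaque.
    float alpha = fminf(dist / tolerance, 1.0f);

    out[i].r = (unsigned char)(alpha * fg[i].r + (1.0f - alpha) * bg[i].r);
    out[i].g = (unsigned char)(alpha * fg[i].g + (1.0f - alpha) * bg[i].g);
    out[i].b = (unsigned char)(alpha * fg[i].b + (1.0f - alpha) * bg[i].b);
    out[i].a = 255;
}
```

Launched with one thread per pixel, e.g. `chroma_key_composite<<<(n + 255) / 256, 256>>>(...)`, every pixel is processed independently, which is precisely the kind of workload that stays fast when it never leaves the graphics board.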
Shader technology is not only used in broadcasting. In fact, it's used by many industries looking for fast processing. These days, most of the video you see is produced using offline rendering, which requires a great deal of CPU and GPU power, as well as time. To produce such video in real time while speeding up the process, quality must be maintained. To satisfy HDTV and 4K digital cinema quality requirements, anti-aliasing is also necessary. This is in the interest of all industries. With shaders, the boundary between live video and 3D content must be rendered invisible. This compositing technology makes possible the common goals of near-real-time processing and a dramatic reduction in cost.
Making this possible requires new hardware architecture, that is to say, a unified architecture. Without it, the realistic rendering of complex shader programs would be difficult. For that, you need a GPU, which, as well as creating beautiful images, can also handle data processing. The GPU is a massively parallel processing device.
Computing is the biggest consideration at our company, and the GPU is a device for large-volume parallel computing. Looking just at its construction, the GPU is basically built to drive pixels and display graphics, but it is not only for graphics; it can also be used for data processing. The new software platform CUDA enables programming in the C language. Since CUDA source code is very similar to C, programmers can express common mathematical data processing in C. Performing such data processing on a GPU is very effective for large volumes of parallel data. For example, for mechanical and seismic calculations, fluid calculations for smoke and clouds, and ray tracing for high-quality computer graphics, a GPU delivers 240 times the performance of a CPU. This speed is possible with a GPU alone, and the difference becomes more noticeable as parallel calculations grow more complex.
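As an illustration of the programming model described here (a sketch, not code from the talk), the kernel below performs one explicit diffusion step on a 2D grid, a basic building block of the smoke and cloud calculations mentioned above. All names and the constant `k` (diffusion rate times time step) are hypothetical.

```cuda
// Minimal CUDA sketch: one diffusion step on a w x h grid, the kind of
// data-parallel stencil found in smoke/fluid solvers.
__global__ void diffuse_step(const float* src, float* dst,
                             int w, int h, float k)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;  // skip border

    int i = y * w + x;
    // Discrete Laplacian from the four neighbouring cells; every cell
    // is updated independently, so the GPU runs them all in parallel.
    float lap = src[i - 1] + src[i + 1] + src[i - w] + src[i + w]
              - 4.0f * src[i];
    dst[i] = src[i] + k * lap;
}
```

On a CPU this would be a doubly nested sequential loop; on the GPU it is launched as, say, `diffuse_step<<<dim3((w + 15) / 16, (h + 15) / 16), dim3(16, 16)>>>(src, dst, w, h, k)`, one thread per cell, which is where the large speed-ups cited above come from.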
My final message for the broadcast and content production industries is that I think both games and broadcast content will become increasingly interactive. As a technology for real-time processing, video streaming combined with simulation technology in a cluster configuration will become the most widely used approach.
NVIDIA will announce its next-generation products in 2008.