Frequently Asked Questions

What is Realistic Media?

Since the appearance of 3D and UHD content, realistic media has been attracting growing public attention.

Realistic media transmits information across all the senses, so that immersion is maximized and user satisfaction increases.

It extends visual information that has so far remained 2D into 3D, and transmits information that can be seen, heard, and felt through the five senses. As a next-generation medium aiming at the closest possible reproduction of the real world, it offers far better expressiveness, clarity, and sense of reality than current media, and it can be used not only in entertainment fields such as broadcasting, film, and games, but also in computer graphics, displays, and industrial applications.

What is an HMD?

An HMD (Head Mounted Display) is a display device mounted on the user's head that presents images directly in front of the eyes.

Currently, the Samsung Gear VR and Google Cardboard are the two representative products.

The Gear VR is an HMD, while the Cardboard type is called a Dive.

  • An HMD provides an optimal VR environment by housing its own sensors and display panel inside the unit.
  • A Dive uses the smartphone's panel by mounting the phone in a case that contains lenses.


What is VR?
This is similar to the description of realistic media above.

Virtual Reality (VR), or Virtual Environment (VE), refers to computer-based virtual worlds in which a person can see and move around as if in a real situation.

Recently it has surged in popularity through devices such as the Gear VR, and it is emerging as a realistic medium that reproduces actual spaces with digital techniques over network platforms and display devices.

It can be predicted that 'ER (Experienced Reality)', which combines Virtual Reality with Augmented Reality, will become part of daily life. Put on goggle-like equipment, and the experienced reality appears right before your eyes, ready to be experienced.

If you work in a content field (film, games, broadcasting, media, etc.), it is worth seriously considering Virtual Reality and Augmented Reality, in other words Experienced Reality.


How should VR content be prepared?
 First of all, it is divided according to whether it is CG or live action.

A CG model is built in tools such as 3ds Max or Maya, colored and textured, passed through the rendering process, and then assembled in Unity or Unreal Engine.

Live action is divided into video and still images, and any device that can take photos can be used.

  • A panoramic head that shoots in line with panorama theory is essential to minimize stitching errors and obtain the optimal output.

The methods for shooting video and stills differ, but since the underlying principle (the No-Parallax Point) is the same, you only need to strengthen the basics.


What is the difference between 3D and S3D?


A spherical image merely turns an ordinary image into a 360-degree space; it is not stereoscopic.

'3D' used to mean computer animations and game graphics, but nowadays it generally refers to stereoscopic images.

In the past it usually meant CG made with Maya and 3ds Max.

To distinguish stereoscopic images from CG, SIGGRAPH proposed writing 'S3D' (Stereoscopic 3D) for the stereoscopic case.


Objects driven by scripts can be inserted into the sphere (video or image); this, too, is 3D.

If you're interested in S3D VR, contact us and we can show it to you, since it is R-rated.

S3D is a compound of 'stereo' and 'scopic' (seeing). It is a technique that creates the perception of 3D depth by presenting to each eye one of a pair of 2D images with binocular disparity, exploiting the difference between the two eyes' viewpoints.

In other words, the perception of 3D arises from the disparity between the two eyes.

This is called stereopsis.

It means that the image the left eye sees differs from the image the right eye sees.

For instance, fold a sheet of paper in half, then open it slightly.

You may then be able to see one face with your left eye but not with your right.

The images each eye sees differ in perspective and vanishing point.

These two images are transmitted to your brain and are perceived as one stereoscopic image.
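As a rough numerical illustration of binocular disparity, the sketch below uses the standard relation d = f·B/Z. The 65 mm baseline is a typical human interpupillary distance; the focal length is an illustrative stand-in, not a measured value:

```python
baseline_mm = 65.0   # typical distance between human eyes (illustrative)
focal_mm = 17.0      # stand-in focal length, roughly that of the eye

def disparity_mm(depth_mm):
    """Binocular disparity of a point at the given depth: d = f * B / Z."""
    return focal_mm * baseline_mm / depth_mm

near = disparity_mm(500.0)    # object half a metre away
far = disparity_mm(5000.0)    # object five metres away
print(near, far)              # the nearer object shifts ten times more
```

Because disparity falls off with depth, nearby objects separate strongly between the two views while distant ones barely move, which is exactly the cue the brain turns into a sense of depth.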

You can perceive a stereoscopic image with the naked eye. The test is to stare at a simple image (a circle or a triangle) after splitting it into a left and right pair placed side by side.


At that moment, you can see an extra image appear in the center, separate from the images on either side.

Just shift slightly until the image comes into focus; it is easier if a sheet of paper is placed along the boundary.

Just remember that CG content is labeled 3D, and stereoscopic footage is labeled S3D.


How do you find the No-Parallax Point?

A camera lens combines several convex and concave lenses to correct chromatic aberration, distortion, and so on, so that the group as a whole forms one overall convex lens.

You can think of the entire lens group as one thick convex lens.

Because the lens group behaves as a single convex lens, the NPP must be located at a certain point somewhere within the lens group.


In Korea it is often called the nodal point, or described as the point that light passes through, but that is incorrect; the proper name is the No-Parallax Point.

  • First, close your left eye, point at a distant subject, and note its position.
  • Then close your right eye, point at the subject again, and compare it with the position you just noted.
  • You will see that the subject appears in a different position to each eye.

Parallax occurs in proportion to that difference in position, and rotator equipment must be used so that the camera rotates about one and the same point.
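The effect of the pivot point can be checked numerically. The following is a minimal sketch (plain Python/NumPy, with made-up distances) of a pinhole camera rotated once about its no-parallax point and once about a pivot 10 cm behind it; only the offset pivot makes a near and a far point drift apart in the image:

```python
import numpy as np

def image_x(cam_pos, yaw, point):
    """Horizontal image coordinate (pinhole camera, focal length 1) of a
    world point (x, z) seen from a camera at cam_pos with heading yaw."""
    dx, dz = point[0] - cam_pos[0], point[1] - cam_pos[1]
    c, s = np.cos(yaw), np.sin(yaw)
    xc = c * dx + s * dz      # offset rotated into the camera frame
    zc = -s * dx + c * dz
    return xc / zc

def separation(pivot, yaw, near=(0.0, 1.0), far=(0.0, 5.0)):
    """Image-plane separation of a near and a far point after rotating a
    camera (initially at the origin, heading 0) by yaw about `pivot`."""
    c, s = np.cos(yaw), np.sin(yaw)
    px, pz = -pivot[0], -pivot[1]          # camera position minus pivot
    cam = (pivot[0] + c * px - s * pz,     # rig rotated about the pivot
           pivot[1] + s * px + c * pz)
    return image_x(cam, yaw, near) - image_x(cam, yaw, far)

# Rotating about the no-parallax point itself: near and far stay aligned.
sep_npp = separation(pivot=(0.0, 0.0), yaw=0.3)
# Rotating about a pivot 0.1 m behind it: the points separate (parallax).
sep_off = separation(pivot=(0.0, -0.1), yaw=0.3)
print(sep_npp, sep_off)
```

This mirrors the two-eye test above: each eye is a camera displaced from the pivot, so the subject lands in a different place for each.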


What equipment is there for live-action shooting?

First, it is divided into shooting stills and shooting video.

For stills, the options are tripod, high pole, and aerial photography; the visible angle of view varies with each method, and each has pros and cons in post-production.

For video, the options are full video, partial video, and time-lapse. A one-shot rig is essential for full video and time-lapse, and shooting becomes possible only after buying a ready-made rig or having one custom built.


What are the precautions when shooting?

A spherical panorama requires shooting every direction (front, back, left, right, up, and down) without gaps and compositing the shots into one output.

The biggest problem among these is exposure.


Since every direction is shot, you must deal with backlight, oblique light, and front light, and set a proper exposure.

First, set the aperture to f/8 or narrower and ISO to 200 or lower in Av (Aperture Priority) mode, look around in every direction, then switch to manual (M) mode and shoot at the metered value of a midpoint where bright and dark areas mix.

Even when shooting in M mode, chromatic aberration and exposure shifts still occur because of the lens's characteristics.


For those parts, the software smooths the boundaries between images during stitching using the multi-resolution spline technique.

If you shoot without locking the exposure, the stripes at the image boundaries cannot be removed even with multi-resolution splines, so always shoot in manual (M) mode.
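The idea behind multi-resolution splines can be sketched in a few lines. The toy example below (plain Python/NumPy; 1-D signals stand in for image rows, and the constant exposure offset is made up) blends two mismatched "exposures" so that low frequencies transition over a wide region while fine detail keeps a sharp seam:

```python
import numpy as np

def down(x):
    """Blur with a small kernel, then subsample by 2."""
    x = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return x[::2]

def up(x, n):
    """Upsample back to length n by linear interpolation."""
    return np.interp(np.linspace(0, len(x) - 1, n), np.arange(len(x)), x)

def pyramid_blend(a, b, mask, levels=4):
    """Multi-resolution spline blend of 1-D signals a and b.

    Each Laplacian (detail) level is mixed with a progressively blurrier
    version of the mask, so low frequencies transition over a wide region
    while fine detail keeps a sharp seam."""
    ga, gb, gm = [a], [b], [mask]
    for _ in range(levels):
        ga.append(down(ga[-1]))
        gb.append(down(gb[-1]))
        gm.append(down(gm[-1]))
    out = gm[-1] * ga[-1] + (1 - gm[-1]) * gb[-1]  # blend coarsest level
    for i in range(levels - 1, -1, -1):
        la = ga[i] - up(ga[i + 1], len(ga[i]))     # detail of a at level i
        lb = gb[i] - up(gb[i + 1], len(gb[i]))     # detail of b at level i
        out = up(out, len(ga[i])) + gm[i] * la + (1 - gm[i]) * lb
    return out

# Two "exposures" of the same scene, differing by a constant offset.
n = 256
scene = np.sin(np.linspace(0, 6 * np.pi, n))
a, b = scene + 0.5, scene - 0.5                  # bright and dark frames
mask = (np.arange(n) < n // 2).astype(float)     # hard seam in the middle

hard = mask * a + (1 - mask) * b                 # naive cut: visible stripe
blended = pyramid_blend(a, b, mask)              # spline spreads the seam

jump_hard = abs(hard[n // 2] - hard[n // 2 - 1])
jump_blend = abs(blended[n // 2] - blended[n // 2 - 1])
print(jump_hard, jump_blend)
```

The step at the seam is far smaller after the pyramid blend, but note the exposure difference is only hidden, not removed, which is why the text above insists on locking exposure in M mode.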


What is the principle of the stitching program?

The panorama stitching technique creates one combined output from multiple images acquired from one or more cameras.

Virtually all panorama stitching traces back to PanoTools, developed by Helmut Dersch, a professor of physics and mathematics in Germany.



There are several stitching tools, but the most commonly used are PTGui and Autopano Giga; among Korean users, PTGui, developed in the Netherlands, is the most popular.

For video panoramas, two programs must be used together rather than one. Only KOLOR offers both, and there will be no difficulty in the work once the functions of Autopano Video and Autopano Giga are fully learned.

Among multiple images shot at the same time, adjacent images must at least share common areas. When there is no spatial or geometric information about the camera positions, the geometric information required for stitching can be derived only if adjacent outputs overlap.

Feature points can be extracted from the two images covering a common area. The definition of a feature point varies with the algorithm, but in general a corner in the image serves as one. Once feature points are extracted, corresponding pairs are found by matching them between the two images over the common area.

If there are at least 4 corresponding points, the homography between the images can be computed; the resulting homography is applied to one input image, and that image is warped according to which output is the reference.

The warped image now lies in the same pixel coordinates as the reference output, and the two outputs are aligned into one.
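The "4 corresponding points" claim can be illustrated with the classic Direct Linear Transform. The sketch below (plain Python/NumPy; the point coordinates and the sample homography are made up for illustration) recovers a 3x3 homography exactly from 4 point pairs:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT).

    Each correspondence contributes two linear equations, so 4 point
    pairs (no 3 collinear) determine H up to scale -- which is why at
    least 4 matched feature points are needed for stitching."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the null vector of A: last row of V^T from the SVD.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]              # normalize so H[2,2] == 1

def apply_h(H, pt):
    """Map a 2-D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# A known homography (rotation + translation + mild perspective)...
H_true = np.array([[0.9, -0.2, 30.0],
                   [0.2,  0.9, 10.0],
                   [1e-4, 0.0,  1.0]])
src = [(0, 0), (400, 0), (400, 300), (0, 300)]
dst = [apply_h(H_true, p) for p in src]

# ...recovered from the 4 correspondences alone.
H_est = homography_from_points(src, dst)
err = max(np.linalg.norm(apply_h(H_est, p) - q) for p, q in zip(src, dst))
print(err)
```

Real stitchers feed many noisy feature matches through a robust estimator (e.g. RANSAC) instead of exactly 4 clean pairs, but the underlying linear system is the same.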



Photoshop has long been used to fix stitching errors and retouch panorama images. It has very powerful functions for correcting and compositing distorted areas in a full spherical image, and it is considered a must-know tool in panorama projects.

PanoTools has not been updated since 2009 because of an IP issue.


What is a Viewer?

A panoramic viewer is required to show VR video clips or images. Just as you need a video player such as Windows Media Player or GOM Player to watch a movie, you need a panoramic viewer to watch VR.

There are several types of viewers, such as desktop-based, web-based, and app-based, each using different players (e.g., DevalVR player for desktop, krpano viewer for web).


How long does a project take, and what does it cost?

It depends entirely on the purpose and required functions of the film.

You may want to consider the following before discussing details:

· Image or video

· Land or sky

· Web-based or app-based