3D Technology: Types and Uses
CHAPTER 1: INTRODUCTION
This report will focus on how different 3D technologies work. It will cover the entire workflow: recording the action, encoding the footage, playing back the media via a cinema projector or television and, finally, how the audience views the 3D film or video, whether through specially designed glasses or an autostereoscopic television.
At present the most popular way to view 3D media is with the use of specialised glasses, the three main types being active shutter glasses, passive polarised glasses and colour-separation-based glasses.
Wearing glasses to watch a movie is often cited as a drawback of 3D. There is, however, a technology called autostereoscopy that allows 3D to be viewed on a screen without any additional glasses; this will also be examined.
The health impacts that result from watching 3D will also be examined, along with factors that will prevent a person from being able to correctly view 3D images.
If 3D films become the norm there will be impacts on the entire industry, from studios and cinemas to smaller production companies and independent producers, and these will be examined.
A good place to start this report is to examine how two of the highest profile media companies around at present are currently viewing 3D technology.
Phil McNally, stereoscopic supervisor at Disney-3D and DreamWorks, was quoted as saying:
'...consider that all technical progress in the cinema industry brought us closer to the ultimate entertainment experience: the dream. We dream in colour, with sound, in an incoherent world with no time reference. The cinema offers us a chance to dream awake for an hour. And because we dream in 3D, we ultimately want the cinema to be a 3D experience not a flat one.'(Mendiburu, 2009)
In the BBC Research White Paper: The Challenges of Three-Dimensional Television, 3D technology is referred to as
'...a continuing long-term evolution of television standards towards a means of recording, transmitting and displaying images that are indistinguishable from reality'(Armstrong, Salmon, & Jolly, 2009)
It is clear from both of these high-profile sources that the industry is taking the evolution of 3D very seriously; as a result, this is a topic that is not only very interesting but will be at the cutting edge of technological advances for the next few years.
This report will be covering the following things:
- What does the term 3D mean with reference to film and video
- A look at the history of 3D in film
- How does 3D technology work
- The implications of 3D on the film business and on cinemas
- The methods used to create the media and also the ways in which the 3D image is recreated for the viewer
The reason I have chosen this topic for my project is that I am very interested in the new media field, and 3D video, when coupled with high definition film and video, is a field that is growing rapidly. On 2 April 2009, Sky broadcast the UK's first live event in the 3D TV format, a concert by the pop group Keane, which was sent via the company's satellite network using polarisation technology.
Traditionally we view films and television in two dimensions; in essence, we view the media as a flat image. In real life we see everything in three dimensions because each eye receives a slightly different image; the brain combines these, allowing us to judge depth and perceive a 3D image. (This will be explained further in Chapter 2.)
There is a high level of industrial relevance to this topic, as 3D technology coupled with a high definition digital signal is at the cutting edge of mainstream digital media consumption. Further evidence of this is that the sports broadcaster ESPN will be launching its new TV channel, ESPN 3D, in North America in time for this year's Football World Cup.
In January 2009 the BBC produced a Research White Paper on this subject, entitled The Challenges of Three-Dimensional Television, in which they predict that over the next couple of years 3D will start to be introduced in the same way that the HD (High Definition) digital television signal is currently being phased in, with pay-per-view movies and sports being the first to take advantage of it.
Sky have announced that their existing Sky+HD boxes will be able to receive the 3D signal, so customers will not even need to update their equipment to receive the 3D channel that Sky will start broadcasting later this year.
On Sunday January 31st 2010, Sky broadcast a live Premier League football match between Arsenal and Manchester United, for the first time in 3D, to selected pubs across the country. Sky equipped the selected pubs with LG's new 47-inch LD920 3D TVs. These televisions use passive glasses, similar to the ones used in cinemas, as opposed to the more expensive active glasses, which are also an option. (The differences between active and passive technologies will be explained in Chapter 8.)
It is also worth noting that at the 2010 Golden Globe awards, on accepting the award for Best Picture for the 3D box-office hit Avatar, the Canadian director James Cameron pronounced 3D 'the future'.
At the time of writing this report (27/01/2010) the 3D film Avatar has just overtaken Titanic (also a James Cameron film) to become the highest-grossing movie of all time, with worldwide takings of $1.859 billion. This is being attributed to the film's outstanding takings in the 3D version of its release; in America 80% of the film's box office revenue has come from the 3D version.
In an industry where 'money talks', these figures will surely lead to a dramatic increase in the production of 3D films, and as a result Avatar could potentially be one of the most influential films of all time.
After completing this dissertation I hope to have a wide knowledge base on the subject, which will hopefully appeal to companies that I approach about employment once I have graduated.
In the summer of 2010, when I will be looking for jobs, I believe that many production companies will have some knowledge of 3D technology and will be aware that in the near future they may have to consider adopting it, much as many are already adopting, or soon will adopt, HD into their workflows.
In order to ensure that I complete this project to a high standard it is important that I gain a complete understanding of the topic and study a variety of different sources when compiling my research.
3D media itself is not a new concept, so there is a wide range of books and articles on the theory of 3D, stereoscopy and anaglyphs.
However in recent years there has been a resurgence in 3D with relation to film and TV. This is due mainly to digital video and film production making it easier and cheaper to create and manage the two channels needed for three-dimensional video production.
It has proved more difficult to find books and papers on this most recent resurgence of 3D because it is still happening and evolving all the time. I have read various research white papers on the subject, which are cited in the Bibliography, and I have also used websites and blogs along with some recently published books; one of the problems in such a fast-moving technological field as 3D, though, is that these books quickly become outdated.
CHAPTER 2: HUMAN VISION
In the real world we see in three dimensions, as opposed to the two dimensions that we have become accustomed to when watching TV or at the cinema. Human vision appears in three dimensions because we normally have two eyes that both focus on the same object; in the brain these two images are fused into one, and from the differences between them we can work out depth. This process is called stereopsis. All of these calculations happen in the brain without the person ever noticing, so we see the world in three dimensions very naturally.
The reason that we see in 3D is stereoscopic depth perception. Various complex calculations go on in our brains, and these, coupled with real experience, allow the brain to work out depth. Without this it would be impossible to tell whether something was very small or just very far away.
As humans, we have learnt to judge depth even from a single viewpoint. This is why a person with only one eye can still manage to do most things that a person with two eyes can do, and why, when watching a 2D film, you can still get a good sense of depth.
The term for depth cues based on only one viewpoint is monoscopic depth cues.
One of the most important of these is our own experience; it relates to perspective and the relative size of objects. In simple terms, we have become accustomed to objects being certain sizes. For example, we expect buildings to be very big, humans to be smaller and insects to be smaller still. This means that if we can see all three of these objects next to each other and they appear to be the same size, then the insect must be much closer than the person, and both the insect and the person must be much closer than the building (see figure 1).
The perspective depth cue (shown in figure 1) was backed up by an experiment carried out by Ittelson in 1951. He had volunteers look through a peephole at playing cards; the cards were the only thing they could see, so no other types of depth cue were available. 'There were actually three different-sized playing cards (normal size, half-size, and double size), and they were presented one at a time at a distance of 2.3 metres away. The half-sized playing card was judged to be 4.6 metres away from the observer, whereas the double-sized card was thought to be 1.3 metres away. Thus, familiar size had a large effect on distance judgement' (Eysenck, 2002).
Another very effective monoscopic depth cue is referred to as occlusion or interposition. This is where one object overlaps another. If a person is standing behind a tree then you will be able to see all of the tree but only part of the person, which tells us that the tree is nearer to us than the person.
One of the most important single-viewpoint depth cues is called motion parallax. It works on the basis that if a person moves their head, and therefore their eyes, then objects nearer to them, whilst not physically moving, will appear to move more than objects in the distance. This is the method astronomers use to measure the distances of stars and planets. It is an extremely important method of judging depth and is used extensively in 3D filmmaking.
In filmmaking, lighting is often described as one of the key elements in giving the picture 'depth', and this is because it is a monoscopic depth cue. The main light source in real life has, for millennia, been the sun, and humans have worked out how to judge depth from the shadows an object casts. In 2D films, shadows are often used to convey depth; casting them across actors' faces allows viewers to see the recesses and expressions being portrayed.
So far all of the methods described for determining depth have been monoscopic; these work independently within each eye. If they were the only methods for determining depth there would be no need for 3D films, as they could all be recreated using a single camera lens. This is not the case, however: many of the more advanced methods human vision uses to judge depth need both eyes, and these are called stereoscopic depth cues.
Many stereoscopic depth cues are based on the feedback the brain receives when the muscles in the eye are manipulated to concentrate vision on a particular point.
One of the main stereoscopic depth cues is called convergence; this refers to the way the eyes rotate in order to focus on an object (see figure 2).
If the focus is on a near object, the eyes rotate about their vertical axes and turn inwards, converging at a large angle; if the focus is on a distant object, the lines of sight are almost parallel and the angle of convergence is small.
It is far less strain on the muscles of the eye to keep this convergence relaxed and look at objects far away; by comparison, looking at a very close object for any length of time causes the eye muscles to ache. This is a very important factor to consider when creating 3D films: no matter how good a film is, if it hurts the audience it will not go down well.
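The relationship between viewing distance and convergence angle can be sketched with a little trigonometry. The Python snippet below is illustrative only: the 65 mm interocular distance is a commonly quoted adult average rather than a figure from this report, and the function name is my own.

```python
import math

def convergence_angle_deg(distance_m, interocular_m=0.065):
    """Angle in degrees between the two eyes' lines of sight when
    both converge on a point straight ahead at the given distance."""
    # Each eye sits half the interocular distance from the midline,
    # so each line of sight makes atan(half_separation / distance)
    # with straight-ahead; the convergence angle is twice that.
    return math.degrees(2 * math.atan((interocular_m / 2) / distance_m))

for d in [0.3, 2.0, 20.0, 100.0]:
    print(f"{d:>6.1f} m -> {convergence_angle_deg(d):.3f} degrees")
```

The output shows why near objects are hard work for the eye muscles: at 30 cm the eyes converge at over 12 degrees, while beyond 100 m the angle is a tiny fraction of a degree, which also foreshadows why stereoscopic depth cues fade at long range.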
A second stereoscopic depth cue is called accommodation; this is the way our eyes change focus when we look at objects at different distances, and it is very closely linked with convergence.
Usually when we look at an object very close up, our eyes rotate to point towards the object (convergence), and at the same time they change focus (accommodation). Using the ciliary body muscles in the eye, the lens changes shape to alter its focal length, in much the same way that a camera lens refocuses.
In everyday life convergence and accommodation usually happen in parallel. The fact that we can, if we wish, choose to converge our eyes without changing focus is what makes 3D films possible. When you are seated in the cinema, all of the action is projected onto the screen in front of you, so this is where your eyes need to focus. With 2D films the screen is also where your eyes need to converge, but with 3D films this is not the case. When watching a 3D film the focus never moves from the screen, otherwise the whole picture would blur, yet objects appear to be in front of and behind the screen, so your eyes must change their convergence to look at these objects without altering their focus from the screen.
It has been suggested that this independence of accommodation and convergence is the reason for eye strain when watching a 3D picture as your eyes are doing something that they are not in the habit of doing (see chapter 12: Is 3D bad for you).
It is also worth noting that while our monoscopic depth cues work at almost any range, this is not the case with stereoscopic depth cues. As objects become more distant, the images received by the two eyes become indistinguishable, so the brain has no difference from which to calculate depth.
'The limit occurs in the 100 to 200-yard range, as our discernment asymptotically tends to zero. In a theatre, we will hit the same limitation, and this will define the "depth resolution" and the "depth range" of the screen' (Mendiburu, 2009).
This means that when producing a 3D film you have to be aware that the usable depth range is not infinite: beyond roughly 100-200 yards, stereoscopic depth is lost.
CHAPTER 3: Early Stereoscopic History (1838 - 1920)
Three-dimensional films are not a new phenomenon. 'Charles Wheatstone discovered, in 1838, that the mechanism responsible for human depth perception is the distance separating the retinas of our eyes.' (Autodesk, 2008)
In a 12,000-word research paper that Wheatstone presented to the Royal Society of Great Britain, he described 'the stereoscope and claimed as a new fact in his theory of vision the observation that two different pictures are projected on the retinas of the eyes when a single object is seen' (Zone, 2007).
Included in the paper was a range of line drawings presented as stereoscopic pairs; these were designed to be viewed in 3D using Wheatstone's invention, the stereoscope.
Wheatstone was not the first person to consider the possibility of each eye receiving a separate view: 'In the third century B.C., Euclid in his treatise on Optics observed that the left and right eyes see slightly different views of a sphere' (Zone, 2007). However, Wheatstone was the first to create a device able to re-create 3D images.
Between 1835 and 1839 photography was being developed thanks to the work of William Fox Talbot, Nicéphore Niépce and Louis Daguerre.
Once Wheatstone became aware of the photographic pictures that were available he requested some stereoscopic photographs to be made for him. Wheatstone observed that 'it has been found advantageous to employ, simultaneously, two cameras fixed at the proper angular positions'(Zone, 2007).
This was the start of stereoscopic photography.
Between 1850 and 1860 various people were working to combine stereoscopic photography with machines that would display a series of images in rapid succession, using persistence of vision to create a moving 3D image. These were the first glimpses of 3D motion.
In 1891 a French scientist, Louis Ducos du Hauron, patented the anaglyph, a method of separating an image into two differently coloured channels. By wearing glasses whose lenses carry the complementary colour filters, each eye is shown only the image intended for it, and the brain fuses the pair back into one image, in 3D.
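The principle of the anaglyph can be sketched per pixel. Note the assumptions here: du Hauron's era used colour pairs such as red/green, whereas the red/cyan split below is the modern convention, and the `make_anaglyph` function and sample pixel values are purely illustrative.

```python
def make_anaglyph(left_px, right_px):
    """Red/cyan anaglyph for one pixel position: take the red channel
    from the left-eye view and the green and blue channels from the
    right-eye view. A red filter then passes only the left image to
    the left eye, and a cyan filter only the right image to the right."""
    left_red, _, _ = left_px
    _, right_green, right_blue = right_px
    return (left_red, right_green, right_blue)

# Tiny stand-ins for corresponding pixels of a stereo pair (R, G, B).
left_pixel = (200, 50, 50)    # left-eye view
right_pixel = (50, 180, 220)  # right-eye view
print(make_anaglyph(left_pixel, right_pixel))  # (200, 180, 220)
```

Applying this to every pixel of a stereo pair produces a single combined frame, which is why the anaglyph method loses colour information: each eye only ever sees part of the spectrum.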
Another method used at this time to create 3D was proposed by John Anderton, also in 1891. Anderton's system was to use polarisation techniques to split the image into two separate light paths and then employ a similar polarisation technique to divert a separate image to each eye on viewing.
One of the main advantages of polarisation over anaglyphs is that no colour information is lost, because both images retain their original colour spectrum. Polarised systems do, however, lose luminance. A silver screen is usually necessary, and it serves two purposes: firstly, the specially designed surface maintains the separate polarisation required for each image; secondly, it reflects more light than a conventional screen, which compensates for the loss of luminance.
During 1896 and 1897, 2D motion pictures started to take off, and by 1910, after a great deal of initial experimentation, the creative conventions of film that we recognise today, such as cuts and framing, had started to become evident.
In 1920 Jenkins, an inventor who worked hard to create a method for recreating stereoscopic motion pictures, was quoted as saying, 'Stereoscopic motion pictures have been the subject of considerable thought and have been attained in several ways...but never yet have they been accomplished in a practical way. By practical, I mean, for example without some device to wear over the eyes of the observer.' (Zone, 2007)
It is worth noting that this problem of finding a 'practical' method of viewing 3D has still to a large extent not been solved.
Chapter 4: Early 3D Feature Films
(1922 - 1950)
4.1 The first 3D feature film
The first 3D feature film, The Power of Love, was released in 1922 and exhibited at the Ambassador Hotel Theatre in Los Angeles. 'Popular Mechanics magazine described how the characters in the film "did not appear flat on the screen, but seemed to be moving about in locations which had depth exactly like the real spots where the pictures were taken"' (Zone, 2007).
The Power of Love was exhibited through red/green glasses using a dual-strip anaglyph method of 3D projection. (Anaglyphs are explained in chapter 8.3.)
The film was shot on a custom-made camera invented by Harry K. Fairall, who also directed the film. 'The camera incorporated two films in one camera body' (Symmes, 2006).
The Power of Love was thus the first film to be viewed using anaglyph glasses, and also the first to use dual-strip projection.
Also in 1922, William Van Doren Kelley designed his own camera rig based on the Prizma colour system, which he had invented in 1913. The Prizma 3D colour method worked by capturing two different colour channels through filters placed over the lenses; in this way he made his own version of the red/blue anaglyphic print. Kelley's 'Movies of the Future' was shown at the Rivoli Theatre in New York City.
4.2 The first active-shutter 3D film
A year later, in 1923, the first alternate-frame 3D projection system was unveiled. It used a technology called 'Teleview', which blocked the left and right eyes alternately, in sync with the projector, thereby allowing each eye to see its own separate image.
Teleview was not an original idea, but up to this point no one had been able to get the theory to actually work in a practical way that would allow for films to be viewed in a cinema. This is where Laurens Hammond comes in.
Hammond designed a system in which two standard projectors were hooked up to an AC generator running at 60 Hz; adjusting the AC frequency would increase or decrease the speed of the projectors.
'The left film was in the left projector and right film in the right. The projectors were in frame sync, but the shutters were out of phase sync.'(Symmes, 2006) This meant that the left image was shown, then the right image.
The viewing device was attached to the seats in the theatre. 'It was mounted on a flexible neck, similar to some adjustable "gooseneck" desk lamps. You twisted it around and centred it in front of your face, kind of like a mask floating just in front of your face.' (Symmes, 2006)
The viewing device consisted of a circular mask with a view piece for each eye plus a small motor that moved a shutter across in front of either the left or right eye piece depending on the cycle of current running through it. All of the viewing devices were powered by the same AC generator as the projectors meaning that they were all exactly in sync.
One of the major problems Hammond had to overcome was that, at the time, film was displayed at 16 frames per second. This method of viewing effectively halves the frame rate, and 8 frames per second resulted in a very noticeable flicker.
To overcome this, Hammond cut each frame up into three flashes, so the new 'sequence was: 1L-1R-1L-1R-1L-1R 2L-2R-2L-2R-2L-2R and so on. Three alternate flashes per eye on the screen.' (Symmes, 2006)
This method of separating and repeating frames effectively increased the flash rate, thereby eradicating the flicker.
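Hammond's flash scheme can be sketched as a short generator. This is a minimal illustration of the sequence quoted above; the function name and the arithmetic at the end are my own, assuming the 16 fps film rate mentioned earlier.

```python
def teleview_sequence(n_frames, flashes_per_eye=3):
    """Expand film frames into Hammond's alternating flash order:
    each frame is flashed left, right, left, right... so that every
    frame appears `flashes_per_eye` times to each eye."""
    seq = []
    for frame in range(1, n_frames + 1):
        for _ in range(flashes_per_eye):
            seq.append(f"{frame}L")
            seq.append(f"{frame}R")
    return seq

print(teleview_sequence(2))  # 1L-1R repeated three times, then 2L-2R

# At 16 film frames per second, tripling the flashes gives each eye
# 16 * 3 = 48 flashes per second instead of a flickery 8.
flashes_per_frame = len(teleview_sequence(1))
print(16 * flashes_per_frame // 2)  # 48
```

Raising the per-eye flash rate from 8 to 48 per second is what pushed the flicker above the threshold at which the eye stops noticing it.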
Only one film was produced using this method. It was called M.A.R.S and was displayed at the Selwyn Theatre in New York City in December 1922. The reason the technology didn't catch on was not the image itself: the theory behind producing the image has changed very little between the Teleview method and the current active-shutter methods, which will be explained later.
As with many 3D methods, the reason this one did not become mainstream was the viewing apparatus required. Although existing projectors could be modified simply by linking them to a separate AC generator, so no extra projection equipment was needed, the headsets required a great deal of investment and time to install. Every seat in the theatre needed to be fitted with a headset, adjusted in front of the audience member, and each had to be wired into the seat and linked to the AC generator so that they were all perfectly in sync.
These problems have since been overcome with wireless technologies such as Bluetooth as will be explained later.
4.3 The first polarised 3D film
The next, and arguably one of the most important, advancements in 3D technology came in 1929, when Edwin H. Land worked out a way of using polarised lenses (Polaroid) together with images to create stereo vision. (Find more on polarisation in chapter 8.6.)
'Land's polarizing material was first used for projection of still stereoscopic images at the behest of Clarence Kennedy, an art history instructor at Smith College who wanted to project photo images of sculptures in stereo to his students' (Zone, 2007).
In 1936 Beggar's Wedding was released in Italy; filmed using polarised technology and exhibited through Polaroid filters, it was the first stereoscopic feature to include sound.
The first American film to use polarising filters was shot in 1939 and entitled In Tune With Tomorrow, a 15-minute short film which shows 'through stop motion, a car being built piece-by-piece in 3D with the added enhancement of music and sound effects' (Internet Movie Database, 2005).
Between 1939 and 1952, 3D films continued to be made, but with the Great Depression and the onset of the Second World War the cinema industry's output was restricted by finances, and as 3D films were more expensive to make, their production was reduced.
Chapter 5: 'Golden Age' of 3D
(1952 - 1955)
'With cinema ticket sales plummeting from 90 million in 1948 to 40 million in 1951' (Sung, 2009), largely put down to the television becoming common in people's front rooms, the cinema industry needed a way to encourage viewers back to the big screen; 3D was seen as a way to offer something extra to make them return.
In 1952 the first colour 3D film, Bwana Devil, was released; it was the first of many stereoscopic films to follow over the next few years. The combination of 3D and colour attracted a new audience to 3D films.
Between 1950 and 1955 there were far more 3D films produced than at any other time before or since, apart possibly from the next couple of years from 2009 onwards, as the cinema industry fights back once again against falling figures, this time because of home entertainment systems, video-on-demand, and legal and illegal movie downloads.
Towards the end of the 'Golden Age', around 1955, the fascination with 3D began to fade. There were a number of reasons for this; one of the main factors was that, in order to be seen in 3D, a film had to be shown on two reels at the same time, and the two reels had to be kept exactly in sync, otherwise the effect was lost and the audience got headaches.
Chapter 6: Occasional 3D films
(1960 - 2000)
Between 1960 and 2000 there were sporadic resurgences in 3D, each down to new technologies becoming available.
In the late 1960s the invention of a single-strip 3D format initiated a revival, as it meant that dual projectors could no longer drift out of sync and cause eye-strain. The first version of this single-strip 3D format to be used was called Space-Vision 3D; it worked on an 'over and under' basis, meaning the frame was split horizontally into two. During playback the image was then separated in two using a prism and viewed through polarised glasses.
However, there were major drawbacks with Space-Vision 3D. Due to the design of the cameras required to film in this format, the only major lens that was compatible was the Bernier lens. 'The focal length of the Bernier optic is fixed at 35mm and the interaxial at 65mm. Neither may be varied, but convergence may be altered' (Lipton, 1982). This obviously restricted the creative filmmaking options, and as a result the format was soon superseded by a new one called Stereovision.
Stereovision was similar to Space-Vision 3D in that it split the frame in two; unlike Space-Vision, though, the frame was split vertically and the two images were placed side by side. During projection these frames were put through an anamorphic lens, stretching them back to their original size. Stereovision also made use of the polarising method introduced by Land in 1929.
A film made using this process, The Stewardess, was released in 1969; it cost only $100,000 to make but grossed $26,000,000 at the cinema (Lipton, 1982). Understandably, the studios were very interested in this profit margin, and 3D once again became an attractive prospect for them.
Until fairly recently, films were still shot and edited using old film techniques (i.e. not digitally). This made manipulating 3D films quite difficult, and this lack of control over the full process made 3D less appealing to filmmakers.
'The digitisation of post-processing and visual effects gave us another surge in the 1990's. But only full digitisation, from glass to glass - from the camera's to projector lenses - gives 3D the technological biotope it needs to thrive' (Mendiburu, 2009).
Chapter 7: The Second 'Golden Age'
of 3D (2004 - present)
In 2003 James Cameron released Ghosts of the Abyss, the first full-length 3D feature film to use the Reality Camera System, which was specially designed around new high definition digital cameras. These digital cameras meant that the old 3D film techniques no longer restricted the workflow; the whole process could be done digitally, from start to finish.
The next groundbreaking film was Robert Zemeckis's 2004 animated film The Polar Express, which was also shown in IMAX theatres. It was released simultaneously in 2D and 3D, and the 3D cinemas took on average 14 times more money than the 2D cinemas.
The cinemas once again took note, and since The Polar Express was released in 2004, 3D digital films have become more and more prominent.
IMAX are no longer the only cinemas capable of displaying digital 3D films. A large proportion of conventional cinemas have made the switch to digital, and this switch has enabled 3D films to be exhibited in a wide range of cinemas.
CHAPTER 8: 3D TECHNOLOGIES
8.1 - 3D capture and display methods
Every type of stereoscopic display projects the combined left and right images together onto a flat surface, usually a television or cinema screen. The viewer must then have a method of decoding this image, separating it back into left and right images and relaying each to the correct eye. In the majority of cases, the method used to split the image is a pair of glasses.
There are two broad categories of method: passive and active. Passive means that the two images are combined into one and the glasses then split the combined image into separate left-eye and right-eye images; with this method the glasses are cheap to produce and the expense usually lies in the equipment used to project the image. The second category is active display. This works by sending the alternating images in very quick succession (L-R-L-R-L-R); the glasses then periodically block the appropriate eyepiece, at such a fast rate that the picture appears continuous in both eyes.
There are various types of encoding within each of the two categories mentioned above.
The encoding can use colour separation (anaglyph, Dolby 3D), time separation (active glasses) or polarisation (RealD). A separate method, which does not require glasses and instead creates a virtual space in front of the screen, is called autostereoscopy.
In cinemas across the world at the moment there are several formats used to display 3D films. Three of the main distributors are RealD, IMAX and Dolby 3D.
Once a 3D film has been finished by the studio, it needs to be prepared for exhibition in various different formats; this can include, amongst other things, colour grading and anti-ghosting processes.
At present there is no universally agreed format for capturing or playing back 3D films; as a result there are several different versions, which are explained below.
A large majority of the latest wave of 3D technologies send the image using one projector, removing the old problem of out-of-sync left and right images. The methods that do use dual projectors are much more sophisticated than the older versions used for anaglyphic films, so they too have eradicated the old problem of out-of-sync projectors.
8.2 - Ghosting & light efficiency
When two channels of images (left and right) are blended into one frame, using either passive or active systems, errors occur that have to be managed. Most of the systems examined below tend to be good at one thing or the other, and have incorporated methods to counter the problems that arise.
The two main issues that arise from blending frames together (passive) and showing alternating frames (active) are ghosting and light efficiency.
Ghosting refers to the leakage of images between eyes. 'No 3D projection system perfectly isolates the left and right images. There is always some leaking from one eye to the other' (Mendiburu, 2009).
On most systems this leaking is minimal and not a problem. However, when the leakage rises above a couple of percent, the images appear to ghost or blur. It is especially noticeable on high-contrast images.
The RealD polarisation method is the most affected by ghosting, due to the way it splits the images. RealD has incorporated a method of reducing this problem, a solution called 'ghost-busting'.
The 'ghost-busting' process works by calculating the pattern of light that is expected to leak between eyes; this value is then subtracted from the original image. The drawback of this method is that it reduces the overall dynamic range of the image by the amount subtracted.
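The subtraction described above can be sketched in a few lines. This is a hypothetical illustration, not RealD's actual algorithm: the leakage fraction, pixel values and clamping behaviour are all assumed example figures.

```python
# Hypothetical sketch of 'ghost-busting': pre-subtract the light expected
# to leak into the wrong eye, so that after leakage each eye sees roughly
# the intended image. Leakage fraction and pixel values are assumed.

def ghost_bust(left, right, leakage=0.03):
    """Subtract the estimated cross-talk from each eye's image.

    `left` and `right` are lists of pixel intensities in 0.0-1.0;
    `leakage` is the assumed fraction of the other eye's image that
    leaks through the filters."""
    busted_left = [max(0.0, l - leakage * r) for l, r in zip(left, right)]
    busted_right = [max(0.0, r - leakage * l) for l, r in zip(left, right)]
    return busted_left, busted_right

left = [0.9, 0.2, 0.5]
right = [0.1, 0.8, 0.5]
bl, br = ghost_bust(left, right)
```

Note that every corrected value is at most the original value: the subtracted leakage is exactly the dynamic range given up, which is the drawback the text describes.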
Colour separation methods such as Dolby-3D and anaglyph both also suffer from ghosting but to a lesser extent.
The poor light efficiency of 3D is another one of the major flaws that had to be overcome with all of the 3D display methods.
Colour separation methods suffer because, by their very nature, they have to filter out certain colour ranges entering each eye in order to create the two images.
In the case of active shutter displays the light levels are diminished even more. Because each eye is periodically blocked, the light level is halved, and the dark time between frames means that the overall light level is approximately 20% of the original.
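The figures quoted above can be checked with simple arithmetic. The 40% open-time duty cycle below is an assumed number chosen to reproduce the text's "approximately 20%"; real systems vary.

```python
# Rough light-efficiency arithmetic for active shutter displays, using
# the figures from the text. The duty cycle is an assumed example value.
source_light = 1.0
eye_open_fraction = 0.5   # each eye is blocked half the time
duty_cycle = 0.4          # assumed: fraction of the open half not lost to dark time
effective_light = source_light * eye_open_fraction * duty_cycle
# effective_light works out to 0.2, i.e. roughly 20% of the original light
```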
In order to create stereoscopic pictures using polarised methods, the eyepieces filter out light of the opposing polarisation, which has a similar effect of reducing light levels.
One solution to the low-light problem is the installation of silver screens in cinemas; these reflect more light than standard screens, thereby increasing the light levels.
8.3 - Colour separation - anaglyph
Anaglyphs are an example of passive 3D because the method works by combining the two images into one, then relying on the glasses to separate the signal into two channels.
Anaglyphs are one of the oldest methods for displaying 3D images, and the glasses are the cheapest type to mass produce. The fact that they cost so little to manufacture is the reason that, when 3D is mentioned, most people think of these red and blue glasses.
The anaglyph, proposed by D'Almeida (in one form at least) in 1858, used complementary-coloured filters over the left and right lenses to superimpose both images on a screen. Viewing devices with red and green lenses separated the images and selected the appropriate view for each eye (Lipton, 1982).
One of the problems with separating the colour channels in this way to portray 3D is that it reduces the overall luminance level of the image. In addition to this you are also only seeing half the colour in each eye so it is not a full representation of the original image.
Anaglyphs were the most widely used film format in the early days of 3D. When Polaroid lenses began to be introduced in the late 1930s, it quickly became apparent that the disadvantages of anaglyphs, such as poor colour separation, were far less pronounced in the new technology. As a result anaglyphs started to be phased out of cinemas.
Today anaglyph glasses are still used because of their cost effectiveness, although mainly for comic books and stereographic photography rather than moving pictures.
8.4 - Colour Separation - Dolby 3D
Dolby-3D is one of the most advanced technologies currently employed in the 3D market in terms of image quality.
It uses the same theory as anaglyph technology, but in a much more advanced form that produces greatly improved results. As with anaglyphs, Dolby 3D is a passive method of 3D.
Dolby believed there were two key requirements for 3D to be successful in cinemas. The first was that the technology needed to be portable: easily installed and easily moved from screen to screen. That way a film can be released on the largest screen with the highest seating capacity, and once the film has been out for a while the equipment can be moved to a smaller screen, so the 3D release can still be shown while freeing up the larger screen for a conventional 2D film. This mobility is a major advantage over systems that require the installation of a special screen.
The second key requirement was the need for passive glasses, which require significantly less maintenance than active glasses that need charging or battery replacement.
'Dolby 3D uses a “wavelength triplet” technique originally developed by the German company Infitec, specialists in 3D visualisation for computer-aided design. In this technique, the red, green and blue primary colours used to construct the image in the digital cinema projector are each split into two slightly different shades. One set of primaries is then used to construct the left eye image, and one for the right' (Slater, 2008).
The technique of splitting each primary into two shades is achieved by inserting a filter wheel inside the digital projector. Unlike the polarising method, the separation is carried out before the image is created, because the wheel is placed between the lamp and the DLP (Digital Light Processing) imaging chip. According to Dolby this creates a better image than mounting a filter in the image path, which is the method RealD employs.
The filter wheel can be engaged or disengaged at the press of a switch, converting the projector between 2D and 3D, and as the screen is the same as for 2D projection, converting an auditorium from one format to the other is a very simple task.
'Very advanced wavelength filters are used in the glasses to ensure that each eye only sees the appropriate image. As each eye sees a full set of red, green and blue primary colours, the 3D image is recreated authentically with full and accurate colours using a regular white cinema screen' (Slater, 2008).
The projectors in this format run at a very high frame rate, typically 144 frames per second. Such a high rate is needed because the effective per-eye frame rate is halved when the frames are split between the left and right eyes.
A further advantage of this technology is that the glasses use wavelength filters and, unlike active shutter glasses, do not need to be battery powered, so they are cheaper to run and maintain. However, they are still not as cheap as polarised glasses, because the complex structure of the filters makes them expensive to produce.
One of the major advantages of Dolby 3D is that it reduces the luminance of the image far less than either the active shutter or polarisation methods, and it also offers very high quality colour reproduction.
As the luminance is not greatly reduced, there is no need, unlike with polarised methods, to install a special silver screen to boost the overall light level.
8.5 - Active shutter
This method involves periodically shutting off one eyepiece and then the other, at such a fast rate that the viewer should not notice any change, thanks to the persistence of vision phenomenon.
The glasses are synced with the projector using infra-red, Bluetooth, DLP Link or similar means, to ensure that the timing of each eyepiece being shut off exactly matches the image being projected, thereby ensuring that a 3D image is portrayed.
This is the same theory that was used in 1922, when Laurens Hammond came up with Teleview. However, where Teleview failed due to technical restrictions, current technology has overcome those earlier limitations.
The new LC (liquid crystal) shutter glasses work by shutting off alternating eyepieces, but unlike the Teleview system they do so using polarising filters and a liquid crystal layer. When a voltage is applied to an eyepiece the filter and crystal become dark; when there is no voltage the eyepiece is transparent. This alternate darkening is done in sync with the refresh rate of the screen, thereby creating a stereoscopic image.
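The alternation scheme can be sketched as a tiny timing function; the frame numbering convention (even frames for the left eye) is an assumption for illustration.

```python
# Minimal sketch of active-shutter timing: the displayed frames alternate
# L-R-L-R, and the glasses open the matching eyepiece for each frame.
# The even-frames-left convention is an illustrative assumption.

def open_eye(frame_index):
    """Return which eyepiece is transparent for a given displayed frame."""
    return "left" if frame_index % 2 == 0 else "right"

sequence = [open_eye(i) for i in range(6)]
# Alternating left/right, fast enough in practice to appear continuous
# to both eyes via persistence of vision.
```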
This technology is mainly used in home 3D systems, as the glasses are more expensive to produce and it would therefore not be economically viable for cinemas to purchase large quantities to distribute to cinemagoers. Unlike passive glasses they have to be powered, usually by a battery, which would create additional problems in cinemas when people's glasses run out of charge.
However, a company called XpanD, probably the world's largest producer of LC active shutter glasses, is trying to buck this trend. It has produced a cheap digital projector system that still maintains a high quality image. This allows cinemas to save money by not having to purchase special silver screens, as is required for RealD, although the saving is cancelled out by the increased cost of using and maintaining the LC shutter glasses.
One of the main advantages of this method of 3D display is that it reduces ghosting, which is a problem with most display types. In the case of XpanD, the low light levels usually associated with active glasses have been overcome by using a very fast shutter.
8.6 - Polarisation
When light travels from a source it has electric and magnetic field vectors; in ordinary light these vectors oscillate in random orientations perpendicular to the direction of travel as the light moves away from its origin. Such light is said to be unpolarised.
'if such a polarising filter is held over the right projector lens, the light for the right image will be polarised in a plane (perpendicular to the filter surface) that can be controlled by rotating the filter in its plane' (Lipton, 1982).
Polarising material was first discovered in 1852 by William Bird Herapath, albeit only in a basic form. This science was built on by Anderton during the 1890s, who first 'suggested that the use of polarised light for image selection for stereographic projection' (Lipton, 1982) was possible. But it was not until 1929 that Land worked out a method of producing a new type of polarising filter (Polaroid), which was capable of working with moving images to create the first polarised stereographic images and then films.
At present there is the option of using either linear or circular polarisation. In circular polarisation the projector polarises the images in a set direction (either clockwise or anti-clockwise); the left and right lenses of the glasses then each filter for the corresponding direction, thereby only allowing one of the two images through.
Tests have shown that circularly polarised images usually offer better separation of the individual channels than linearly polarised images, but the filters required are more expensive to produce than the linear versions.
If linearly polarised lenses are used, the viewer will achieve the best results by keeping the eyes level; if the head is tilted the 3D effect starts to be reduced. This is far less noticeable with circularly polarised lenses.
8.7 - Polarisation - RealD & MasterImage
RealD is the most widely used cinema projection system across the globe. In the UK, Cineworld use equipment from RealD. This format uses a single projector and circular polarised glasses.
The stereo digital signal is decoded and sent to the projector, then beamed out at 24fps (frames per second) for each eye, which equates to 48fps in total.
Each of the 24 frames projected per second is flashed three times, which equates to a rate of 144 combined frames per second.
'The projector buffers the left and right image and projects them in alternation, at a rate of 144 frames per second, presenting three "flashes" of each frame' (Cowan, 2007).
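The 144 figure follows directly from the numbers above: 24 frames per second, two eye channels, three flashes of each frame. As a quick check:

```python
# Triple-flash arithmetic for RealD projection (figures from the text).
base_fps = 24    # frames per second per eye
eyes = 2         # left and right channels alternate
flashes = 3      # each frame is flashed three times
projector_rate = base_fps * eyes * flashes
# projector_rate comes to 144 frames per second, matching the quoted figure
```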
The signal is projected through a RealD Z Screen placed in front of the projector, which polarises the image. The glasses are polarised to allow only the required image channel through each filter.
A large part of RealD's success is due to its use of polarised glasses; this makes it a cheap option for cinemas, as the glasses are easy to produce and are often disposed of after each use. These 'throw-away' glasses raise another problem, however: as 3D films become more popular they could lead to very large quantities of plastic glasses going to landfill, and at a time when 'green policies' are at the forefront of political decisions this could mean that a method of recycling the glasses will have to be adopted.
One of the drawbacks of this technology is the loss of luminance in the picture; cinemas usually need to install a silver screen to compensate, which reflects approximately 2.4 times as much light as a standard cinema screen. Installing these screens is an additional cost that cinemas have to absorb.
During the colour grading phase of the editing process the light levels of the image usually have to be raised to compensate for the overall loss of light when the viewer watches the film.
RealD has the advantage of reproducing colours very effectively, because the polarisation method sends the same colours to each eye.
MasterImage is a similar method to RealD, in that it uses polarised glasses.
The main advantage MasterImage holds over RealD centres on the hardware required to project the image.
MasterImage uses an easily portable device that consists of a 'High efficiency rotating circular polarizing filter which provides left and right image separation and bright richly coloured 3D images' (Masterimage, 2009).
This device can be moved from screen to screen as needed, although a specialised silver screen is still required because of the luminance lost in the polarising process, so the system is not as mobile as the company makes out.
8.8 - IMAX 3D
IMAX was the first company to introduce mainstream 3D films, with a system originally intended for analogue film.
The IMAX 3D methods differ slightly from those previously mentioned in that they use two projectors, one each for the left and right images. IMAX 3D is available digitally in some theatres while others still use film, whereas all of the previous techniques are digital only.
With IMAX 3D the image is shot using two cameras if it is being made specifically for IMAX 3D theatres. All films in this format are played back through two projectors, which reduces the luminance issues present in the other formats.
'As of 2010 the linear polarized filter system has become the Imax 3D standard. Linear polarization has a significant disadvantage compared to circularized polarization used in other systems such as Dolby 3D and RealD; with linear polarization you lose the 3D effect if you tilt your head. You may even need to experiment to get the best position for normal viewing' (3D Forums, 2009).
In addition, because of the large screen sizes used in IMAX cinemas, ghosting and focusing problems have been reported with 3D in this format; however, these are counteracted by the immersive experience of watching a 3D film on such a large screen.
8.9 - Autostereoscopic displays
One thing all of the previous methods of viewing 3D have in common is that they require the user to wear glasses. For many people this is a disadvantage, as the public has become accustomed to watching standard 2D films without any extra add-ons. You could argue that having the filters required for 3D right next to your eyes produces the best possible reproduction of the image, but the average viewer, with no knowledge of (or interest in) the workings of 3D, will probably care very little about this.
Autostereoscopic technology is built into the screen and requires no glasses to view the 3D image.
There are two main types of technology that are exploited in autosterescopic screens: lenticular lenses and parallax barriers.
The lenticular lens approach works by placing a cylindrical lens over each pair of pixels (left and right image); this lens then directs each image to either the left or the right eye.
For this display to work the viewer is required to stand a set distance from the device. If they stand too far away the image will miss the person's eye-line and they will not be able to see a 3D image.
'In the parallax barrier a mask is placed over the LCD display which directs light from alternate pixel columns to each eye' (3D Forums, 2009). One of the major advantages of this technology is that it can be easily switched from 2D to 3D, because the mask is a liquid crystal layer which becomes transparent when the current running through it is turned off.
'Although this technology is currently in existence today it is expensive and there are not too many companies developing it' (Totally 3D, 2009).
8.10 Comparison of technologies
It is clear that there are advantages and disadvantages to all of the available formats, and there is a place for each of the different methods as they all have different uses. Anaglyphs' great strength is that they are the easiest to produce and the glasses are the cheapest to make. While they do not look good for films, stereo-photography and comic books remain areas where these glasses are used.
At the other end of the scale are the Liquid Crystal Active Shutter glasses. These are the most expensive due to the electronics involved and at present are only being considered for home 3D systems.
Comparison of 3D display types:

Anaglyph
Advantages: very cheap glasses.
Disadvantages: poor colour reproduction; the worst image quality of all the 3D methods.

Dolby 3D
Advantages: tilting the head does not affect the 3D image; no need for a silver screen to boost light levels.
Disadvantages: can result in colour bleeding between eyes.

LC Active Shutter
Advantages: good colour reproduction.
Disadvantages: low light levels; expensive glasses; needs a very high frame rate to avoid flicker.

Circular polarisation (RealD / MasterImage)
Advantages: viewer able to tilt head without losing the 3D image.
Disadvantages: glasses more expensive than linear polarised glasses; needs a silver screen to boost light levels.

Linear polarisation (IMAX 3D)
Advantages: extremely large screen creates an engulfing experience; linear polarised glasses are cheaper than circular polarised glasses.
Disadvantages: eye level needs to be kept horizontal or the 3D image may be poorly reproduced.

Autostereoscopic
Advantages: no need for glasses or any other filter, as the decoding filter is built into the screen.
Disadvantages: very expensive; poor viewing angle.
Chapter 9: 3D Cinematography
Cinematography is the art of controlling how a film looks; it includes shooting and editing the film.
With 3D film there are all of the techniques that you would expect of 2D film such as focus, lighting and sound but there are also added aspects which are unique to 3D film, controlling these is vital to creating an effective three dimensional film.
9.1 Interaxial distance
The first of these is the interaxial distance, meaning the amount of space between the two cameras.
The standard distance most directors start with is about 2.5 inches, which is then altered to achieve the desired effect. This distance is used because it matches the separation of our own eyes, allowing us to see the 3D world in the same relative manner as if we were physically standing there.
Orthostereoscopy is one case where you would not alter this 2.5-inch interaxial distance. This method of 3D filmmaking is designed to perfectly replicate the way human vision works, so the 2.5-inch human eye separation must be maintained at all times. This method is not commonly used, though, and conventional 3D filmmaking does not restrict the altering of this distance.
By adjusting this distance you are in effect widening or narrowing the difference between the images that each eye receives. This has a scaling effect on any objects displayed in the virtual space.
Moving the cameras further apart exaggerates the depth and makes the scene appear smaller, while pushing them closer together has the opposite, flattening effect.
Extremes of these effects are known as hyper-stereoscopy and hypo-stereoscopy. Hyper-stereoscopy is where the cameras are so widely spaced that all of the imaged objects appear to become miniatures. At the other end of this scale is hypo-stereoscopy, where the cameras are so close together that the result is almost a 2D image; the objects appear flat, which is why it is sometimes referred to as 'cardboarding'.
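The scaling effect of the interaxial distance can be illustrated with a deliberately simplified parallax model. The formula below (parallax proportional to camera spacing, scaled by how far an object sits behind the convergence point) is an assumption for illustration, not a production camera model, but it shows why wide spacing exaggerates the depth cue (hyper-stereoscopy) and narrow spacing flattens it (hypo-stereoscopy).

```python
# Simplified illustrative model: on-screen parallax grows linearly with
# the interaxial distance, so the same scene gains or loses apparent
# depth as the cameras are spread or squeezed. All figures are assumed.

def parallax(interaxial_inches, object_dist_ft, convergence_dist_ft):
    """Approximate parallax for a point behind the convergence distance.
    Zero at the convergence point, growing with distance behind it."""
    return interaxial_inches * (1.0 - convergence_dist_ft / object_dist_ft)

normal = parallax(2.5, 20.0, 10.0)   # eye-like 2.5 inch spacing
hyper = parallax(10.0, 20.0, 10.0)   # widely spaced cameras: a 4x larger depth cue
hypo = parallax(0.5, 20.0, 10.0)     # nearly co-located cameras: almost flat
```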
With larger cameras it is often physically impossible to get the two lenses 2.5 inches apart; this is solved by setting the cameras up pointing into mirrors that reflect the image onto the cameras' sensors.
9.2 Convergence
The next major control directors have over the look of a film is the way the cameras converge. Earlier in this report it was mentioned how human eyes converge on a subject in order for us to focus on an object.
In a similar manner, the director has the option of changing the angle of the two cameras on an object in a scene. There are two methods for doing this: either on set, by physically angling the cameras, or in post-production, by performing something called Horizontal Image Translation (HIT). Both methods have advantages and disadvantages.
The benefit of converging on set is that it is cheaper and requires less post-production. The drawback is that it can invoke a phenomenon known as keystoning, where the left and right edges of an image no longer match as they should. It occurs because when the cameras are angled inwards towards the point of focus, the near side of each image is inevitably closer to its camera while the opposite side becomes further away. Keystoning, when extreme, can be very uncomfortable to watch.
The alternative to converging on set is to do it in post using HIT. This process gives the director much more control over the angle of convergence. It works by shifting the images left or right to move them out of line. The drawbacks are that it is more costly and that the scene must be shot wider than it will be displayed. Overshooting is necessary because after the image has been shifted the pixels that move off screen are lost, and the overlapping images must be cropped so that the two frames match again.
Until real-time HIT correction is possible, live events filmed in 3D will have to rely on physically converging the cameras to create and set the depth.
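The shift-and-crop step of HIT can be sketched on a single row of pixels; the shift amount and pixel values are assumed examples.

```python
# Sketch of Horizontal Image Translation: shift one image horizontally
# relative to the other, then crop both to the region where they still
# overlap, so the frames match again. Values are illustrative.

def hit_shift_and_crop(left_row, right_row, shift):
    """Shift the right row `shift` pixels relative to the left row and
    crop both rows to their overlapping region."""
    cropped_left = left_row[shift:]                      # pixels lost off one edge
    cropped_right = right_row[:len(right_row) - shift]   # and off the opposite edge
    return cropped_left, cropped_right

l_row = [0, 1, 2, 3, 4, 5]
r_row = [10, 11, 12, 13, 14, 15]
cl, cr = hit_shift_and_crop(l_row, r_row, 2)
# Both rows end up 2 pixels narrower, which is why the scene must be
# shot wider than it will finally be displayed.
```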
If the cameras are not converged and stay parallel then there will still be a 3D effect achieved but the furthest point back in the 3D image will be level with the screen.
When the cameras are angled inwards and converge on an object then that object becomes level with the screen and anything behind that convergence point will appear behind the screen in the virtual 3D space (see figure 3: Convergence example).
9.3 Stereoscopic window
One of the big differences between how the viewer sees 2D and 3D is that when a person watches a 2D medium they are looking at a flat screen, and the edges of the picture are defined by the physical edges of the image or cinema screen. This is different with 3D: when a person is watching a 3D film the screen becomes a window, through which the viewer can see objects both behind and in front of the screen.
One of the problems that had to be overcome with 3D was when this stereoscopic window was broken, as it created an uncomfortable viewing sensation.
In a 2D film, if an object is half in the frame, both of your eyes see this and your brain tells you that the other half of the object is outside the frame. In 3D, if a prominent object such as a person is located half in and half out of the frame, each eye will see a different amount of that person. This causes the brain difficulty in compositing the left and right images into the single image needed for 3D.
To fix this problem it is sometimes necessary to mask a small portion of the side of either the left or right image in order to make the edges of the two images match. This process is performed in post production.
9.4 Depth budget
Inside the cinema there are limits to where it is comfortable to view 3D content; if the images exceed these areas it can cause eye strain and headaches.
The optimum Z position for the image is at screen depth; as the image moves behind the screen or, in the opposite direction, towards the audience, it gradually becomes more painful to watch, due to the limited ability to independently control convergence and accommodation (as discussed in chapter 2). There are also areas at the extreme sides of the screen, where only one eye can see the image, which are also painful to view.
One of the main jobs of a 3D cinematographer is to fit the whole range of vision available in the real world into the stereoscopic comfort zone available in the cinema.
One solution for maintaining the range of 3D space available is to 'float' the stereoscopic window. For this to make sense, remember that the entire range from the nearest to the furthest point of focus is 'x' feet; if an image is shown extremely close to the audience, the furthest point in the distance is equal to the closest image plus 'x'. To solve this, the stereoscopic window is floated nearer to the audience, so that the screen appears closer than it actually is. The same is equally true for objects which need to be set far back behind the screen.
9.5 Matching size of screen
It is important, when decisions are being made about the level of depth in the 3D image, that the final output medium is considered. If a film has been made to be screened on a 5-foot screen and is instead played on a 10-foot screen, the levels of depth in the image will be doubled.
This could then push the extremes of 3D beyond the comfortable levels and result in uncomfortable viewing experiences.
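The doubling effect follows because on-screen parallax is stored as a fraction of the image width. A quick worked example, with an assumed 1% parallax figure:

```python
# Worked example: parallax is a fixed fraction of image width, so the
# same film on a screen twice as wide has twice the physical parallax
# and therefore roughly twice the apparent depth. The 1% is assumed.
parallax_fraction = 0.01
small_screen_width_ft = 5.0
large_screen_width_ft = 10.0
small_parallax_ft = parallax_fraction * small_screen_width_ft
large_parallax_ft = parallax_fraction * large_screen_width_ft
# large_parallax_ft is exactly twice small_parallax_ft
```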
Chapter 10: Creating 3D Content
There are three basic ways to create 3D content. The first two methods involve creating new content in 3D formats: computer generated images (CGI) and stereoscopic filmmaking using two cameras.
The final method is converting existing 2D material in to 3D.
10.1 Computer Generated Images (CGI)
Of all of these methods, the one that offers the most control is CGI. This type of 3D work is done digitally on a computer, and the animator has absolute control, or as close as is possible, over all of the different aspects of the 3D space: building virtual environments, and controlling convergence, motion parallax and focus.
Building 3D content this way is very time consuming but it does allow you to be extremely accurate with all of the necessary variables used in 3D.
This complete control over the image explains why a large proportion of recent 3D films have been of the animated CGI type.
Since animated films moved from being hand drawn to computerised over the last couple of decades, most animated worlds have been built in three dimensions to start with. Adding a second virtual camera and moving it a few inches away from the original is therefore a relatively simple step, so creating true stereoscopic 3D animation is not much of a jump, provided the virtual environment is built in 3D anyway.
10.2 Dual camera filmmaking
The second method is camera based 3D. This is where on set you have two cameras positioned together at the required distance apart and converged at the required angle.
Using this method two cameras are connected together in a 3D rig. As with the CGI method there are many variables that can alter the 3D image which is being captured.
If the filmmaker is going to set the convergence on set, it is very important to consider the size of the screen that the final image will be portrayed on, as decisions taken when filming will affect the size of the 3D window.
If the amount of depth has been set manually during filming by setting the convergence and distance between cameras, it is much more difficult to adjust the image for a different screen size.
The advantage of filming with two cameras on set is that you have two real viewpoints to work from; this should provide the most detailed 3D image, as more real data is available.
The downside to filming with two cameras in 3D is that all of the costs related to capture, storage and editing are doubled, as there is twice the amount of data to be processed.
In addition to this it is necessary to employ a crew that has specific 3D knowledge, and as it is such a relatively new medium at present, the costs of specialist crew will be much higher.
10.3 2D-to-3D conversion
It is possible to convert existing 2D footage into 3D. It is however, a very expensive process when done to a high level.
There are several steps that can be taken to convert the picture.
One of the most powerful methods involves cutting the image up into sections and then, frame by frame, manipulating and overlapping those sections. The overlapping creates parallax (objects nearer to you move by a greater distance) and occlusions (objects further away are hidden by nearer objects), thereby generating a sense of 3D. This frame-by-frame rotoscoping is a very lengthy process.
A further method utilises the Pulfrich effect, named after the German inventor Carl Pulfrich, who discovered that if the light reaching one of your eyes is slightly reduced, objects moving horizontally appear to move along a Z axis towards you.
The method predominantly used in big-budget conversions is 3D reconstruction and projection. This is done by digitally modelling the 3D environment and then laying the original 2D frame on top of it; a virtual viewpoint is then created for the second channel. When combined, these two channels produce the stereo vision.
The American company In-Three is one of the most well known of the high-end companies offering this conversion facility; it was responsible for converting Tim Burton's 2010 film, 'Alice in Wonderland'.