A 3D (three-dimensional) talking head is essentially a 3D or stereoscopic animated virtual human head; the third dimension provides the perception of depth. In recent years there has been increased interest in animated talking heads for diverse applications, including automated tutors for e-learning, avatars in virtual environments, computer games, dialogue systems and web services.
Furthermore, the concept of talking heads that can interact with a user in a natural way using speech, gesture and facial expression holds the potential for a new level of naturalness in human-computer interaction, where the machine can convey and interpret verbal as well as non-verbal communicative acts, ultimately leading to more powerful, efficient and intuitive interaction. Parameter-based facial animation is now a mature technology: its inclusion in the MPEG-4 standard eases interoperability and integration with other multimedia content. This is evident in audio-visual speech synthesis, i.e. the production of synthetic speech with properly synchronized movement of the visible articulators, which not only improves the realism of a talking head but also adds to the intelligibility of the speech output. Previously, visual speech synthesis was typically aimed at modelling neutral pronunciation. However, as the systems it is embodied in become more advanced, the need for affective and expressive speech arises. This presents a new challenge in acoustic as well as visual speech synthesis.
Several studies have shown how articulation is affected by expressiveness in speech, in other words, articulatory parameters behave differently under the influence of different emotions. This interdependency between emotional expression and articulation has made it difficult to combine simultaneous speech and emotional expression in synthetic talking heads. Rather than trying to model speech and emotion as two separate properties, the strategy has been to incorporate emotional expression in the articulation from the beginning.
The integration of text-to-speech (TTS) synthesis and animation of synthetic faces defines visual text-to-speech (VTTS) systems. The TTS informs the talking head when phonemes are spoken; the appropriate mouth shapes are animated and rendered while the TTS produces the sound.
The aim of this project was to design a fully textured and animated 3D talking head, implemented in a website to be used during the University of Hertfordshire's School of Engineering and Technology Open Days for prospective postgraduate students. The talking head informs the user about whichever page is being visited as they navigate the website. Combining this feature with a website gives each page a different and unique touch, supplying an audible and visual spin that sets the user experience apart from a standard website.
The measurable objectives were adapted from the feasibility study in order to reach the project aims more effectively. These objectives were to:
Design and create a virtual human head, complete with full textures, using a modelling software package.
Have full facial animation for when the head speaks, along with lip sync incorporation.
Implement a voice that plays in sync alongside the facial animation, complete with mouth movements that match the words being said.
Design and implement a website, with selectable menus, to house the Talking Head.
This report revolves around the development stages of creating a '3D Talking Head'. The initial few chapters inform the reader about the various technologies involved in the creation of a talking head, provide a brief background on their history, and give existing examples of how talking heads have been used to enhance user capabilities. Chapters 4 and 5 form the most extensive part of this report and detail the design, development and integration of the overall project, along with the problems that occurred along the way and how each of them was overcome in order to reach the final outcome.
Chapter 5 outlines the quality assurance process in place during this project, the types of testing methods used and the results of each test. It will identify future requirements and improvements, and classify how further development could enhance the overall project.
The research conducted provided an insight into the following basic elements, which needed to be understood and taken into consideration before design and implementation could begin. These include:
Animated Talking Heads
Text to Speech engines
Face and Body Animation
Lip Sync Technology
Technologies that cater for cross-compatibility between website design and animation
Talking heads have been with us for a long time, usually acting as communicators of a message or facilitators of outcomes. They can be lifelike, e.g. making a well-known actor such as Tom Cruise appear to talk, sing or cry, or abstract but well-meaning. Their physical appearance does not usually correlate with their effectiveness in communication.
Research indicates that sound, graphics and knowledge go a long way towards conveying information, ideas and feelings faster than documents. Reeves also suggests that a user interface is usually better if it is implemented with respect to what people would expect from the same kind of character in the real world, in terms of personality, emotion, etc.
Pelachaud also suggested that integrating non-verbal behaviour such as emotions, gesture and expression with expressive speech would go a long way towards increasing realism.
Several projects have been carried out on the development of talking heads. Waxholm was a talking head system primarily developed to retrieve information about the ferryboat services in the Stockholm archipelago. The system also had some information about facilities such as hotels and restaurants on the islands. It featured a graphical interface with an animated talking head and a picture that visualized the system's domain. Textual information was presented by placing tables next to icons depicting the corresponding facilities: the table of available hotels appeared below the picture of the hotel, and the timetable below the boat. Information provided by the user was also displayed in different places; the recognized destination was shown on the island and the recognized departure on the jetty.
The Waxholm project was initiated in 1992 as a research effort for building spoken dialogue systems. In this project, new dialogue management and parsing modules were developed and combined with TMH's existing speech synthesis and recognition. The goal was to acquire knowledge on how to develop the natural language modules and the other system modules needed to build spoken dialogue systems . Another important purpose was to collect spoken dialogue data. The fully automated Waxholm system has not been used in any extensive user studies.
The August system was also a conversational spoken dialogue system, featuring an animated agent called August whose personality was inspired by August Strindberg, the famous Swedish 19th-century author. The August project was initiated as a way to promote speech technology and KTH in connection with Stockholm being the Cultural Capital of Europe in 1998. The spoken dialogue system and the animated character were developed during the first half of 1998, and the system was available to the general public at the Culture Centre in Stockholm daily from August 1998 to March 1999.
The overall purpose of the project was to expose speech technology to the general public, and in this way gain practical experience of moving a research system outside the lab environment, while at the same time collecting data on how people might interact with animated agents. August could answer questions covering a number of topics, for example giving the location of restaurants in Stockholm, sharing facts about the author August Strindberg or exchanging social utterances. The dialogues can be considered quite shallow, since the system primarily answered questions and only occasionally initiated one-level clarification sub-dialogues. This meant that the dialogues were user-driven, which of course influenced the dialogue data collected.
August was a spoken dialogue system with multiple domains. The first issue that had to be handled was how the system should communicate which domains it could handle without explicitly asking the users to ask certain questions. To make it possible to give hints on topics of conversation, a thought balloon was added. If the user asked August something he did not understand, August would state that he did not understand while at the same time indicating that he was 'thinking' by displaying "Why don't they ask me about Strindberg?" as text in the thought balloon. Users could then also ask August what he could talk about.
Signals from the visual and audio channels complement each other, and this complementary relation between audio and video cues helps in ambiguous situations: some phonemes can easily be confused acoustically but are easily differentiated visually. This can also aid people who are hard of hearing. The figure below shows the visual text-to-speech architecture.
Current talking heads are usually software programs that communicate with the user via visual, vocal or textual means. These systems incorporate facial animation, speech processing and appropriate graphical user interface. Some of the present talking head systems are based on the VTTS architecture.
In recent years there have been tremendous advancements in the development of talking heads. In 2004 the Xface toolkit, an open-source package, was developed for developers who want to embed 3D facial animation in their software, as well as for researchers who want to focus on related topics without the hassle of implementing a full framework from scratch. The main design principles of Xface were ease of use and extendibility.
A basic understanding of the effect of talking heads is clearly important in the field of information communication and related areas. It is therefore very important for designers of talking heads to make informed decisions on how to make them fulfil their required roles more effectively, in an interactive manner, as facilitators of information.
3D Animated Talking Heads
MPEG4 FACE AND BODY ANIMATION
TEXT TO SPEECH ENGINES
TALKING HEAD APPLICATIONS
In computer animation, a 3-D (three-dimensional) head describes a virtual human head that provides the perception of depth. An interactive 3D head makes a user feel involved in the scene; this experience is sometimes called virtual reality. Several plug-ins are usually required in a web browser to interact with 3-D images, and viewing 3D images may also require additional equipment. Creation of a 3D image involves a three-phase process of tessellation, geometry and rendering. In the first phase, models of individual objects are created using linked points that are made into a number of individual polygons (tiles). In the next phase, the polygons are transformed in various ways and lighting effects are applied. In the third phase, the transformed images are rendered into objects with very fine detail.
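The three-phase process just described can be sketched in a few lines of Python; the mesh, rotation angle and light direction below are illustrative assumptions, not data from the project:

```python
import math

# --- Phase 1: tessellation -- model an object as linked points forming polygons.
# A single square face, split into two triangles (illustrative data).
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
triangles = [(0, 1, 2), (0, 2, 3)]          # indices into the vertex list

# --- Phase 2: geometry -- transform the polygons (here, rotate about the y-axis).
def rotate_y(v, angle):
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

rotated = [rotate_y(v, math.radians(30)) for v in vertices]

# --- Phase 3: rendering -- apply a lighting effect (flat shading: brightness of
# a polygon is the dot product of its unit surface normal and the light direction).
def normal(a, b, c):
    ux, uy, uz = (b[0]-a[0], b[1]-a[1], b[2]-a[2])
    vx, vy, vz = (c[0]-a[0], c[1]-a[1], c[2]-a[2])
    n = (uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx)
    length = math.sqrt(sum(k * k for k in n))
    return tuple(k / length for k in n)

light = (0, 0, 1)                           # light shining along the z-axis

def brightness(tri, verts):
    n = normal(*(verts[i] for i in tri))
    return max(0.0, sum(a * b for a, b in zip(n, light)))

shades = [brightness(t, rotated) for t in triangles]
```

A real renderer repeats this per frame over thousands of polygons, but the division of labour between the three phases is the same.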
Monoscopic View of a 3D Head
To generate 3D stereo vision, two factors are required: convergence and parallax. The angle formed by your eyes and the observed object is known as the convergence; the higher the angle, the nearer the observed object is to your eyes, and vice versa. Therefore, when the convergence is fixed, any object between you and the convergence point will appear closer to you, while an object beyond the convergence point will appear farther away.
The parallax images are the images passing to your left and right eyes. All 3D stereo media contain a pair of parallax images that individually, and simultaneously, pass to your left and right eyes, convincing your brain of the existence of depth in the media.
When the target object offsets to the left in the left-eye image and to the right in the right-eye image, your binocular focus is led to fall behind the display. This phenomenon is called Positive Parallax.
When the paired parallax images superimpose on the display, your binocular focus is led to fall on the display itself; this is Zero Parallax.
When the target object offsets to the right in the left-eye image and to the left in the right-eye image, your binocular focus is led to fall in front of the display. This phenomenon is called Negative Parallax.
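The three parallax cases can be derived from simple similar-triangles geometry: the sign of the horizontal offset between the two eye images determines which case applies. A minimal sketch, assuming symmetric eyes looking at a screen at distance d and a point straight ahead at depth z (all numbers illustrative):

```python
def screen_parallax(e, d, z):
    """Horizontal on-screen offset (right-eye image minus left-eye image) for a
    point straight ahead at depth z, with eye separation e and screen distance d.
    Derived from similar triangles: each eye's ray hits the screen at
    x = +/-(e/2) * (1 - d/z)."""
    return e * (1 - d / z)

def classify(p, tol=1e-9):
    """Map the parallax sign onto the three cases described in the text."""
    if p > tol:
        return "positive"   # focus falls behind the display (z > d)
    if p < -tol:
        return "negative"   # focus falls in front of the display (z < d)
    return "zero"           # focus falls on the display itself (z == d)
```

For example, with a 6.5 cm eye separation and a screen 2 m away, a point 4 m away yields positive parallax, a point at 2 m yields zero parallax, and a point at 1 m yields negative parallax.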
Research on ways to represent human behaviour, and especially human faces, has been going on for the past few decades. Efforts to create computer-animated human faces date back to the early 1970s. Driven mostly by the entertainment and games industries, but also by medical science and telecommunications companies, the field of computer-based facial animation has come a long way since the first model presented by Parke in 1972. Nowadays research focuses on entire complex, multi-application motion pictures based on computer animation. Over the years there has been extensive progress in facial animation research, leading to the adoption of a standard language that enables an artist to control a facial animation system through the same interface, reuse facial animation sequences, or let any face tracker drive any facial animation system on any platform. This is essentially what the MPEG-4 FA standard is about. The figure below presents a taxonomy of facial animation.
MPEG4 Facial Animation
The Moving Picture Experts Group released MPEG-4 as an ISO standard in 1999. The standard covers a broad range of multimedia topics, including natural and synthetic audio and video as well as graphics in 2D and 3D. Unlike former MPEG standards, MPEG-4 mainly concerns communication and integration of multimedia content. It is the only standard that covers face animation, and it has been widely accepted in academia while gaining attention from industry. MPEG-4 Facial Animation (FA) describes the steps to create a talking agent by defining the various necessary parameters in a standardized way. There are two main phases in creating a talking agent: setting the feature points on the static 3D model, which define the regions of deformation on the face, and generating and interpreting the parameters that modify those feature points to create the actual animation. MPEG-4 abstracts these two steps from each other in a standardized way, giving application developers freedom to focus on their field of expertise. For creating a standard-conforming face, MPEG-4 defines 84 feature points (FPs) located on a head model. They describe the shape of a standard face and should be defined for every face model in order to conform to the standard. These points are used for defining animation parameters as well as for calibrating models when switching between different players.
FIGURE 2: DISTRIBUTION OF FEATURE POINTS ACROSS THE FACE
The figure above shows the set of FPs. Before using them for the animation of a particular model, they have to be calibrated. This is done using face animation parameter units (FAPU), which are defined as fractions of distances between key facial features, such as the eye-nose separation, as shown in Figure 2. They are specific to the actual 3D face model that is used. While streaming FAPs, every FAP value is calibrated by a corresponding FAPU value as defined in the standard. Together with FPs, FAPU serve to achieve face-model independence for MPEG-4 compliant face players. By coding a face model using FPs and FAPU, developers can freely exchange face models without worrying about calibration and parameterization for animation.
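The calibration step can be sketched as follows. The key distances below are made-up, model-specific values (in model units), and the 1/1024 scaling reflects the standard's definition of a FAPU as a fraction of a key facial distance:

```python
# Illustrative, model-specific key distances in model units (not from a real model).
KEY_DISTANCES = {
    "ES":  0.060,   # eye separation
    "ENS": 0.040,   # eye-nose separation
    "MNS": 0.030,   # mouth-nose separation
    "MW":  0.050,   # mouth width
}

def fapu(name, distances=KEY_DISTANCES):
    """One FAPU is the corresponding key distance divided by 1024."""
    return distances[name] / 1024.0

def apply_fap(fap_value, fapu_name):
    """Scale a streamed (dimensionless integer) FAP value into a model-space
    displacement for the feature point it drives."""
    return fap_value * fapu(fapu_name)
```

Because the stream carries only dimensionless FAP values, the same animation drives any face model: each player rescales the values by its own model's FAPU table.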
Text to Speech Engine
Prior to recent developments in speech processing, Bell Labs in the 1930s developed the VOCODER in an attempt to simulate human speech. Dudley made advances in this area, creating the VODER, a keyboard-operated speech synthesizer that was clearly intelligible and was exhibited at the 1939 New York World's Fair. The first complete text-to-speech system was completed in 1968.
Text-to-speech engines have been built from different design algorithms, models and modules, which software developers have adapted in their research and software products. A common architectural platform is discussed below.
The engine is made up of two parts: the front end and the back end. The front end takes input in the form of text and outputs a symbolic linguistic representation. The back end takes that representation as its input and outputs the synthesized speech waveform. These parts can be further divided into three modules:
FIGURE 2 A DIAGRAM TO REPRESENT THE VARIOUS MODULES IN A TEXT TO SPEECH ENGINE 
Talking heads in applications
Various products have been developed for creating 3D talking heads, including Extreme 3D, LightWave 3D, Ray Dream Studio, 3D Studio MAX, Softimage 3D, CrazyTalk 6.0 and Visual Reality. The Virtual Reality Modeling Language (VRML) allows the creator to specify images, and the rules for their display and interaction, using textual language statements. Among current web-based models, a wide range of companies have developed talking heads that undertake a similar role to the one needed for this project. Several websites offer services that allow other businesses to create, customize and then buy talking heads ready to implement on their own sites.
Microsoft Word also has features that allow users to have their own personal avatar on screen. These avatars provide options for what the user might like to do next, in most cases based on a problem or issue that has occurred, allowing users either to troubleshoot the issue and investigate how and why the error occurred, or simply to discard the problem and proceed. In most applications a talking head is there to supply a user-friendly approach: to direct users around the medium, supply useful information, and prevent uncertainty as and when issues occur.
REQUIREMENTS AND ANALYSIS
Before proceeding with the design and implementation of the project, a detailed analysis of the requirements and design architecture of the project needed to be carefully considered. This includes:
The user requirements
The technical requirements
To make the project more successful, the requirements of the users have to be considered.
This concerns what information potential users would like to get from a talking head website during open days, and how they would want the website to look.
The aims and objectives also have to be incorporated into it.
After several enquiries among students, the following results were found; they make up the user requirements:
Students wanted to get information regarding the department itself, courses in the department, facilities available, student life, accommodation and social life
The user interface had to be friendly and easy to use as well
An important requirement was to make the navigation of the website easy and intuitive
The use of colour and text that would be easy to read in the application was also important
The movement of the head's lips had to be properly synchronized with the words being uttered
The technical requirements for this project are divided into two parts: hardware and software.
Pentium IV 2GHz or higher recommended
512MB RAM or higher recommended
60 GB disk space or higher recommended
Duplex Sound Card/VGA Card/Keyboard/Mouse/Microphone/Speaker
Display Resolution: 1024 x 768 or higher
Video Memory: 128MB RAM or higher recommended
Anaglyph 3D glasses (Red and Cyan)
Reallusion CrazyTalk 6.0
Macromedia Dreamweaver CS4
Macromedia Flash CS4
Adobe Photoshop CS4
Imtoo AVI to SWF Converter
Crazy Talk 6.0
CrazyTalk is a piece of software for the easy and rapid creation of professional-quality 3D graphics for Flash and web developers. It creates animation from images, with enhanced facial fitting, natural life-like head movement, and editable lips, mouth, teeth and eyes, and it supports lip synchronization. The software supports the following outputs: MPEG-4, NTSC, PAL and HD, Flash FLV, TGA, BMP sequence, 3D stereo vision output, direct publishing of video to YouTube, and advanced web output.
Macromedia Dreamweaver CS4
Macromedia Dreamweaver is a program used to develop websites. It allows users to manipulate the way a web page is viewed by changing it directly on the interface rather than in the code. For the purpose of this project, HTML (HyperText Markup Language) is used for the interface. This makes Dreamweaver a very easy tool for designing web pages; it also lets programmers change the appearance of web pages through the HTML code. The program is also useful because it allows the integration of sound and visual effects by generating the code automatically for the user from its own library reference.
Macromedia Flash CS4
Flash is a program used to create animation, video, advertisements and various web page components, which can be integrated into web pages. It is also used to develop rich Internet applications. Flash can manipulate vector and raster graphics and supports bidirectional streaming of audio and video. It contains a scripting language called ActionScript. Files in the SWF format, traditionally called "ShockWave Flash" movies, "Flash movies" or "Flash games", usually have a .swf file extension and may be embedded as an object in a web page.
Adobe Photoshop CS4
This is an application used for image editing, creation and manipulation. The program is primarily used for design; for the purpose of this project it will be used to design the website background.
Imtoo AVI to SWF converter
This software is used to convert AVI video to SWF, which is the format compatible with the website to be designed.
The figure below shows the design architecture for this project
A Creation and development of the 3d talking head
B Animation, Addition of voice and Lip synchronization,
C Design of website
D Overall system integration to form the talking head website
Design and Implementation
Having decided on the software to be used, the next phase was to begin the design of the project. The design is divided into several sections; these include:
Creation and modeling of 3d talking head using crazy talk software
Overall System Integration to form the Talking Head Application
CREATION AND MODELLING OF 3D HEAD USING CRAZY TALK 6.0
The software has three menu tabs on top; model, scripting and output
The Model tab brings up the Model page, which provides the starting point of the application. The Model page interface features tools for image selection, image processing, wire-frame fitting, profile style setting, standby motion, background mask editing, and background with camera movement. After selecting an image to be used as the model, the image processing tools can be used to enhance the quality of the image. The fitting tools can then be used to fit a wire frame to the image. This creates the talking image, which can then be animated with a script, along with gestures and expressions, for creating a talking message. The application provides different types of profiles to fit models according to their characteristics. Changing the background image, and specifying whether it moves along with the camera or with the movement of the model, is simple.
The Model page has 10 main tabs on the left-hand side, corresponding to specific areas of customization. These include tabs for:
Background mask editing
Model motion settings
At the top of the software user interface, there are also tabs for preview, basic facial mode and detailed facial mode.
Import Image: First, an image was chosen for the creation of the talking head. The image was downloaded from the Internet and, using the Import tab, imported into the software to commence the creation. The figure below shows the image.
The Camera Capture tab is used if you want to capture an image with your webcam and use it for the creation. Since the image used for this project was imported, the tab was not needed.
Image Processing: The image processing tools are used to enhance the selected image's quality, rotate it, or crop it to use only a portion of the original image. This allows you to focus specifically on the facial details within an image, resulting in a more accurate talking head.
Face Fitting: The automatic face fitting tool in CrazyTalk creates four basic anchor points around the head, which allows you to create a model in a matter of a few mouse clicks. This process is totally automatic and requires little or no complex frame-fitting technique. After creating a basic frame to fit the face, the fitting tools were used to increase the accuracy of the wire frame by adjusting the frame points with more precision. CrazyTalk shows an estimate of the position of the four points defining the eye and mouth positions, but the numbered indicators on the face can be clicked and moved to adjust the four points and get a better animation. The Reset button, when clicked, cancels all work already done. For this project, the anchor points were adjusted to fit the head and produce a more accurate wire frame.
Face Orientation: The Face Orientation button was used to adjust the head profile style and define the face orientation of the head model. The Rotate tool was also used to fit the angle of the model's face. It ensured the 3D mesh of the head matched the facial angle of the character in the photo, to generate the best head rotation animation.
Background Mask Editing: The original colour of the image background was used; it was not edited, and the background settings were left at their defaults.
Eye Settings: CrazyTalk provides a virtual eye template gallery of VividEye templates. The EyeOptics settings simulate specularity and shadow effects on the eyeballs, which convey their liveliness. You can generate brilliant eyes by increasing the specularity to add clarity to the eyeballs, or add pale and dull effects; this feature lets you create sparkling, crystalline or turbid eyeballs. The original eyes of the imported image were used for this creation because of their realistic look.
Mouth Settings: The mouth settings tools were used to modify the inner mouth and throat colour for animation scripts. This was done by clicking the Mouth tab and using the sliders to adjust the colour levels of the inner mouth: Brightness, Contrast, Hue and Saturation. These sliders were adjusted until the desired throat colour was achieved; the mouth of the model was kept wide open during this operation to make the colour change easier to see. The mouth settings tools were also used to choose the teeth and lips for the model, selected from the teeth template. The Reset button was used to clear all changes made to the model if there was a mistake.
Model Motion Settings: This tool was used to set the idle disposition of the model, as well as the head motion strength.
The wire frame surrounding the face is automatically generated by CrazyTalk according to the four points set in the Face Fitting panel. The points and lines define the range of the model in the image and how the facial features of the model in the photo are mapped to those of the 3D virtual head. The head frame defines the head area of the model, including the facial features, the nose, the mouth, and even objects such as hair or long ears that are attached to the head.
The Scripting page is used to add talking scripts to the model. This can be done by inserting a pre-recorded voice, by direct recording, or by simply typing text into the built-in text-to-speech engine. The figure below shows the scripting page.
For this project, the software's built-in text-to-speech engine was used to create scripts for the talking model. CrazyTalk supports SAPI-compliant text-to-speech engines; it currently uses the Microsoft text-to-speech engine. To create the talking scripts, the TTS dialog box was opened and the required text typed into it. In this case the required output for each head was typed accordingly and saved.
The required text was typed in the editor window and the voice adjusted using the volume, pitch and speed sliders to achieve the desired effects. Afterwards the Preview button was used to play the text; the Reset button restored the sliders to their default settings, and when done the OK button was clicked. The website had a total of seven (7) pages and seven (7) different talking heads, covering the home page, courses page, facilities page, student life page, accommodation page, study facilities page, sports facilities page and labs page. For each head the process was repeated and the required animation script created.
The animation script consists of many small parts, or sequences. The Timeline tab on the Script page was used to add expressions, gestures, facial movements and special effects to the complete timeline or to individual sequences. The Timeline tab thus enabled customization of the model's face to show specific movements and expressions that match the speech or text of the script. Lip sync was performed automatically by the software.
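Automatic lip sync of the kind described above can be approximated by mapping timed phonemes onto a smaller set of visemes (mouth shapes), since several phonemes look identical on the lips. The grouping and names below are illustrative, not CrazyTalk's actual tables:

```python
# Illustrative phoneme -> viseme grouping: visually identical phonemes share a
# mouth shape (e.g. /p/, /b/, /m/ all close the lips).
VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_teeth",   "v": "lip_teeth",
    "a": "open_wide",   "o": "rounded",     "u": "rounded",
}

def keyframes(phoneme_timings, default="rest"):
    """[(phoneme, start_ms), ...] -> [(viseme, start_ms), ...].
    Consecutive phonemes that map to the same mouth shape are merged into a
    single keyframe, which is what keeps the animation from jittering."""
    frames = []
    for phoneme, start in phoneme_timings:
        shape = VISEME.get(phoneme, default)
        if not frames or frames[-1][0] != shape:
            frames.append((shape, start))
    return frames
```

A TTS engine that reports phoneme timings can drive this directly: the resulting keyframes are exactly the mouth-shape events a facial animation timeline needs.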
3D STEREO OUTPUT
After modelling, animating and scripting, the next stage was to create a 3D stereo output of the talking head. The Output menu tab brings up the Output page, shown in the figure below. The default output format, AVI, was used. The Stereo Vision box was checked and anaglyph red/cyan chosen. The display distance was also set, as this strongly affects the convergence of the media during playback. The Original Resolution option was selected and the output size left at the default of 720 by 720. The Export button was then clicked to save the file.
The output was a 3D stereo vision talking head in Audio Video Interleave (AVI) format, which can be viewed with anaglyph red/cyan glasses, as shown in the figure.
The procedure was repeated for all the talking heads meant for the various web pages. The AVI format is not compatible with web pages, so in order to incorporate these heads into the website they had to be converted to Small Web Format (SWF).
CONVERSION TO SWF
The ImTOO AVI to SWF converter was used to convert the talking head files from AVI to SWF, the format compatible with the website to be designed. This was done by opening the application, clicking the Add Files button and selecting the talking head files to be converted, after which the files were checked and the Convert button clicked. There was provision to alter the default settings of the output file, but for the purpose of this project the default settings were used. The figure below shows the application as used to convert the files to SWF.
This process was used to convert all the animated 3D talking heads to SWF format.
The last stage of development was to create a website to house the talking heads. The aim was to develop a website that allowed prospective postgraduate students to obtain important information about the School of Engineering and Technology. The site would have pages containing information about postgraduate courses offered, accommodation, study facilities, sports facilities, student life and department labs. Since the heads are in 3D stereo vision, red/cyan anaglyph glasses are required to view the website. As the central concept of this project is the design of a 3D talking head, there was little point in including non-3D elements, so the web pages consisted mainly of 3D content, namely the talking heads.
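For illustration, placing an SWF talking head on one of these pages relies on the classic object/embed markup that Dreamweaver generates when a Flash file is inserted. The helper below sketches the general shape of that markup; the attribute values and file name are illustrative assumptions, not copied from the project's actual pages.

```python
def swf_embed_html(swf_path, width=720, height=720):
    """Return object/embed markup of the kind Dreamweaver generates
    when an SWF (Flash) movie is inserted into a page.  The classid
    is the standard Shockwave Flash ActiveX identifier; all other
    values here are illustrative.
    """
    return (
        f'<object width="{width}" height="{height}" '
        f'classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000">\n'
        f'  <param name="movie" value="{swf_path}" />\n'
        f'  <embed src="{swf_path}" width="{width}" height="{height}" '
        f'type="application/x-shockwave-flash" />\n'
        f'</object>'
    )
```

The duplicated object and embed elements exist for historical browser compatibility: Internet Explorer used the object/classid route, while other browsers of the time read the embed tag.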
There were a total of seven pages for this website; the home page was the starting point, after which the others were designed accordingly.
The home page is the most important page of a website: it is the one page that all visitors will view. A poor home page can destroy any chance of achieving the website's objectives within a few seconds, while a good home page is the blueprint for every successful website.
This home page had to be brief and to the point, so only a short welcome speech was delivered by the 3D talking head when the page opened. At the top of the page was a University of Hertfordshire logo. For navigation, a menu bar runs along the top of the web page, giving the user the option of viewing the remaining pages.
This page gives information about the various electrical and electronic engineering postgraduate courses offered by the school.
This page gives information about student life at the university, including clubs, associations and so on.
This page provides information as regards the various accommodation options available to prospective students.
This page has sub pages for study facilities and sport facilities. The study facilities page gives information about the Learning Resource Centres while the sports facilities page gives information about the university sports village.
This page gives information about the laboratories available in the department.
Testing and Maintenance Strategies
This chapter focuses on the testing carried out on the 3D talking head website and the various maintenance strategies that could subsequently be applied. Testing refers to a systematic process of checking whether a product or service being developed meets the specified requirements. Many organisations have a department devoted to testing and maintenance, sometimes known as quality assurance. A quality assurance system is usually set up to increase a company's credibility and customer confidence; it also helps to improve work processes and efficiency, and enables a company to compete better with others.
Testing is a crucial part of quality assurance, and probably the most important. Two main types of testing were performed:
System Testing - This was done by checking out all features and functions within the website so as to identify any dysfunctions, bugs or areas of improvement.
User Testing - This involved allowing users to test the website and collecting their opinions, criticisms, strengths and areas of improvement of the project.
The goal of this testing is to isolate each part of the system and check that every part works correctly. Usability, functionality and navigation were tested. The website was explored thoroughly, checking all the web pages, the interface, the links and the interactive features of the talking head to detect any broken links or problems with interactivity. The links were checked to confirm that they led to the corresponding pages. One of the problems discovered was broken links: some links, when clicked, showed a blank page. This was a website design problem concerning the linking and naming of the various pages, and I had to go back to Dreamweaver to correct the error. Another problem was that the talking head on the accommodation page had a different background, an error that occurred during the modelling and design of the head. This was resolved by returning to the application and redesigning the head with the same background, so as to make all the 3D talking heads uniform. The system was then evaluated thoroughly to confirm the corrections.
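The manual link checking described above can also be automated. The sketch below, written for this report as an illustration rather than a tool used in the project, walks a local copy of the site, extracts href targets from each HTML file and reports any local link whose target file does not exist, which is exactly the linking and naming error found during system testing.

```python
import os
import re

# Naive href extractor; skips in-page fragment links such as href="#top".
HREF_RE = re.compile(r'href="([^"#]+)"', re.IGNORECASE)

def find_broken_links(site_root):
    """Scan every .html file under site_root and return a list of
    (page, target) pairs where a local link points at a file that
    does not exist.  External http/https/mailto links are skipped.
    """
    broken = []
    for dirpath, _, filenames in os.walk(site_root):
        for name in filenames:
            if not name.lower().endswith(".html"):
                continue
            page = os.path.join(dirpath, name)
            with open(page, encoding="utf-8", errors="ignore") as f:
                for target in HREF_RE.findall(f.read()):
                    if target.startswith(("http://", "https://", "mailto:")):
                        continue
                    # Resolve the link relative to the page's own folder.
                    if not os.path.exists(os.path.join(dirpath, target)):
                        broken.append((page, target))
    return broken
```

Running such a check after every round of edits in Dreamweaver would catch renamed or missing pages before users do.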
User testing was based on verbal feedback received from external users of the developed application. The main problem users reported was the lip synchronisation: visually, some of the words did not match the audio perfectly, although this could be improved if a different software tool were used to generate the mouth movement. Another issue was that the 3D head blinked only when idle; while talking, the eyes remained open. I tried hard to resolve this, but the main setback was that making the eyes blink while talking required using artificial eyes from the software's eye template, and these made the talking head look very artificial and unreal. A further recommendation was that the website might have looked more appealing if Flash had been embedded to enhance the visual aspect and user interactivity when selections were made. Apart from these three suggestions, the overall system received positive feedback in all areas.
This section briefly outlines the content administration procedures created for this website.
Adding new content to the website is quite an easy process. Since Dreamweaver was used to design the website, any additions must also be made in Dreamweaver. When the page to which content is to be added is opened, the text and image placeholders are visible. Adding text involves placing the cursor at the required position and typing accordingly. To add an image, a new image placeholder is inserted into the page; this has to be the same size as the image to be inserted. The placeholder is then double-clicked and the required file chosen to place it on the page. To add a new web page, the page has to be designed in Dreamweaver as well. To add a new 3D talking head to a page, the head must first be designed and animated in CrazyTalk, the software used for modelling and animation in this project, after which it can be inserted into a page by first inserting an image placeholder and then inserting the file into the holder.
This is quite a straightforward process. It involves opening the page in Dreamweaver and deleting whatever content needs to be removed.
Updating content requires opening the relevant page in Dreamweaver. For textual updates this means deleting the text to be replaced and typing the new text; for images, it involves deleting the image and inserting a new one into the image placeholder.
The key aims of this project were to design and animate a 3D talking head, and to design a website to house it. The talking head website was designed to be used during open days to provide information to prospective postgraduate students, and the aims of the project were undoubtedly met.
The concept of embedding 3D into the project was to add depth and realism to the talking head. The work was done in two parts: first designing the 3D talking head, and second designing the website to house it. The project was completed using a wide range of software tools and techniques, including CrazyTalk 6.0, Adobe Photoshop, Macromedia Dreamweaver, HTML and the ImTOO AVI to SWF converter.
In conclusion, this project has great potential for the future of human-computer interaction. In the area of web browsing, it presents users with a more realistic and interactive experience. It also offers an alternative to the traditional website and could therefore be of benefit to people with disabilities such as impaired eyesight. In terms of personal development, delving into a developing area like this is both interesting and challenging, and this project has helped me gain knowledge in a number of key areas.
The project was completed within the given time frame. This was made possible by the use of appropriate action plans and a Gantt chart, which outlined the various tasks to be accomplished within given periods. In general the Gantt chart was essential, as it helped ensure that the work remained on schedule.
During the project there were times when some tasks took more time than planned, but also times when tasks took less time than planned, which helped to balance everything out.