Recent advances in networking and digital media technologies have created a large number of networked multimedia applications. These applications are often deployed in distributed network environments that leave multimedia content vulnerable to privacy breaches and malicious attacks. In insecure environments, it is possible for an adversary to tamper with images during transmission. To guarantee trustworthiness, image authentication techniques have emerged to confirm content integrity and prevent forgery. These techniques are required to be robust against normal image processing and transmission errors, while still being able to detect malicious tampering with the image. Such authentication techniques have wide applicability in law, commerce, journalism and national defence. In the literature, methods of image content authentication are categorized as either digital signature based or watermarking based. A digital signature (or crypto-hash) is a set of extracted features that captures the essence of the image content in a compact representation. It is stored as a separate file and later used for authentication. Signature-based methods can provide both integrity protection of the image and repudiation prevention for the sender.
Watermarking, on the other hand, is an invasive method that embeds a message directly into the image data; the hidden message is later extracted to verify the authenticity of the image content. Watermark-based approaches only protect the integrity of the image. The major difference between a watermark and a digital signature is that the embedding process of the former requires the content of the media to change.
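To make the signature-based idea concrete, the following is a minimal, purely illustrative sketch in Java: coarse block-average values stand in for the extracted features, and a SHA-256 digest of those features serves as the fixed-length signature stored alongside the image. The class and method names are hypothetical, and real schemes (including the one proposed here) use far more robust multi-scale features.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Toy signature-based authenticator: hash coarse block-average features of a
// grayscale image into one fixed-length digest. Illustrative sketch only.
public class ToySignature {

    // Compute one average per 2x2 block as a crude "content feature".
    static int[] blockFeatures(int[][] gray) {
        int rows = gray.length / 2, cols = gray[0].length / 2;
        int[] features = new int[rows * cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                features[r * cols + c] =
                    (gray[2*r][2*c] + gray[2*r][2*c+1]
                   + gray[2*r+1][2*c] + gray[2*r+1][2*c+1]) / 4;
        return features;
    }

    // Hash the feature vector into a fixed-length digest (the "signature").
    static byte[] sign(int[][] gray) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (int f : blockFeatures(gray)) md.update((byte) f);
            return md.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Authentication: recompute the signature and compare with the stored one.
    static boolean verify(int[][] gray, byte[] storedSig) {
        return MessageDigest.isEqual(sign(gray), storedSig);
    }
}
```

Because the signature is a digest of features rather than of raw pixels, its length stays fixed regardless of image size, which is the property the proposed scheme relies on.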
The contribution of this paper is a signature-based image authentication scheme that tries to overcome the severe constraints on security and data transmission capability imposed by a wireless environment. To this end, the proposed scheme generates only one fixed-length digital signature per image, regardless of the image size and of packet loss during transmission. The robustness of the proposed scheme is achieved by employing the concept of structural features, whereas security is achieved by adopting a filter parameterization technique.
The major features that differentiate the proposed scheme from existing state-of-the-art approaches are: (1) it works at a semi-fragile level, which means that some manipulations of the image are considered acceptable; (2) greater robustness - it can tolerate a range of attacks while accurately locating the tampered area - achieved by exploiting the concept of the structural digital signature (SDS); (3) the integration of the SDS with key-dependent parametric wavelet filters, which makes the scheme more resilient to security attacks; (4) reduced computational complexity, owing to the framework of a lifting-based wavelet transform; and (5) the ability to support efficient and accurate tamper localization in spite of information loss in large or highly variant areas.
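The low computational cost claimed in point (4) comes from the lifting scheme, which replaces convolution filtering with a few in-place add/shift steps. The sketch below shows a one-level lifting-based Haar transform on a 1-D signal of even length; it is a generic textbook construction, not the paper's exact (parameterized) filter. A 2-D image transform applies the same split/predict/update steps along rows and then columns.

```java
// One-level lifting-based Haar wavelet transform (generic illustration).
public class HaarLifting {

    // Forward transform: returns {approximation coefficients, detail coefficients}.
    static double[][] forward(double[] x) {
        int half = x.length / 2;
        double[] s = new double[half]; // smooth / approximation part
        double[] d = new double[half]; // detail part
        for (int i = 0; i < half; i++) {
            // Split into even and odd samples, then:
            d[i] = x[2*i + 1] - x[2*i];   // predict step: odd minus even
            s[i] = x[2*i] + d[i] / 2.0;   // update step: preserves the local mean
        }
        return new double[][] { s, d };
    }

    // Inverse transform: undo the update, then the predict, then merge.
    static double[] inverse(double[] s, double[] d) {
        double[] x = new double[2 * s.length];
        for (int i = 0; i < s.length; i++) {
            x[2*i] = s[i] - d[i] / 2.0;
            x[2*i + 1] = x[2*i] + d[i];
        }
        return x;
    }
}
```

Every lifting step is trivially invertible by reversing the operations, which is why the transform is both fast and exactly reversible.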
The proposed scheme exploits the scalability of a structural digital signature in order to achieve a good tradeoff between security and image transfer for networked image applications. In this scheme, multi-scale features are used to make the digital signature robust to image degradations, and key-dependent parametric wavelet filters are employed to improve security against forgery attacks. The scheme is also able to localize tampered areas in the attacked image.
Multimedia authentication techniques have been widely used to verify the content and integrity of digital media. Current multimedia authentication schemes can be divided into two categories according to the authenticator used: digital watermarking based and digital signature based. In this paper, a comprehensive overview of current tamper detection and authentication methods based on digital watermarking is presented. We discuss authentication watermarking systems and their desirable features, common methods of attack and their countermeasures, survey some popular authentication watermarking schemes, and finally give some suggestions for future research.
Digital watermarking describes methods and technologies that hide information, for example a number or text, in digital media such as images, audio or video. The information is not embedded in a frame around the data; rather, the embedding takes place by manipulating the content of the digital data itself. The hiding process has to be such that the changes to the media are imperceptible. For images this means that the changes to the pixel values have to be invisible.
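A classic way to manipulate pixel values imperceptibly is least-significant-bit (LSB) embedding, sketched below. This is a generic textbook technique, not the scheme proposed in this paper: each message bit replaces the LSB of one pixel, so no pixel changes by more than one gray level.

```java
// Minimal LSB watermark embedding: hide message bits in the least significant
// bit of each pixel value. The +/-1 pixel change is invisible, but any
// re-quantization of the image destroys the mark, so LSB embedding is fragile.
public class LsbWatermark {

    // Embed one message bit into the LSB of each pixel, in order.
    static int[] embed(int[] pixels, int[] bits) {
        int[] marked = pixels.clone();
        for (int i = 0; i < bits.length; i++)
            marked[i] = (marked[i] & ~1) | (bits[i] & 1);
        return marked;
    }

    // Extract the hidden bits back out of the LSBs.
    static int[] extract(int[] marked, int count) {
        int[] bits = new int[count];
        for (int i = 0; i < count; i++)
            bits[i] = marked[i] & 1;
        return bits;
    }
}
```

The fragility of this embedding is exactly what the robust/fragile distinction discussed next is about: LSB marks verify integrity but do not survive even mild processing.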
Furthermore, the watermark must be either fragile or robust, depending on the application. By "robust" we mean the capability of the watermark to resist manipulations of the media, such as lossy compression (where compressing data and then decompressing it yields data that may well differ from the original, but is close enough to be useful), scaling, and cropping, to enumerate just some. In other cases the watermark may need to be fragile. "Fragile" means that the watermark resists tampering only up to a certain, predetermined extent, or not at all. The first applications that came to mind were related to copyright protection of digital media.
In the past, duplicating a work of art was quite complicated and required a high level of expertise for the counterfeit to look like the original. In the digital world, however, this is no longer the case. Now it is possible for almost anyone to manipulate or duplicate digital data without any loss of quality. Just as artists once signed their paintings with a brush to claim copyright, artists of today can watermark their work by hiding their name within the image.
This embedded watermark permits identification of the owner of the work. It is clear that this concept is also applicable to other media such as digital audio and video. Currently, unauthorized distribution of digital audio over the Internet in the MP3 format is a big problem. In this scenario digital watermarking may be useful to set up controlled audio distribution and to provide an efficient means of copyright protection.
Software Requirements Specification
A feasibility study is a high-level capsule version of the entire system analysis and design process. The study begins by clarifying the problem definition; the goal is to determine whether the project is worth doing. Once an acceptable problem definition has been generated, the analyst develops a logical model of the system, and alternatives are then searched for and analyzed carefully. There are three parts to a feasibility study.
Evaluating technical feasibility is the most complex part of a feasibility study, because of issues such as performance and costs (on account of the kind of technology to be deployed). A number of issues have to be considered while doing a technical analysis: understand the different technologies involved in the proposed system (before starting the project we have to be very clear about which technologies are required for the development of the new system), and check whether the organization currently possesses the required technologies.
A proposed project is beneficial only if it can be turned into an information system that will meet the organization's operating requirements. Simply stated, this feasibility test asks whether the system will work when it is developed and installed, and whether there are major barriers to implementation. The following questions help to test the operational feasibility of a project: Is there sufficient support for the project from management and users? If the current system is well liked and used to the extent that people cannot see reasons for change, there may be resistance from users. Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about more operational and useful systems. Have the users been involved in the planning and development of the project? Early involvement of users reduces the chances of resistance to the system and, in general, increases the likelihood of a successful project. Since the proposed system is intended to reduce the hardships encountered in the existing manual system, the new system is considered operationally feasible.
Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This feasibility study gives the economic justification for the new system to top management. A simple economic analysis that gives the actual comparison of costs and benefits is much more meaningful in this scenario. In addition, it proves to be a useful point of reference for comparing actual costs as the project progresses. There can be various types of benefits on account of automation, including increased customer satisfaction, improvement in product quality, better decision making, timeliness of information, expedited activities, better documentation and record keeping, improved accuracy of operations, faster retrieval of information, and better employee morale.
A requirement can range from a high-level abstract statement of a service or of a system constraint to a detailed mathematical functional specification.
Functional requirements should describe all the required functionality or system services.
Non-Functional Requirements:
The non-functional requirements should define system properties and constraints.
3.2.3 System Requirements:
System Requirements are more detailed specifications of system functions, services and constraints than user requirements.
The first step in developing anything is to state the requirements. This applies just as much to leading-edge research as to simple programs, and to personal programs as well as to large team efforts. Being vague about your objective only postpones decisions to a later stage where changes are much more costly.
The problem statement should state what is to be done and not how it is to be done. It should be a statement of needs, not a proposal for a solution. A user manual for the desired system is a good problem statement. The requestor should indicate which features are mandatory and which are optional, to avoid overly constraining design decisions. The requestor should avoid describing system internals, as this restricts implementation flexibility. Performance specifications and protocols for interaction with external systems are legitimate requirements. Software engineering standards, such as modular construction, design for testability, and provision for future extensions, are also proper.
Many problem statements, from individuals, companies, and government agencies, mix requirements with design decisions. There may sometimes be a compelling reason to require a particular computer or language; there is rarely justification to specify the use of a particular algorithm. The analyst must separate the true requirements from design and implementation decisions disguised as requirements. The analyst should challenge such pseudo-requirements, as they restrict flexibility. There may be political or organizational reasons for the pseudo-requirements, but at least the analyst should recognize that these externally imposed design decisions are not essential features of the problem domain.
A problem statement may have more or less detail. A requirement for a conventional product, such as a payroll program or a billing system, may have considerable detail. A requirement for a research effort in a new area may lack many details, but presumably the research has some objective, which should be clearly stated.
Most problem statements are incomplete, ambiguous, or even inconsistent. Some requirements are just plain wrong. Some requirements, although precisely stated, impose unreasonable implementation costs or unpleasant consequences on the system behaviour. Some requirements seem reasonable at first but do not work out as well as thought. The problem statement is just a starting point for understanding the problem, not an immutable document. The purpose of the subsequent analysis is to fully understand the problem and its implications. There is no reason to expect that a problem statement prepared without a full analysis will be correct.
The analyst must work with the requestor to refine the requirements so they represent the requestor's true intent. This involves challenging the requirements and probing for missing information. The psychological, organizational, and political considerations of doing this are beyond the scope of this book, except for the following piece of advice: If you do exactly what the customer asked for, but the result does not meet the customer's real needs, you will probably be blamed anyway.
Execute N channels
Generate the Public key and private key
Find neighbor for all channels
Enter the destination
Find possible paths to reach destination
Find the shortest path
Encode the image
Send to destination
Receive the encoded image at the destination
Apply wavelet decomposition
Apply content based verification
Display the results
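The steps above include finding neighbours and a shortest path between channels, but the source does not name the routing algorithm. The following is an illustrative sketch of one standard choice, Dijkstra's algorithm, over a weighted adjacency matrix of channels; the class name and matrix encoding are assumptions for this example.

```java
import java.util.Arrays;

// Illustrative "find the shortest path" step: Dijkstra's algorithm on a
// weighted adjacency matrix, where INF marks the absence of a direct channel.
public class ShortestPath {

    static final int INF = Integer.MAX_VALUE / 2; // "no edge" marker

    // Returns the minimum cost from `src` to every node.
    static int[] dijkstra(int[][] w, int src) {
        int n = w.length;
        int[] dist = new int[n];
        boolean[] done = new boolean[n];
        Arrays.fill(dist, INF);
        dist[src] = 0;
        for (int iter = 0; iter < n; iter++) {
            int u = -1;
            for (int v = 0; v < n; v++)        // pick the closest unsettled node
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            done[u] = true;
            for (int v = 0; v < n; v++)        // relax its outgoing edges
                if (w[u][v] < INF && dist[u] + w[u][v] < dist[v])
                    dist[v] = dist[u] + w[u][v];
        }
        return dist;
    }
}
```

The resulting distance array identifies the cheapest route over which the encoded image is then sent to the destination.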
Non-Functional Requirements:
The major non-functional requirements of the system are as follows:
The system is designed as a completely automated process; hence little or no user intervention is required.
The system is more reliable because of the qualities inherited from the chosen platform, Java. Code built using Java is more reliable.
The system is developed in a high-level language using advanced front-end and back-end technologies, so it responds to the end user on the client system within a very short time.
The system is designed to be cross-platform. It is supported on a wide range of hardware and on any software platform that has a JVM built into it.
The system is implemented in a web environment. Apache Tomcat is used as the web server and Windows XP Professional is used as the platform.
Interface: The user interface is based on HTML and XHTML.
Processor : Intel Pentium 4
RAM : 1 GB
Hard Disk : 80 GB
Microsoft Windows XP Professional
In this chapter the tools and technologies used in this project are clearly described. The characteristic features of these technologies are briefly presented.
Features of Java:
A programming tool or software tool is a program or application that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task. This chapter describes the software tools used in our project.
Java is related to C++, which is a direct descendant of C. Much of the character of Java is inherited from these two languages. Java's syntax is derived from C, and several of Java's object-oriented features were influenced by C++. Indeed, several of Java's defining characteristics come from, or are responses to, its predecessors. The creation of Java was deeply rooted in the process of refinement and adaptation that has characterized computer programming languages for the past three decades. For these reasons, this section reviews the sequence of events and forces that led up to Java. Each innovation in language design was driven by the need to solve a fundamental problem that the preceding languages could not solve. Java is no exception.
The Java Buzzwords
No discussion of Java's origins is complete without a look at the Java buzzwords. Although the fundamental forces that necessitated the creation of Java were portability and security, other factors also played an important role in molding the final form of the language. The following is the list of Java buzzwords:
For programs to be dynamically downloaded to all the various types of platforms connected to the Internet, some means of generating portable executable code is needed. As you will see, the same mechanism that helps ensure security also helps create portability. Indeed, Java's solution to these two problems is both elegant and efficient.
The Byte code
The key that allows Java to solve both the security and portability problems is that the output of the Java compiler is byte code. Byte code is a highly optimized set of instructions designed to be executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter for byte code.
Translating a Java program into byte code helps make it much easier to run the program in a wide variety of environments: once the run-time package exists for a given system, any Java program can run on it.
Although Java was designed for interpretation, there is technically nothing about Java that prevents on-the-fly compilation of byte code into native code. Sun supplies a Just In Time (JIT) compiler for byte code. When the JIT compiler is part of the JVM, it compiles byte code into executable code in real time, on a piece-by-piece, demand basis. It is not possible to compile an entire Java program into executable code all at once, because Java performs various checks that can be done only at run time. Instead, the JIT compiles code as it is needed, during execution.
Java Virtual Machine (JVM)
Beyond the language, there is the Java Virtual Machine. The Java Virtual Machine is an important element of the Java technology. The virtual machine can be embedded within a web browser or an operating system. Once a piece of Java code is loaded onto a machine, it is verified. As part of the loading process, a class loader is invoked and performs byte code verification, which makes sure that the code generated by the compiler will not corrupt the machine it is loaded on. Byte code verification takes place at the end of the compilation process to make sure that everything is accurate and correct. So byte code verification is integral to the compiling and executing of Java code.
Fig: Java byte code - the development process of a Java program
The figure shows how Java produces byte code and executes it. The Java source code is located in a .java file that is processed with the Java compiler, called javac. The Java compiler produces a .class file, which contains the byte code. The .class file is then loaded, across the network or locally on your machine, into the execution environment, the Java Virtual Machine, which interprets and executes the byte code.
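The pipeline just described can be traced with a minimal program; the file and class name below are chosen for illustration.

```java
// Tracing the source -> byte code -> JVM pipeline:
//   javac Hello.java   ->  produces Hello.class (the byte code)
//   java Hello         ->  the JVM loads, verifies, and executes the byte code
public class Hello {
    static String greeting() {
        return "Hello from the JVM";
    }
    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

The same Hello.class file runs unchanged on any platform with a JVM, which is the portability property discussed above.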
Java's architecture provides a portable, robust, high-performance environment for development. Java achieves portability by compiling to byte codes for the Java Virtual Machine, which are then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet.
Compilation of code
When you compile the code, the Java compiler creates machine code (called byte code) for a hypothetical machine called the Java Virtual Machine (JVM). The JVM executes the byte code. The JVM was created to overcome the issue of portability: the code is written and compiled for one machine and interpreted on all machines.
Compiling and interpreting Java Source Code
During run time, the Java interpreter tricks the byte code file into thinking that it is running on a Java Virtual Machine. In reality this could be an Intel Pentium machine running Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh running its own system, and all of them could receive code from any computer through the Internet and run the applets.
Java was designed to be easy for the professional programmer to learn and to use effectively. If you are an experienced C++ programmer, learning Java will be even easier, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Most of the confusing concepts from C++ are either left out of Java or implemented in a cleaner, more approachable manner. In Java there are a small number of clearly defined ways to accomplish a given task.
Java was not designed to be source-code compatible with any other language. This allowed the Java team the freedom to design with a blank slate. One outcome of this was a clean, usable, pragmatic approach to objects. The object model in Java is simple and easy to extend, while simple types, such as integers, are kept as high-performance non-objects.
The multi-platform environment of the Web places extraordinary demands on a program, because the program must execute reliably on a variety of systems. The ability to create robust programs was given a high priority in the design of Java. Java is a strictly typed language; it checks your code at compile time and at run time.
Java virtually eliminates the problems of memory management and de-allocation, which are completely automatic. In a well-written Java program, all run-time errors can, and should, be managed by your program.
For Java GUIs, a widget toolkit known as Swing is used. Swing is part of Sun Microsystems' Java Foundation Classes (JFC), an API for providing a graphical user interface (GUI) for Java programs. Swing was developed to provide a more sophisticated set of GUI components than the earlier Abstract Window Toolkit (AWT). Swing provides a native look and feel that emulates the look and feel of several platforms, and also supports a pluggable look and feel that allows applications to have a look and feel unrelated to the underlying platform.
Java Swing Class Hierarchy
The class JComponent, which descends directly from Container, is the root class for most of Swing's user interface components.
Fig 4.1 Java Swing class hierarchy (source: url 1)
Swing contains components that can be used to build a GUI. Knowledge of AWT programming is not necessary to learn and understand Swing programs.
Swing introduced a mechanism that allows the look and feel of every component in an application to be changed without making significant changes to the application code. The introduction of support for a pluggable look and feel allows Swing components to imitate the appearance of native components while still retaining the benefits of platform independence. This feature also makes it easy to make an application written in Swing look very different from native programs if desired.
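A minimal sketch of these ideas follows: a panel with a button whose action updates a label, plus the pluggable look-and-feel switch via the standard UIManager API. The class name is hypothetical; in a real application the panel would be placed in a JFrame and shown on the event dispatch thread via SwingUtilities.invokeLater.

```java
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.UIManager;

// Minimal Swing sketch: pluggable look and feel plus a button-driven label.
public class SwingDemo {

    static JLabel label = new JLabel("not clicked");

    static JPanel buildPanel() {
        try {
            // Pluggable look and feel: swap the L&F without touching component code.
            UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
        } catch (Exception e) {
            // fall back silently to the default look and feel
        }
        JButton button = new JButton("Click me");
        button.addActionListener(e -> label.setText("clicked"));
        JPanel panel = new JPanel();
        panel.add(button);
        panel.add(label);
        return panel;
    }
}
```

Note that the look and feel is set in one place and no component code changes, which is exactly the decoupling the pluggable look and feel provides.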
The tool used in this project is JFrameBuilder.
It is an easy-to-use, fully visual GUI development tool for Swing applications.
It is very useful for Java developers, who can build complicated GUI applications through a drag-and-drop interface without spending much time writing code.
JFrameBuilder supports the Swing components; a few of them are: