The Diversities and Difficulties of Computing Technology


New dimensions are being added to computing technology as new inventions emerge every day from modern scientific applications. The primary aim of this paper is to review and unify theories of emerging computing technologies in which interdisciplinary research and development is prominent. In this paper, industrial demonstrations, evaluations, exploitation strategies and research papers on the latest developments in computing technologies are studied and a generalization is given. We also state the hurdles which researchers and scientists are facing in the development of emerging computing technologies.

While studying semantic computing, we felt that computer science students should know the latest research areas of computer science, so that it is easier for them to select a specific research field. This article provides a generalization of current fields of research and development in computer science and technology, which will be the focus of the researchers of tomorrow. This paper is organized as follows: section I is devoted to the introduction of this research topic and the motivation for this research. In order to give a better understanding of current emerging fields, section II discusses several such computing technologies. A discussion is given in section III, and the conclusion forms the last section.

Neural computing deals with neurally-inspired information processing systems and modeling of the brain. The difference between humans and neural networks is mainly one of accuracy and memory. Human memory is not permanent, because it is stored in brain cells and cells continuously die and are born, while a neural network remembers whatever is taught to it: what it learns is stored in its memory permanently, and it performs the same task a thousand times or more exactly as it was taught the first time. Hence a properly trained neural network can rival anything, even a human. The fundamental weakness in current theories of neural networks is the total absence of an autonomous system, i.e. a system which makes decisions independently; current neural networks require constant baby-sitting, i.e. human intervention, to work properly. Autonomous robots and machines require autonomous neural networks, and an autonomous neural network requires an algorithm with autonomous learning ability. Neural computing will remain in its beginning stage unless and until such algorithms are developed [17].
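The "permanent memory" property described above can be illustrated with a minimal sketch (an illustrative example, not from the paper): a single perceptron trained once on the AND function reproduces the learned mapping identically every time it is asked.

```python
# A single perceptron learns the AND function once; afterwards it answers
# deterministically from its stored weights, every time it is asked.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights w and bias b with the classic perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
# Once trained, the stored weights reproduce the taught mapping exactly.
assert all(predict(w, b, *x) == t for x, t in AND)
```

Note that the learning here is entirely supervised; the autonomous learning ability discussed above would require the network to generate its own training signal.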


The guiding proverb of recovery oriented computing (ROC) is Shimon Peres's observation: "If a problem has no solution, it may not be a problem, but a fact, not to be solved, but to be coped with over time." That is, errors by hardware, software and people should be thought of as facts rather than problems that one must solve, and fast recovery is the solution [6]. Integrated diagnostic support, system-wide undo support, redundancy and isolation are the characteristics of recovery oriented computing. ROC was originally developed for reliable internet services. Recovery oriented computing provides higher availability because it reduces recovery time. ROC also reduces total cost of ownership, because failures become fewer, which in turn decreases system administration effort. Trustworthy systems have been set by Bill Gates as a new target for software developers at Microsoft [6]. ROC depends on participation from industry, and it needs benchmarks that are realistic and based on real-world environments [5].
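The system-wide undo support mentioned above can be sketched as a toy key-value store that logs every mutation, so that an operator error can be rolled back after the fact (an illustrative sketch, not an actual ROC implementation):

```python
class UndoableStore:
    """Toy key-value store with system-wide undo: every mutation is logged
    so a mistake by an operator or a faulty script can be rolled back."""

    _MISSING = object()   # sentinel: key did not exist before the mutation

    def __init__(self):
        self._data = {}
        self._log = []    # stack of (key, previous_value)

    def set(self, key, value):
        self._log.append((key, self._data.get(key, self._MISSING)))
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def undo(self, steps=1):
        for _ in range(min(steps, len(self._log))):
            key, prev = self._log.pop()
            if prev is self._MISSING:
                del self._data[key]
            else:
                self._data[key] = prev

store = UndoableStore()
store.set("quota", 10)
store.set("quota", 0)    # operator mistake
store.undo()             # treat the error as a fact; recover fast
assert store.get("quota") == 10
```

Fast recovery here is a constant-time pop from the log, which is what lets ROC trade prevention for quick repair.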


Reversible computing is an unconventional form of computing: a model in which computational processes are time-reversible. Conventional programs cannot be reversed because they destroy information. It is, however, easy to define a virtual machine which saves this information and thus allows reversal. Any program which runs on this machine must be reversible. Hence, one can go back and forth in a program, and the accuracy of a calculation can easily be checked. If there is an error, it can be detected easily, even if it is only one bit, and random corruptions can likewise be identified. On the other hand, reversible computations require extra memory storage and are slower, so they may not always be appropriate. It is not possible to define an exact inverse of floating-point operations due to round-off problems, because existing floating point hardware does not support it. However, slower computations using integers are appropriate for performing a safety function [18].
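A minimal integer-based sketch of the idea (illustrative, not from [18]): the update (a, b) → (a, a + b) destroys no information, so running it backwards recovers the input exactly, and a single-bit corruption of the result shows up as a mismatch.

```python
def forward(a, b):
    """Reversible update: (a, b) -> (a, a + b). No information is destroyed,
    so the step can be undone exactly (integers avoid round-off problems)."""
    return a, a + b

def inverse(a, s):
    """Exact inverse of forward: recovers (a, b) from (a, a + b)."""
    return a, s - a

state = (3, 5)
fwd = forward(*state)
assert inverse(*fwd) == state        # running backwards recovers the input

# Error detection: invert the result and compare with the original input;
# any corruption, even a single flipped bit, produces a mismatch.
corrupted = (fwd[0], fwd[1] ^ 1)     # flip the lowest bit of the sum
assert inverse(*corrupted) != state
```

With floating-point values, `(a + b) - a` need not equal `b`, which is exactly the round-off obstacle the text describes; integers make the inverse exact.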


Semantic computing deals with the semantics of computational content so that naturally expressed user intentions can be satisfied. The term 'content' includes service, text, audio, video, hardware, network, and so on. The major goal of semantic computing is to make the meaning of web contents and their semantics understandable to computers. As text is the primary medium in which information is interchanged, it directly becomes the first target: that computers understand the meaning of text [19].


Computers worn by humans are called wearable computers. Wearable computing includes the areas of pattern recognition, interface design, augmented reality and the use of wearables for disabilities, and it is an active topic of research. The European Commission funded an integrated project named wearIT@work to investigate wearable computing that can be worn as easily as clothing. The project volume was 23.7 million €, under contract number 004216, with funding of 14.6 million €; it is the largest wearable computing project to date. In the domains of emergency response, production, maintenance and healthcare, prototypes were developed and tested in end-user environments in this project. Prototypes were studied in design workshops so that they could be evaluated rapidly. Not all wearable applications can be implemented on a single overall system. However, an open wearable computing software framework (OWCF) was designed which provides services for context detection, wearable user interfaces (WUI), speech recognition, content management, localization and workflows. There is a strong indication that wearable computing will affect the out-of-office workplace as much as the personal computer has affected the office environment [1].


Perceptual computers will allow people to make subjective judgments. Encoder, CWW (computing with words) engine and decoder are the three main parts of a perceptual computer; the architecture of a perceptual computer is shown in figure 1. The input and output of a perceptual computer are perceptions, so it is easy for humans to interact with the computer using vocabulary words. A perceptual computer maps words into words using the mathematics of type-2 rule-based fuzzy logic sets and systems, so the mathematical details are hidden from the human and are only the concern of the designer [20].
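As a rough illustration of the encoder → CWW engine → decoder pipeline, the sketch below reduces each vocabulary word to a plain numeric interval. A real perceptual computer would use interval type-2 fuzzy sets [20]; the three-word vocabulary and the 0-10 scale here are assumptions made purely for illustration.

```python
# Words in, a word out: the numeric machinery stays hidden from the user.
VOCAB = {"bad": (0.0, 3.0), "fair": (3.0, 7.0), "good": (7.0, 10.0)}

def encode(word):
    """Encoder: map a vocabulary word to its interval model."""
    return VOCAB[word]

def cww_engine(intervals):
    """CWW engine: aggregate several judgments by averaging endpoints."""
    lo = sum(i[0] for i in intervals) / len(intervals)
    hi = sum(i[1] for i in intervals) / len(intervals)
    return lo, hi

def decode(interval):
    """Decoder: return the vocabulary word whose centroid is closest."""
    centre = (interval[0] + interval[1]) / 2
    return min(VOCAB, key=lambda w: abs(sum(VOCAB[w]) / 2 - centre))

judgments = [encode(w) for w in ("good", "good", "fair")]
print(decode(cww_engine(judgments)))   # -> good
```

The designer chooses the word models and the aggregation; the user only ever sees words, which is the point of the architecture.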


Biologically inspired computing requires that we look at nature in its totality: phylogeny (the evolution of species), epigenesis (lifetime learning) and ontogeny (the development of individual organisms). Solid-state devices are faster than the basic components of biological systems, but biological systems can implement much higher-level operations. The success of biologically inspired computing depends on the modeling of biological processes, which is very difficult because they span the range of chemical, physical and biological phenomena. The term noumena refers to "the thing itself", i.e., reality as it is, independent of our understanding or perception of it. On the other hand, the term phenomena means the understanding of a thing that we can gather within our conceptual framework, which is independent of the reality of the thing. It is very difficult, or even impossible, to build a phenomenological framework that captures all of the noumena [2].


The context of an event or idea is the surrounding situation, which is related to it and assists in understanding it. In context-aware computing, adaptation decisions are made by the designers during the design process. This gives the designer flexibility, but it also places an extreme load on the designer, because the number of adaptation decisions is very high. For example, suppose we consider only four contexts in mobile computing, such as battery level, network speed, CPU usage and memory availability, and assume that each context can be in only four states (best, good, bad or worst). Each adaptation rule then has the form: IF (Battery-Level is ai) AND (Network-Speed is bi) AND (CPU-Usage is ci) AND (Memory-Availability is di) THEN (Action ei). In the worst case, the adaptation policy will have 4^4, i.e. 256, adaptation rules. Maintaining such a large number of rules is very difficult and will cause distraction from the primary application logic [3]. In today's mobile platform applications, context-aware computing is the key missing technology, and next-generation Web services and applications are taking it into account. However, context is a very complex subject and wide-ranging issues are involved in it [4].
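The rule explosion described above is easy to reproduce: four contexts with four levels each yield 4^4 = 256 distinct IF-THEN combinations that the designer would have to enumerate and maintain by hand (a sketch; the context and action names are illustrative).

```python
from itertools import product

CONTEXTS = ["battery_level", "network_speed", "cpu_usage", "memory_availability"]
LEVELS = ["best", "good", "bad", "worst"]

# One hand-written action per combination of context levels: the full
# adaptation policy in the worst case described in the text.
rules = {combo: "action_for_" + "_".join(combo)
         for combo in product(LEVELS, repeat=len(CONTEXTS))}

assert len(rules) == 4 ** 4 == 256

# A single one of the 256 rules the designer must maintain:
situation = ("worst", "bad", "good", "best")
print(dict(zip(CONTEXTS, situation)), "->", rules[situation])
```

Adding a fifth context or a fifth level multiplies the table again, which is why the text calls the maintenance burden a distraction from the primary application logic.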


Self-management is the primary goal of autonomic computing. The complexity of computer networks creates a hurdle to further growth, and to reduce complexity we need computer systems which are capable of self-management. Self-protecting, self-healing, self-configuring and self-optimizing are the characteristics of autonomic computing. Large enterprises have started adopting self-managed systems which are self-adaptable and autonomic. The behavior and response of today's systems can be altered using the open industry standards of autonomic systems [21]. On the other hand, autonomic computing is still in its beginning stage [22].


In trusted computing, hardware and software constrain computers to behave in expected ways; in practice, cryptography is used to enforce the expected behavior. Authorization, authentication, file protection and memory protection are the characteristics of trusted operating systems. A trusted operating system also provides Mandatory Access Control (MAC), Discretionary Access Control (DAC) and object-reuse protection services. Microsoft has used trusted computing technology in Windows Vista and Windows 7 [23]. People are able to run their sensitive applications on the Next-Generation Secure Computing Base (NGSCB) presented by Microsoft, while deficiencies in hardware are addressed by Intel Trusted Execution Technology [24].


Computing technology which relates to or arises from emotion is called affective computing. Human intelligence is directly related to emotion, and emotion also affects decision-making processes. Changes in hormones, blood pressure and heart rate are influenced by emotions. To build complicated and interactive programs, researchers on intelligent agents have started working with psychological affective models. Intelligent agents which employ an emotional model of the human-computer interface can understand and adapt to the preferences and emotions of users [25]. Affective computing is more than a subset of computer science, and it has some of the most complex real-time problems yet to be solved. For computers to understand complex, naturally expressed and naturally occurring human emotions requires knowledge of psychology, physiology and neuroscience, so that computers can understand and respond to humans in real time [26].


Computationally hard tasks are solved by the use of inexact solutions in soft computing. Soft computing is a hybrid of evolutionary and genetic algorithms, artificial neural networks (ANN) and fuzzy sets and systems (FSS). It is a combination of emerging problem-solving technologies for solving real-world problems; hence, it makes a virtue out of necessity.

For systems with very large-scale solution spaces, exact models are impractical or very expensive, so they require approximate reasoning systems which can handle incomplete information. Soft computing provides a set of computing tools to perform such approximate reasoning and search tasks [27].
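As one concrete example of such a tool (a sketch, not from [27]), a tiny genetic algorithm trades exactness for speed: it searches a large solution space and returns a good, though not guaranteed optimal, answer.

```python
import random

def genetic_search(fitness, n_bits=16, pop=30, gens=60, seed=1):
    """Tiny genetic algorithm: tournament selection, one-point crossover,
    bit-flip mutation. Returns the best bitstring found -- an approximate
    answer, which is the soft-computing trade-off."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            parent = max(rng.sample(population, 2), key=fitness)  # tournament
            mate = max(rng.sample(population, 2), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = parent[:cut] + mate[cut:]                     # crossover
            if rng.random() < 0.1:                                # mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

# OneMax, a standard toy problem: maximize the number of 1-bits.
best = genetic_search(sum)
assert len(best) == 16 and sum(best) >= 12   # good, not provably optimal
```

The search space here has 2^16 points; the algorithm inspects only a fraction of them, which is exactly the approximate search behavior the section describes.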


Digital computation can be performed using photons of visible or infrared beams in optical computing. As light creates a small amount of heat compared to electric current, the design of powerful processing systems becomes feasible. Optical computing provides higher data rates and processing power, and it is the only computing technology which could replace electronics. Optical materials are lightweight, compact, inexpensive to manufacture and possess higher storage density. A problem that a conventional computer solves in 11 years could be solved by an optical computer in one hour, because optical computing systems have speeds up to 10^7 times faster than the latest electronic systems. But a major obstacle to building a complete optical computer is cascadability, i.e. building a large number of integrated all-optical gates, which is a very complex problem. Finding materials which combine low-power response, reliability, optical efficiency and speed is also a challenge. Furthermore, optical computing requires contributions from computer architects, physicists, material scientists and optical engineers [11]. New conducting polymers are being used as transistor-like switches which are 1000 times faster than silicon [12].


The high performance of hardware and the flexibility of software are combined in reconfigurable computing, which uses field-programmable gate arrays (FPGAs). The gap between application-specific integrated circuits (ASICs) and general-purpose processors (GPPs) is filled by reconfigurable computing on a single IC [8]. In reconfigurable computing, analysis and testing of a circuit is fast and efficient, complex digital functions can easily be implemented, and the functional design can be changed by users whenever they want [9].
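The user-changeable functional design can be illustrated with a toy model of an FPGA lookup table (LUT): a k-input logic block whose truth table is held in writable configuration memory, so the implemented function can be swapped at any time (a software sketch, not real FPGA tooling).

```python
class LUT:
    """Toy model of an FPGA lookup table: a k-input logic block whose truth
    table sits in writable configuration memory, so the function it
    implements can be changed at any time without new hardware."""

    def __init__(self, k):
        self.k = k
        self.config = [0] * (2 ** k)      # the "configuration bitstream"

    def _bits(self, index):
        return tuple((index >> j) & 1 for j in range(self.k))

    def program(self, func):
        """Reconfigure the block to implement any k-input boolean function."""
        self.config = [func(*self._bits(i)) & 1 for i in range(2 ** self.k)]

    def __call__(self, *inputs):
        index = sum(bit << j for j, bit in enumerate(inputs))
        return self.config[index]

lut = LUT(2)
lut.program(lambda a, b: a & b)   # behaves as an AND gate
assert lut(1, 1) == 1 and lut(1, 0) == 0
lut.program(lambda a, b: a ^ b)   # the same block, reprogrammed to XOR
assert lut(1, 1) == 0 and lut(1, 0) == 1
```

Real FPGAs wire thousands of such LUTs together through a programmable routing fabric, but the principle is the same: changing the configuration memory changes the circuit.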


Solving advanced computational problems by making use of computer clusters and supercomputers is called high performance computing (HPC). Computer systems have already reached the teraflops region and are called HPC computers. HPC technology provides high productivity and utilization factors, and HPC systems have therefore become a fundamental part of efficient data centers and cloud computing. Applications which require high networking bandwidth and computation power, such as climate forecasting, high-energy physics and genomics, need HPC. Continued growth in HPC performance and scalability requires that the price of optical cables become comparable to the price of copper cables [10].


Solving problems which are beyond the reach of ordinary IT by using specialized software algorithms and the massive computation power of supercomputers is known as deep computing. High-powered IBM® System x™ and BladeCenter® servers featuring Intel® Xeon® multi-core processors, which can compute advanced algorithms, are available. IBM provides on-demand access to high performance computing resources, known as Deep Computing Capacity on Demand (DCCoD) [7].


DNA computing, also called biomolecular computing, is a fast-developing interdisciplinary area. Instead of silicon-based systems, it utilizes molecular biology, DNA and biochemistry. Adleman proved that the Hamiltonian Path Problem (HPP) can be solved through a biochemical procedure using DNA molecules. The main idea of DNA computing is that data is encoded in DNA strands, and bio-operations are applied to the strands to simulate logical and arithmetical operations. A mix of 10^18 DNA strands can operate 10^4 times faster than the most advanced supercomputers. Representing numerical values to be processed by DNA strands is still an issue [13]. The reliance on natural enzymes which act only on certain nucleotide sequences is the current limitation of DNA computing [14]. The future of DNA computing has yet to be decided [15].
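The filtering idea behind Adleman's experiment can be sketched in software (sequentially, whereas the DNA version filters all candidate strands in parallel; the small graph below is illustrative and is not Adleman's seven-vertex instance).

```python
# In the wet-lab version, every candidate path exists as a DNA strand and
# bio-operations extract the valid ones in parallel; here we apply the same
# filters one candidate at a time.
from itertools import permutations

# A small directed graph given as an edge set (illustrative only).
EDGES = {(0, 1), (1, 2), (2, 3), (0, 2), (3, 1)}

def hamiltonian_paths(n_vertices, edges, start, end):
    """Keep exactly the vertex orderings that start and end correctly and
    use only existing edges -- the analogue of Adleman's extraction steps."""
    found = []
    for path in permutations(range(n_vertices)):
        if path[0] != start or path[-1] != end:
            continue                        # wrong endpoints: discard
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            found.append(path)              # every step is a real edge
    return found

print(hamiltonian_paths(4, EDGES, start=0, end=3))   # -> [(0, 1, 2, 3)]
```

The candidate space grows factorially with the number of vertices, which is why the massive parallelism of 10^18 strands is the whole appeal of the DNA approach.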


The results of this research are qualitative rather than quantitative, as these technologies depend on several factors and the numerical data needed for a proper comparison of all the technologies is unavailable. Hence the technologies are compared qualitatively. Moreover, some of the technologies are at the beginning stage and only a few companies are working on them; for example, IBM and a few other companies are involved in deep computing, and they do not share their statistics. Hence, conclusions are drawn qualitatively.


In this paper, current methods, techniques, breakthroughs and hurdles in the way of emerging computing technologies have been reviewed. Moreover, we have discussed their merits and demerits, reviewed the current projects in computing technology being undertaken by leading computing organizations, discussed weaknesses in current computing technologies together with the available and unavailable solutions to those weaknesses, and reviewed the platforms on which these computing technologies are being implemented.

In conclusion, context-aware computing, wearable computing and DNA computing will be the technologies of tomorrow, and in the future they will be the focus of research and development. Context-aware computing and wearable computing will succeed in the near future, while DNA computing will succeed somewhere between 2050 and 2060. These technologies will impact daily human life.