The definition of multiprocessing varies with context: it may refer to multiple cores on one die, multiple chips in one package, or multiple packages in one system unit (dual core, Core 2 Duo and quad core are familiar examples). Many people confuse multiprocessing with multi-core, but the two are technically different: in a multiprocessor there are one or more cores on each of several separate chips, while in a multi-core processor all the cores sit on a single chip. Chip multiprocessors, also known as multi-core processors, place more than one processor core on a single chip and can be thought of as the most extreme form of tightly coupled multiprocessing, but strictly speaking they do not fall under the definition of a multiprocessor.
In these types of computers all CPUs may be equal, or some may be reserved for special purposes: software or hardware may require that only one CPU respond to all hardware interrupts, while the rest of the work is distributed equally among the remaining processors. A multiprocessor system offers an advantage only if the software running on it understands that it has more than one processor available.
1.2.1 Symmetric Multiprocessing (SMP)
SMP is a multiprocessor computer architecture in which two or more identical processors connect to a single shared main memory. It was one of the earliest multiprocessor architectures. One of the easiest and cheapest ways to improve hardware performance is to put more than one CPU on the board and run them all in parallel on the same job.
The majority of typical desktop multiprocessor systems are based on the symmetric multiprocessing (SMP) architecture, which gives all processors a common front-side bus (and consequently a shared memory bus). This architecture provides nearly identical memory access latencies for every processor, but the shared system bus is a disadvantage for the memory system in terms of bandwidth: if a multi-threaded application is sensitive to memory bandwidth, its performance will be limited by this memory organization.
SMP is only suitable for smaller systems with up to about 8 processors. The SMP market formed around entry-level servers and workstations but gradually shifted to desktop PCs and laptops; SMP is extremely common in the modern computing world.
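The idea of identical processors running the same job in parallel can be sketched with Python's standard multiprocessing module (a minimal illustration, not tied to any particular SMP system; the operating system schedules the worker processes across the identical CPUs):

```python
from multiprocessing import Pool

def square(n):
    """CPU-bound unit of work; the same code runs on every processor."""
    return n * n

if __name__ == "__main__":
    # On an SMP machine the pool's worker processes are scheduled
    # across the identical CPUs by the operating system.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

On a single-CPU machine the same code still runs, but the workers are time-sliced rather than truly parallel.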
1.2.2 Asymmetric Multiprocessing (ASMP)
In asymmetric multiprocessing the program tasks (or threads) are strictly divided by type between processors, and typically each processor has its own memory address space. These features make asymmetric multiprocessing difficult to implement, and because of this complexity it was not adopted by many vendors or programmers during its brief period of use in the 1970s. An ASMP system assigns certain tasks only to certain processors: for example, one processor may handle all the interrupts while another handles I/O. ASMP computers comprise multiple physical processors that are not identical. Processors may be either master or slave: master processors are more capable than slaves, and they have full control over what the slave processors do.
Differences between ASMP and SMP
Hardware: In the symmetric multiprocessing design there are no master or slave processors; each processor is identical and has equal power. The processors share memory and can interact with each other directly, regardless of how many are in the system.
Software: Because SMP systems have no master or slave processors, each logical unit is able to complete any given task. In an ASMP system a certain processor may not be able to complete a task for a number of reasons, such as the special-purpose nature of the processor (e.g. a coprocessor), so tasks must be handed to it by the master processor. It is therefore up to the programmer to make sure the processors are used to their maximum potential.
ASMP is not much used in today's hardware, but the idea survives at the software level, where the main task is done by one processor and smaller tasks are delegated to others. It is the programmer's responsibility to decide which task is assigned to which processor; it is not determined at the hardware level.
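This software-level asymmetric division of labour can be sketched with Python's multiprocessing module (a hypothetical illustration; the task names are invented). The main process plays the master role and explicitly routes all I/O-type tasks to one dedicated worker process:

```python
from multiprocessing import Process, Queue

def io_worker(tasks, results):
    """Dedicated 'slave' process: handles only the I/O-type tasks
    that the master explicitly assigns to it."""
    while True:
        task = tasks.get()
        if task is None:          # sentinel: master tells the slave to stop
            break
        results.put(f"io-done:{task}")

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    slave = Process(target=io_worker, args=(tasks, results))
    slave.start()

    # The master decides which processor gets which task;
    # here all I/O work is routed to the single dedicated process.
    for t in ["read-disk", "write-log"]:
        tasks.put(t)
    tasks.put(None)

    print(results.get())   # io-done:read-disk
    print(results.get())   # io-done:write-log
    slave.join()
```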
1.2.3 Non-Uniform Memory Access multiprocessing (NUMA)
NUMA is designed to increase scalability. Instead of a single bus connecting all the processors and other resources, the processors and memory modules are divided into partitions called nodes. Each node contains processors and memory modules, and the nodes are connected by a high-speed interconnect network. The memory of a node is local to that node's processors and non-local to the processors of other nodes; a processor can access its own local memory faster than non-local memory.
After the 1970s the speed of processors outpaced the speed of memory, so processors had to stall while waiting for memory accesses to complete. The usual answer was more high-speed cache memory, but the dramatic growth in the size of operating systems and of the applications run on them has generally overwhelmed these caching improvements. Multiprocessor systems make the problem considerably worse, because when two processors need to access memory in parallel, one of them must wait. NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory.
1.3 Types of Multi-Processor based on Processor coupling
1.3.1 Tightly coupled: multiple CPUs are connected at the bus level. These may be SMP systems sharing common memory, or NUMA systems with both local and shared memory. For example, Intel Xeon and AMD Opteron processors have on-board caches; Xeon processors access shared RAM via a common pipe, while Opteron processors use independent pathways to system RAM.
1.3.2 Loosely coupled multiprocessor systems are based on multiple standalone single- or dual-processor machines interconnected via a high-speed communication system such as high-speed Ethernet. These are often called clusters and are explained later.
Multiprocessors are not necessarily faster than multi-core or single-core processors. Unless the software or operating system supports multiple processors, some applications will use only a single processor, and even some high-end applications may not require two. But when running two high-end applications at once, a multiprocessor system will outperform the others.
1.4 Advantages
- Cost-effective: the processors share many common resources.
- Reliable: the reliability of the system is also increased. The failure of one processor does not stop the other processors, though it will slow down the machine. Suppose a system has five processors and one of them fails; each of the remaining four processors then shares the work of the failed processor. The system does not fail, but the failed processor definitely affects its speed.
- More speed: increasing the number of processors means more work can be done in less time.
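The "more speed" claim has a well-known ceiling, usually formalized as Amdahl's law (not named above, but the standard way to quantify multiprocessor speedup): the serial fraction of a program caps the gain from adding processors. A small sketch:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when a fraction `parallel_fraction`
    of the work can be spread over `n_processors` and the rest is serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 5 processors, a program that is only 80% parallel
# speeds up by roughly 2.8x, not 5x.
print(round(amdahl_speedup(0.8, 5), 2))  # 2.78
```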
1.5 Limitation & Disadvantages
The performance of modern multiprocessor systems is increasingly limited by interconnect delays and the long latencies of memory subsystems. A multiprocessor can also be at a disadvantage relative to a single processor: because it processes more than one application at a time, there are more opportunities for errors should the wrong process be selected.
Some other limitations:
- Longer distances and slower bus speeds (compared with multi-core) when shuttling data between the CPUs.
- Access to RAM is serialized
- The different programming methods generally require two separate code trees to support both uniprocessor and SMP systems with maximum performance.
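In a higher-level language, the two-code-tree problem can sometimes be sidestepped by selecting a serial or parallel path at run time. A simplified Python sketch (real uniprocessor and SMP code trees would differ far more than this):

```python
import os
from multiprocessing import Pool

def work(n):
    return n * n

def run(items):
    """Pick a serial or parallel code path based on the CPUs available."""
    if os.cpu_count() == 1:
        # Uniprocessor path: plain loop, no process pool, no IPC overhead.
        return [work(i) for i in items]
    # SMP path: fan the work out across the available processors.
    with Pool() as pool:
        return pool.map(work, items)

if __name__ == "__main__":
    print(run(range(5)))  # [0, 1, 4, 9, 16]
```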
1.6 Future Scope
With the clock speeds of processors having hit a barrier because of heat problems, the need for multiprocessors is surely there. Moreover, with the release of Windows 7 and other high-end software such as 3ds Max 2009 and Adobe Master Collection CS4, the computational power needed by desktop computers is increasing, and so is the requirement for high-end desktop processors.
True parallelism can only be achieved with a multi-computer. In the simplest terms, a multi-computer means having two or more complete computers (each with its own CPU, storage, power supply, network interface, etc.) connected by a network, which may be private, public, or the internet.
A multi-computer is a computer made up of several computers. The term generally refers to an architecture in which each processor has its own memory, rather than multiple processors with a shared memory. Multi-computers can be used for multiple processor-intensive tasks at once: one CPU can do 3D rendering while on another you can play a game or do video processing.
2.1 Types of Multi-computer
- Distributed computing
- Cluster Computing (Grid computing)
- Cloud computing
- Desktop clustering
2.2 Distributed computing
Distributed computing refers to a single computer program running on more than one computer at the same time. A distributed system consists of multiple autonomous computers that communicate through a computer network and interact to achieve a common goal; it is a collection of independent computers that appears to its users as a single coherent system.
In particular, the different elements and objects of a program are run or processed on different computers' processors. A distributed computing setup has one or more servers that contain the blueprint for the coordinated program, the information needed to access member computers, and the applications that automate distribution of the program's processes when needed. These administrative servers are also where the distributed processes are coordinated and combined, and where the program outputs are generated.
With continuing advances in technology, the same system may be characterized as both parallel and distributed: parallel computing may be seen as a tightly coupled form of distributed computing, and distributed computing as a loosely coupled form of parallel computing.
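A minimal sketch of the coordinator/worker pattern just described, using only Python's standard library (the task and addresses are invented for illustration; in a real distributed system the worker would run on a separate machine rather than in a thread):

```python
import socket
import threading

def worker(srv):
    """Worker node: accepts one task (a line of numbers) and replies with their sum."""
    conn, _ = srv.accept()
    numbers = conn.recv(1024).decode().split()
    conn.sendall(str(sum(int(n) for n in numbers)).encode())
    conn.close()

if __name__ == "__main__":
    # The listening socket stands in for a worker machine on the network.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(1)
    threading.Thread(target=worker, args=(srv,), daemon=True).start()

    # Coordinator: send the task over the network and collect the result.
    client = socket.socket()
    client.connect(srv.getsockname())
    client.sendall(b"1 2 3 4")
    print(client.recv(1024).decode())  # 10
    client.close()
    srv.close()
```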
2.2.1 Applications of Distributed Computing
- When data is produced in one physical location and is needed in another location.
- It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer.
2.2.2 Distributed Computing Projects
- Folding@home - studies protein folding in search of cures for diseases such as cancer
- BOINC (Berkeley Open Infrastructure for Network Computing)
- BURP - developing a publicly distributed system for rendering 3D animations
- FreeHAL@home - parses and converts big open-source semantic nets for use in FreeHAL
- AQUA@home - predicts the performance of superconducting adiabatic quantum computers
- SETI@home - dedicated to finding signs of extraterrestrial life
2.3 Cluster computing
Cluster computing may be described as a type of distributed computing that links local computers. It is the technique of linking two or more computers into a network (usually a local area network) in order to take advantage of their parallel processing power. Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Large-scale cluster computing is called grid computing; a grid may be small (confined to a network of workstations within a corporation, for example) or a large public collaboration across many companies and networks.
Cluster computers are used in computational simulations of weather or vehicle crashes.
2.3.1 Types of cluster computing
- High-availability clusters: designed to ensure constant access to service applications. They maintain redundant nodes that can act as backup systems in the event of failure. The most common size for an HA cluster is two nodes, the minimum required to provide redundancy.
- High-performance clusters: designed to exploit the parallel processing power of multiple nodes. They are most commonly used to perform functions that require nodes to communicate as they work - for instance, when calculation results from one node affect future results from another.
- Load-balancing Clusters: These operate by routing all work through one or more load-balancing front-end nodes, which then distribute the workload efficiently between the remaining active nodes.
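The front-end routing idea can be sketched as a toy round-robin dispatcher in Python (the node names are invented; real load balancers also track node health and current load):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Front-end node that hands each incoming job to the next active back-end node."""
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def dispatch(self, job):
        node = next(self._nodes)
        return f"{job} -> {node}"

balancer = RoundRobinBalancer(["node-a", "node-b", "node-c"])
for job in ["req1", "req2", "req3", "req4"]:
    print(balancer.dispatch(job))
# req1 -> node-a
# req2 -> node-b
# req3 -> node-c
# req4 -> node-a
```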
2.4 Desktop Clustering (Multi-Computer environment)
A desktop multi-computer system makes work faster and more comfortable. It does not speed up a single computation, but it does increase speed when you want to run several applications at once. For example, suppose you use two processor-intensive applications, such as video editing and 3D rendering, at the same time: on a multi-computer system you can run video editing on one CPU and 3D rendering on the other.
One might instead think of multiple processors in one computer, but the way computers are built, a multiple-processor system does not double the system's speed. It makes some multi-CPU-aware applications go faster and allows for faster multitasking, but it is not like having two computers.
A better idea is to actually have two networked computers, each for a specific task. Desktop clustering needs a KVM (Keyboard, Video, Mouse) switch: a hardware device that allows a user to control multiple computers from a single keyboard, video monitor and mouse, though only a limited number of computers can be controlled at any given time. Modern devices also add the ability to share USB devices and speakers between computers. Control of the keyboard and mouse is switched from one computer to another by a switch or buttons on the KVM device, which passes the signals between the computers and the keyboard, mouse and monitor depending on which computer is currently selected. Although the keyboard and mouse work on only one computer at a time, both computers can execute applications in parallel.
The second layer is a software KVM or KM switch, which allows a seamless mouse and keyboard transition from one computer to another over a network connection: simply move the mouse cursor to the edge of your monitor (or monitors) and control transfers automatically to the other computer, with no need to press any hotkeys. The illusion is of one big multi-monitor system. A software KM switch usually also handles copy and paste between the two computers, so the illusion of a single system is quite good. Then comes sharing of disk space over Ethernet, which is not as fast as SATA but still provides enough speed to watch videos or transfer data easily.
The result is a true multi-tasking desktop in which the edges of the two computers are blurred using network discs and a hardware/software KM combination. With today's dual-core computers we then have four processors, and for 3D rendering the second computer can act as a network rendering client, so the two machines work together as four rendering nodes.
2.5 Cloud Computing
Cloud computing is a new, emerging computing technology that uses the internet and central remote servers to maintain data and applications. It allows consumers and businesses to use applications without installation and to access their personal files from any computer with internet access. It typically involves the provision of dynamically scalable and often virtualized resources as a service over the internet; computation runs on a supporting infrastructure that is independent of the applications themselves.
Some current cloud computing examples:
- Microsoft Live
- Microsoft Work Space (Beta)
- Google Wave (Preview)
- Google Chrome OS (Under development )
Cloud computing does not necessarily include grid computing, resources as a utility, or self-managing computing. Each of these can, however, be used in some cloud computing systems, but cloud computing can also be done with free and decentralized architectures.
2.5.1 Types of Cloud Computing Services
- Software as a Service (SaaS)
  - Software is provided to end users in an "on-demand" fashion.
  - Reduces upfront costs, e.g. buying multiple licenses
  - "Utility-based" computing
- Infrastructure as a Service (IaaS)
  - "Infrastructure" refers to much of the background hardware (as opposed to software) needs of an organization
- Platform as a Service (PaaS)
  - The software needed to develop cloud applications is itself provided in a "software as a service" fashion
Applications are provided as SaaS, that is to say, as services over the Internet. Some providers are:
- Google (GOOG)
- NetSuite (N)
- Taleo (TLEO)
- Concur Technologies (CNQR)
2.5.2 Primary Benefits of Cloud Computing
- By running business applications over the internet from centralized servers rather than from on-site servers, companies can cut some serious costs. Furthermore, while avoiding maintenance costs, licensing costs and the cost of the hardware required to run servers on-site, companies are able to run applications much more efficiently from a computing standpoint.
- Flexibility to choose multiple vendors that provide reliable and scalable business services, development environments, and infrastructure that can be leveraged out of the box and billed on a metered basis, with no long-term contracts.
- The appearance of effectively unlimited computing resources on demand.
- Start small then increase the resources when really needed.
- The ability to use and pay on demand.
- Reduced costs due to operational efficiencies, and more rapid deployment of new business services reducing total cost of ownership.
- Open-source software, open standards and open systems.
2.5.3 Future Scope
Multi-computers provide true parallelism, which is of great advantage to IT companies. With heat becoming a major obstacle to increasing clock speeds, multi-computer systems will surely help to obtain the required resources without much expenditure.
Many projects like SETI@home, which uses millions of PCs worldwide, are very cost-effective, because the company itself does not need to maintain large servers.
Another major development is cloud computing, which deploys grids or other infrastructure but expands the services offered. The use of dedicated servers will decrease, reducing power consumption and the cooling requirements of data centres.
Grid, distributed and cloud computing infrastructures are next-generation platforms that can provide tremendous value to companies of any size. They increase profitability by improving resource utilization, and costs are driven down by delivering appropriate resources only for the time those resources are needed.
Having read this far about both the multiprocessor and the multi-computer, it is clear that each has advantages and disadvantages in certain areas, so the choice depends mainly on the work required. For large companies a multi-computer is more effective and cost-efficient, whereas the same setup would be costlier for home users; multiprocessors are much more suitable for home, small-office and general-purpose office use.
In today's globally competitive market, companies must innovate and get the most from their resources to succeed. This requires enabling their employees, business partners, and users with platforms and collaboration tools that promote innovation. These technologies can help companies make more efficient use of their IT hardware and software investments and provide a means to accelerate the adoption of innovative hardware.