Digital Electronics and De Morgan's Theorem


Digital electronics are electronic systems that use digital signals. They are physical representations of Boolean algebra and are used in computers, mobile phones, and other consumer products. In a digital circuit, a signal is represented in one of two states, or logic levels. The advantages of digital techniques stem from the fact that it is easier to get an electronic device to switch into one of two states than to accurately reproduce a continuous range of values.

Digital circuits are usually made from large assemblies of logic gates: simple electronic representations of Boolean logic functions.

To most electronic engineers, the terms "digital circuit", "digital system", and "logic" are interchangeable in this context.

One advantage of digital circuits over analog circuits is that digitally represented signals can be transmitted without degradation from noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s. An hour of music can be stored on a compact disc as roughly 6 billion binary digits.
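
As a rough illustration, the following Python sketch (the noise model and all names are invented for this example) models a channel that adds bounded noise to 0/1 signal levels; as long as the noise stays within the decision margin, simple thresholding recovers every bit exactly.

    import random

    def transmit(bits, noise_amplitude=0.3):
        """Model a noisy channel: send 0.0/1.0 levels, add bounded noise."""
        return [b + random.uniform(-noise_amplitude, noise_amplitude) for b in bits]

    def receive(levels, threshold=0.5):
        """Recover bits by comparing each received level to a threshold."""
        return [1 if v > threshold else 0 for v in levels]

    original = [1, 0, 1, 1, 0, 0, 1, 0]
    recovered = receive(transmit(original))
    assert recovered == original  # noise stayed inside the margin: perfect recovery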

In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.

Computer-controlled digital systems can be controlled by software, allowing new functions to be added without changing hardware. Often this can be done outside of the factory by updating the product's software. So, the product's design errors can be corrected after the product is in a customer's hands.

Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the stored information. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly.

Disadvantages

In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat as well. In portable or battery-powered systems this can limit use of digital systems.

For example, battery-powered cellular telephones often use a low-power analog front-end to amplify and tune in the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible software radios. Such base stations can be easily reprogrammed to process the signals used in new cellular standards.

Digital circuits are sometimes more expensive, especially in small quantities.

The sensed world is analog, and signals from this world are analog quantities. For example, light, temperature, sound, electrical conductivity, electric and magnetic fields are analog. Most useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors.

Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist-Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
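
As a sketch of how bit depth controls quantization error (the quantizer and test signal here are invented for illustration), the following Python snippet quantizes a sine wave at several bit depths; the worst-case error is half a quantization step, so each extra bit roughly halves it.

    import math

    def quantize(x, bits):
        """Quantize x in [-1, 1) to the midpoint of one of 2**bits levels."""
        levels = 2 ** bits
        step = 2.0 / levels
        code = min(int((x + 1.0) / step), levels - 1)
        return (code + 0.5) * step - 1.0

    signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]
    for bits in (4, 8, 16):
        err = max(abs(s - quantize(s, bits)) for s in signal)
        print(f"{bits:2d} bits: worst-case quantization error ~ {err:.6f}")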

In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of large blocks of related data can completely change. Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing.

Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path.

These schemes help the system detect errors, and then either correct the errors, or at least ask for a new copy of the data. In a state-machine, the state transition logic can be designed to catch unused states and trigger a reset sequence or other error recovery routine.
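
For instance, a single even-parity bit lets a receiver detect any single-bit error, though it cannot correct it or detect a double flip. A minimal Python sketch (the function names are invented here):

    def add_parity(bits):
        """Append an even-parity bit so the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def check_parity(word):
        """True if the word still has even parity (no single-bit error seen)."""
        return sum(word) % 2 == 0

    word = add_parity([1, 0, 1, 1, 0, 1, 0])
    assert check_parity(word)
    word[3] ^= 1                   # flip one bit in transit
    assert not check_parity(word)  # the single-bit error is detected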

Embedded software designs can employ immunity-aware programming, such as the practice of filling unused program memory with interrupt instructions that point to an error recovery routine. This helps guard against failures that corrupt the microcontroller's instruction pointer, which could otherwise cause random code to be executed.

Digital memory and transmission systems can use techniques such as error detection and correction to use additional data to correct any errors in transmission and storage.

On the other hand, some techniques used in digital systems make those systems more vulnerable to single-bit errors. These techniques are acceptable when the underlying bits are reliable enough that such errors are highly unlikely.

A single-bit error in audio data stored directly as linear pulse-code modulation (such as on a CD) causes, at worst, a single click. Many people instead use audio compression to save storage space and download time, even though a single-bit error may then corrupt an entire song.

Analog issues in digital circuits

Digital circuits are made from analog components. The design must assure that the analog nature of the components doesn't dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances, and filter power connections.

Bad designs have intermittent problems such as "glitches" (vanishingly fast pulses that may trigger some logic but not others), "runt pulses" that do not reach valid threshold voltages, and unexpected ("undecoded") combinations of logic states.

Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity. On the other hand, in the high-precision domain, analog circuits require much more power and area than digital equivalents.

Construction

A digital circuit is often constructed from small electronic circuits called logic gates. Each logic gate represents a function of Boolean logic. A logic gate is an arrangement of electrically controlled switches.

The output of a logic gate is an electrical current or voltage that can, in turn, control more logic gates.

Logic gates are built with the fewest transistors possible in order to reduce their size, power consumption, and cost, and to increase their reliability.

Integrated circuits are the least expensive way to make logic gates in large volumes. They are usually designed by engineers using electronic design automation software.

Another form of digital circuit is constructed from lookup tables. Lookup tables can perform the same functions as machines based on logic gates, but can easily be reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution.

When volumes are medium to large and the logic can be slow, or involves complex algorithms or sequences, a small microcontroller is often programmed to make an embedded system. These are usually programmed by software engineers.

When only one digital circuit is needed, and its design is totally customized, as for a factory production line controller, the conventional solution is a programmable logic controller, or PLC. These are usually programmed by electricians, using ladder logic.

Recent developments

The discovery of superconductivity has enabled the development of Rapid Single Flux Quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.

Flip-flop (electronics)

Figure: flip-flop schematics from the Eccles and Jordan patent, filed in 1918; one is drawn as a cascade of amplifiers with a positive feedback path, and the other as a symmetric cross-coupled pair.

In digital circuits, a flip-flop is an electronic circuit (a bistable multivibrator) that has two stable states and can therefore serve as one bit of memory. Today the term flip-flop mostly denotes non-transparent (clocked or edge-triggered) devices, while the simpler transparent ones are often referred to as latches; since this distinction is relatively recent, however, the two words are sometimes used interchangeably.

A flip-flop is usually controlled by one or two control signals and/or a gate or clock signal. The output often includes the complement as well as the normal output. As flip-flops are implemented electronically, they require power and ground connections.

Implementation

Flip-flops can be either simple (transparent) or clocked. Simple flip-flops can be built around a pair of cross-coupled inverting elements: vacuum tubes, bipolar transistors, field-effect transistors, inverters, and inverting logic gates have all been used in practical circuits, perhaps augmented by some gating mechanism (an enable/disable input). The more advanced clocked (or non-transparent) devices are specially designed for synchronous (time-discrete) systems; such a device therefore ignores its inputs except at the transition of a dedicated clock signal (known as clocking, pulsing, or strobing). This causes the flip-flop either to change or to retain its output signal based upon the values of the input signals at the transition. Some flip-flops change output on the rising edge of the clock, others on the falling edge.
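
The behavior of a cross-coupled pair can be simulated directly. The following Python sketch (a simplified behavioral model, not a circuit-accurate one) iterates a pair of cross-coupled NOR gates until they settle, showing how the pair stores a bit through set, hold, and reset:

    def sr_latch(s, r, q, q_bar):
        """One settling pass of a cross-coupled NOR pair (SR latch)."""
        for _ in range(10):                    # a few passes suffice to settle
            new_q = int(not (r or q_bar))      # Q     = NOR(R, Q')
            new_q_bar = int(not (s or q))      # Q'    = NOR(S, Q)
            if (new_q, new_q_bar) == (q, q_bar):
                break
            q, q_bar = new_q, new_q_bar
        return q, q_bar

    q, q_bar = sr_latch(s=1, r=0, q=0, q_bar=1)       # set
    assert (q, q_bar) == (1, 0)
    q, q_bar = sr_latch(s=0, r=0, q=q, q_bar=q_bar)   # hold: inputs idle
    assert (q, q_bar) == (1, 0)                       # the bit is remembered
    q, q_bar = sr_latch(s=0, r=1, q=q, q_bar=q_bar)   # reset
    assert (q, q_bar) == (0, 1)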

Clocked flip-flops are typically implemented as master-slave devices, in which two basic flip-flops collaborate to make the device insensitive to spikes and noise between the short clock transitions; they nevertheless often also include asynchronous clear or set inputs, which may be used to change the current output independently of the clock.

Flip-flops can be further divided into types that have found common applicability in both asynchronous and clocked sequential systems: the SR ("set-reset"), D ("data" or "delay"), T ("toggle"), and JK types are the common ones; each may be synthesized from (most) other types using a few logic gates. The behavior of a particular type can be described by its characteristic equation, which gives the "next" output (i.e., after the next clock pulse), Qnext, in terms of the input signal(s) and/or the current output, Q.
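
As a sketch of those characteristic equations in executable form (the function names are invented here; the equations themselves are the standard ones for each type):

    def d_next(q, d):     return d                      # D:  Qnext = D
    def t_next(q, t):     return q ^ t                  # T:  Qnext = T xor Q
    def sr_next(q, s, r): return (s | (q & ~r)) & 1     # SR: Qnext = S + R'Q (S=R=1 disallowed)
    def jk_next(q, j, k): return (j & ~q | ~k & q) & 1  # JK: Qnext = JQ' + K'Q

    # JK with J=K=1 toggles, like a T flip-flop with T=1:
    q = 0
    for _ in range(4):
        q = jk_next(q, j=1, k=1)
        print(q)   # 1, 0, 1, 0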

Uses

A single flip-flop can be used to store one bit, or binary digit, of data.

Any one of the flip-flop types can be used to build any of the others.

The data contained in several flip-flops may represent the state of a sequencer, the value of a counter, an ASCII character in a computer's memory or any other piece of information.

One use is to build finite state machines from electronic logic. The flip-flops remember the machine's previous state, and digital logic uses that state to calculate the next state.

The T flip-flop is useful for constructing various types of counters. Repeated signals to the clock input will cause the flip-flop to change state once per high-to-low transition of the clock input, if its T input is "1". The output from one flip-flop can be fed to the clock input of a second, and so on. The final output of the circuit, considered as the array of outputs of all the individual flip-flops, is a count, in binary, of the number of cycles of the first clock input, up to a maximum of 2^n - 1, where n is the number of flip-flops used.
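
A behavioral simulation of such a ripple counter makes this concrete. The following Python sketch (a simplified model; real ripple counters also exhibit propagation delay, which this ignores) toggles each stage on the falling edge of the previous stage's output:

    def ripple_counter(n_flops, clock_pulses):
        """Chain of T flip-flops (T=1), each clocked on high-to-low edges."""
        q = [0] * n_flops
        for _ in range(clock_pulses):
            stage = 0
            while stage < n_flops:
                q[stage] ^= 1          # a falling edge toggles this stage
                if q[stage] == 1:      # 0 -> 1 is a rising edge: ripple stops
                    break
                stage += 1             # 1 -> 0 ripples into the next stage
        return sum(bit << i for i, bit in enumerate(q))  # q[0] is the LSB

    assert ripple_counter(n_flops=4, clock_pulses=11) == 11
    assert ripple_counter(n_flops=4, clock_pulses=16) == 0   # wraps at 2**4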

One problem with such a counter is that its output is briefly invalid while the changes ripple through the logic. There are two solutions to this problem. The first is to sample the output only when it is known to be valid. The second, more widely used, is to use a different type of circuit called a synchronous counter.

This uses more complex logic to ensure that the outputs of the counter all change at the same, predictable time.

Frequency division: a chain of T flip-flops as described above will also divide an input frequency by 2^n, where n is the number of flip-flops used between the input and the output.

A flip-flop in combination with a Schmitt trigger can be used for the implementation of an arbiter in asynchronous circuits.

Clocked flip-flops are prone to a problem called metastability, which happens when a data or control input is changing at the instant of the clock pulse. The result is that the output may behave unpredictably, taking many times longer than normal to settle to its correct state, or even oscillating several times before settling. Theoretically it can take infinite time to settle down. In a computer system this can cause corruption of data or a program crash.

Figure: flip-flop setup, hold, and clock-to-output timing parameters.

Metastability in flip-flops can be avoided by ensuring that the data and control inputs are held valid and constant for specified periods before and after the clock pulse, called the setup time (tsu) and the hold time (th) respectively. These times are specified in the data sheet for the device, and are typically between a few nanoseconds and a few hundred picoseconds for modern devices.

Unfortunately, it is not always possible to meet the setup and hold criteria, because the flip-flop may be connected to a real-time signal that could change at any time, outside the control of the designer. In this case, the best the designer can do is to reduce the probability of error to a certain level, depending on the required reliability of the circuit. One technique for suppressing metastability is to connect two or more flip-flops in a chain, so that the output of each one feeds the data input of the next and all devices share a common clock. With this method, the probability of a metastable event can be reduced to a negligible value, but never to zero: the probability approaches zero as the number of flip-flops connected in series is increased.
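
The improvement from chaining synchronizer stages is usually estimated with the textbook exponential MTBF model. The Python sketch below uses that model with invented device constants (they are not from any data sheet) purely to show the shape of the improvement:

    import math

    def mtbf_seconds(t_resolve, tau, t_window, f_clock, f_data):
        """Textbook metastability model: MTBF = e^(t/tau) / (f_clk * f_data * Tw)."""
        return math.exp(t_resolve / tau) / (f_clock * f_data * t_window)

    tau, t_window = 1e-9, 100e-12   # hypothetical flip-flop constants
    f_clock, f_data = 100e6, 1e6    # 100 MHz clock sampling 1 MHz async data
    period = 1 / f_clock

    # Each extra synchronizer stage adds roughly one clock period of settling
    # time, multiplying the MTBF by e^(period/tau): rapid growth, never infinite.
    for stages in (1, 2, 3):
        print(stages, "stage(s): MTBF ~%.3g s"
              % mtbf_seconds(stages * period, tau, t_window, f_clock, f_data))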

So-called metastable-hardened flip-flops are available, which work by reducing the setup and hold times as much as possible, but even these cannot eliminate the problem entirely. This is because metastability is more than simply a matter of circuit design. When the transitions in the clock and the data are close together in time, the flip-flop is forced to decide which event happened first.

However fast we make the device, there is always the possibility that the input events will be so close together that it cannot detect which one happened first. It is therefore logically impossible to build a perfectly metastable-proof flip-flop.

Another important timing value for a flip-flop is the clock-to-output delay (common symbol in data sheets: tCO) or propagation delay (tP), which is the time the flip-flop takes to change its output after the clock edge. The time for a high-to-low transition (tPHL) is sometimes different from the time for a low-to-high transition (tPLH).

When connecting flip-flops in a chain, it is important to ensure that the tCO of the first flip-flop is longer than the hold time (tH) of the second flip-flop, otherwise the second flip-flop will not receive the data reliably. The relationship between tCO and tH is normally guaranteed if both flip-flops are of the same type.
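
A designer can check both constraints with simple arithmetic. The following Python sketch (parameter names and numbers are illustrative, not from a real data sheet) tests the hold condition described above together with the usual setup condition for a register-to-register path:

    def timing_ok(t_co_min, t_hold, t_co_max, t_logic, t_setup, t_clock):
        """Check a flip-flop-to-flip-flop path.

        Hold:  the earliest the data can change (t_co_min, assuming no logic
               between the flops on the fast path) must be after the
               receiver's hold window closes.
        Setup: the latest the data can arrive must leave t_setup of margin
               before the next clock edge.
        """
        hold_ok = t_co_min > t_hold
        setup_ok = t_co_max + t_logic + t_setup < t_clock
        return hold_ok and setup_ok

    # Illustrative numbers in nanoseconds:
    print(timing_ok(t_co_min=1.0, t_hold=0.5,
                    t_co_max=2.0, t_logic=4.0, t_setup=1.0, t_clock=10.0))  # True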

Flip-flop integrated circuits

Integrated circuits (ICs) exist that provide one or more flip-flops, for example the 7473 dual JK master-slave flip-flop or the 74374 octal D flip-flop in the 7400 series.

De Morgan's laws

In logic, De Morgan's laws or De Morgan's theorem are rules in formal logic relating pairs of dual logical operators in a systematic manner expressed in terms of negation. The relationship so induced is called De Morgan duality.

not (P and Q) = (not P) or (not Q)

not (P or Q) = (not P) and (not Q)

De Morgan's laws are based on the equivalent truth-values of each pair of statements.
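
Because each law involves only two propositions, it can be verified by exhaustively checking all four truth assignments, for example in Python:

    from itertools import product

    # Exhaustively verify both laws over every truth assignment:
    for p, q in product((False, True), repeat=2):
        assert (not (p and q)) == ((not p) or (not q))
        assert (not (p or q)) == ((not p) and (not q))
    print("De Morgan's laws hold for all four truth assignments")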

The laws are named after Augustus De Morgan (1806–1871), who introduced a formal version of them into classical propositional logic. De Morgan's formulation was influenced by the algebraisation of logic undertaken by George Boole, which later cemented De Morgan's claim to the finding. Although a similar observation was made by Aristotle and was known to Greek and medieval logicians, De Morgan is credited with stating the laws formally and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial; nonetheless, they are helpful in making valid inferences in proofs and deductive arguments.

Mathematical formulation

In strict mathematical terms, the rule states that each of the following claims is logically equivalent to the one next to it and may be transformed from one to the other in either direction:

¬(P ∧ Q) ≡ (¬P) ∨ (¬Q)

¬(P ∨ Q) ≡ (¬P) ∧ (¬Q)

where:

¬ is the negation operator (NOT)

∧ is the conjunction operator (AND)

∨ is the disjunction operator (OR)

Logical applications

De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula.

Negation of a disjunction

In the case of its application to a disjunction, consider the following claim: it is false that either A or B is true, which is written as:

¬(A ∨ B)

Given that neither A nor B is true, it must follow that A is not true and B is not true; if either A or B were true, then the disjunction of A and B would be true, making its negation false.

Working in the opposite direction with the same type of problem, consider this claim:

(¬A) ∧ (¬B)

This claim asserts that A is false and B is false (that is, "not A" and "not B" are both true). Knowing this, a disjunction of A and B would also be false, and the negation of that disjunction would therefore be true; this is logically equivalent to the original claim. In English, this follows the logic that "since two things are both false, it is also false that either of them is true."

Arithmetic logic unit

Figure: a typical schematic symbol for an ALU. A and B are the data inputs (registers); R is the output; F is the operation (instruction) from the control unit; D is an output status.

In computing, an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and logical operations. The ALU is a fundamental building block of the central processing unit (CPU) of a computer, and even the simplest microprocessors contain one for purposes such as maintaining timers. The processors found inside modern CPUs and graphics processing units (GPUs) contain very powerful and very complex ALUs; a single component may contain a number of ALUs.

Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a report on the foundations for a new computer called the EDVAC.

Practical overview

Figure: a simple 2-bit ALU that does XOR, AND, OR, and addition.

Most of a processor's operations are performed by one or more ALUs. An ALU loads data from input registers; an external control unit then tells the ALU what operation to perform on that data, and the ALU stores its result in an output register. Other mechanisms move data between these registers and memory.

Simple operations

Most ALUs can perform the following operations (a toy sketch implementing them follows this list):

Integer arithmetic operations (addition, subtraction, and sometimes multiplication and division, though this is more expensive)

Bitwise logic operations (AND, NOT, OR, XOR)

Bit-shifting operations (shifting or rotating a word by a specified number of bits to the left or right, with or without sign extension). Shifts can be interpreted as multiplications or divisions by powers of 2.
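
Putting the simple operations together, here is a toy single-cycle ALU in Python (the opcode names and interface are invented for this sketch; a real ALU is combinational hardware, not a dictionary lookup). The last line also checks De Morgan's theorem in its bitwise form, NOT(a AND b) = (NOT a) OR (NOT b):

    def alu(op, a, b, width=8):
        """Toy ALU over unsigned words; results truncate to the word width."""
        mask = (1 << width) - 1
        ops = {
            "ADD": lambda: a + b,
            "SUB": lambda: a - b,
            "AND": lambda: a & b,
            "OR":  lambda: a | b,
            "XOR": lambda: a ^ b,
            "NOT": lambda: ~a,       # b is ignored for unary operations
            "SHL": lambda: a << b,   # shift left: multiply by 2**b
            "SHR": lambda: a >> b,   # shift right: divide by 2**b
        }
        return ops[op]() & mask

    assert alu("SHL", 3, 2) == 12        # 3 * 2**2
    assert alu("ADD", 200, 100) == 44    # wraps modulo 2**8
    # De Morgan, bitwise: ~(a & b) == ~a | ~b for every bit position
    assert alu("NOT", alu("AND", 0b1100, 0b1010), 0) == \
           alu("OR", alu("NOT", 0b1100, 0), alu("NOT", 0b1010, 0))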

Complex operations

An engineer can design an ALU to calculate any operation, however complicated; the problem is that the more complex the operation, the more expensive the ALU is, the more space it uses in the processor, and the more power it dissipates.

Therefore, engineers strike a compromise, providing the processor (or other circuit) with an ALU powerful enough to make the processor fast, yet not so complex as to become prohibitive. Imagine that you need to calculate the square root of a number; the digital engineer will examine the following options to implement this operation:

1. Design an extraordinarily complex ALU that calculates the square root of any number in a single step. This is called single-clock calculation.

2. Design a very complex ALU that calculates the square root of any number in several steps, but (and here's the trick) the intermediate results go through a series of circuits arranged in a line, like a factory production line. That makes the ALU capable of accepting new numbers to calculate even before it has finished calculating the previous ones, so it can produce results as fast as a single-clock ALU, although the results start to flow out only after an initial delay. This is called a calculation pipeline.

3. Design a complex ALU that calculates the square root through several steps. This is called iterative calculation, and usually relies on control from a complex control unit with built-in microcode.

4. Design a simple ALU in the processor, and sell a separate, specialized, costly processor that the customer can install beside the main one to implement one of the options above. This is called a co-processor.

5. Tell the programmers that there is no co-processor and no emulation, so they will have to write their own algorithms to calculate square roots in software. This is performed by software libraries (a sketch of such a routine appears after this list).

6. Emulate the existence of the co-processor: whenever a program attempts to perform the square root calculation, make the processor check whether a co-processor is present and use it if there is one; if there isn't, interrupt the processing of the program and invoke the operating system to perform the calculation through a software algorithm. This is called software emulation.
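
As an illustration of the software-library option, here is the kind of routine such a library might provide: an integer square root computed with Newton's method, using only the addition, subtraction, and division a simple ALU (with supporting code) offers. The implementation below is a sketch, not any particular library's code:

    def isqrt(n):
        """Integer square root by Newton's method, as a software library might
        implement it on a processor whose ALU has no square-root hardware."""
        if n < 0:
            raise ValueError("square root of negative number")
        if n == 0:
            return 0
        x = n
        while True:
            y = (x + n // x) // 2   # one Newton step, integer arithmetic only
            if y >= x:
                return x
            x = y

    assert isqrt(144) == 12
    assert isqrt(150) == 12   # floor of the true square root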

The options above go from the fastest and most expensive to the slowest and least expensive. Therefore, while even the simplest computer can calculate the most complicated formula, the simplest computers will usually take a long time to do so because of the many steps involved.

Powerful processors like the Intel Core and AMD64 families implement option 1 for several simple operations, option 2 for the most common complex operations, and option 3 for the extremely complex operations. This is made possible by the ability to build very complex ALUs in these processors.

Inputs and outputs

The inputs to the ALU are the data to be operated on (called operands) and a code from the control unit indicating which operation to perform. Its output is the result of the computation.

In many designs the ALU also takes as input, or generates as output, a set of condition codes from or to a status register. These codes indicate cases such as carry-in or carry-out, overflow, divide-by-zero, and so on.
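
As a sketch of how such condition codes can be derived (the flag names follow common convention; the word width and interface are invented for this example):

    def add_with_flags(a, b, width=8):
        """Add two unsigned words and derive typical ALU condition codes."""
        mask = (1 << width) - 1
        raw = a + b
        result = raw & mask
        flags = {
            "carry": raw > mask,                      # unsigned overflow out
            "zero": result == 0,
            "negative": bool(result >> (width - 1)),  # sign bit of the result
            # signed overflow: both operands share a sign the result lacks
            "overflow": bool(((a ^ raw) & (b ^ raw)) >> (width - 1) & 1),
        }
        return result, flags

    result, flags = add_with_flags(200, 100)
    print(result, flags)   # 44, with carry set: 300 does not fit in 8 bits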
