# Computer Logarithmic Number System Computer Science Essay


This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.

In computer arithmetic there are many different ways to represent numbers, broadly divided into conventional and unconventional representations. Conventional representation includes fixed-point and floating-point representation; unconventional representation includes exotic representations, redundancy in arithmetic, the rational number system, the Logarithmic Number System, and the Residue Number System. There are also some hybrid number representation systems. In Digital Signal Processing (DSP), one can choose the number system representation that best suits the application and its requirements. In this project we have used the Logarithmic Number System. We have shown that this number system (LNS) can be used as an alternative to the conventional number representations, and that the Discrete Cosine Transform (DCT) used in MPEG encoding can be performed using the Logarithmic Number System (LNS).

## ACKNOWLEDGEMENT

## TABLE OF CONTENTS

Introduction

1.1 Summary

1.2 Aim

1.3 Objective

1.4 Methodology

Chapter 1: Literature Survey

2.1 Number System Representations

2.2 Types of Number System Representations

2.3 Focus of the Project

Chapter 3: VHDL

References

## INTRODUCTION:

## 1.1 Summary:

The logarithmic number system (LNS) was first introduced by Swartzlander. The Logarithmic Number System is an alternative to the conventional fixed-point and floating-point arithmetic for representing numbers. This paper shows that one of the key parts of MPEG encoding, the Discrete Cosine Transform (DCT), can be performed with LNS arithmetic. At low precisions, LNS DCTs give a comparable visual result with shorter word lengths than fixed-point DCTs. Possible hardware approaches for the DCT using a fast algorithm are investigated, and it is shown that the LNS implementation takes less area than the fixed-point implementation.

## 1.2 Aim:

The aim of the project is to show the usage of the Logarithmic Number System in Digital Signal Processing.

## 1.3 Objective:

The objective of the project is to show that the Discrete Cosine Transform (DCT), which is a main part of MPEG (Moving Picture Experts Group) encoding, can be performed using the Logarithmic Number System.

## 1.4 Methodology:

Besides the conventional number systems, an unconventional number system, the Logarithmic Number System, is used in this project to achieve its aim and objective.

## CHAPTER 1: Literature Survey

## 2.1 Number System Representations:

In computer arithmetic there are many different ways to represent numbers, broadly divided into conventional and unconventional representations. Conventional representation includes fixed-point and floating-point representation; unconventional representation includes exotic representations, redundancy in arithmetic, the rational number system, the Logarithmic Number System, and the Residue Number System. There are also some hybrid number representation systems. In Digital Signal Processing (DSP), one can choose the number system representation that best suits the application and its requirements. Each representation has its own advantages and disadvantages.

Floating-point arithmetic offers the best solution for applications that must process a large dynamic range of numbers; however, there is an alternative: the Logarithmic Number System (LNS). In LNS, a positive real number is represented by its logarithm (generally in base 2), and the hardware operators work on these logarithms. The advantage of this coding is that divisions, multiplications and square roots can be performed easily, and there is no rounding error in LNS multiplication and division. However, addition and subtraction in LNS are much more complicated than their floating-point counterparts. LNS has been shown to be more efficient than floating point in speed and area for ASICs and FPGAs.

As the Logarithmic Number System is used instead of the conventional fixed-point and floating-point representations, each representation is briefly discussed below.

## 2.2 Types of Number System Representations:

Fixed Point Representation:

Fixed-point number representation is used for real numbers that have a fixed number of digits after (and before) the radix point, i.e. the decimal point (.). It is generally used to represent fractional values in base 10 or base 2.

A fixed-point data type value is essentially an integer scaled by a factor determined by the type. For example, consider the value 22.30: in a fixed-point data type it can be stored as 2230 with a scaling factor of 1/100. Similarly, the value 2230000 can be stored as 2230 with a scaling factor of 1000.

The scale factor is usually a power of 10 or a power of 2, although other scaling factors may occasionally be used. For example, a fixed-point type with a scaling factor of 1/3600 can be used for a time value in hours with one-second accuracy.

The largest value of a fixed-point type is the largest value representable in the underlying integer type multiplied by the scale factor, and similarly for the smallest value. For example, consider a fixed-point type represented as a binary integer with n bits in two's-complement notation, with a scaling factor of 1/2^fb (where the last fb bits are fraction bits). Then the minimum representable value is −2^(n−1)/2^fb and the maximum value is (2^(n−1) − 1)/2^fb.
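A short sketch checks the scaling idea and the two's-complement range formulas above (the helper names are ours, not from any particular library):

```python
def to_fixed(value, fb):
    """Encode a real value as an integer scaled by 2**fb."""
    return round(value * (1 << fb))

def from_fixed(raw, fb):
    """Decode the scaled integer back into a real value."""
    return raw / (1 << fb)

def fixed_range(n, fb):
    """Smallest and largest values of an n-bit two's-complement
    fixed-point type with fb fraction bits."""
    return -(2 ** (n - 1)) / 2 ** fb, (2 ** (n - 1) - 1) / 2 ** fb

raw = to_fixed(3.75, fb=8)       # stored integer: 3.75 * 256 = 960
print(raw, from_fixed(raw, 8))   # 960 3.75
print(fixed_range(16, 8))        # (-128.0, 127.99609375)
```

For a 16-bit word with 8 fraction bits the step size is 2^-8, which matches the range printed above.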

In Digital Signal Processing, fixed-point processors represent each number with a minimum of 16 bits. In base 2 this gives 2^16 = 65536 possible values, which can be interpreted in four different ways: unsigned integer (any number between 0 and 65535); signed integer, which uses two's-complement notation and ranges from -32768 to 32767; unsigned fraction, in which all 65536 levels are spread uniformly between 0 and 1; and signed fraction, in which the 65536 levels are spread uniformly between -1 and 1.

Floating Point Representation:

Numbers that are too large or too small to be represented as integers can be described using floating-point representation. Here the numbers are represented approximately, to a fixed number of significant digits, and scaled using an exponent. Scaling is normally done in base 2, 10 or 16.

The typical representation of a number in floating point notation is as below:

**significand × base^exponent**

In this representation the radix point can float, i.e. it can be placed anywhere relative to the significant digits of the number; hence the name floating point.

In Digital Signal Processing, floating-point processors represent each number with a minimum of 32 bits. In base 2 this gives 2^32 = 4294967296 possible values. The smallest and largest numbers are about ±1.2×10^-38 and ±3.4×10^38 respectively. The numbers are not uniformly distributed between these two extremes; instead they are spaced so that the gap between any two adjacent numbers is about ten million times smaller than the value of the numbers. That means there is a large gap between large numbers and a small gap between small numbers. Floating-point digital signal processors can also handle fixed-point numbers.
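The significand/exponent split, and the non-uniform spacing of floating-point numbers, can be observed directly in Python:

```python
import math

# Decompose a float as significand * 2**exponent.
# math.frexp returns the significand in [0.5, 1); rescaling by one bit
# gives the IEEE-style normalisation in [1, 2).
x = 6.5
m, e = math.frexp(x)
sig, exp = m * 2, e - 1
print(sig, exp)                  # 1.625 2, since 1.625 * 2**2 == 6.5

# The gap (ulp) between adjacent floats grows with their magnitude:
print(math.ulp(1.0), math.ulp(1e6))
```

The second line illustrates the claim above: the spacing near 1.0 is about 2^-52, while near 10^6 it is about 2^-33.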

Operations such as multiplication, division and square roots are considered complicated in the conventional fixed-point and floating-point representations; the Logarithmic Number System is an alternative representation in which these operations can be performed very easily.

Figure: Binary storage format of the FP number

Logarithmic Number System:

The Logarithmic Number System (LNS) is used for representing real numbers in computers and digital hardware. It is most widely used in Digital Signal Processing, and finds extensive use in application-specific systems.

In Logarithmic Number System, a number, X, is represented by the logarithm, x, of its absolute value as below:

X\rightarrow\{s,x=\log_b(|X|)\}

where s is a bit which denotes the sign of X (s = 0 if X > 0 and s = 1 if X < 0).

A two's complement binary word is used to represent the number x. A floating-point number with the significand being always equal to 1 can be considered as an LNS.

Multiplication, division, powers and roots become very simple in LNS, as they are converted to addition, subtraction, multiplication and division respectively. Addition and subtraction, however, are not linear operations in the logarithmic domain: they need a table-lookup process and hence become more complex in LNS. They can be calculated using the formulae below:

logb(|X| + |Y|) = x + sb(z)

logb(||X| − |Y||) = x + db(z),

where z = y − x is the difference between the logarithms of the operands, the 'sum' function is sb(z) = logb(1 + b^z), and the 'difference' function is db(z) = logb(1 − b^z).
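These identities are easy to verify numerically; a small sketch in base 2 (the variable names are ours):

```python
import math

b = 2  # base-2 logarithms, as is usual in LNS

def s_b(z):
    """The 'sum' function s_b(z) = log_b(1 + b**z)."""
    return math.log(1 + b ** z, b)

def d_b(z):
    """The 'difference' function d_b(z) = log_b(1 - b**z), for z < 0."""
    return math.log(1 - b ** z, b)

# Check the identities against ordinary arithmetic for X = 12, Y = 5.
X, Y = 12.0, 5.0
x, y = math.log2(X), math.log2(Y)
z = y - x  # difference of the operand logarithms (negative since Y < X)

assert math.isclose(x + s_b(z), math.log2(X + Y))   # log(|X| + |Y|)
assert math.isclose(x + d_b(z), math.log2(X - Y))   # log(||X| - |Y||)
```

In hardware, s_b and d_b are what the lookup tables approximate; here they are computed directly.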

The size of LNS adders increases exponentially with the operands' word length. Thus LNS arithmetic systems usually have an advantage at low precisions, which is a desirable feature for portable devices.

LNS arithmetic is advantageous mainly when the application satisfies two conditions:

1) The application contains more of the easy operations, such as multiplication (×), division (÷), squares (x^2) and roots (√x), than of the difficult operations, addition (+) and subtraction (−).

2) The precision requirement of the application is low (i.e. less than 16 bits), because the cost of the difficult operations increases exponentially with precision.

## 2.3 Focus of the Project

This project focuses on three methods.

Logarithmic Number System

Berkeley MPEG Encoder and LNS

Hardware Implementation of LNS DCT

## (i) Logarithmic Number System

The logarithmic number system (LNS) was first introduced by Swartzlander. The Logarithmic Number System is used for representing real numbers in computers and digital hardware. It is most widely used in Digital Signal Processing, and finds extensive use in application-specific systems.

In Logarithmic Number System, a number, X, is represented by the logarithm, x, of its absolute value as below:

X\rightarrow\{s,x=\log_b(|X|)\}

where s is a bit which denotes the sign of X (s = 0 if X > 0 and s = 1 if X < 0).

A two's complement binary word is used to represent the number x. A floating-point number with the significand being always equal to 1 can be considered as an LNS.

Multiplication, division, powers and roots become very simple in LNS, as they are converted to addition, subtraction, multiplication and division respectively. Addition and subtraction, however, are not linear operations in the logarithmic domain: they need a table-lookup process and hence become more complex in LNS. They can be calculated using the formulae below:

logb(|X| + |Y|) = x + sb(z)

logb(||X| − |Y||) = x + db(z),

where z = y − x is the difference between the logarithms of the operands, the 'sum' function is sb(z) = logb(1 + b^z), and the 'difference' function is db(z) = logb(1 − b^z).

The size of LNS adders increases exponentially with the operands' word length. Thus LNS arithmetic systems usually have an advantage at low precisions, which is a desirable feature for portable devices.

LNS arithmetic is advantageous mainly when the application satisfies two conditions:

1) The application contains more of the easy operations, such as multiplication (×), division (÷), squares (x^2) and roots (√x), than of the difficult operations, addition (+) and subtraction (−).

2) The precision requirement of the application is low (i.e. less than 16 bits), because the cost of the difficult operations increases exponentially with precision.

## Binary storage format of the LNS number:

Fig: Binary storage format of the LNS number.

## Multiplication, Division, Addition and Subtraction using LNS:

Multiplication:

Multiplication is very simple in LNS. Let x and y be two fixed-point numbers; their product, using logarithms, is simply the sum of the logarithm of x and the logarithm of y:

logb(x · y) = logb(x) + logb(y)

In the binary storage format of the LNS number shown above, the sign bit is obtained by XORing the sign bits of the multiplicand and the multiplier, and the flag bits for infinities, zero and NaNs are encoded for exceptions in the same way as in the IEEE 754 standard. The logarithmic numbers are two's-complement fixed-point numbers, so the above addition yields the exact result if there is no underflow or overflow.

Underflows result in zero and overflows result in infinities (±∞). An overflow occurs when the sum of the two logarithms is too large to be represented within the word length; similarly, an underflow occurs when the sum is too small to be represented within the word length.
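A minimal sketch of this multiplication flow (the encoding helpers are ours; overflow and underflow detection are omitted): the sign bit is XORed and the logarithms are added.

```python
import math

def lns_encode(v):
    """Encode a non-zero real as (sign bit, log2 of magnitude)."""
    return (0 if v > 0 else 1, math.log2(abs(v)))

def lns_decode(n):
    s, x = n
    return (-1) ** s * 2 ** x

def lns_mul(a, b):
    sa, xa = a
    sb, xb = b
    return (sa ^ sb, xa + xb)   # sign by XOR, magnitude by one addition

p = lns_mul(lns_encode(-3.0), lns_encode(4.0))
print(lns_decode(p))            # close to -12.0 (up to float rounding)
```

The entire multiply is one fixed-point addition plus one XOR, which is the source of the LNS area advantage.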

Multiplication in floating-point representation is more complicated. In floating-point multiplication the two exponents are added and the mantissas are multiplied.

The addition of exponents relies on the property that multiplying powers of the same base adds their exponents:

b^e1 × b^e2 = b^(e1 + e2)

Since the exponents being added are integers, the addition may produce a value too large to store in the exponent field. This produces an overflow event, which must be detected in order to set the result to infinity. The two mantissas lie in the range [1, 2), so their product lies in the range [1, 4); a right shift by one may therefore be needed to renormalize the mantissa. That right shift requires an increment of the exponent, which can itself overflow and must also be detected.

Division:

Division is very simple in LNS. Let x and y be two fixed-point numbers; their quotient, using logarithms, is simply the difference of the logarithm of x and the logarithm of y:

logb(x / y) = logb(x) − logb(y)

As in multiplication, the sign bit in division is obtained by XORing the sign bits of the dividend and the divisor. Overflows and underflows are the possible exceptions in LNS division, just as they were for multiplication.
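Division follows the same sketch: one subtraction of the logarithms, with the sign again obtained by XOR (helper names are ours, and exception handling is omitted).

```python
import math

def lns_div(a, b):
    """Divide two LNS numbers given as (sign bit, log2 of magnitude)."""
    sa, xa = a
    sb, xb = b
    return (sa ^ sb, xa - xb)   # sign by XOR, magnitude by one subtraction

# -12 / 3: sign bits (1, 0) -> 1; logs log2(12) - log2(3) = 2
q = lns_div((1, math.log2(12.0)), (0, math.log2(3.0)))
print(q[0], 2 ** q[1])          # sign bit 1, magnitude close to 4.0
```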

In floating-point representation, division is performed by dividing the mantissas and subtracting the exponent of the divisor from the exponent of the dividend. The mantissas lie in the range [1, 2), hence the quotient lies in the range (0.5, 2), and a left shift by one may be needed to renormalize the mantissa. A left shift of the mantissa requires a decrement of the exponent, and this can lead to an underflow which must also be detected.

Addition/Subtraction:

Addition and subtraction in LNS are much more complicated than multiplication and division in LNS. The addition/subtraction operation in LNS is explained below:

Let A and B be two numbers with (|A| ≥ |B|). In LNS these two numbers are represented as:

A = (−1)^sA · b^EA and B = (−1)^sB · b^EB

Now, we want to compute C = A ± B; then

EC = EA + f(EB − EA)

where f(EB − EA) is given by:

f(z) = logb(1 ± b^z)

The symbol ± represents addition or subtraction (i.e. (+) is for addition and (−) is for subtraction). Note that if (EB − EA) = 0, then f(EB − EA) = −∞ for subtraction, and hence this case has to be detected.
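A sketch of this addition/subtraction flow in base 2 (in real hardware, f would come from a lookup table; here it is computed directly, and variable names are ours):

```python
import math

def f(z, subtract=False):
    """f(z) = log2(1 ± 2**z); tends to -inf as z -> 0 for subtraction,
    the cancellation case that must be detected separately."""
    return math.log2(1 - 2 ** z) if subtract else math.log2(1 + 2 ** z)

# C = A ± B with |A| >= |B|:  E_C = E_A + f(E_B - E_A)
EA, EB = math.log2(10.0), math.log2(6.0)
print(2 ** (EA + f(EB - EA)))                  # close to 16.0 (10 + 6)
print(2 ** (EA + f(EB - EA, subtract=True)))   # close to 4.0  (10 - 6)
```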

In floating-point representation the addition itself is simple and straightforward. However, it requires that the exponents of both operands be equal before the addition is performed. To do so, the mantissa of the operand with the smaller exponent is shifted to the right by the difference between the two exponents. This operation requires a variable-length shifter that can shift the smaller mantissa by up to the length of the mantissa. After the two operands are aligned, the mantissas are added and the exponent becomes the larger of the two exponents. If the addition of the mantissas overflows, the result is shifted right by one and the exponent is incremented. This increment of the exponent may itself cause an overflow event that requires the sum to be set to ±∞.

In floating-point representation, subtraction is also straightforward and is similar to floating-point addition; the only difference is that the mantissas are subtracted rather than added. A possible exception is "catastrophic" cancellation, which happens when the two mantissas have nearly equal values, leaving many leading zeros in the difference. Hence this also needs a variable-length shifter.

Properties of Logarithmic Number System:

It has a large dynamic range.

Additions are non-linear.

Multiplications, divisions, square roots and exponentiations are easy.

It is very advantageous at low precisions.

The adder cost grows exponentially with the word length.

## (ii) Berkeley MPEG Encoder and LNS

## Brief Introduction to MPEG (Moving Picture Experts Group) Technology:

MPEG stands for Moving Picture Experts Group. This Group sets standards for audio and video compression and transmission.

The MPEG compression technique is asymmetric: the encoder is more complex than the decoder. The encoder needs to be algorithmic or adaptive, whereas the decoder is 'dumb' and carries out fixed actions. In broadcasting this can be considered an advantage, since there are a small number of expensive, complex encoders and a large number of inexpensive decoders. The MPEG standards give very little information about the structure and operation of the encoder, so implementers can supply encoders using proprietary algorithms. One such encoding tool is the Berkeley MPEG encoder.

http://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/MPEG_Compression_Overview.svg/800px-MPEG_Compression_Overview.svg.png

Fig: MPEG Compression Overview

The MPEG standards consist of different parts; each part covers a specific aspect of the whole specification. The standards also define profiles and levels: profiles define a set of available tools, while levels define the range of appropriate values for the properties associated with the standard. The different compression formats and ancillary standards of MPEG are given below:

MPEG-1 (1993): The first MPEG compression standard for both video and audio. Its main aim is to encode moving pictures and audio at the bit rate of a Compact Disc (CD). To achieve this low bit rate, the MPEG-1 standard down-samples the images and uses picture rates of only 24-30 Hz, so the quality of MPEG-1 is medium. MPEG-1 also includes the famous audio compression format Layer 3 (MP3).

MPEG-2 (1995): Generic coding of moving pictures and associated audio. It provides broadcast-quality transport of audio and video for television, and supports interlacing and high definition. It is used for over-the-air digital television (ATSC, DVB and ISDB), digital satellite TV services such as Dish Network, digital cable television, SVCD and DVD Video.

MPEG-3: There is actually no MPEG-3. When it was created, its intention was to standardize scalable and multi-resolution compression, targeted at High Definition Television (HDTV). It was later found to be redundant and was merged into the MPEG-2 standard; hence there is no MPEG-3 standard.

MPEG 4 (1998): Coding of audio-visual objects. (ISO/IEC 14496) MPEG-4 uses further coding tools with additional complexity to achieve higher compression factors than MPEG-2.[19] In addition to more efficient coding of video, MPEG-4 moves closer to computer graphics applications. In more complex profiles, the MPEG-4 decoder effectively becomes a rendering processor and the compressed bitstream describes three-dimensional shapes and surface texture.[19] MPEG-4 supports Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content like digital rights management.[20] It also supports MPEG-J, a fully programmatic solution for creation of custom interactive multimedia applications (Java application environment with a Java API) and many other features.[21][22][23] Several new higher-efficiency video standards (newer than MPEG-2 Video) are included, notably:

o MPEG-4 Part 2 (or Simple and Advanced Simple Profile) and

o MPEG-4 AVC (or MPEG-4 Part 10 or H.264). MPEG-4 AVC may be used on HD DVD and Blu-ray Discs, along with VC-1 and MPEG-2.

In addition, the following standards, while not sequential advances to the video encoding standard as with MPEG-1 through MPEG-4, are referred to by similar notation:

* MPEG-7 (2002): Multimedia content description interface. (ISO/IEC 15938)

* MPEG-21 (2001): Multimedia framework (MPEG-21). (ISO/IEC 21000) MPEG describes this standard as a multimedia framework and provides for intellectual property management and protection.

Moreover, more recently than the other standards above, MPEG has produced further international standards, each of which holds multiple MPEG technologies for a given area of application.[24][25][26][27][28] (For example, MPEG-A includes a number of technologies on multimedia application format.)

* MPEG-A (2007): Multimedia application format (MPEG-A). (ISO/IEC 23000) (e.g., Purpose for multimedia application formats[29], MPEG music player application format, MPEG photo player application format and others)

* MPEG-B (2006): MPEG systems technologies. (ISO/IEC 23001) (e.g., Binary MPEG format for XML[30], Fragment Request Units, Bitstream Syntax Description Language (BSDL) and others)

* MPEG-C (2006): MPEG video technologies. (ISO/IEC 23002) (e.g., Accuracy requirements for implementation of integer-output 8x8 inverse discrete cosine transform[31] and others)

* MPEG-D (2007): MPEG audio technologies. (ISO/IEC 23003) (e.g., MPEG Surround[32] and two parts under development: SAOC-Spatial Audio Object Coding and USAC-Unified Speech and Audio Coding)

* MPEG-E (2007): Multimedia Middleware. (ISO/IEC 23004) (a.k.a. M3W) (e.g., Architecture[33], Multimedia application programming interface (API), Component model and others)

* Supplemental media technologies (2008). (ISO/IEC 29116) Part 1: Media streaming application format protocols will be revised in MPEG-M Part 4 - MPEG extensible middleware (MXM) protocols.[34]

* MPEG-V (under development): Media context and control. (ISO/IEC FCD 23005) (a.k.a. Information exchange with Virtual Worlds) [35][36] (e.g., Avatar characteristics, Sensor information, Architecture[37][38] and others)

* MPEG-M (under development): MPEG eXtensible Middleware (MXM). (ISO/IEC FCD 23006) [39][40][41] (e.g., MXM architecture and technologies[42], API, MPEG extensible middleware (MXM) protocols[43])

* MPEG-U (under development): Rich media user interfaces. (ISO/IEC FCD 23007)[44][45] (e.g., Widgets)

There are four main important parts in MPEG encoding. They are:

Conversion from RGB color to YUV color

Estimation of the Motion

Discrete Cosine Transform (DCT) and Inverse DCT and

Variable Length Coding

In MPEG encoding the DCT takes a huge portion of the computation. The 8 × 8 two-dimensional DCT is given below:

F(u, v) = (1/4) C(u) C(v) Σx=0..7 Σy=0..7 f(x, y) cos((2x + 1)uπ/16) cos((2y + 1)vπ/16)

where C(k) = 1/√2 for k = 0 and C(k) = 1 otherwise.
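As a reference (a direct implementation, not a fast algorithm such as Chen's), the 8 × 8 2-D DCT can be computed as follows; a constant block puts all of its energy into the DC coefficient F(0, 0):

```python
import math

def dct_8x8(f):
    """Direct O(N^4) 8x8 2-D DCT, following the standard definition."""
    N = 8
    C = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    F = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (f[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            F[u][v] = 0.25 * C(u) * C(v) * s
    return F

block = [[100.0] * 8 for _ in range(8)]   # a flat 8x8 block
F = dct_8x8(block)
print(round(F[0][0]))   # 800: all energy in the DC term
```

Fast algorithms such as Chen's compute the same transform with far fewer additions and multiplications, which is what makes the hardware comparison later in this chapter meaningful.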

The MPEG uses three types of pictures. They are the 'I' picture, the 'P' picture and the 'B' picture.

The 'I' pictures are completely specified by the results of the IDCTs for that picture. The 'P' pictures and the 'B' pictures use data from other pictures to achieve motion compensation. A 'P' picture depends on the previous 'P' or 'I' picture. A 'B' picture depends on an earlier or later 'I' or 'P' picture.

Generally, a 'GOP' (Group of Pictures) comprises an 'I' picture along with zero or more 'B' and/or 'P' pictures.

Among the many encoding tools, the Berkeley MPEG encoder is one software tool used to perform MPEG encoding. In this project we have focused on the DCT part of the MPEG-1 encoding system, and the Berkeley MPEG encoding tool is slightly modified so that its DCT function is performed by Logarithmic Number System arithmetic.

LNS features can be introduced into MPEG encoding and decoding. It was observed that, at the same precision, videos encoded using the LNS DCT have better visual clarity and a better Signal-to-Noise Ratio than videos encoded using the fixed-point DCT.

Below are encoded images for comparison using different encodings. Fig 1 is an encoded video frame using LNS with F=4, Fig 2 is an encoded video frame with fixed-point encoding, and Fig 3 is an encoded video frame with 64-bit floating-point encoding. When the same precision is used, LNS has approximately the same visual effect as fixed point. At F=5, neither LNS nor fixed point differs much from 64-bit floating point.

Fig 1: 10-bit LNS (F=4)

Fig 2: 16-bit FXP (F=4)

Fig 3: 64-bit FP

LNS needs only a few bits compared to fixed-point encoding at the same precision. In MPEG-1 encoding, for the range -2048 to 2047, a fixed-point number needs a 12-bit integer part (2^12 = 4096), whereas LNS requires only 6 bits (one sign bit, one sign bit for the exponent, and 4 bits for the exponent). Hence the fixed-point word length is (12 + F) bits, whereas the LNS word length is (6 + F) bits, where F is the number of fraction bits.
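The word-length arithmetic above can be checked quickly (the 6-bit LNS integer part follows the text: one number sign bit, one exponent sign bit, and four exponent magnitude bits):

```python
import math

# Fixed point: integer bits to cover -2048..2047 (4096 levels).
fxp_int_bits = math.ceil(math.log2(4096))          # 12
# LNS: exponent magnitudes up to log2(2048) = 11 need 4 bits,
# plus one sign bit for the number and one for the exponent.
lns_int_bits = math.ceil(math.log2(11 + 1)) + 2    # 6

for F in (4, 5, 6, 7):   # fraction bits: FXP width, LNS width
    print(F, fxp_int_bits + F, lns_int_bits + F)
```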

For practical observation, a video sequence of 305 frames with a complex background and motion was encoded using LNS and fixed-point DCTs at different precision values, and the Signal-to-Noise Ratios (SNR) of the videos were recorded, as shown in Table 1 below.

The table shows the overall SNR and the SNR for each colour component: R (Red), G (Green) and B (Blue). The last line shows the encoding SNR for an ideal 64-bit floating-point DCT. It can be observed that LNS encoding has a better Signal-to-Noise Ratio than fixed-point encoding under the same precision conditions, and that LNS yields much better results than fixed point at lower precision values, although quality decreases in both cases as precision drops. Clearly, LNS with F=4 is much better than fixed point with F=6. The SNR of the G (Green) component is better than that of the other two components because, in MPEG files, the G component is closely related to the luminance signal, Y, which is encoded at higher resolution. LNS comes very close to the floating-point SNR at F=7.

## (iii) Hardware Implementation of LNS DCT

One of the desirable properties for an LNS implementation is to have as few adders as possible. Many researchers have worked on fast algorithms for DCT/IDCT calculation. Chen's algorithm is one of the fast DCT algorithms and has among the fewest additions: 26 additions and 16 multiplications for a 1-D 8-point DCT. The corresponding hardware has 26 adders and 16 multipliers and can perform a 1-D DCT in one clock cycle.

To reduce the hardware (the number of adders and multipliers) further, the 1-D DCT can be performed in two clock cycles by calculating the first four and then the second four output elements of the 1-D DCT. By doing so, the hardware is reduced to 14 adders, 10 multipliers and 22 multiplexers.

Fig: LNS Adder

The table below (Table II) compares the hardware of the fixed-point and LNS implementations at precision F=4.

## CHAPTER 3: VHDL

VHDL is a hardware description language that offers a broad set of constructs for describing even the most complicated logic in a compact fashion. The VHDL language is designed to fill a number of requirements throughout the design process:

Allows the description of the structure of a system: how it is decomposed into subsystems, and how those subsystems are interconnected.

Allows the specification of the function of a system using familiar programming language forms.

Allows the design of a system to be simulated prior to being implemented and manufactured. This feature allows you to test for correctness without the delay and expense of hardware prototyping.

Provides a mechanism for easily producing a detailed, device-dependent version of a design to be synthesized from a more abstract specification. This feature allows you to concentrate on more strategic design decisions, and reduce the overall time to market for the design.

## VHDL - History

early '70s: Initial discussions

late '70s: Definition of requirements

mid-'82: Contract of development with IBM, Intermetrics and TI

mid-'84: Version 7.2

mid-'86: IEEE standard

1987: DoD adopts the standard -> IEEE 1076

mid-'88: Increasing support by CAE manufacturers

late '91: Revision

1993: New standard

1999: VHDL-AMS extension

VHDL is a language which is permanently extended and revised. The original standard itself needed more than 16 years from the initial concept to the final, official IEEE standard. When the document passed the committee it was agreed that the standard should be revised every 5 years. The first revision phase resulted in the updated standard of the year 1993.

Independently of this revision agreement, additional effort is made to standardize "extensions" of the pure language reference. These extensions cover, for example, packages (std_logic_1164, numeric_bit, numeric_std, ...) containing widely needed data types and subprograms, or the definition of special VHDL subsets such as the synthesis subset IEEE 1076.6.

The latest extension is the addition of analogue description mechanisms to the standard which results in a VHDL superset called VHDL-AMS.

Designed by IBM, Texas Instruments, and Intermetrics as part of the DoD funded VHSIC program

Standardized by the IEEE in 1987: IEEE 1076-1987

Enhanced version of the language defined in 1993: IEEE 1076-1993

Additional standardized packages provide definitions of data types and expressions of timing data

IEEE 1164 (data types)

IEEE 1076.3 (numeric)

IEEE 1076.4 (timing)

## VHDL - Application Field

· Hardware design

ASIC: technology mapping

FPGA: CLB mapping

PLD: smaller structures, hardly any use of VHDL

Standard solutions, models, behavioural description, ...

· Software design

VHDL - C interface (tool-specific)

Main focus of research (hardware/software co-design)

## VHDL Structural Elements

· Entity : Interface

· Architecture : Implementation, behaviour, function

· Configuration : Model chaining, structure, hierarchy

· Process : Concurrency, event controlled

· Package : Modular design, standard solution, data types, constants

· Library : Compilation, object code

## Usage

Descriptions can be at different levels of abstraction

Switch level: model switching behavior of transistors

Register transfer level: model combinational and sequential logic components

Instruction set architecture level: functional behavior of a microprocessor

Descriptions can be used for

Simulation

Verification, performance evaluation

Synthesis

First step in hardware design
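To make the register transfer level concrete, here is a small illustrative sketch (not taken from the original notes): a D flip-flop with synchronous reset, the kind of sequential component RTL descriptions are built from.

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

-- Register transfer level sketch: a D flip-flop with synchronous reset.
entity dff is
    Port ( clk, rst, d : in  STD_LOGIC;
           q           : out STD_LOGIC );
end dff;

architecture RTL of dff is
begin
    process(clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                q <= '0';      -- synchronous reset dominates
            else
                q <= d;        -- capture input on the rising clock edge
            end if;
        end if;
    end process;
end RTL;
```

Such a description is both simulatable and synthesizable, which is why RTL is the usual entry point for hardware design.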

## Describing the Interface: The Entity Construct

The interface is a collection of ports

Ports introduce a new programming object: the signal

Ports have a type, e.g., bit

Ports have a mode: in, out, inout (bidirectional)

VHDL supports four basic object classes: variables, constants, signals, and (since 1993) files

Variable and constant types

Follow traditional concepts

The signal object type is motivated by digital system modeling

Distinct from variable types in the association of time with values

Implementation of a signal is a sequence of time-value pairs!

Referred to as the driver for the signal

## Example Entity Descriptions
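The section heading above has no accompanying listing in the original notes; as a hedged sketch, two typical entity declarations (hypothetical names `and2` and `full_adder`) might look like this. Note that only the interface is described; the behaviour belongs in an architecture.

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

-- A two-input AND gate: two input ports and one output port,
-- each with a type (std_logic) and a mode (in/out).
entity and2 is
    Port ( a, b : in  STD_LOGIC;
           y    : out STD_LOGIC );
end and2;

-- A full adder interface: three inputs, two outputs.
entity full_adder is
    Port ( a, b, cin  : in  STD_LOGIC;
           sum, cout  : out STD_LOGIC );
end full_adder;
```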

## Describing Behavior: The Architecture Construct

Description of events on output signals in terms of events on input signals: the signal assignment statement

Specification of propagation delays

Type bit is not powerful enough for realistic simulation: use the IEEE 1164 value system
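As an illustrative sketch of these points (hypothetical entity `and2`, assumed 5 ns delay), an architecture can describe output events in terms of input events with a propagation delay, using the IEEE 1164 `std_logic` type:

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity and2 is
    Port ( a, b : in  STD_LOGIC;
           y    : out STD_LOGIC );
end and2;

-- Output events are described in terms of input events;
-- "after 5 ns" specifies the propagation delay.
architecture dataflow of and2 is
begin
    y <= a and b after 5 ns;
end dataflow;
```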

## Libraries and Packages

Libraries are logical units that are mapped to physical directories

Packages are repositories for type definitions, procedures, and functions

User defined vs. system packages
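A minimal sketch of a user-defined package (hypothetical name `alu_pkg`) shows how type definitions, constants, and functions are collected in one repository, with the function body placed in the package body:

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

-- A user-defined package collecting a constant, a subtype and a function.
package alu_pkg is
    constant WORD_WIDTH : integer := 4;
    subtype word is STD_LOGIC_VECTOR(WORD_WIDTH - 1 downto 0);
    function parity(v : word) return STD_LOGIC;
end package alu_pkg;

package body alu_pkg is
    function parity(v : word) return STD_LOGIC is
        variable p : STD_LOGIC := '0';
    begin
        for i in v'range loop
            p := p xor v(i);     -- xor-reduce the vector
        end loop;
        return p;
    end function;
end package body alu_pkg;
```

A design unit then gains access with `use work.alu_pkg.all;`.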

## Configurations

Separate the specification of the interface from that of the implementation

An entity may have multiple architectures

Configurations associate an entity with an architecture

Binding rules: default and explicit

Use configurations
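The points above can be sketched with a small example (hypothetical entity `and2` and architecture `dataflow`): a configuration explicitly binds one of an entity's architectures.

```vhdl
-- An entity may have several architectures; the configuration
-- explicitly binds the "dataflow" architecture to entity and2.
configuration and2_cfg of and2 is
    for dataflow
    end for;
end configuration and2_cfg;
```

Without such a declaration, default binding rules select an architecture (typically the most recently analysed one).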

## The Process Construct

Statements in a process are executed sequentially

A process body is structured much like a conventional C function

Declaration and use of variables

if-then, if-then-else, case, for and while constructs

A process can contain signal assignment statements

A process executes concurrently with other concurrent signal assignment statements

A process takes 0 seconds of simulated time to execute and may schedule events in the future

We can think of a process as a complex signal assignment statement!

## Concurrent Processes: Full Adder

Each of the components of the full adder can be modeled using a process

Processes execute concurrently

In this sense they behave exactly like concurrent signal assignment statements

Processes communicate via signals
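A hedged sketch of this idea (hypothetical entity `full_adder`): the sum and carry outputs are modeled by two processes that execute concurrently and see the same input signals.

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity full_adder is
    Port ( a, b, cin  : in  STD_LOGIC;
           sum, cout  : out STD_LOGIC );
end full_adder;

architecture two_process of full_adder is
begin
    -- Two concurrent processes; each is equivalent to a
    -- concurrent signal assignment statement.
    sum_p : process(a, b, cin)
    begin
        sum <= a xor b xor cin;
    end process;

    carry_p : process(a, b, cin)
    begin
        cout <= (a and b) or (a and cin) or (b and cin);
    end process;
end two_process;
```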

## Chapter Code

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

---- Uncomment the following library declaration if instantiating
---- any Xilinx primitives in this code.
--library UNISIM;
--use UNISIM.VComponents.all;

entity adder is
    Port ( A, B      : in  STD_LOGIC_VECTOR (3 downto 0);
           S         : in  STD_LOGIC;
           STATUS, C : out STD_LOGIC_VECTOR (3 downto 0));
end adder;

architecture Behavioral of adder is
    signal c0 : std_logic_vector(3 downto 0) := "0000";
begin
    -- S is included in the sensitivity list so the process also
    -- reacts when the enable input changes.
    process(A, B, S)
        variable sum : std_logic_vector(3 downto 0);
    begin
        if S /= '1' then
            c0 <= "ZZZZ";               -- disabled: tri-state the result
        else
            sum := A + B;               -- computed into a variable so the
            c0  <= sum;                 -- zero test below sees the new value
            -- Flag logic based on the operand sign bits
            if A(3) = '1' and B(3) = '1' then
                STATUS(0) <= '1'; STATUS(2) <= '0'; STATUS(3) <= '0';
            elsif A(3) = '0' and B(3) = '0' then
                STATUS(0) <= '0'; STATUS(2) <= '0'; STATUS(3) <= '0';
            else                        -- mixed signs
                STATUS(0) <= '0'; STATUS(2) <= '1'; STATUS(3) <= '0';
            end if;
            -- Zero flag, checked for every branch (the original nested it
            -- inside one branch only and compared the stale signal c0)
            if sum = "0000" then
                STATUS(1) <= '1';
            else
                STATUS(1) <= '0';
            end if;
        end if;
    end process;

    C <= c0;
end Behavioral;
```

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_SIGNED.ALL;

---- Uncomment the following library declaration if instantiating
---- any Xilinx primitives in this code.
--library UNISIM;
--use UNISIM.VComponents.all;

entity Shift is
    Port ( S      : in  STD_LOGIC;
           A      : in  STD_LOGIC_VECTOR (3 downto 0);
           B      : in  STD_LOGIC_VECTOR (3 downto 0);
           C      : out STD_LOGIC_VECTOR (3 downto 0);
           STATUS : out STD_LOGIC_VECTOR (3 downto 0));
end Shift;

architecture Behavioral of Shift is
begin
    -- Arithmetic right shift: B is shifted right by A positions,
    -- with the sign bit B(3) preserved.
    process(A, B, S)
        variable i       : integer := 0;
        variable k, p, m : std_logic_vector(3 downto 0);
    begin
        p := "0000";
        if S = '1' then
            k := B;
            m := k;
            i := conv_integer(A);      -- signed shift count
            if i < 0 then
                k := "ZZZZ";           -- negative count: result undefined
            else
                while i > 0 loop
                    k(2) := m(3);
                    k(1) := m(2);
                    k(0) := m(1);      -- k(3) keeps the sign bit
                    m := k;
                    i := i - 1;        -- decrement; the original loop never terminated
                end loop;
                p(2) := k(3);          -- sign flag
                if k = "0000" then
                    p(1) := '1';       -- zero flag
                end if;
            end if;
            C      <= k;
            STATUS <= p;
        else
            C      <= "ZZZZ";
            STATUS <= "ZZZZ";
        end if;
    end process;
end Behavioral;
```