Digital Modulation and Demodulation Using Quadrature Phase Shift Keying in MATLAB
Chapter 1 Digital Communications
1.0 Digital Communication
1.1 Introduction
Communication Process:
When we think of communication, we usually think of people talking or listening to each other. This may happen face to face, or it may occur through the assistance of a telephone, radio, or television.
Basically, communication is the transfer of information. Life in our modern, complex world depends more and more on the transfer of information. This increasing dependency has stimulated the growth of more and more communication systems, a surge that has been referred to as a technological revolution.
The figure below illustrates the transfer of information in a communication system.
The communication system will consist of at least the three parts shown. The channel can be as simple as the air that carries the sound of your voice, or as complex as the satellite network required to carry a television program around the world.
The most common problem encountered by the communication process is interference. Interference is any force that disrupts or distorts the information or message while it is being "channeled." It could be noise, as in the case of normal conversation, or atmospheric weather changes, as in the case of radio or television.
The biggest cause of interference, however, is a simple misinterpretation of the intended message. Cultural, economic, and political diversities allow people to receive the same message but interpret it differently.
Communication Systems:
A communication system is a combination of processes and hardware used to accomplish the transfer of information (communication).
A system is a group of interrelated parts. We find systems all around us in nature, and there are also systems created by people: an automobile, a washing machine, and an electric drill are examples.
1.2 TYPES OF COMMUNICATION:
Based on the requirements, the communications can be of different types:
Point-to-point communication: In this type, communication takes place between two end points. For instance, in the case of voice communication using telephones, there is one calling party and one called party. Hence the communication is point-to-point.
Point-to-multipoint communication: In this type of communication, there is one sender and multiple recipients. For example, in voice conferencing, one person will be talking but many others can listen. The message from the sender has to be multicast to many others.
Broadcasting: In a broadcasting system, there is a central location from which information is sent to many recipients, as in the case of audio or video broadcasting. In a broadcasting system, the listeners are passive, and there is no reverse communication path.
In simplex communication, the communication is one-way only.
In half-duplex communication, communication is both ways, but only in one direction at a time.
In full-duplex communication, communication is in both directions simultaneously.
Simplex communication: In simplex communication, communication is possible only in one direction. There is one sender and one receiver; the sender and receiver cannot change roles.
Half-duplex communication: Half-duplex communication is possible in both directions between two entities (computers or persons), but only in one direction at a time. A walkie-talkie uses this approach. The person who wants to talk presses a talk button on his handset to start talking, and the other person's handset will be in receiving mode.
When the sender finishes, he terminates with an "over" message. The other person can then press the talk button and start talking. These types of systems require limited channel bandwidth, so they are low-cost systems.
Full-duplex communication: In a full-duplex communication system, the two parties, the caller and the called, can communicate simultaneously, as in a telephone system. However, note that the communication system allows simultaneous transmission of data, but when two persons talk simultaneously, there is no effective communication! The ability of the communication system to transport data in both directions defines the system as full-duplex.
1.3 ANALOG VERSUS DIGITAL TRANSMISSION:
In analog communication, the signal, whose amplitude varies continuously, is transmitted over the medium. Reproducing the analog signal at the receiving end is very difficult due to transmission impairments. Hence, analog communication systems are badly affected by noise.
In a digital communication system, 1s and 0s are transmitted as voltage pulses. So, even if the pulse is distorted due to noise, it is not very difficult to detect the pulses at the receiving end. Hence, digital communication is much more immune to noise as compared to analog communication.
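This noise robustness is easy to demonstrate in simulation. The following is an illustrative sketch (in Python rather than the MATLAB used later in this project; all names and parameter values are chosen for illustration): bipolar voltage pulses are corrupted by Gaussian noise and recovered with a simple threshold detector.

```python
import random

def detect(received):
    """Threshold-detect bipolar voltage samples: positive -> 1, negative -> 0."""
    return [1 if v > 0 else 0 for v in received]

random.seed(0)
bits = [random.randint(0, 1) for _ in range(1000)]
# Transmit +1 V for a 1 and -1 V for a 0, then add moderate Gaussian noise.
received = [(1 if b else -1) + random.gauss(0, 0.4) for b in bits]
decoded = detect(received)
errors = sum(b != d for b, d in zip(bits, decoded))
print(f"bit errors: {errors} out of {len(bits)}")
```

Even with noise at 40% of the pulse amplitude, only a small fraction of the 1000 bits is detected incorrectly, whereas an analog waveform would carry the full distortion through to the receiver.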
1.4 Digital Modulation:
Firstly, what do we mean by digital modulation? Typically the objective of a digital communication system is to transport digital data between two or more nodes. In radio communications this is usually achieved by adjusting a physical characteristic of a sinusoidal carrier, the frequency, phase, amplitude or a combination thereof. This is performed in real systems with a modulator at the transmitting end to impose the physical change to the carrier and a demodulator at the receiving end to detect the resultant modulation on reception.
* Modulation is the process of varying some characteristic of a periodic wave with an external signal.
* Modulation is utilized to send an information bearing signal over long distances.
* Radio communication superimposes this information bearing signal onto a carrier signal.
* These high frequency carrier signals can be transmitted over the air easily and are capable of traveling long distances.
* The characteristics (amplitude, frequency, or phase) of the carrier signal are varied in accordance with the information bearing signal.
* In the field of communication engineering, the information bearing signal is also known as the modulating signal.
* The modulating signal is a slowly varying signal, as opposed to the rapidly varying carrier.
The principle of a digital communication system is that, during a finite interval of time, it sends a waveform from a finite set of possible waveforms, in contrast to an analog communication system, which sends a waveform from an infinite variety of waveform shapes, with theoretically infinite resolution. In a DCS (digital communication system), the objective of the receiver is not to reproduce a transmitted waveform with precision. The objective is to determine from a noise-perturbed signal which waveform from the finite set of waveforms was sent by the transmitter.
Why Digital?
· The primary advantage is the ease with which digital signals, compared with analog signals, are regenerated. The shape of the waveform is affected by two basic mechanisms.
- As all transmission lines and circuits have some non-ideal frequency transfer function, there is a distorting effect on the ideal pulse.
- Unwanted electrical noise or other interference further distorts the pulse waveform.
Both of these mechanisms cause the pulse shape to degrade.
* With digital techniques, extremely low error rates producing high signal fidelity are possible through error detection and correction but similar procedures are not available with analog.
* Digital circuits are more reliable and can be reproduced at a lower cost than analog circuits.
* Digital hardware lends itself to more flexible implementation than analog circuits.
* The combination of digital signals using Time Division Multiplexing (TDM) is simpler than combining analog signals using Frequency Division Multiplexing (FDM).
Metrics for Digital Modulation
• Power Efficiency
- Ability of a modulation technique to preserve the fidelity of the digital message at low power levels
- Designer can increase noise immunity by increasing signal power
- Power efficiency is a measure of how much signal power should be increased to achieve a particular BER for a given modulation scheme
- Signal energy per bit / noise power spectral density: Eb/N0
• Bandwidth Efficiency
- Ability to accommodate data within a limited bandwidth
- Tradeoff between data rate and pulse width
- Throughput data rate per hertz: R/B bps per Hz
• Shannon Limit: Channel capacity / bandwidth
- C/B = log2(1 + S/N)
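The Shannon limit above can be evaluated directly. A small sketch (illustrative Python, not part of the report's MATLAB code):

```python
import math

def shannon_limit_bps_per_hz(snr_linear):
    """Maximum spectral efficiency C/B in bps/Hz for a given linear SNR."""
    return math.log2(1 + snr_linear)

# A linear SNR of 15 (about 11.8 dB) supports at most log2(16) = 4 bps/Hz,
# i.e. a 16-state scheme at the Nyquist signalling rate.
print(shannon_limit_bps_per_hz(15))  # 4.0
# At 0 dB (SNR = 1) the limit drops to 1 bps/Hz.
print(shannon_limit_bps_per_hz(1))   # 1.0
```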
Disadvantages of Digital Systems
* Digital systems tend to be very signal processing intensive compared with analog.
* Digital systems need to allocate a significant share of their resources to the task of synchronization at various levels. With analog signals synchronization is accomplished more easily.
* One disadvantage of a digital communication system is non-graceful degradation. When the SNR drops below a certain threshold, the quality of service can change from very good to very poor. Most analog systems degrade more gracefully.
Formatting
The goal of the first essential processing step, formatting, is to ensure that the source signal is compatible with digital processing. Transmit formatting is a transformation from source information to digital symbols. When data compression is employed in addition to formatting, the process is termed source coding.
The digital messages are considered to be in the logical format of binary 1's and 0's until they are transformed by pulse modulation into base band (pulse) waveforms. Such waveforms are then transmitted over a cable.
No channel can be used for the transmission of binary digits without first transforming the digits to waveforms that are compatible with the channel. For base band channels, compatible waveforms are pulses.
The conversion from a bit stream to a sequence of pulse waveforms takes place in the block labeled modulator. The output of a modulator is typically a sequence of pulses with characteristics that correspond to the digits being sent. After transmission through the channel, the pulse waveforms are recovered (demodulated) and detected to produce an estimate of the transmitted digits.
Formatting in a digital Communication System
Symbols
When digitally transmitted, the characters are first encoded into a sequence of bits, called a bit stream or baseband signal. Groups of k bits can then be combined to form new digits, or symbols, from a finite alphabet of M = 2^k such symbols. A system using a symbol set of size M is referred to as an M-ary system.
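The grouping of bits into M-ary symbols can be sketched as follows (illustrative Python; the function name is ours):

```python
def bits_to_symbols(bits, k):
    """Group k bits at a time into symbols from an alphabet of M = 2**k values."""
    assert len(bits) % k == 0
    symbols = []
    for i in range(0, len(bits), k):
        value = 0
        for b in bits[i:i + k]:
            value = (value << 1) | b   # shift in the next bit, MSB first
        symbols.append(value)
    return symbols

# k = 2 gives an M = 4 (quaternary) system, as used by QPSK.
print(bits_to_symbols([0, 1, 1, 1, 1, 0, 0, 0], 2))  # [1, 3, 2, 0]
```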
Waveform Representation of Binary Digits
Digits are just abstractions, a way to describe the message information. Thus we need something physical that will represent or carry the digits.
Binary digits are therefore represented with electrical pulses in order to transmit them through a baseband channel. At the receiver, a determination must be made regarding the shape of the pulse. The likelihood of correctly detecting the pulse is a function of the received signal energy (or area under the pulse).
PCM Waveform Types
When pulse modulation is applied to a binary symbol, the resulting binary waveform is called a PCM waveform. There are several types of PCM waveforms. These waveforms are often called line codes. When pulse modulation is applied to a non-binary symbol, the resulting waveform is called an M-ary pulse modulation waveform.
The PCM waveforms fall into the following four groups.
1) Non return to zero (NRZ)
2) Return to zero (RZ)
3) Phase encoded
4) Multilevel binary
The NRZ group is probably the most commonly used PCM waveform.
In choosing a waveform for a particular application, some of the parameters worth examining are
1) DC component
2) Self clocking
3) Error detection
4) Bandwidth compression
5) Differential encoding
6) Noise immunity
The most common criteria used for comparing PCM waveforms and for selecting one waveform type from many available are
1) Spectral characteristics
2) Bit synchronization capabilities
3) Error detection capabilities
4) Interference
5) Noise immunity
6) Cost and complexity of implementation
Bits per PCM Word and Bits per Symbol
Each analog sample is transformed into a PCM word, a group of bits. The PCM word size can be described by the number of quantization levels allowed for each sample; this is identical to the number of values that the PCM word can assume. We use
L = 2^l
where L is the number of quantization levels in the PCM word and l is the number of bits needed to represent those levels.
M-ary Pulse Modulation Waveforms
There are three basic ways to modulate information onto a sequence of pulses; we can vary the pulse's amplitude, position, or duration. This leads to the names
1) PAM (pulse amplitude modulation)
2) PPM (pulse position modulation)
3) PDM/PWM (pulse duration modulation/ pulse width modulation)
When information samples without any quantization are modulated onto the pulses, the resulting pulse modulation can be called analog pulse modulation. When the information samples are first quantized, yielding symbols from an M-ary alphabet, and then modulated onto pulses, the resulting pulse modulation is digital and we refer to it as M-ary pulse modulation.
Baseband modulation with pulses has analogous counterparts in the area of bandpass modulation. PAM is similar to amplitude modulation, while PPM and PDM are similar to phase and frequency modulation respectively.
Spectral Density
The spectral density of a signal characterizes the distribution of the signal's energy or power in the frequency domain.
Energy Spectral Density
We can relate the energy of a signal expressed in the time domain to the energy expressed in the frequency domain as:

Ex = ∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df

where X(f) is the Fourier transform of the non-periodic signal x(t).

Let ψx(f) = |X(f)|². This energy spectral density is an even function of frequency, so

Ex = 2 ∫_{0}^{∞} ψx(f) df
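The equality of the time-domain and frequency-domain energies (Parseval's relation) can be checked numerically. The sketch below (illustrative Python, using a naive DFT in place of the Fourier transform) compares the two sums for a short test sequence; the discrete form of the relation divides the frequency-domain sum by the number of samples.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

x = [0.5, 1.0, -0.25, 0.75, 1.0, -1.0, 0.0, 0.5]   # arbitrary test signal
energy_time = sum(v * v for v in x)                 # sum of x^2(t)
X = dft(x)
energy_freq = sum(abs(c) ** 2 for c in X) / len(x)  # discrete Parseval relation
print(round(energy_time, 6), round(energy_freq, 6))  # 4.125 4.125
```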
Power Spectral Density
The power spectral density function Gx(f) of the periodic signal x(t) is a real, even, and non-negative function of frequency that gives the distribution of the power of x(t) in the frequency domain:

Gx(f) = Σ_{n=−∞}^{∞} |Cn|² δ(f − nf0)

The PSD of a periodic signal is a discrete function of frequency. The total power is

Px = ∫_{−∞}^{∞} Gx(f) df = 2 ∫_{0}^{∞} Gx(f) df
If x(t) is a non-periodic signal, it cannot be expressed by a Fourier series, and if it is a non-periodic power signal (having infinite energy) it may not have a Fourier transform. However, we can still express the PSD of such signals in a limiting sense.
Chapter 2 Modulation and Demodulation
2.0 Modulation and Demodulation
Since the early days of electronics, as advances in technology were taking place, the boundaries of both local and global communication began eroding, resulting in a world that is smaller and hence more easily accessible for the sharing of knowledge and information. The pioneering work by Bell and Marconi formed the cornerstone of the information age that exists today and paved the way for the future of telecommunications.
Traditionally, local communication was done over wires, as this presented a cost-effective way of ensuring a reliable transfer of information. For long-distance communications, transmission of information over radio waves was needed. Although this was convenient from a hardware standpoint, radio-wave transmission raised doubts over the corruption of the information and was often dependent on high-power transmitters to overcome weather conditions, large buildings, and interference from other sources of electromagnetic radiation.
The various modulation techniques offered different solutions in terms of cost-effectiveness and quality of received signals, but until recently were still largely analog. Frequency modulation and phase modulation presented certain immunity to noise, whereas amplitude modulation was simpler to demodulate. However, more recently, with the advent of low-cost microcontrollers and the introduction of domestic mobile telephones and satellite communications, digital modulation has gained in popularity. With digital modulation techniques come all the advantages that traditional microprocessor circuits have over their analog counterparts. Any shortfalls in the communications link can be eradicated using software. Information can now be encrypted, error correction can ensure more confidence in received data, and the use of DSP can make better use of the limited bandwidth allocated to each service.
As with traditional analog systems, digital modulation can use amplitude, frequency, or phase modulation, each with different advantages. As frequency and phase modulation techniques offer more immunity to noise, they are the preferred schemes for the majority of services in use today and are discussed in detail below.
2.1 Digital Frequency Modulation:
A simple variation from traditional analog frequency modulation can be implemented by applying a digital signal to the modulation input. Thus, the output takes the form of a sine wave at two distinct frequencies. To demodulate this waveform, it is a simple matter of passing the signal through two filters and translating the resultant back into logic levels. Traditionally, this form of modulation has been called frequency-shift keying (FSK).
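A minimal FSK modulator, keying a carrier between two frequencies according to the bit value, can be sketched as follows (illustrative Python; frequencies and rates are arbitrary, and the two demodulation filters are not shown):

```python
import math

def fsk_modulate(bits, f0, f1, fs, samples_per_bit):
    """Generate an FSK waveform: frequency f1 for a 1, f0 for a 0."""
    out = []
    for i, b in enumerate(bits):
        f = f1 if b else f0
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / fs   # global sample time
            out.append(math.cos(2 * math.pi * f * t))
    return out

wave = fsk_modulate([1, 0, 1], f0=1000.0, f1=2000.0, fs=16000.0, samples_per_bit=16)
print(len(wave))  # 48 samples, 16 per bit
```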
2.2 Digital Phase Modulation:
Spectrally, digital phase modulation, or phase-shift keying, is very similar to frequency modulation. It involves changing the phase of the transmitted waveform instead of the frequency, these finite phase changes representing digital data. In its simplest form, a phase-modulated waveform can be generated by using the digital data to switch between two signals of equal frequency but opposing phase. If the resultant waveform is multiplied by a sine wave of equal frequency, two components are generated: one cosine waveform of double the received frequency and one frequency-independent term whose amplitude is proportional to the cosine of the phase shift. Thus, filtering out the higher-frequency term yields the original modulating data prior to transmission.
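The multiply-and-filter demodulation just described can be sketched end to end. In the illustrative Python below (all parameter values are arbitrary), the "filter" is simply an average over a whole number of carrier cycles per bit, which removes the double-frequency term and leaves the dc term whose sign carries the data:

```python
import math

FS = 8000.0   # sample rate (Hz)
FC = 1000.0   # carrier frequency (Hz)
SPB = 40      # samples per bit (5 whole carrier cycles per bit)

def psk_modulate(bits):
    """Switch between two equal-frequency carriers of opposing phase."""
    out = []
    for i, b in enumerate(bits):
        phase = 0.0 if b else math.pi
        for n in range(SPB):
            t = (i * SPB + n) / FS
            out.append(math.sin(2 * math.pi * FC * t + phase))
    return out

def psk_demodulate(wave):
    """Multiply by a synchronized reference sine, then average each bit period.

    The product contains a double-frequency term plus a dc term proportional
    to cos(phase shift); summing over whole carrier cycles keeps only the dc.
    """
    bits = []
    for i in range(0, len(wave), SPB):
        acc = 0.0
        for n in range(SPB):
            t = (i + n) / FS
            acc += wave[i + n] * math.sin(2 * math.pi * FC * t)
        bits.append(1 if acc > 0 else 0)
    return bits

data = [1, 0, 0, 1, 1, 0]
assert psk_demodulate(psk_modulate(data)) == data
```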
* Modulate and demodulate/detect blocks together are called a modem.
* The frequency down conversion is performed in the front end of the demodulator.
* Only formatting, modulation, demodulation/detection and synchronization are essential for a digital communication system.
* FORMATTING transforms the source information into bits.
* From this point up to pulse modulation block, the information remains in the form of a bit stream.
* Modulation is the process by which message symbols or channel symbols are converted to waveforms that are compatible with the requirements imposed by transmission channel. Pulse modulation is an essential step because each symbol to be transmitted must first be transformed from a binary representation to a base band waveform.
* When pulse modulation is applied to binary symbols, the resulting binary waveform is called a PCM waveform. When pulse modulation is applied to non-binary symbols, the resulting waveform is called an M-ary pulse modulation waveform.
* Band pass modulation is required whenever the transmission medium will not support the propagation of pulse-like waveforms.
* The term band pass is used to indicate that the base band waveform gi(t) is frequency translated by a carrier wave to a frequency that is much larger than the spectral content of gi(t).
* Equalization can be described as a filtering option that is used in or after the demodulator to reverse any degrading effects on the signal that were caused by the channel. An equalizer is implemented to compensate for any signal distortion caused by a non-ideal hi(t).
* Demodulation is defined as a recovery of a waveform (band pass pulse) and detection is defined as decisionmaking regarding the digital meaning of that waveform.
2.3 Linear Modulation Techniques
* Digital modulation techniques may be broadly classified as linear and nonlinear. In linear modulation techniques, the amplitude of the transmitted signal S(t) varies linearly with the modulating digital signal m(t).
* Linear modulation techniques are bandwidth efficient.
* In a linear modulation technique, the transmitted signal S(t) can be expressed as:
S(t) = Re[A m(t) exp(j2πfct)]
     = A[mR(t)cos(2πfct) − mI(t)sin(2πfct)]
where
A is the amplitude,
fc is the carrier frequency, and
m(t) = mR(t) + j mI(t) is the complex envelope representation of the modulated signal, which is in general complex.
* From the equations above, it is clear that the amplitude of the carrier varies linearly with the modulating signal.
* Linear modulation schemes in general do not have a constant envelope.
* Linear modulation schemes have very good spectral efficiency.
Normalized Radian Frequency
Sinusoidal waveforms are of the form:
x(t) = A cos(ωt + φ)  (1)
If we sample this waveform, we obtain
x[n] = x(nTs)
     = A cos(ωnTs + φ)
     = A cos(ŵn + φ)  (2)
where we have defined ŵ to be the normalized radian frequency:
ŵ = ωTs
The signal in (2) is a discrete-time cosine signal, and ŵ is the discrete-time radian frequency. ŵ has been normalized by the sampling period. ω has units of radians/second, while ŵ = ωTs has units of radians; i.e., ŵ is a dimensionless quantity. This is entirely consistent with the fact that the index n in x[n] is dimensionless. Once the samples are taken from x(t), the time-scale information is lost. The discrete-time signal is just a sequence of numbers, and these numbers carry no information about the sampling period, which is the information required to reconstruct the time scale. Thus an infinite number of continuous-time sinusoids can be transformed into the same discrete-time sinusoid by sampling: all we need to do is change the sampling period along with the frequency of the continuous-time sinusoid.
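This many-to-one property of sampling is easy to verify. In the illustrative Python below (arbitrary frequencies), two different continuous-time sinusoids are sampled at rates chosen so that ŵ = ωTs is the same (π/4 rad/sample), and the resulting sequences are identical:

```python
import math

def sample(freq_hz, fs_hz, n_samples, amp=1.0, phase=0.0):
    """Sample A*cos(2*pi*f*t + phi) at rate fs, t = n/fs."""
    return [amp * math.cos(2 * math.pi * freq_hz * n / fs_hz + phase)
            for n in range(n_samples)]

# A 100 Hz tone sampled at 800 Hz and a 200 Hz tone sampled at 1600 Hz
# both give normalized frequency 2*pi*100/800 = pi/4 rad/sample,
# so they produce the identical discrete-time sequence.
a = sample(100.0, 800.0, 16)
b = sample(200.0, 1600.0, 16)
print(max(abs(x - y) for x, y in zip(a, b)))  # 0.0
```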
2.4 Baseband Transmission
Baseband Demodulation/Detection
· The filtering at the transmitter and the channel typically cause the received pulse sequence to suffer from ISI (inter-symbol interference); thus the signal is not quite ready for sampling and detection.
· The goal of the demodulator is to recover the pulse with best possible signal to noise ratio (SNR), free of any ISI.
· Equalization is a technique used to help accomplish this goal. Not every type of communication channel requires equalization. However, the equalization process embodies a sophisticated set of signal processing techniques, making it possible to compensate for channel-induced interference.
· A received band pass waveform is first transformed to a base band waveform before the final detection step takes place.
· For linear systems, the mathematics of detection is unaffected by a shift in frequency.
* According to the equivalence theorem, all linear signalprocessing simulations can take place at base band (which is preferred for simplicity) with the same result as at band pass. Thus the performance of most digital communication systems will often be described and analyzed as if the transmission channel is a base band channel.
Chapter 3 π/4 Quadrature Phase Shift Keying
3.0 π/4 Quadrature Phase Shift Keying (π/4-QPSK)
3.1 Linear Modulation Techniques
The linear modulation techniques described in Section 2.3 apply here: the amplitude of the transmitted signal S(t) varies linearly with the modulating digital signal m(t), linear schemes are bandwidth efficient, and they do not in general have a constant envelope.
There are three major classes of digital modulation techniques used for transmission of digitally represented data:
* Amplitude-shift keying (ASK)
* Frequency-shift keying (FSK)
* Phase-shift keying (PSK)
All convey data by changing some aspect of a base signal, the carrier wave (usually a sinusoid), in response to a data signal. In the case of PSK, the phase is changed to represent the data signal. There are two fundamental ways of utilizing the phase of a signal in this way:
* By viewing the phase itself as conveying the information, in which case the demodulator must have a reference signal to compare the received signal's phase against; or
* By viewing the change in the phase as conveying information — differential schemes, some of which do not need a reference carrier (to a certain extent).
A convenient way to represent PSK schemes is on a constellation diagram. This shows the points in the Argand plane where, in this context, the real and imaginary axes are termed the in-phase and quadrature axes respectively due to their 90° separation. Such a representation on perpendicular axes lends itself to straightforward implementation. The amplitude of each point along the in-phase axis is used to modulate a cosine (or sine) wave and the amplitude along the quadrature axis to modulate a sine (or cosine) wave.
In PSK, the constellation points chosen are usually positioned with uniform angular spacing around a circle. This gives maximum phase separation between adjacent points and thus the best immunity to corruption. They are positioned on a circle so that they can all be transmitted with the same energy.
In this way, the moduli of the complex numbers they represent will be the same and thus so will the amplitudes needed for the cosine and sine waves. Two common examples are binary phase-shift keying (BPSK), which uses two phases, and quadrature phase-shift keying (QPSK), which uses four phases, although any number of phases may be used. Since the data to be conveyed are usually binary, the PSK scheme is usually designed with the number of constellation points being a power of 2.
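Constellation points with uniform angular spacing are simply the M complex roots of unity, optionally rotated. An illustrative Python sketch (the function name is ours):

```python
import cmath

def psk_constellation(m, offset=0.0):
    """Unit-energy M-PSK constellation: M points evenly spaced on the unit circle."""
    return [cmath.exp(1j * (2 * cmath.pi * i / m + offset)) for i in range(m)]

qpsk = psk_constellation(4, offset=cmath.pi / 4)   # points at 45, 135, 225, 315 deg
# Every point has the same modulus, hence the same transmitted energy.
print([round(abs(p), 6) for p in qpsk])  # [1.0, 1.0, 1.0, 1.0]
```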
3.2 Amplitude Shift Keying (ASK)
Amplitude shift keying (ASK), in the context of digital communications, is a modulation process which imparts to a sinusoid two or more discrete amplitude levels. These are related to the number of levels adopted by the digital message.
For a binary message sequence there are two levels, one of which is typically zero.
Thus the modulated waveform consists of bursts of a sinusoid. In amplitude shift keying the amplitude varies, whereas the phase and frequency remain the same, as shown in the following figure.
One of the disadvantages of ASK, compared with FSK and PSK, for example, is that it does not have a constant envelope. This makes its processing (e.g., power amplification) more difficult, since linearity becomes an important factor. However, it does make for ease of demodulation with an envelope detector.
Thus demodulation is a twostage process:
* Recovery of the band-limited bit stream
* Regeneration of the binary bit stream
3.3 Frequency-shift keying (FSK)
Frequency-shift keying (FSK) is a method of transmitting digital signals. The two binary states, logic 0 (low) and 1 (high), are each represented by an analog waveform. Logic 0 is represented by a wave at a specific frequency, and logic 1 is represented by a wave at a different frequency. In frequency shift keying the frequency varies, whereas the phase and amplitude remain the same.
Phase shift keying (PSK)
Phase Shift Keying (PSK) was developed during the early days of the deepspace program. PSK is now widely used in both military and commercial communication systems.
In phase shift keying the phase of the transmitted signal varies, whereas the amplitude and frequency remain the same.
The general expression for PSK is
si(t) = √(2E/T) cos[ωt + φi(t)], 0 ≤ t ≤ T
where the phase term φi(t) takes on M discrete values, given by
φi(t) = 2πi/M, i = 1, 2, …, M
3.4 Binary PSK
In binary phase shift keying we have the two bit values represented by the following waveforms:
s0(t) = A cos(ωt) represents binary "0"
s1(t) = A cos(ωt + π) represents binary "1"
For M-ary PSK, M different phases are required, and every n (where M = 2^n) bits of the binary bit stream are coded as one signal that is transmitted as
A sin(ωt + θj), j = 1, …, M
3.5 Quadraphase-Shift Modulation
Taking the above concept of PSK a stage further, it can be assumed that the number of phase shifts is not limited to only two states. The transmitted "carrier" can undergo any number of phase changes and, by multiplying the received signal by a sine wave of equal frequency, will demodulate the phase shifts into frequencyindependent voltage levels.
This is indeed the case in quadraphase-shift keying (QPSK). With QPSK, the carrier undergoes four changes in phase (four symbols) and can thus represent 2 binary bits of data per symbol. Although this may seem insignificant initially, a modulation scheme has now been proposed that enables a carrier to transmit 2 bits of information instead of 1, effectively doubling the bandwidth efficiency of the carrier.
Euler's relations give the product of two sinusoids as:
sin A sin B = ½[cos(A − B) − cos(A + B)]
Now consider multiplying two sine waves of the same frequency together (one sine being the incoming signal, the other being the local oscillator at the receiver mixer):
sin(ωt) · sin(ωt) = ½[1 − cos(2ωt)]  (1)
From Equation 1, it can be seen that the product is an output at double the input frequency (at half the amplitude), superimposed on a dc offset of half the input amplitude.
Similarly, multiplying by cos(ωt) gives
sin(ωt) · cos(ωt) = ½ sin(2ωt)
which gives an output frequency double that of the input, with no dc offset.
It is now fair to make the assumption that multiplying by any phase-shifted sine wave sin(ωt + φ) yields a "demodulated" waveform with an output frequency double that of the input, whose dc offset varies according to the phase shift:
sin(ωt) · sin(ωt + φ) = ½[cos φ − cos(2ωt + φ)]
Thus, the above proves the supposition that the phase shift on a carrier can be demodulated into a varying output voltage by multiplying the carrier with a sine-wave local oscillator and filtering out the high-frequency term. Unfortunately, the phase shift is limited to two quadrants: a phase shift of π/2 cannot be distinguished from a phase shift of −π/2, since cos(φ) = cos(−φ). Therefore, to accurately decode phase shifts present in all four quadrants, the input signal needs to be multiplied by both sinusoidal and cosinusoidal waveforms, the high-frequency terms filtered out, and the data reconstructed.
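The two-quadrant ambiguity and its resolution can be demonstrated numerically. In the illustrative Python below (arbitrary parameters), the received carrier is mixed with both a sine and a cosine reference; summing over whole carrier cycles removes the double-frequency terms, and atan2 of the two dc terms recovers the phase in all four quadrants:

```python
import math

FS = 8000.0   # sample rate (Hz)
FC = 1000.0   # carrier frequency (Hz)
N = 80        # average over 10 whole carrier cycles

def received(phase):
    """Received carrier with an unknown phase shift."""
    return [math.sin(2 * math.pi * FC * n / FS + phase) for n in range(N)]

def detect_phase(wave):
    """Mix with both sin and cos references, sum out the double-frequency
    terms, and recover the phase over all four quadrants with atan2."""
    i_acc = sum(w * math.sin(2 * math.pi * FC * n / FS) for n, w in enumerate(wave))
    q_acc = sum(w * math.cos(2 * math.pi * FC * n / FS) for n, w in enumerate(wave))
    return math.atan2(q_acc, i_acc)

for true_phase in (math.pi / 4, 3 * math.pi / 4, -3 * math.pi / 4, -math.pi / 4):
    est = detect_phase(received(true_phase))
    assert abs(est - true_phase) < 1e-6
print("all four quadrants recovered")
```

Mixing with the sine alone would give only i_acc, which is proportional to cos(φ) and therefore the same for +φ and −φ; the cosine branch supplies the sin(φ) term that breaks the tie.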
Quadrature PSK (QPSK)
If we define four signals, each with a phase shift differing by 90 degrees, then we have quadrature phase shift keying (QPSK).
The input binary bit stream {dk}, k = 0, 1, 2, …, arrives at the modulator input at a rate 1/T bits/sec and is separated into two data streams dI(t) and dQ(t) containing even- and odd-indexed bits respectively.
dI(t) = d0, d2, d4 ,....... (Even Bits)
dQ(t) = d1, d3, d5 , ...... (Odd Bits)
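This serial-to-parallel split of the bit stream can be sketched as (illustrative Python; the function name is ours):

```python
def split_iq(bits):
    """Split a serial bit stream into in-phase (even-index) and
    quadrature (odd-index) streams, as in a QPSK modulator."""
    d_i = bits[0::2]   # d0, d2, d4, ...
    d_q = bits[1::2]   # d1, d3, d5, ...
    return d_i, d_q

d_i, d_q = split_iq([1, 0, 0, 1, 1, 1, 0, 0])
print(d_i)  # [1, 0, 1, 0]
print(d_q)  # [0, 1, 1, 0]
```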
QPSK (or 4-PSK) is a modulation technique that transmits 2 bits of information using 4 phase states.
Each symbol corresponds to two bits.
General expression:
Si(t) = √(2E/T) cos[2πfct + (2i − 1)π/4], i = 1, 2, 3, 4
In QPSK the carrier phase can change only once every 2T seconds. If, from one T interval to the next, neither bit stream changes sign, the carrier phase remains unchanged. If one component aI(t) or aQ(t) changes sign, a phase change of π/2 occurs. However, if both components change sign, then a phase shift of π occurs.
If a QPSK modulated signal undergoes filtering to reduce the spectral side lobes, the resulting waveform will no longer have a constant envelope, and in fact the occasional 180° shifts in phase will cause the envelope to go to zero momentarily.
The figure above shows an NRZ bit stream. In QPSK the odd bits go to the in-phase component, whereas the even bits go to the quadrature component. The duration of the I and Q components is doubled to match the polar NRZ waveform.
3.6 Advantages and Disadvantages of PSK
Advantage:
Some of the advantages of the Phase modulation are,
Bandwidth Efficiency
In order to improve the bandwidth efficiency of band pass data transmission, we can increase the number of symbol states.
Disadvantages:
Some of the disadvantages of PSK are,
Reduced immunity to noise
As a general rule, we know that as the number of symbol states is increased, the tolerance to noise is reduced.
Two exceptions to this rule are QPSK and orthogonal MFSK.
Decreased immunity to noise compared to binary
Increased transmission power compared to binary
Increased complexity compared to binary
Lower transmission quality compared to binary
3.4 Binary Phase Shift Keying (BPSK)
In BPSK, the phase of a constant amplitude carrier signal is switched between two values according to two possible signals m1 and m2 corresponding to binary 1 and 0, respectively. Normally the two phases are separated by 180°. If the sinusoidal carrier has an amplitude Ac and energy per bit
Eb = ½ Ac² Tb
Then the transmitted BPSK signal is either
SBPSK(t) = √(2Eb/Tb) cos(2πfct + θc)
0 ≤ t ≤ Tb (binary 1)
or
SBPSK(t) = −√(2Eb/Tb) cos(2πfct + θc)
0 ≤ t ≤ Tb (binary 0)
It is often convenient to generalize m1 and m2 as a binary data signal m(t), which takes on one of two possible pulse shapes, Then the transmitted signal may be represented as
SBPSK(t) = m(t) √(2Eb/Tb) cos(2πfct + θc)
The BPSK signal is equivalent to a double-sideband suppressed-carrier amplitude modulated waveform, where cos(2πfct) is applied as the carrier and the data signal m(t) is applied as the modulating waveform.
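A short Python/NumPy sketch of this DSB-SC view of BPSK (illustrative fc, Tb, sample rate and bit pattern; Ac = 1, so Eb = ½Ac²Tb as above; the report's own code is MATLAB):

```python
import numpy as np

fc, Tb, fs, Ac = 4.0, 1.0, 1000.0, 1.0      # assumed illustrative values
Eb = 0.5 * Ac**2 * Tb                       # energy per bit
bits = [1, 0, 1]

# Polar NRZ data signal m(t): +1 for binary 1, -1 for binary 0.
m = np.repeat([1.0 if b else -1.0 for b in bits], int(Tb * fs))
t = np.arange(len(m)) / fs

# s(t) = m(t) * sqrt(2Eb/Tb) * cos(2*pi*fc*t)
s = m * np.sqrt(2 * Eb / Tb) * np.cos(2 * np.pi * fc * t)
```

Each sign flip of m(t) produces the 180° phase reversal of the carrier described above.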
3.5 Quadrature Phase Shift Keying (QPSK)
S(t) = SI(t) cos(2πfct) − SQ(t) sin(2πfct)
where SI(t) is the in-phase component of the modulated wave, and SQ(t) is the quadrature component. The cosine and sine carriers are in phase quadrature with each other.
This modulation scheme is characterized by the fact that the information carried by the transmitted wave is contained in the phase. In QPSK, the phase of the carrier takes on one of four equally spaced values, such as π/4, 3π/4, 5π/4 and 7π/4.
Si(t) = √(2E/T) cos[2πfct + (2i − 1)π/4], 0 ≤ t ≤ T
= 0, elsewhere
where i = 1, 2, 3, 4
E = transmitted signal energy per symbol
T = symbol duration
fc = carrier frequency = nc/T, for a fixed integer nc
Using cos(A + B) = cos A cos B − sin A sin B,
Si(t) = √(2E/T) cos[(2i − 1)π/4] cos(2πfct) − √(2E/T) sin[(2i − 1)π/4] sin(2πfct), 0 ≤ t ≤ T    (1)
where i = 1, 2, 3, 4
Observations Based on eq. (1)
1) There are only two orthonormal basis functions, f1(t) and f2(t), contained in the expansion of Si (t).
f1(t) = √(2/T) cos(2πfct), 0 ≤ t ≤ T
f2(t) = √(2/T) sin(2πfct), 0 ≤ t ≤ T
2) There are four message points, and the associated signal vectors can be defined by:
Si = (√E cos[(2i − 1)π/4], −√E sin[(2i − 1)π/4]) = (Si1, Si2)
where i = 1, 2, 3, 4
Signal Space Characteristics of QPSK
Input dibit (0 ≤ t ≤ T)    Phase of QPSK signal (radians)    Si1         Si2
10                         π/4                               +√(E/2)     −√(E/2)
00                         3π/4                              −√(E/2)     −√(E/2)
01                         5π/4                              −√(E/2)     +√(E/2)
11                         7π/4                              +√(E/2)     +√(E/2)
Table 3.1 Signal Space Characteristics of QPSK
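Table 3.1 follows directly from the signal-vector expression Si = (√E cos[(2i − 1)π/4], −√E sin[(2i − 1)π/4]). A Python sketch of the dibit-to-message-point mapping (E = 1 assumed for illustration):

```python
import math

E = 1.0                                   # symbol energy (assumed)
PHASES = {"10": math.pi / 4, "00": 3 * math.pi / 4,
          "01": 5 * math.pi / 4, "11": 7 * math.pi / 4}

def message_point(dibit):
    """Return (Si1, Si2) = (sqrt(E)*cos(theta), -sqrt(E)*sin(theta))."""
    theta = PHASES[dibit]
    return (math.sqrt(E) * math.cos(theta), -math.sqrt(E) * math.sin(theta))

print(message_point("10"))   # (+sqrt(E/2), -sqrt(E/2))
```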
Accordingly, a QPSK signal is characterized by having a two dimensional signal constellation (i.e. N=2) and four message points (i.e. M=4).
Constellation Diagram for QPSK
Waveform Representation of QPSK
QPSK has twice the bandwidth efficiency of BPSK, since two bits are transmitted in a single modulation symbol. The phase of the carrier takes on one of four equally spaced values, such as 0, π/2, π and 3π/2, where each value of phase corresponds to a unique pair of message bits. The QPSK signal for this set of symbol states may be defined as
SQPSK(t) = √(2Es/Ts) cos[2πfct + (i − 1)π/2]
0 ≤ t ≤ Ts, i = 1, 2, 3, 4
Where Ts is the symbol duration and is equal to twice the bit period.
Using trigonometric identities
SQPSK(t) = √(2Es/Ts) cos[(i − 1)π/2] cos(2πfct) − √(2Es/Ts) sin[(i − 1)π/2] sin(2πfct)
If basis functions f1(t) = √(2/Ts)cos(2πfct) , f2(t)=√(2/Ts)sin(2πfct) are defined over the interval 0≤t≤Ts for the QPSK signal set, the four signals in the set can be expressed in terms of the basis signals as
SQPSK(t) = √Es cos[(i − 1)π/2] f1(t) − √Es sin[(i − 1)π/2] f2(t)
i = 1, 2, 3, 4
Based on this representation, a QPSK signal can be depicted using a two dimensional constellation diagram with four points as shown in the below:
From the constellation diagram of a QPSK signal, it can be seen that the distance between the adjacent points in the constellation is √ (2Es).
A striking result is that the bit error probability of QPSK is identical to BPSK, but twice as much data can be sent in the same bandwidth. Thus, when compared to BPSK, QPSK provides twice the spectral efficiency with exactly the same energy efficiency.
Similar to BPSK, QPSK can also be differentially encoded to allow noncoherent detection.
The difference between a biphase modulator and a quadrature (QPSK) modulator is that the biphase modulator has two phase states, 0 degrees and 180 degrees, modulated by switching the polarity of the DC voltage at the control ports.
The basic elements of a QPSK modulator are a pair of matched biphase modulators, in quadrature with each other, modulated by switching the polarities of the DC control voltages in four different logic combinations for the four phase states.
I&Q modulator/demodulator
A popular form of an I&Q modulator/ demodulator consists of a 90° splitter, two mixers and a 0° combiner/splitter, as shown below.
In a modulator, the incoming data stream is split into two independent bit streams to form the I & Q signals. These are upconverted in mixers 1 & 2. Since the local oscillator signals to the mixers are 90° apart in phase, the outputs of mixers 1 & 2 are orthogonal. These signals are vectorially combined in a 0° hybrid. In essence, the quadrature nature of I&Q modulation is used for bandwidth reduction of modulated signals. By proper coding of the I&Q signals, an I&Q modulator can be used for higher-order modulation. In a demodulator, the output I&Q signals are decoded.
Difference between an I&Q and QPSK modulator
An I&Q modulator can be used as a QPSK modulator. As an I&Q modulator, mixers operate in the linear range. In a QPSK (360° in 90° steps) modulator (such as MiniCircuits QMC100), mixers behave as biphase modulators and operate in a saturated mode.
Spectrum and Bandwidth of QPSK Signal
The power spectrum density of a QPSK signal using rectangular pulses can be expressed as
PQPSK(f) = (Es/2){[sin π(f − fc)Ts / π(f − fc)Ts]² + [sin π(f + fc)Ts / π(f + fc)Ts]²}
= Eb{[sin 2π(f − fc)Tb / 2π(f − fc)Tb]² + [sin 2π(f + fc)Tb / 2π(f + fc)Tb]²}
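This density can be evaluated numerically. A Python/NumPy sketch, taking the second sinc-squared term as centered at −fc (the (f + fc) term of the standard two-sided spectrum) and illustrative values for Es, Ts and fc:

```python
import numpy as np

Es, Ts, fc = 2.0, 1.0, 10.0                 # assumed illustrative values
f = np.linspace(fc - 5, fc + 5, 1001)       # f[500] == fc

def sinc_sq(x):
    # np.sinc(x) = sin(pi*x)/(pi*x), exactly the bracketed term above
    return np.sinc(x) ** 2

psd = (Es / 2) * (sinc_sq((f - fc) * Ts) + sinc_sq((f + fc) * Ts))

print(float(psd[500]))    # peak value Es/2 at f = fc
```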
Offset Quadrature Phase Shift Keying (OQPSK)
OQPSK signaling is similar to QPSK signaling except for time alignment of the even and odd bit streams. In QPSK signaling the bit transitions of the even and the odd bit streams occur at the same time instance, but in OQPSK signaling, the even and odd bit streams, mI(t) and mQ(t),are offset in their relative alignment by one bit period (half symbol period).
When QPSK signals are pulse shaped, they lose the constant envelope property. A phase shift of π radians can cause the signal envelope to pass through zero for an instant. Any hard limiting or nonlinear amplification of these zero crossings brings back the filtered side lobes, since the fidelity of the signal at small voltage levels is lost. To prevent the regeneration of side lobes and spectral widening, it is imperative that pulse-shaped QPSK signals be amplified only using linear amplifiers, which are less efficient. OQPSK is less susceptible to these deleterious effects and supports more efficient amplification.
In QPSK the phase transitions occur only once every Ts = 2Tb, and can be a maximum of 180°. In the case of OQPSK, the bit transitions (and hence phase transitions) occur every Tb. Since the transition instants of mI(t) and mQ(t) are offset, at any given time only one of the two bit streams can change values. Therefore the maximum phase shift of the transmitted signal at any given time is limited to ±90°. Hence OQPSK eliminates the 180° phase transitions.
As the 180° phase transitions have been eliminated, band limiting of OQPSK signals does not cause the signal envelope to go to zero. The ISI caused by the band limiting signal is lesser than QPSK, and hence hard limiting or nonlinear amplification of OQPSK signals does not regenerate the high frequency side lobes as much in QPSK.
The spectrum of the OQPSK signal is identical to that of the QPSK signal; hence both signals occupy the same bandwidth.
π/4 Quadrature Phase Shift Keying (π/4 QPSK)
* It may be demodulated in a coherent or noncoherent fashion.
* Maximum phase transition is limited to ±135°, as compared to 180° for QPSK and 90° for OQPSK. Hence, π/4 QPSK preserves the constant envelope property better than band-limited QPSK, but is more susceptible to envelope variations than OQPSK.
* Can be noncoherently detected, thus greatly simplifies the receiver design.
* π/4 QPSK can be differentially encoded to facilitate easier implementation of differential detection with phase ambiguity in the recovered carrier.
* When differentially encoded, π/4 QPSK is called π/4 DQPSK.
* In a π/4 QPSK modulator, signaling points of the modulated signal are selected from two QPSK constellations which are shifted by π/4 with respect to each other.
* Switching between the two constellations on every successive symbol ensures that there is at least a phase shift of an integer multiple of π/4 radians between successive symbols. This ensures that there is a phase transition for every symbol, which enables the receiver to perform timing recovery and synchronization.
3.7 The Constellation Diagrams
The First Constellation
The Second Constellation
3.8 π/4 QPSK Transmission Techniques
* The input bit stream is partitioned by serial to parallel converter into two parallel data streams of mI,k and mQ,k, each with a symbol rate equal to half that of the incoming bit rate.
* The kth in-phase and quadrature pulses, Ik and Qk, are produced at the output of the signal mapping circuit over time kT ≤ t ≤ (k + 1)T, and are determined by their previous values, Ik−1 and Qk−1, as well as θk, which is itself a function of the phase shift φk determined by the current input symbols mI,k and mQ,k. Ik and Qk represent rectangular pulses over one symbol duration, with amplitudes given by:
Ik = cos θk
Qk = sin θk
θk = θk−1 + φk
Ik = cos(θk−1 + φk)
= cos θk−1 cos φk − sin θk−1 sin φk
Ik = Ik−1 cos φk − Qk−1 sin φk    (1)
Qk = sin(θk−1 + φk)
= cos θk−1 sin φk + sin θk−1 cos φk
Qk = Ik−1 sin φk + Qk−1 cos φk    (2)
where θk and θk−1 are the phases of the kth and (k − 1)st symbols.
* The phase shift φk is related to the input symbols mI,k and mQ,k according to the table below:
Information bits mI,k mQ,k    Phase shift φk
11                            π/4
01                            3π/4
00                            −3π/4
10                            −π/4
Table 3.2 Relation between the input symbols and the phase shift
* The in-phase and quadrature bit streams Ik and Qk are then separately modulated by two carriers, which are in quadrature with one another, to produce the π/4 QPSK waveform given by:
Sπ/4 QPSK(t) = I(t) cos ωct − Q(t) sin ωct
* Both Ik and Qk are usually passed through Raised Cosine roll off pulse shaping filters before modulation, in order to reduce bandwidth occupancy.
* Ik and Qk, and the peak amplitudes of the waveforms I(t) and Q(t), can take one of five possible values: 0, +1, −1, +1/√2, −1/√2.
* The information is completely contained in the phase difference φk of the carrier between two adjacent symbols.
Since the information is completely contained in phase difference, it is possible to use noncoherent differential detection even in the absence of differential encoding
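Table 3.2 and recursions (1) and (2) can be sketched together in Python (the signed phase shifts ±π/4, ±3π/4 follow the usual π/4 QPSK convention; the starting point I0 = 1, Q0 = 0 is an assumption):

```python
import math

PHASE_SHIFT = {(1, 1): math.pi / 4, (0, 1): 3 * math.pi / 4,
               (0, 0): -3 * math.pi / 4, (1, 0): -math.pi / 4}

def modulate(dibits, i0=1.0, q0=0.0):
    """Ik = Ik-1*cos(phi) - Qk-1*sin(phi); Qk = Ik-1*sin(phi) + Qk-1*cos(phi)."""
    i_prev, q_prev, out = i0, q0, []
    for d in dibits:
        phi = PHASE_SHIFT[d]
        i_k = i_prev * math.cos(phi) - q_prev * math.sin(phi)
        q_k = i_prev * math.sin(phi) + q_prev * math.cos(phi)
        out.append((i_k, q_k))
        i_prev, q_prev = i_k, q_k
    return out

pts = modulate([(1, 1), (0, 0)])
# Every Ik, Qk lands on one of the five values 0, +/-1, +/-1/sqrt(2).
```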
3.9 π/4 QPSK Detection Techniques (using a differential detector)
* Differential detection is often employed due to ease of hardware implementation.
* Differential detection offers a lower error floor, since it does not rely on phase synchronization.
* The above detection technique is called baseband differential detection.
* The baseband detector determines the cosine and sine functions of the phase difference, and then decides on the phase difference accordingly.
Baseband Differential Detection
* The incoming π/4 QPSK signal is quadrature demodulated using two oscillator signals that have the same frequency as the unmodulated carrier at the transmitter, but not necessarily the same phase.
* If φk = tan⁻¹(Qk/Ik) is the phase of the carrier due to the kth data symbol, the outputs wk and zk from the two low-pass filters can be expressed as:
wk = cos(φk − γ)
zk = sin(φk − γ)
where γ is the phase shift due to noise, propagation and interference. The phase γ is assumed to change much more slowly than φk, so it is essentially constant.
* The two sequences wk and zk are passed through a differential decoder which operates on the following rule:
xk = wk wk−1 + zk zk−1
yk = zk wk−1 − wk zk−1
Substituting for wk and zk:
xk = cos(φk − γ) cos(φk−1 − γ) + sin(φk − γ) sin(φk−1 − γ)
= cos(φk − φk−1)
yk = sin(φk − γ) cos(φk−1 − γ) − cos(φk − γ) sin(φk−1 − γ)
= sin(φk − φk−1)
* The output of the differential decoder is applied to the decision circuit, which uses Gray coding to determine:
SI = 1, if xk > 0
SI = 0, if xk < 0
SQ = 1, if yk > 0
SQ = 0, if yk < 0
where SI and SQ are the detected bits in the in-phase and quadrature arms.
* Make sure that the local oscillator frequency of the receiver is the same as the transmitter carrier frequency, and that it does not drift.
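The decoder rule and decision circuit above can be sketched in Python; the filter outputs wk, zk are synthesized directly from an assumed phase difference and an arbitrary offset γ, to show that γ cancels:

```python
import math

def detect(w_k, z_k, w_prev, z_prev):
    x = w_k * w_prev + z_k * z_prev      # cos(phi_k - phi_{k-1})
    y = z_k * w_prev - w_k * z_prev      # sin(phi_k - phi_{k-1})
    return (1 if x > 0 else 0,           # S_I
            1 if y > 0 else 0)           # S_Q

gamma = 0.3                              # unknown phase offset (assumed)
phi_prev, phi_k = 1.0, 1.0 + math.pi / 4 # phase difference of +pi/4

si_sq = detect(math.cos(phi_k - gamma), math.sin(phi_k - gamma),
               math.cos(phi_prev - gamma), math.sin(phi_prev - gamma))
print(si_sq)   # (1, 1): cos(pi/4) > 0 and sin(pi/4) > 0
```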
Chapter 4 DQPSK (Differential Quadrature Phase Shift Keying)
Differential modulation
The second variation is differential modulation, as used in differential QPSK (DQPSK) and differential 16QAM (D16QAM). Differential means that the information is not carried by the absolute state; it is carried by the transition between states. In some cases there are also restrictions on allowable transitions. This occurs in π/4 DQPSK, where the carrier trajectory does not go through the origin. A DQPSK transmission system can transition from any symbol position to any other symbol position.
The Pi/4 DQPSK modulation format uses two QPSK constellations offset by 45 degrees (pi/4 radians). Transitions must occur from one constellation to the other. This guarantees that there is always a change in phase at each symbol, making clock recovery easier. The data is encoded in the magnitude and direction of the phase shift, not in the absolute position on the constellation. One advantage of Pi/4 DQPSK is that the signal trajectory does not pass through the origin, thus simplifying transmitter design. Another is that Pi/4 DQPSK, with root raised cosine filtering, has better spectral efficiency than GMSK, the other common cellular modulation type.
Pi/4 DQPSK Modem
Block Diagram of Pi/4 DQPSK Modem
The block diagram of pi/4 DQPSK Modem is shown below,
The function of the data splitter is to perform a serial-to-parallel conversion. It takes in two bits from the serial input and outputs them to the I and Q channels: the first bit is put into the I channel and the second bit into the Q channel.
The "D" in DPSK means the coding is differential: it encodes the difference between the current input and the delayed output.
4.1 Envelope Variations
Most digital transmitters operate their high power amplifiers at or near to saturation in order to achieve maximum power efficiency. At saturation however, the signal is nonlinearly amplified which generates amplitude and phase distortions. These distortions spread the transmitted signal into adjacent channels. A filter used to suppress the sideband lobes can introduce amplitude distortions when the input pulse changes abruptly. The result of these amplitude variations is to increase the bandwidth of the signal if nonlinear amplification is used.
In an ideal system, the transition from one constellation point to the next occurs instantaneously. However, filtering in a practical system will mean that the transition takes a finite time, resulting in a progressive phase change and hence signal envelope. The envelope variation of a signal is defined by the changes in the magnitude of the vector from the origin on the IQ constellation diagram to the line 'traced' by the signal when changing from one constellation point to the next.
Generation of Pi/4 DQPSK Signals
Let S = [A0 B0 A1 B1 A2 B2 A3 B3 A4 B4 ... An Bn] represent the complete waveform. Then, Sw = [Ak Bk Ak−1 Bk−1] represents a particular sample of 4 bits.
There are four possible values for each dibit AkBk. Each of the four different dibits represents a different phase shift. Phases are represented by points on the IQ diagram.
The phase being the angle made by the vector from the origin to the point on the IQ diagram and the I=0 axis.
AkBk Δθ
0 0 +5pi/4
0 1 +3pi/4
1 1 +7pi/4
1 0 pi/4
The present phase is simply equal to the previous phase plus the phase shift. This can be easily obtained from knowledge of the previous phase representation [Ik1 Qk1] and the received digit.
In pi/4 DQPSK the data is encoded in the change in phase of the transmitted carrier. Let Sk−1 represent the carrier transmitted for the previous symbol, with θ equal to its absolute phase.
Sk−1 = A cos(ωct − θ)
Then, the carrier for the new symbol is:
Sk = A cos(ωct − (θ + Δθ))
Sk = A cos(θ + Δθ) cos ωct + A sin(θ + Δθ) sin ωct
Sk = Ik cos ωct + Qk sin ωct
If the input data is Gray coded, then the values of sin and cos are obtained according to the phase-shift mapping above.
Encoder Implementation.
The Inphase (I) and Quadrature Phase (Q) components of pi/4 DQPSK can be expressed as:
I(i) = I(i−1)*cos(Δθi) − Q(i−1)*sin(Δθi)
Q(i) = I(i−1)*sin(Δθi) + Q(i−1)*cos(Δθi)
S(i) = S(i−1)*exp(jΔθi)
where
I(i), Q(i) and I(i−1), Q(i−1) are the in-phase and quadrature components of the Pi/4 DQPSK modulated symbol at the ith and (i−1)th signaling intervals.
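The complex form S(i) = S(i−1)*exp(jΔθi) is the most compact way to sketch the encoder (Python shown for illustration; the starting symbol S(0) = 1 + 0j is an assumption):

```python
import cmath
import math

def encode(phase_shifts, s0=1 + 0j):
    """Rotate the previous symbol by each phase increment in turn."""
    out, s = [], s0
    for dtheta in phase_shifts:
        s = s * cmath.exp(1j * dtheta)
        out.append(s)
    return out

# Two symbols: +pi/4 then -3pi/4; the real and imaginary parts are
# exactly the I(i), Q(i) of the recursions above.
symbols = encode([math.pi / 4, -3 * math.pi / 4])
```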
Decoder Implementation
Pi/4 DQPSK decoding is generally accomplished using differentially coherent detection. So, for the decoder implementation:
• Compute the phase angles of all the received baseband symbols.
• Compute the difference in phase angles between successive symbols.
Δθk = ∠Sk − ∠Sk−1
• Demap the phase values into information bits using the inverse of the phase mapping function.
Once the value of Δθk is obtained, the pi/4 DQPSK decision rule becomes,
b1k = (sin Δθk >0)
b2k = (cos Δθk >0)
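The three decoder steps translate almost line for line into code. A Python sketch, fed with two ideal noise-free symbols for illustration:

```python
import cmath
import math

def decode(symbols):
    bits = []
    for prev, cur in zip(symbols, symbols[1:]):
        dtheta = cmath.phase(cur) - cmath.phase(prev)   # phase difference
        bits.append((1 if math.sin(dtheta) > 0 else 0,  # b1k
                     1 if math.cos(dtheta) > 0 else 0)) # b2k
    return bits

# A +pi/4 phase step has both sin and cos positive -> dibit (1, 1).
print(decode([1 + 0j, cmath.exp(1j * math.pi / 4)]))   # [(1, 1)]
```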
Scrambler.
The scrambler scrambles the bits sent by the DTE. To understand the need for a scrambler, consider the situation where the DTE sends a long series of 01 dibits. The same phase shift is then transmitted for every symbol. But at the receiver side, varied phase changes are required for correct clock recovery, so this causes problems and the carrier loses lock. To avoid this, the scrambler is introduced to minimize the probability of such ill-conditioned sequences.
ds(nT) = d(nT) XOR ds((n − 14)T) XOR ds((n − 17)T)
So there is a feedback shift register in the scrambler.
Serial to Parallel Converter.
This block converts the incoming information bits into two streams, one containing the even numbered bits and the other containing the odd numbered bits. The output pair of bits constitute the input symbol stream to the pi/4 DQPSK Modulator.
Encoder.
This block encodes the input information bits {b1k, b2k} into modulation symbols {Ik, Qk} using the pi/4 DQPSK signal mapping.
Low Pass Filters.
Since the telephone network behaves as a bandpass filter, with the passband starting around 300 Hz and ending around 3200 Hz, the baseband encoder outputs cannot be transmitted directly through the communication medium; they must first be modulated up in frequency. The modulation is not attempted directly on the encoder output, for two reasons. First, the sampling frequency must be increased from 1/Tb to 1/Ts, the latter being at least 6.4 kHz; this increase in the sampling frequency is accomplished by interpolation. Second, if the modulation were attempted directly on the encoder output, the instantaneous changes of the I and Q components would generate higher-order harmonics. To eliminate these harmonics we increase the sampling frequency by interpolation.
Interpolation
Interpolation is a powerful signal processing concept which strongly influences the hardware reconstruction filter specification and digital filter design. Interpolation involves sampling the input waveform at a greater rate than the existing samples are produced, by interposing extra samples in between them. This can be achieved by repeating existing samples or inserting zero impulses.
(a) Existing Sampling
(b) Interpolating Sampling
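The zero-impulse variant mentioned above can be sketched in a few lines of Python (the factor L and the sample values are illustrative):

```python
def zero_stuff(samples, L):
    """Raise the sample rate by a factor L: keep each existing sample and
    insert L-1 zeros after it. The reconstruction filter that follows then
    smooths the zeros into intermediate values."""
    out = []
    for s in samples:
        out.append(s)
        out.extend([0] * (L - 1))
    return out

print(zero_stuff([1, 2, 3], 3))   # [1, 0, 0, 2, 0, 0, 3, 0, 0]
```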
InterSymbol Interference (ISI)
The frequency response of a rectangular pulse is a sinc function containing an infinite number of frequency components. Therefore, in all practical channel media where the bandwidth is limited, the rectangular pulse will be distorted in both amplitude and phase, thus affecting the next pulse. This is called 'intersymbol interference'. The amount of ISI may be reduced by shaping the digital pulses so that the sampling instant coincides with the zero crossings of the adjacent symbols. A sinc function satisfies this criterion and offers the highest possible symbol rate which can be transmitted using the minimum Nyquist baseband frequency. The sinc function (α = 0) represents a 'brick-wall' filter in the frequency domain. The spectrum of the raised cosine filter (α > 0) is smoother and hence easier to implement practically. Raised cosine filtering is usually achieved by implementing a root raised cosine filter as part of the modulator and demodulator.
In the practical situation the maximum number of taps is compromised by processor speed limitations. In the time domain, limiting the number of taps truncates the impulse response and results in undesirable side lobes generated in the frequency domain. As α approaches zero the side lobes rise up to the principal passband lobe attenuation level and hence the desired filtering characteristic is unachievable. An α of about 0.2 is achievable by the clock speed of today's DSPs.
Pulse Shaping
ISI is caused when the tails of the received pulses overlap at the sample points, causing uncertainty in the received pulse amplitude. It is possible to shape the transmit pulses in a manner designed to minimize the effects of ISI on the received waveform. The set of shifted pulse responses overlap, but their tails all possess nulls at the sample instants. Therefore, the only contribution to r(nT) is due to the nth transmit pulse. As shown below, the received signal r(t) equals the amplitude of the individual sinc functions at the sample instants. Compare this with the previous example in which r(t) has a more ambiguous relationship to the individual pulse responses.
An example of pulse shaping waveform
If a received pulse shape can meet the following property, zero ISI can be achieved
Px(nT) = { 1, n = 0
         { 0, n ≠ 0
This equation simply means that there are zero crossings at the sample rate. It can be shown that this results in a spectrum possessing vestigial symmetry. That is, the frequency response exhibits odd symmetry about 1/2T, causing the sum of repeat spectra to equal a constant. It is important to note that this spectrum may be closely approximated by a realizable filter having a gradual rolloff around 1/2T.
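The zero-ISI property is easy to verify for the sinc pulse, whose zero crossings fall exactly on the other sample instants (a Python/NumPy sketch with T = 1):

```python
import numpy as np

n = np.arange(-5, 6)        # sample instants nT with T = 1
p = np.sinc(n)              # np.sinc(x) = sin(pi*x)/(pi*x)

print(float(p[5]))                             # n = 0 sample: 1.0
print(bool(np.allclose(np.delete(p, 5), 0)))   # all other instants: True
```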
Receiver.
The receiver is similar to the transmitter but in reverse. It is more complex to design. The incoming (RF) signal is first down converted to (IF) and demodulated. The ability to demodulate the signal is hampered by factors including atmospheric noise, competing signals, and multipath or fading. Generally, demodulation involves the following stages:
1. Carrier frequency recovery (carrier lock)
2. Symbol clock recovery (symbol lock)
3. Signal decomposition to I and Q components
4. Determining I and Q values for each symbol ("slicing")
5. Decoding and deinterleaving
6. Expansion to original bit stream
7. Digitaltoanalog conversion, if required
In more and more systems, however, the signal starts out digital and stays digital. It is never analog in the sense of a continuous analog signal like audio. The main difference between the transmitter and receiver is the issue of carrier and clock (or symbol) recovery. Both the symbol-clock frequency and phase (or timing) must be correct in the receiver in order to demodulate the bits successfully and recover the transmitted information. A symbol clock could be at the right frequency but at the wrong phase. If the symbol clock were aligned with the transitions between symbols rather than the symbols themselves, demodulation would be unsuccessful. Symbol clocks are usually fixed in frequency and this frequency is accurately known by both the transmitter and receiver.
The difficulty is to get both of them aligned in phase or timing. There are a variety of techniques and most systems employ two or more.
If the signal amplitude varies during modulation, a receiver can measure the variations. The transmitter can send a specific synchronization signal or a predetermined bit sequence such as 101010101010 to "train" the receiver's clock. In systems with a pulsed carrier, the symbol clock can be aligned with the power turnon of the carrier. In the transmitter, it is known where the RF carrier and digital data clock are because they are being generated inside the transmitter itself. In the receiver there is not this luxury. The receiver can approximate where the carrier is but has no phase or timing symbol clock information. A difficult task in receiver design is to create carrier and symbol clock recovery algorithms. That task can be made easier by the channel coding performed in the transmitter.
4.2 Equalizer
A properly shaped transmit pulse resembles a sinc function, and direct superposition of these pulses results in no ISI at properly selected sample points.
In practice, however, the received pulse response is distorted in the transmission process and may be combined with additive noise. Because the raised cosine pulses are distorted in the time domain, you may find that the received signal exhibits ISI. If you can define the channel impulse response, you can implement an inverse filter to counter its ill effect. This is the job of the equalizer.
4.3 Demodulator
The demodulator translates the passband information back to baseband. With Ip(nTs) and Qp(nTs) as inputs to the demodulator, the outputs are I'(nTs) and Q'(nTs):
I'(nTs) = Ip(nTs) cos(ω'nTs) + Qp(nTs) sin(ω'nTs)
Q'(nTs) = Ip(nTs) sin(ω'nTs) − Qp(nTs) cos(ω'nTs)
where ω' is the local carrier frequency.
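The two demodulator equations can be sketched as a per-sample function (Python for illustration; the sample values and the local carrier frequency below are arbitrary):

```python
import math

def demod_sample(ip, qp, w_prime, n, ts):
    """I'(nTs) = Ip*cos(w'nTs) + Qp*sin(w'nTs);
    Q'(nTs) = Ip*sin(w'nTs) - Qp*cos(w'nTs)."""
    c = math.cos(w_prime * n * ts)
    s = math.sin(w_prime * n * ts)
    return ip * c + qp * s, ip * s - qp * c

# With w' = 0 the mixer passes I through and negates Q.
print(demod_sample(0.5, 0.25, 0.0, 1, 1.0))   # (0.5, -0.25)
```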
4.4 Receiver Filters
The incoming analog signal is digitized at the sampling frequency to obtain its digital counterpart. This signal is then bandpass filtered for three reasons.
• Rejection of outofband noise, including the rejection of the transmit signal spectrum due to the near end echo paths.
• Introduction of a 90-degree relative phase shift for the Hilbert transformer.
• Root raised cosine response, to match it with the root raised cosine response of the transmitter filter to reduce ISI.
4.5 Decoder
Pi/4 DQPSK decoding is generally accomplished using differentially coherent detection.
• Compute the phase angles of all the received baseband symbols.
• Compute the difference in phase angles between successive symbols.
• Demap the phase values into information bits using the inverse of the phase mapping function.
• Once the value of Δθk is obtained, the pi/4 DQPSK decision rule becomes,
b1k = (sin Δθk >0)
b2k = (cos Δθk >0)
4.6 Descrambler.
The dibit is fed into the descrambler to recover the originally transmitted dibit. The output of the descrambler is described by
d(nT) = ds(nT) XOR ds((n − 14)T) XOR ds((n − 17)T)
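The scrambler/descrambler pair is self-synchronizing, and a round trip recovers the data exactly. A Python sketch with taps at delays 14 and 17 as in the equations (an all-zero initial register state is assumed):

```python
def scramble(bits):
    """ds(nT) = d(nT) XOR ds((n-14)T) XOR ds((n-17)T) -- feedback form."""
    state, out = [0] * 17, []          # past outputs, newest first
    for d in bits:
        ds = d ^ state[13] ^ state[16] # taps at delays 14 and 17
        out.append(ds)
        state = [ds] + state[:-1]
    return out

def descramble(bits):
    """d(nT) = ds(nT) XOR ds((n-14)T) XOR ds((n-17)T) -- feedforward form."""
    state, out = [0] * 17, []          # past received bits, newest first
    for ds in bits:
        out.append(ds ^ state[13] ^ state[16])
        state = [ds] + state[:-1]
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 4
assert descramble(scramble(data)) == data   # exact round trip
```

Because the descrambler's delayed terms are the received scrambled bits themselves, the two delayed XOR terms cancel those added by the scrambler, so the original data is recovered without any shared state.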
Applications
An application of DPSK is the recovery of the data phases without the need for the exact phase of the carrier. In other words, noncoherent detection can be carried out.
MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C or Fortran.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for highproductivity research, development, and analysis.
MATLAB features a family of addon applicationspecific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (Mfiles) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
Typical uses of MATLAB
1. Math and computation
2. Algorithm development
3. Data acquisition
4. Data analysis ,exploration and visualization
5. Scientific and engineering graphics
The main features of MATLAB
1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra
2. A large collection of predefined mathematical functions and the ability to define one's own functions.
3. Two- and three-dimensional graphics for plotting and displaying data
4. A complete online help system
5. A powerful, matrix- or vector-oriented high-level programming language for individual applications.
6. Toolboxes available for solving advanced problems in several application areas
The MATLAB System
The MATLAB system consists of five main parts:
Development Environment.
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.
The MATLAB Mathematical Function Library.
This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
The MATLAB Language.
This is a highlevel matrix/array language with control flow statements, functions, data structures, input/output, and objectoriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throwaway programs, and "programming in the large" to create large and complex application programs.
Graphics.
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes highlevel functions for twodimensional and threedimensional data visualization, image processing, animation, and presentation graphics. It also includes lowlevel functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.
The MATLAB Application Program Interface (API).
This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MATfiles.
Starting MATLAB
On Windows platforms, start MATLAB by double-clicking the MATLAB shortcut icon on your Windows desktop. On UNIX platforms, start MATLAB by typing matlab at the operating system prompt. You can customize MATLAB startup. For example, you can change the directory in which MATLAB starts, or automatically execute MATLAB statements in a script file named startup.m.
MATLAB Desktop
When you start MATLAB, the MATLAB desktop appears, containing tools (graphical user interfaces) for managing files, variables, and applications associated with MATLAB. The following illustration shows the default desktop. You can customize the arrangement of tools and documents to suit your needs.
Characteristics of MATLAB
§ Programming language based (principally) on matrices.
o Slow (compared with Fortran or C) because it is an interpreted language, i.e. not precompiled. Avoid for loops; instead use the vector form (see the section on vector techniques below) whenever possible.
o Automatic memory management, i.e., you don't have to declare arrays in advance.
o Intuitive, easy to use.
o Compact (array handling is fortran90like).
o Shorter program development time than traditional programming languages such as Fortran and C.
o Can be converted into C code via MATLAB compiler better efficiency.
§ Many applicationspecific toolboxes available.
§ Coupled with Maple for symbolic computations.
§ On sharedmemory parallel computers such as the SGI Origin2000, certain operations processed in parallel autonomously  when computation load warrants.
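The advice above about avoiding for loops applies to any interpreted array language. As an illustration, here is a small sketch in Python/NumPy (used here only because it runs standalone; NumPy's whole-array expressions play the same role as MATLAB's vector form, and the timings are machine-dependent):

```python
import time
import numpy as np

n = 200_000
x = np.arange(n, dtype=np.float64)

# Loop form: interpreter overhead on every element (the pattern to avoid).
t0 = time.perf_counter()
y_loop = np.empty(n)
for i in range(n):
    y_loop[i] = 2.0 * x[i] + 1.0
t_loop = time.perf_counter() - t0

# Vector form: one whole-array expression, executed in compiled code.
t0 = time.perf_counter()
y_vec = 2.0 * x + 1.0
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.4f}s  vector: {t_vec:.4f}s")
```

Both forms compute the same result; only the per-element interpreter dispatch differs, which is why the vector form is typically orders of magnitude faster.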
Data Classes:
Although we work with integer coordinates, the values of the pixels themselves are not restricted to integers in MATLAB. The table below lists the various data classes supported by MATLAB and the IPT for representing pixel values. The first eight entries are referred to as numeric data classes, the ninth is the char class, and the last is the logical data class.
All numeric computations in MATLAB are done using double quantities, so double is also a frequently encountered data class in image-processing applications. Class uint8 is also encountered frequently, especially when reading data from storage devices, since 8-bit images are the most common representation found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 are the primary data classes on which we focus; many IPT functions, however, support all the data classes listed in the table. Class double requires 8 bytes to represent a number, uint8 and int8 require 1 byte each, and uint16 and int16 require 2 bytes each.
Name     Description
double   Double-precision floating-point numbers in the approximate range ±10^308 (8 bytes per element).
uint8    Unsigned 8-bit integers in the range [0, 255] (1 byte per element).
uint16   Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).
uint32   Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).
int8     Signed 8-bit integers in the range [-128, 127] (1 byte per element).
int16    Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).
int32    Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).
single   Single-precision floating-point numbers in the approximate range ±10^38 (4 bytes per element).
char     Characters (2 bytes per element).
logical  Values are 0 or 1 (1 byte per element).
Classes uint32, int32, and single require 4 bytes each. The char data class holds characters in Unicode representation; a character string is merely a 1×n array of characters. A logical array contains only the values 0 and 1, with each element stored in 1 byte; logical arrays are created using the function logical or by using relational operators.
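The integer ranges in the table can be checked mechanically. Here is a Python/NumPy sketch (NumPy's fixed-width integer types correspond to MATLAB's uint8, int16, and so on, and iinfo reports each type's range):

```python
import numpy as np

# Print the range and storage size of each fixed-width integer type,
# mirroring the data-class table above.
for t in (np.uint8, np.uint16, np.uint32, np.int8, np.int16, np.int32):
    info = np.iinfo(t)
    print(f"{np.dtype(t).name:6s} [{info.min}, {info.max}] "
          f"({np.dtype(t).itemsize} byte(s) per element)")

# A logical-style array holds only 0/1 values, one byte per element,
# and is typically produced by a relational operator.
mask = np.array([3, 0, -2]) != 0
print(mask.dtype.name, mask.itemsize)
```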
MATLAB Simulation
%Transmitter%
%Parameters%
clc;
close all;
clear all;
nSymbols = 1000;
input = randi([0 1],1,nSymbols); % input bits in NRZ unipolar form (0s & 1s); randi replaces the obsolete randint (note: the name input shadows MATLAB's built-in input function)
symbol_rate = 500;
fs = 8000;
noofsamples = fs/symbol_rate;    % 8000/500 = 16 samples per symbol
Si = [];
Sq = [];
phi = []; Mi = []; Mq = [];
%Inphase and Quadrature Components%
for k=1:2:length(input),
Si = [Si input(k)]; %inphase components
end
for k=2:2:length(input),
Sq = [Sq input(k)]; %quadrature components
end
plot(Si,Sq,'x'), axis([-3 3 -3 3]);
I & Q Mapping
%Mapping%
flag = input(k); % mapping selector: 0 -> differential pi/4 offsets, otherwise absolute phases (as written, this is simply the last input bit)
phase = zeros(1,length(input)/2);
if(flag==0)
for k=1:length(input)/2,
if ( Si(k)== 1 && Sq(k)== 1)
phi = pi/4;
elseif ( Si(k)== 0 && Sq(k)== 1)
phi = 3*pi/4;
elseif ( Si(k)== 0 && Sq(k)== 0)
phi = -3*pi/4;
elseif (Si(k)== 1 && Sq(k)== 0)
phi = -pi/4;
end
phase(k+1) = phase(k) + phi;
end
else
for k=1:length(input)/2,
if ( Si(k)== 1 && Sq(k)== 1)
phi = 0;
elseif ( Si(k)== 0 && Sq(k)== 1)
phi = pi/2;
elseif ( Si(k)== 0 && Sq(k)== 0)
phi = pi;
elseif (Si(k)== 1 && Sq(k)== 0)
phi = 3*pi/2;
end
phase(k+1) = phase(k) + phi;
end
end
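The absolute-phase branch of the mapping above assigns (1,1)→0, (0,1)→π/2, (0,0)→π, and (1,0)→3π/2. The same lookup can be sketched standalone in Python (an illustration of the mapping only, not a translation of the full script):

```python
import math

# Bit pair (Si, Sq) -> absolute carrier phase, as in the 'else' branch above.
PHASE = {
    (1, 1): 0.0,
    (0, 1): math.pi / 2,
    (0, 0): math.pi,
    (1, 0): 3 * math.pi / 2,
}

def qpsk_point(si, sq):
    """Constellation point cos(phi) + j*sin(phi) for one bit pair."""
    phi = PHASE[(si, sq)]
    return complex(math.cos(phi), math.sin(phi))

# The four bit pairs land on four distinct points of the unit circle.
for pair in PHASE:
    print(pair, qpsk_point(*pair))
```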
%Carrier Generation%
for k=2:length(phase),
Mi(k) = cos(phase(k));
Mq(k) = sin(phase(k));
end
%plotting%
figure; subplot(211), plot(Mi); title('Inphase in Time');
subplot(212), plot(abs(fft(Mi,1020))); title('Inphase in Frequency');
figure; subplot(211), plot(Mq); title('Quadrature in Time');
subplot(212), plot(abs(fft(Mq,1020))); title('Quadrature in Frequency');
In phase and Quadrature Components
%Upsampling%
upsampled_i = upsample(Si,noofsamples);
upsampled_q = upsample(Sq,noofsamples);
%Pulse Shaping Filter%
Pulseshaping = rcosfir(.35,1,noofsamples,1000,'sqrt'); % Root raised cosine filter
plot(-1:1/noofsamples:1,Pulseshaping); % 33 taps span -1 to +1 symbol periods
title('Root Raised Cosine Filter');
Root Raised Cosine Filter
%Setting Coefficients%
Pulseshaping = [Pulseshaping(1:16) Pulseshaping(18:33)]; % remove the centre tap (tap 17), leaving 32 coefficients
pulseshaped_i = conv(Pulseshaping,upsampled_i); % Pulse shaping
pulseshaped_i = pulseshaped_i(17:end-16); % Removing extra # of bits
pulseshaped_q = conv(Pulseshaping,upsampled_q); % Pulse shaping
pulseshaped_q = pulseshaped_q(17:end-16); % Removing extra # of bits
%Modulation%
t=0:1/fs:(1/fs)*7999;
c_1 = cos(2*pi*1000*t);
c_2 = sin(2*pi*1000*t);
final_i= pulseshaped_i.*c_1(1:length(pulseshaped_i));
final_q= pulseshaped_q.*c_2(1:length(pulseshaped_q));
mod_signal = final_i - final_q; %modulated signal (I*cos - Q*sin)
%Plotting%
plot(mod_signal);title('Modulated Signal');
Modulated signal
received_signal=mod_signal;
%Receiver%
demodulated_i = received_signal.*c_1(1:length(received_signal))*3;
demodulated_q = received_signal.*c_2(1:length(received_signal))*3;
x1= conv(Pulseshaping,demodulated_i);
y1= conv(Pulseshaping,demodulated_q);
x2 = x1(17:end-16); % Removing extra # of bits
y2 = y1(17:end-16); % Removing extra # of bits
%Decimation %
x = x2(1:noofsamples:end); %decimating by 16
y = y2(1:noofsamples:end); %decimating by 16
%Decision%
d1=round(x);
d2=round(y);
for i=1:length(d1)
if(d1(i)>0)
d1(i) = 1;
else
d1(i)=0;
end
end
for i=1:length(d2)
if(d2(i) >0)
d2(i) = 1;
else
d2(i)=0;
end
end
data = [d1; d2];
recovered_data = data(:); % re-interleave I and Q decisions into one bit stream
%Comparison of Received and Transmitted Data%
corr_bits=0;
incor_bits=0;
for i=1:length(input)
if(recovered_data(i)==input(i))
corr_bits=corr_bits+1;
else
incor_bits=incor_bits+1;
end
end
corr_bits
incor_bits
Transmitted signal
Received Signal
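Stripped of pulse shaping, carrier mixing, and noise (simplifications made here for brevity), the transmit-receive chain above reduces to a few lines. A Python/NumPy baseband sketch, which recovers every bit over a noiseless channel:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 1000)       # transmitted bit stream (0s & 1s)
si, sq = bits[0::2], bits[1::2]       # even-indexed bits -> I, odd -> Q

# Map 0/1 to -1/+1 and form complex baseband QPSK symbols.
symbols = (2 * si - 1) + 1j * (2 * sq - 1)

# Noiseless channel: the receiver sees the symbols unchanged.
received = symbols

# Hard decision: the signs of I and Q recover the original bits.
d_i = (received.real > 0).astype(int)
d_q = (received.imag > 0).astype(int)

recovered = np.empty_like(bits)
recovered[0::2], recovered[1::2] = d_i, d_q

errors = int(np.sum(recovered != bits))
print("bit errors:", errors)
```

With pulse shaping and a real carrier added back, the decision stage is unchanged; only the filtering and decimation steps shown in the MATLAB script precede it.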
MATLAB
Table of Contents
1) Introduction
2) The MATLAB System
3) Development Environment
4) The MATLAB Mathematical Function Library
5) The MATLAB Language
6) Graphics
7) The MATLAB Application Program Interface (API)
8) Programming
9) Data Types
10) Basic Program Components
11) M-File Programming
12) Types of Functions
13) Data Import and Export
14) Error Handling
15) Classes and Objects
16) Scheduling Program Execution with Timers
INTRODUCTION:
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:
* Math and computation
* Algorithm development
* Data acquisition
* Modeling, simulation, and prototyping
* Data analysis, exploration, and visualization
* Scientific and engineering graphics
* Application development, including graphical user interface building
MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C.
The name MATLAB stands for matrix laboratory. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.
MATLAB features a family of add-on application-specific solutions called toolboxes. Toolboxes allow you to learn and apply specialized technology. They are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
The MATLAB System
The MATLAB system consists of five main parts:
Development Environment
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.
The MATLAB Mathematical Function Library
This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
The MATLAB Language
This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick throwaway programs, and "programming in the large" to create large and complex application programs.
Graphics
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as for annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces for your MATLAB applications.
The MATLAB Application Program Interface (API)
This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.
Programming
MATLAB is a high-level language that includes matrix-based data structures, its own internal data types, an extensive catalog of functions, an environment in which to develop your own functions and scripts, the ability to import and export many types of data files, object-oriented programming capabilities, and interfaces to external technologies such as COM, Java, and programs written in C and Fortran.
Data Types
It presents the types of data used in MATLAB: numeric, logical, characters, dates, structures, cell arrays, function handles, and object-oriented classes.
Basic Program Components
It presents the principal building blocks used in MATLAB programming: variables, keywords, special values, operators, expressions, regular expressions, comma-separated lists, control statements, symbols, and the extensive set of functions provided with MATLAB.
M-File Programming
It describes the overall process of developing programs in MATLAB. It shows you how to work with M-files to create scripts and functions, describes the various types of functions you can create, and explains how to call functions, handle argument data, and use function handles.
Types of Functions
It describes the various types of functions you can work with in MATLAB. These include primary functions, subfunctions, nested functions, anonymous functions, overloaded functions, and private functions.
Data Import and Export
It explains how to import and export data between MATLAB and the many different types of data files supported by MATLAB (including text, spreadsheet, graphics, and audio/video files, and files formatted for scientific data).
Error Handling
It describes how to put error checking into your programs, and how to identify, handle, and possibly recover from errors that occur. It also explains how to use message identifiers to better identify the source of an error, and how to selectively display or ignore warning messages.
Classes and Objects
It presents the MATLAB objectoriented programming capabilities. Classes and objects enable you to add new data types and new operations to MATLAB.
Scheduling Program Execution with Timers
It describes how to use the MATLAB Timer object to schedule program execution.