A Typical RFID System Computer Science Essay



The global technology market continually introduces new technologies to improve everyday life and make businesses more efficient. RFID (Radio Frequency Identification) is a wireless communication technology that uses radio waves to identify and track objects within a given frequency range. The frequency bands and emission power available to RFID systems are limited by governmental regulations [FCC 2001]. The choice of frequency range depends on the business requirements and on the distances between the installed RFID components (host computer, reader, antenna and the tags attached to objects), because different frequencies have different ranges.

Figure 1. A typical RFID system [1]

The RFID components communicate with each other, and the tag data is collected by the reader. RFID tags, which are placed on objects, come in different sizes, shapes and categories; depending on functionality, business requirements and cost, they fall into two classes: active tags and passive tags. Both types communicate with the RFID reader's antenna and supply data that is stored by the reader; the information on both kinds of tag can be read, written and even altered by the reader. An active tag contains a radio transceiver and a battery that powers the transceiver, which extends the frequency range but also makes the tag more expensive; a passive tag may be battery-assisted or have no battery at all. The RFID reader includes an antenna that receives data over radio frequencies from the transponder, which transmits the tag's ID, response and timing information. The collected raw data is passed to the middleware system, which filters it and extracts the useful information for the enterprise application, where it is presented in reports for business logic.

Area of Research


The starting topic for the research was a handheld dictionary device.

The main aim of the "Hand Held Dictionary" project was to recognise characters from a dedicated scan-reader device [1]. The role of the scan reader was to convert those characters into a word and then search for that word in the dictionary database. On successfully finding the word, its meanings were converted to audio so that the end user could listen to them over headphones or speakers.

The idea for this project was self-contained, so in the initial stage the academic literature on the topic was only available in bits and pieces, because no one had proposed the whole system before. To develop the topic, the system was divided into components (such as the scan reader, meaningful character recognition and conversion into binary form) and research began on how each component works. During this research, a website was found showing that such a system had already been developed in 2001 [2].

The following bibliographic references from the digital library were used to research the starting topic.

A Low Cost Device for the Real-Time On-Line Entry of Handprinted Characters. David D. Thornburg, Innovision, P.O. Box 1317, Los Altos, CA 94022

An Experimental Laboratory for Pattern Recognition and Signal Processing. N.M. Herbst and P.M. Will, IBM Thomas J. Watson Research Center

An Efficient Text Input Method for Pen-based Computers. Toshiyuki Masui, Sony Computer Science Laboratory Inc., 3-14-13 Higashi-Gotanda, Shinagawa, Tokyo 141-0022, Japan. +81-3-5448-4380, [email protected]

Since the starting topic had already been realised in 2001, there was little further work to be done: the topic was narrow, and the limited research it allowed had already been completed. The next topic considered was accent recognition, but it proved to be out of scope, as resources were limited and broad specialist knowledge was needed.

In a short space of time technology has progressed and improved to the point where the majority of people rely upon it. The topic finally selected, RFID (Radio Frequency Identification), concerns a current technology: RFID is expected to erode the market demand for barcode systems, which involve a great deal of manual work and therefore cost time, money and labour. Research on RFID is still ongoing, particularly on the middleware system and on data quality, i.e. preventing duplicate data [3].

Reasons for Choosing this Research Area

This topic is of particular interest to me, as RFID is a relatively new technological development. Currently, barcodes are used extensively in logistics. They have proved useful to an extent but need manual scanning. Using a laser gun to scan items one barcode at a time is time-consuming and inefficient, and the amount of data a barcode can store is limited. RFID, on the other hand, promises convenience of use and the ability to hold significantly more data, which suggests several potential uses. An RFID reader can read numerous tags at the same time, whereas barcodes require scanning items one at a time, a laborious and time-consuming procedure for busy businesses. RFID tags operate in a similar way to Bluetooth devices, allowing information to be detected from a distance: an entire shopping basket can be read at once with one RFID reader. RFID thus enables far speedier checkouts than a barcode system could ever manage, as well as accurate stock checking and location in the warehouse, where the barcode system is prone to error.

With the advance of technology, Radio Frequency Identification (RFID) is emerging as a viable technology for managing logistics. RFID tagging promises to revolutionise the logistics industry: once a product has been tagged, it becomes possible to track it from manufacture to sale.

Despite these benefits, RFID has not yet been fully implemented or developed. The barcode system is still prevalent, as businesses are reluctant to adopt RFID until it shows enough technical progress to be viable in the business arena. This research will show that there is room for improvement in its operation, as Wal-Mart is currently experiencing: a number of issues related to data quality and data synchronisation are holding Wal-Mart back from adopting the technology.

Review of Current State of Proposed Area

Review of the key paper

The designated key paper suggests different methods for removing noise and for filtering duplicate data through duplicate elimination (merging) in the RFID middleware system.

Authors: Yijian Bai, Fusheng Wang, Peiya Liu

Title: Efficiently Filtering RFID Data Streams

Year of publication: 2006

Affiliations: UCLA and Siemens Corporate Research

The objective of the article is to propose noise-removal and duplicate-elimination (merging) algorithms that handle large amounts of RFID data efficiently.

The suggested denoising algorithm works as follows: for each incoming reading of a tag value R, a full scan of the preceding sliding time window of a given size is performed. If R appears more than a threshold number of times within the window, the reading is not noise, and every occurrence of R in the window is output. To ensure that each reading of R is output only once, an output state is kept for each reading in the window buffer and is set to true on first output [4].
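The windowed denoising idea can be sketched as follows. This is an illustrative simplification, not the paper's implementation: function and parameter names (`denoise`, `window_size`, `min_count`) are assumptions, readings are modelled as (timestamp, tag ID) pairs, and this variant emits only the reading that crosses the threshold rather than every buffered occurrence.

```python
from collections import deque

def denoise(readings, window_size, min_count):
    """Sliding-window denoising sketch: a tag is reported only if it is
    read at least `min_count` times within the current time window."""
    window = deque()   # (timestamp, tag_id) pairs inside the current window
    emitted = set()    # tags already output (the output-once state)
    for ts, tag in readings:
        # Expire readings that have slid out of the time window.
        while window and ts - window[0][0] > window_size:
            _, old_tag = window.popleft()
            # If a tag has no readings left in the window, allow re-output.
            if all(t != old_tag for _, t in window):
                emitted.discard(old_tag)
        window.append((ts, tag))
        # A reading is noise unless its tag occurs min_count times in window.
        if sum(1 for _, t in window if t == tag) >= min_count and tag not in emitted:
            emitted.add(tag)
            yield ts, tag
```

With `min_count=2`, a tag read only once in a window (e.g. a stray reflection) is suppressed, while a tag confirmed by a second reading is reported exactly once.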

Once noise readings are eliminated, duplicates of the remaining tags must be recognised and only the earliest reading retained. The algorithm takes one parameter, max-distance: if a reading lies within max-distance in time of the previous reading with the same key, it is a duplicate. The baseline-merge algorithm performs duplicate elimination by keeping a sliding window of size max-distance: if an incoming reading falls in the same window as an earlier reading with the same key, it is a duplicate; otherwise it is a new reading.
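The baseline-merge rule can be sketched like this. Again this is a hedged simplification: names (`merge_duplicates`, `max_distance`) are illustrative, and readings are assumed to arrive as (timestamp, tag ID) pairs in time order.

```python
def merge_duplicates(readings, max_distance):
    """Baseline-merge sketch: drop a reading as a duplicate if the same
    tag ID was seen within `max_distance` time units; otherwise keep it."""
    last_seen = {}  # tag_id -> timestamp of the most recent occurrence
    for ts, tag in readings:
        prev = last_seen.get(tag)
        last_seen[tag] = ts          # remember the latest occurrence
        if prev is not None and ts - prev <= max_distance:
            continue                 # within the window: a duplicate, drop it
        yield ts, tag                # earliest reading of a new "run" is kept
```

Note that the last-seen timestamp is updated even for dropped readings, mirroring the sliding-window semantics: any recent reading of the same key makes the next one a duplicate.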

The experimental results show that the denoising algorithm is efficient and produces correct results, but the duplicate elimination (merging) is not efficient, and the output fails to account for unknown data in the form of false negative and false positive readings.

The article tries to deal with data duplication through several algorithms rather than a single one, which affects the efficiency of the system. Its approach is to flag duplicate tags, retrieve only the flagged data, and then compare just the first reading cycle with the second reading cycle using queries.

One major issue in RFID technology is the collection of large amounts of raw data: the observed read rate (i.e. the percentage of tags in a reader's vicinity that are actually reported) in real-world RFID deployments is often only in the 60-70% range [2]; in other words, over 30% of the tag readings are routinely dropped [5]. Erroneous data takes two forms, negative and positive readings: negative readings occur when multiple tags respond simultaneously and their signals interfere, while positive readings occur when tags outside the intended read range are detected. The time cycle for obtaining responses from the tags also produces duplicates and false readings in the raw data when the reader is handling large volumes. The RFID middleware system filters this into clean, useful data, which is passed to the enterprise application and presented in human-readable form.

RFID research shows that the technology has not been widely deployed, because of problems in the middleware system arising from several levels of difficulty in data control. The middleware handles these problems at the application layer with different algorithms, each addressing one of the issues that prevent the technology from working accurately. The middleware system contains three layers that help to process the raw data for these algorithms, which are still struggling to extract data and make it accurate, efficient and reliable for the enterprise application on the host computer.

During data collection from the tags, a sliding window is used whose parameters set the window size dynamically as data arrives, and whose contents are cleared as time moves on; the problem is that data is missed when the sliding window is too small ("A sliding window is a window with certain size that moves with time" [4]). It has therefore been proposed to grow the window size dynamically as time passes. The window stores three parameters per entry in the tag list held by the reader.

Research continues on RFID data streams and on data cleaning in the middleware, using different approaches and algorithms to manage the large volume of data and filter it efficiently. One RFID system uses the Reva Tap system [6], a device for controlling and filtering data streams that is itself still a subject of research. Other published papers use different approaches and mechanisms, such as probabilistic, eager and deferred processing, together with query languages such as SQL/TS and CQL (Continuous Query Language) [7]. "Continuous queries were introduced explicitly for the first time in Tapestry [TGNO92] with a SQL-based language called TQL. (A similar language is considered in [Bar99].)" [7]

CQL manages complex queries over data streams, filtering the data while using the relational model to manage the stream.
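To make the continuous-query idea concrete, the following is a rough Python analogue of a CQL-style windowed query such as "report each distinct tag ID once per sliding time window". This is illustrative only: real CQL engines compile declarative queries into stream operators, and the function name and parameters here are assumptions.

```python
from collections import Counter, deque

def windowed_distinct(stream, window_size):
    """Python analogue of a CQL sliding-window DISTINCT over a tag stream:
    yield a tag the first time it appears within the current time window."""
    window = deque()     # (timestamp, tag_id) pairs in the current window
    counts = Counter()   # occurrences of each tag inside the window
    for ts, tag in stream:
        # Slide the window forward, expiring old readings.
        while window and ts - window[0][0] > window_size:
            _, old = window.popleft()
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        first_in_window = counts[tag] == 0
        window.append((ts, tag))
        counts[tag] += 1
        if first_in_window:
            yield ts, tag
```

The declarative query states *what* to keep (one occurrence per window); the operator above shows the kind of incremental state a stream engine maintains to evaluate it.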

The experimental results in the article [4] show that the designed algorithm is neither efficient nor fully accurate: the extra algorithm used reduces the efficiency of data processing, and the duplication algorithm compares only the first and second reading cycles, without considering later readings (the third, fourth and so on). Moreover, wireless readings can be unreliable, so readings may be missing, and the experiments did not cover the broader aspects of data duplication.



Expected Benefits

Increased quality of the transferred data.

Increased efficiency.

Reduced data-transfer times, leading to better efficiency.

Superior quality of service.

Reduced losses.

Improved technology that enables future development.

Timely visibility of the various stages of activity.

More intelligent strategic decisions.


The widespread implementation of RFID technology is very limited because of problems in collecting large amounts of data and in extracting clean data from the RFID middleware system algorithmically. The data-filtering algorithm needs to be updated so that it can control the extraction of clean data.

The solution put forward in this proposal is an algorithm that filters duplicate records from the raw data, solving the data-duplication problem while increasing efficiency.

Consider Figure 1. The transponder stores the object's information with a unique tag ID and transfers the tag information to the reader. Communication between the antenna and the tags takes place via radio frequencies that activate the tags, while the clock measures the read rate of this activation and of the data transfer. The transponder sends this raw information to the RFID middleware software, which stores it internally in a tag list. The middleware extracts the useful information and transfers it to the host computer, where the end application mines the data and presents it for business logic.
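One entry of the tag list described above can be modelled as a small record. The field names here are illustrative assumptions based on the parameters discussed in this proposal (reader ID, tag ID, response time, and the proposed location parameter), not a schema from any cited system.

```python
from dataclasses import dataclass

@dataclass
class TagReading:
    """One entry in the reader's internal tag list, as described above."""
    reader_id: str          # which reader captured this reading
    tag_id: str             # unique ID stored on the transponder
    response_time: float    # read-cycle timestamp from the reader's clock
    location: str = ""      # location parameter (the extension this
                            # proposal suggests adding to the tag list)
```

The middleware would receive a stream of such records, filter it, and forward the cleaned records to the enterprise application on the host computer.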


According to recent research by Venture Development Corporation, adopters are worried about data quality and data synchronisation, in particular about reducing duplicate data caused by tags going missing or being corrupted during the data-transfer process.

From this research I expect to produce an algorithm that controls this data quality and presents the data in human-readable form.

3. Statement of Research Question and Research Objectives

The aim of this project is to research the area of data duplication using an algorithmic technique, and to identify the problems associated with the different approaches and mechanisms proposed by others.

The specific intention is to improve data quality by filtering out duplicate data, thereby reducing duplication and increasing the usefulness of the data.

The various mechanisms and approaches in use rely on different types of algorithm, and these have proven difficult to apply to the recent problems of data streams and duplication, leading to widespread debate in which contrasting solutions have been suggested. The research carried out here concerns data duplication and the efficiency of its handling, which can be addressed through the methods and approaches used to construct the algorithm.

Objectives:

Use a location-parameter technique to track duplicates in the collected list of data.

Define criteria for controlling duplicate data.

Develop an algorithm that filters the data by comparing duplicates with the help of this parameter.

It has been suggested in one article [4] that, to control missing tags in the data streams, an algorithm should dynamically size the window that stores the tuples as rows as time moves. The same article [4] deals with data duplication through a denoising algorithm: duplicate tags are flagged, only the flagged data is retrieved, and then just the first reading cycle is compared with the second using queries.

4. Explanation of how Findings will be Evaluated/Validated

A successful evaluation of duplicate-data elimination depends on two parallel requirements: first, performance efficiency in handling the volume of raw data, and second, correct filtering of the duplicate data. A filtering solution that ignores efficiency will take a long time to filter the data, which is not acceptable from a business point of view.

The process should run a single algorithm, which would save time and substantially increase efficiency, instead of the approach suggested in the article [4], which uses two algorithms: one for denoising the data and a second for filtering it.

It could be more productive to insert a location parameter into the tag list, so that the list contains scanned raw data with four parameters (reader ID, tag ID, response time, location). This parameter information is stored within the reader using the window size and is then forwarded to the middleware system, which filters the data and forwards it to the host computer. The benefit of the location parameter is that all duplicate readings stored in the window can be detected, no matter how many are transferred or in what order.

The algorithm working in the middleware system then filters the data by location, using a query to detect how many duplicate tags are present at one location; as each location is filtered, its data is transferred to the enterprise application on the host computer.
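A minimal sketch of the proposed location-based filter follows. All names are assumptions for illustration: readings are modelled as (reader ID, tag ID, response time, location) tuples matching the four parameters above, duplicates are resolved by keeping the earliest response time per tag at each location, and each location's cleaned batch is then ready to hand to the host computer.

```python
from collections import defaultdict

def filter_by_location(tag_list):
    """Group readings by location and keep only the earliest reading of
    each tag per location, so duplicates within a location are removed
    regardless of how many arrive or in what order."""
    by_location = defaultdict(dict)   # location -> {tag_id: earliest reading}
    for reading in tag_list:
        reader_id, tag_id, response_time, location = reading
        best = by_location[location].get(tag_id)
        # Retain only the earliest response time per (location, tag) pair.
        if best is None or response_time < best[2]:
            by_location[location][tag_id] = reading
    # One filtered, time-ordered batch per location for the host computer.
    return {loc: sorted(tags.values(), key=lambda r: r[2])
            for loc, tags in by_location.items()}
```

Because the grouping key is the location, duplicate detection is order-independent, which matches the claimed benefit of the location parameter; a production middleware would process the stream incrementally rather than batching it as this sketch does.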