The new framework takes a holistic approach to locating retail customers and encompasses the following:
1. Positioning/locating technologies integrated with the communications networks (Figure 1)
2. GIS in the different forms and on the different levels (Figure 2)
Positioning/locating technologies integrated with the communications networks
The integration of the positioning/locating technologies with the communications networks would need to combine the following technologies:
- GPS and A-GPS
- Cell tower triangulation
- Cell ID
- WPS (Wi-Fi positioning)
- Browser-based location
- RFID and NFC
GPS
As mentioned previously, GPS was originally developed by the US military in the 1970s. It was only made available for commercial use by the Pentagon in the mid '90s, after lobbying from private enterprises that saw the enormous potential of the technology if made available to the public. Even then, the version made available to the public, SPS (Standard Positioning Service), was not as precise as the version reserved for the military, PPS (Precise Positioning Service).
GPS remains the most popular and most widely deployed positioning technology commercially available today. It's also free to the end user, the only cost being the GPS chip itself, which is included in the price of the device. Most importantly, GPS enables global positioning; every positioning/locating system must start with GPS. However, GPS has several serious drawbacks, the most critical being that it does not work in most indoor or covered environments.
Cell tower triangulation
Cell tower triangulation uses the known speed of radio signals (constantly emitted by the mobile phone on UHF frequencies) to calculate the distance from receivers. In geometric terms, by recording the distance of an object from three distinct points, it's possible to calculate the location of that object (indeed, this principle was the basis for early calculations of the distance between the earth and the moon).
The receivers or antennas can be existing cell towers, or they can be located on tall buildings in urban environments. It takes at least three and preferably four receivers to get a good location fix. In densely populated locations, the accuracy of the fix tends to be high (up to 200 meters or 700 feet precision) because there'll be more cell towers with their signal radii overlapping.
The accuracy of the location fix will increase further where directional antennae are installed on the cell tower, allowing for detection of not just distance but direction of the cell phone signal. Rural locations tend to have low densities of transmitting antennae, and where the cell signal is picked up by one antenna only, the precision will fall dramatically (to several kilometers).
Because cell tower triangulation is a network-based localization technique, it requires an agreement with the mobile operator in order to adopt it within a mobile service.
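The geometry described above can be sketched in a few lines of Python. This is an idealized 2-D trilateration with noise-free distances (real deployments must cope with measurement error, typically by least-squares fitting over more than three receivers); the receiver positions and distances below are invented for illustration.

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Estimate (x, y) from the distances r1..r3 to three known points.

    Subtracting the three circle equations pairwise yields two linear
    equations A*x + B*y = C and D*x + E*y = F, solved by Cramer's rule.
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    A = 2 * (x2 - x1)
    B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2)
    E = 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = A * E - B * D
    if abs(det) < 1e-12:
        raise ValueError("receivers are collinear; no unique fix")
    return (C * E - F * B) / det, (A * F - D * C) / det

# Example: receivers at (0,0), (10,0) and (0,10) km, with distances
# measured to a handset that is actually at (3, 4) km.
x, y = trilaterate((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5)
```

With exact distances the fix recovers the true position; in practice the overlap of three or more signal radii only narrows the estimate to the accuracies quoted above.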
A-GPS
The truth today remains that there's no universally recognized definition of A-GPS, given the multitude of ways in which this technology can be deployed (according to the configuration of chipset manufacturers, local legislation, and operator agreements) and the resulting variation in its effectiveness compared to GPS.
Fundamentally, A-GPS tries to address the key inescapable drawback of GPS technology, namely, that a location fix is impossible in most indoor or covered environments. The basic premise of A-GPS is to assist the embedded GPS chip within the handset in securing either a faster or a more precise location fix in challenging conditions (such as a weak satellite signal, or visibility of only two satellites instead of the three required for a fix).
As we noted in the previous section, a GPS chip constantly scans the sky for orbit
and clock data of the relevant satellites. This results in what's known as the TTFF, or
Time To First Fix, namely, the amount of time required for the GPS receiver to pinpoint
your location. This initial TTFF is often called a cold start, and on SiRF III systems
it can take anywhere from 30 seconds to a couple of minutes to acquire a signal.
When a phone is using A-GPS, the TTFF is much faster. Very often cellular network towers have GPS receivers (or a base station nearby), and those receivers are constantly pulling down ephemeris data from the satellites and computing it. This data is then passed on to the cell phone on request and acts like a cheat sheet, because the satellites relevant to the device's location are already identified. The GPS computations are then handled either by third-party servers or by the handset chipset itself (which downloads the ephemeris data and feeds it into the fix process to shortcut the correlation step, with no further data network activity required). This allows fragmentary GPS data received by the handset (because few satellites are in the line of sight, for example) to be compared with data from the network assistance server, which in turn allows a more precise calculation of the position.
Given that A-GPS is a relatively new development, it's currently available on only a proportion of the installed handset population, though most mobile manufacturers are now deploying it as standard in all their GPS-enabled phones.
For a developer of location-based services, A-GPS is a useful enhancement to underpin applications because it offers a faster location fix and saves battery life. The complications lie in the fact that the implementation of A-GPS can vary by operator and by manufacturer, requiring extended analysis and testing. It should also be noted that A-GPS works by transferring (location) data over the mobile operator network and thus incurs a data transfer charge for the mobile subscriber (whereas GPS itself is free to the end user).
Cell ID
Cell ID has gained significantly in popularity as a positioning method in the last few years. Cell ID positioning is accomplished by using the serving cell tower (the tower that a mobile device is communicating with), or cell, and its known position to find the mobile device's position. The International Telecommunication Union (ITU), the United Nations' intergovernmental fixed and mobile telecommunications regulatory body, assigns each country a Mobile Country Code (MCC), and within each country a Mobile Network Code (MNC) is assigned to each cellular network operator. Each operator is responsible for creating the Location Area Codes (LAC) for its network and assigning a numeric identification to each cell (the Cell ID). Whenever a mobile terminal is connected to the network, it's associated with one of these cells. Therefore, the absolute location of a terminal can be expressed by the four parameters Cell ID, LAC, MNC, and MCC.
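This four-parameter addressing scheme can be modeled directly. In the sketch below, the database contents are hypothetical, since operators do not publish their Cell ID databases; only the structure of the identifier follows the scheme described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CellGlobalId:
    """The four parameters that together identify a cell worldwide."""
    mcc: int      # Mobile Country Code, assigned by the ITU
    mnc: int      # Mobile Network Code, per operator within a country
    lac: int      # Location Area Code, defined by the operator
    cell_id: int  # numeric identifier of the individual cell

# Hypothetical database entry: the numbers and coordinates below are
# illustrative only.
CELL_DB = {CellGlobalId(262, 1, 566, 4461): (52.52, 13.405)}

def locate(cell):
    """Return the (lat, lon) of the serving tower, or None if unknown."""
    return CELL_DB.get(cell)
```

A frozen dataclass is hashable, so the full (MCC, MNC, LAC, Cell ID) tuple can serve directly as a database key.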
The current Cell ID can be used to identify the base transceiver station (BTS) that
the device is communicating with and the location of that BTS. Clearly, the accuracy of
this method depends on the size of the cell, and the method can be quite inaccurate. A
GSM network cell may be anywhere from 2 to 35 kilometers in diameter. The accuracy of
a location fix using a single cell tower is typically in the range of 1 to 2 kilometers.
Other techniques used along with Cell ID can achieve accuracy within 150 meters.
A very prominent user of Cell ID positioning technology on mobile devices is Google. Google's Maps for Mobile service uses the transmission from a single cell tower to provide the cell phone's location. This often leads to a disparity in accuracy between an urban and a rural environment.
Cell ID location detection relies on the ability to map information detected on operator
cells to a database of their precise location. Mobile network operators that own the
cells don't publish or provide access to their Cell ID database for a number of reasons,
among which are privacy concerns, but perhaps more importantly commercial considerations
(operators plan to charge for access to the data).
For mobile platforms other than the iPhone and Android (which embed Google Maps as the mapping component, thus making the use of Google's Cell ID database more or less mandatory), a number of other databases are available. Several commercial enterprises have built up their own Cell ID databases and offer them for use by third parties. A notable example is Navizon, which offers a relatively complete global Cell ID database at a reasonable cost.
Increasingly, demand is growing for open source solutions when it comes to Cell ID, and this has given rise to the OpenCellID movement. OpenCellID is an open source project that began to gain prominence in 2008 and is led by a France-based team. It currently claims to have mapped the location of over 600,000 cells thanks to the crowdsourcing of Cell ID locations from around the world. The data from this open source project is available through a number of public APIs (Application Programming Interfaces).
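The core of such a crowdsourced database can be sketched as follows: handsets that have both a GPS fix and a serving-cell identifier report (cell, lat, lon) observations, and each tower's position is estimated from them. This is a minimal sketch using a plain arithmetic mean; real projects weight and filter observations, and the cell keys are placeholders.

```python
from collections import defaultdict

def estimate_cell_positions(observations):
    """Average the GPS fixes reported for each cell to estimate tower positions.

    observations: iterable of (cell_key, lat, lon), e.g. crowdsourced from
    handsets that see both a GPS fix and the serving cell's identifier.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for cell, lat, lon in observations:
        s = sums[cell]
        s[0] += lat   # accumulate latitude
        s[1] += lon   # accumulate longitude
        s[2] += 1     # count of observations for this cell
    return {cell: (s[0] / s[2], s[1] / s[2]) for cell, s in sums.items()}

reports = [("cell-A", 10.0, 20.0), ("cell-A", 12.0, 22.0), ("cell-B", 0.0, 0.0)]
estimates = estimate_cell_positions(reports)
```

The more observations contributed for a cell, the closer the averaged estimate converges on the true tower position, which is why crowd scale matters for such projects.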
WPS - Wireless positioning systems
A key advantage of WPS, and the reason it is a must-have for many mobile applications, is that it works indoors, where GPS traditionally hasn't been available because GPS positioning requires a line of sight to the satellites.
The Wi-Fi positioning software uses 802.11 radio signals emitted from wireless routers to determine the precise location of any Wi-Fi-enabled device. When a mobile user running the Wi-Fi positioning client pops up in a neighborhood, the software scans for access points. It then calculates the user's location by selecting several signals and comparing them to a reference database. The more densely populated the area is with Wi-Fi signals, the more accurate the software is at locating the device. Effectively, the same principles of cell tower triangulation described earlier are adopted, but applied to wireless router transmission signals instead of operator radio transmissions.
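One simple way to implement the comparison against the reference database is a signal-strength-weighted centroid of the known access-point positions. This is a sketch, not the algorithm any particular vendor uses; the BSSIDs and coordinates are invented.

```python
def wifi_position(scan, ap_db):
    """Signal-strength-weighted centroid over known access points.

    scan:  {bssid: rssi_dbm} results of a Wi-Fi scan
    ap_db: {bssid: (lat, lon)} reference database of AP positions
    """
    wx = wy = wsum = 0.0
    for bssid, rssi in scan.items():
        if bssid not in ap_db:
            continue  # AP not in the reference database
        w = 10 ** (rssi / 10.0)  # dBm -> linear power, used as a weight
        lat, lon = ap_db[bssid]
        wx += w * lat
        wy += w * lon
        wsum += w
    if wsum == 0.0:
        return None  # no known APs in sight
    return wx / wsum, wy / wsum

# Invented BSSIDs and coordinates for illustration.
AP_DB = {"aa:bb:cc:00:00:01": (0.0, 0.0), "aa:bb:cc:00:00:02": (2.0, 2.0)}
```

Stronger signals pull the estimate toward their access point, which is why density of mapped Wi-Fi signals directly improves accuracy.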
Browser-based location
Location-based services are no longer limited to mobile or GPS devices. Web services running in browsers can now access a user's location through IP geocoding or centralized databases. This method is, in some cases, already closely interconnected with the Wi-Fi signal geolocation mentioned earlier.
IP address-based geolocation, or IP geocoding for short, determines a user's geographic latitude and longitude and, by inference, city, region, and nation by comparing the user's public Internet IP address with the known locations of other electronically neighboring servers and routers.
Every device connected to the public Internet is assigned a unique number known as an Internet Protocol (IP) address. IP addresses consist of four numbers separated by periods (a 'dotted quad') and look something like 192.168.0.1. Since these numbers are usually assigned to Internet service providers within region-based blocks, an IP address can often be used to identify the region or country from which a computer is connecting to the Internet, and sometimes the user's general location. At one time, ISPs issued one IP address to each user; these are called static IP addresses. Because Internet usage exploded far beyond what was envisioned in the early design of the IP standard (known as IPv4), and the number of IP addresses is limited, ISPs moved toward allocating IP addresses dynamically out of a pool, using a technology called Dynamic Host Configuration Protocol (DHCP). This dynamic allocation makes physically locating a device by its IP address tougher.
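The block-based lookup can be illustrated with Python's standard ipaddress module. The address blocks and region names below are placeholders (they use the reserved documentation ranges); real mappings come from the regional Internet registries or a commercial geolocation database.

```python
import ipaddress

# Placeholder allocations for illustration only.
IP_BLOCKS = [
    (ipaddress.ip_network("192.0.2.0/24"), "Region A"),
    (ipaddress.ip_network("198.51.100.0/24"), "Region B"),
]

def geolocate_ip(addr):
    """Return the region whose block contains the address, or None."""
    ip = ipaddress.ip_address(addr)
    for net, region in IP_BLOCKS:
        if ip in net:
            return region
    return None
```

Because dynamic allocation reassigns addresses within a block, a lookup like this can only ever resolve to the block's region, not to an individual device.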
This method is being formalized as the Geolocation API Specification by the World Wide Web Consortium. This specification defines an API that provides scripted access to geographical location information associated with the hosting device, in this case the web browser.
Radio-Frequency Identification (RFID)
Radio-frequency identification (RFID) is the use of a wireless non-contact system that uses radio-frequency electromagnetic fields to transfer data from a tag attached to an object, for the purposes of automatic identification and tracking. Some tags require no battery and are powered and read at short ranges via magnetic fields (electromagnetic induction). Others use a local power source and emit radio waves (electromagnetic radiation at radio frequencies). The tag contains electronically stored information which may be read from up to several meters away.
Indoor location obtained from RFID, combined with the previously described location-tracking information, opens up new technological scenarios. One possible scenario for indoor location-based applications is in retail and tracking services. RFID could be used to describe, track, and identify products, objects (a car, computer, or cell phone), and people (instead of using security badges). RFID is more efficient than barcodes, as there's no need for optical visibility, just sufficient range. Compared to Bluetooth, it's simpler, and it works on the reader-antenna principle, which makes it more widely applicable.
Near Field Communication (NFC)
NFC is a short-range, high-frequency wireless communication technology that enables the exchange of data between devices over a distance of about 10 cm. NFC is an upgrade of the existing proximity card standard (RFID) that combines the interface of a smartcard and a reader into a single device. It allows users to seamlessly share content between digital devices, pay bills wirelessly, or even use their cellphone as an electronic travel ticket on existing contactless infrastructure already in use for public transportation.
The significant advantage of NFC over Bluetooth is the shorter set-up time. Instead of performing manual configuration to identify Bluetooth devices, the connection between two NFC devices is established at once (in under a tenth of a second).
Due to its shorter range, NFC provides a higher degree of security than Bluetooth, which makes it suitable for crowded areas where correlating a signal with its transmitting physical device (and by extension, its user) might otherwise prove impossible.
NFC can also work when one of the devices is not powered by a battery (e.g. on a phone that may be turned off, a contactless smart credit card, etc.).
GIS in the different forms and on the different levels
The overview of locating/positioning technologies in the previous section clearly indicates that location information falls into different categories.
Descriptive locations. A descriptive location is always related to natural geographic objects like territories, mountains, and lakes, or to man-made geographic objects like borders, cities, countries, roads, buildings, and rooms within a building. These structures are referenced by descriptions, that is, names, identifiers, or numbers, from which this category of location derives its name. Descriptive location is thus a fundamental concept of everyday life, used by people for arranging appointments, navigating, or delivering goods and written correspondence to well-defined places. Without having organized our real-world environment and infrastructure according to well-defined descriptions of geographical objects, people would meander without orientation.
Spatial locations. Strictly speaking, a spatial location represents a single point in Euclidean space; another, more intuitive term for spatial location is therefore position. It is usually expressed by means of two- or three-dimensional coordinates, given as a vector of numbers, each of which fixes the position in one dimension. In contrast to descriptive locations, positions are not used in everyday life, because people prefer to orient themselves in terms of geographical objects rather than coordinates. However, spatial location is indispensable for professional applications like aviation or shipping, which depend on the availability of highly precise and accurate location information. The concept of spatial locations also provides the basis for surveying and mapping descriptive locations.
Network locations. Network locations refer to the topology of a communications network, for example, the Internet or cellular systems like GSM. These networks are composed of many local networks, sometimes also referred to as subnetworks, connected among each other by a hierarchical topology of trunk circuits and backbones. Service provisioning in these networks assumes that the location of the user's device with respect to the network topology is known. This is achieved by network addresses that contain routing information, in combination with directory services, for mapping numbers, identifiers, or names of another scheme onto the network address. For example, in the Internet a network location refers to a local network which is identified by means of its IP address. In mobile networks, on the other hand, a network location is related to a base station a mobile terminal is currently attached to.
Hence, an important function of the new framework is the integration of (or, better, translation between) the different categories of locations. If positioning delivers a spatial or network location, it must often be mapped onto a descriptive location in order to be interpretable by the respective user. Conversely, a descriptive location might be translated into a spatial location in order to relate it to other locations, for example, as required for distance calculations. In another example, it might be necessary to translate a spatial or descriptive location into a network location to support location-based routing.
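One such translation, descriptive to spatial followed by a spatial operation, can be sketched as follows. The gazetteer is a stand-in for a real geocoding service, and the haversine formula assumes a spherical earth of radius 6371 km.

```python
import math

# Hypothetical gazetteer: descriptive location -> spatial location.
GAZETTEER = {
    "Berlin": (52.52, 13.405),
    "Paris": (48.8566, 2.3522),
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def distance_between(name_a, name_b):
    """Translate descriptive locations to spatial ones, then measure."""
    return haversine_km(GAZETTEER[name_a], GAZETTEER[name_b])
```

The distance calculation is only possible after the translation step, which is exactly why the framework must bridge the location categories.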
A Geographic Information System (GIS) is the key technology for fulfilling these tasks.
Compared with "classical" information systems, GIS has unique features:
- Underlying levels of abstraction
- Map Analysis, Modeling and Visualization
The upper layer in a GIS, the so-called geographic data model, provides a conceptual view of geographic content in terms of units called features. A feature represents a real-world entity, for example, a building, road, river, country, or city.
The lower layer consists of a spatial component, which fixes its location, shape, and topological relationship with other entities, and a description, which provides non-spatial information about the entity, for example, the name of a city or road, or the population of a country. Each feature has a well-defined set of operations, which is tailored to the type of real-world entity it represents.
This relation, real-world entity = spatial component + non-spatial information, enables GIS to be implemented in different forms and at different levels.
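The feature model described above, a spatial component plus a non-spatial description, can be expressed as a minimal data structure. The field names and the sample city below are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A GIS feature: real-world entity = spatial component + description."""
    geometry: list  # spatial component, e.g. a list of (x, y) vertices
    attributes: dict = field(default_factory=dict)  # non-spatial information

def by_attribute(features, key, value):
    """A descriptive query evaluated purely on the non-spatial component."""
    return [f for f in features if f.attributes.get(key) == value]

# Illustrative feature (name and population are examples only).
berlin = Feature(geometry=[(13.405, 52.52)],
                 attributes={"name": "Berlin", "population": 3_700_000})
```

Separating the two components means the same feature can be queried descriptively (by attribute) or spatially (by geometry), which is what allows GIS to operate in different forms and at different levels.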
GIS has often been misunderstood and presented simply as a "tool to produce maps". As GIS continued its evolution, the emphasis shifted from merely showing the map as the answer to a descriptive query to the map as the result of analysis.
A new approach to generating maps can also be applied. Using the previously mentioned two-component concept, it is possible to use the underlying structure to produce a map that is no longer just a simple geographic representation. This "map", or rather "map presentation", of the space can be generated using only the spatial component, only the non-spatial information, or a combination of spatial and non-spatial data.
An important feature of this approach is that spatial information can be represented numerically. This enables spatial statistics, which combine the measurements at a location (numbers) with the location itself (spatial data) to create a new map presentation.
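A classic example of such spatial statistics is inverse-distance-weighted (IDW) interpolation, which combines measurements at known locations with the locations themselves to estimate a value anywhere, the basis of a derived "map presentation". A minimal sketch, with made-up sample readings:

```python
def idw(samples, target, power=2.0):
    """Inverse-distance-weighted interpolation.

    samples: list of ((x, y), value) measurements at known locations
    target:  (x, y) point where a value should be estimated
    """
    num = den = 0.0
    for (x, y), value in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)  # nearer samples weigh more
        num += w * value
        den += w
    return num / den

# Two measurements; any point between them gets a distance-weighted blend.
readings = [((0.0, 0.0), 10.0), ((2.0, 0.0), 20.0)]
```

Evaluating the interpolator over a grid of target points yields a continuous surface from discrete measurements, i.e. a map generated from the combination of spatial and non-spatial data.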
Finally, the visualization capabilities of GIS deserve highlighting. A radical change from historical 2D maps is already happening; rapid progress in multimedia brings new forms. Maps are already "enriched" with hyperlinks, 3D terrain, photos, video, and other data related to each location.