This paper explains the working principles and applications of laser photogrammetry. The term photogrammetry derives from Greek roots: "photos" (light), "gramma" (something drawn or written), and "metron" (measure); literally, measuring with photographs. Thus, photogrammetry can be defined as a three-dimensional coordinate measuring technique that uses photographs as the fundamental medium for measurement. It is the estimation of the geometric and semantic properties of objects based on images or observations from similar sensors; traditional cameras, laser scanners, and smartphones are examples of such sensors. Measurements are made to support location recognition and the interpretation of an image or scene. The technology has been used for decades to obtain information about an object from an image; for instance, autonomous cars need a good understanding of the objects in front of them. The working principle is aerial triangulation, in which photographs are taken from at least two different locations and lines of sight are developed from each camera position to points on the object. This paper mainly addresses the applications of laser photogrammetry, including recent advances of photogrammetry in robot vision; remote sensing applications and how that technology is aligned with photogrammetry; and the application of photogrammetry in computer vision, together with the relationship between the two fields. The robotics application of photogrammetry is a young discipline in which maps of the environment are built and interpretations of the scene are performed. It usually operates with small drones, which give accurate results, updated maps, and terrain models. Another application of photogrammetry is remote sensing. As its name indicates, remote sensing is done remotely, without touching the object or scene. Remote sensors are used to cover large areas and wherever contact-free sensing is desired; for instance, some objects are inaccessible, delicate, or toxic to touch.
Thus, remote sensors can be placed as far away as satellites in orbit, and photogrammetry plays an important role in the interpretation of the resulting scenes or objects. The third application of photogrammetry is in computer vision. The computer vision applications of photogrammetry addressed in this paper include image-based cartography, aerial reconnaissance, and simulated environments.
Photogrammetry means obtaining reliable information about physical objects and their environments by measuring and interpreting photographs. It is the science and art of determining qualitative and quantitative features of objects from the images recorded on photographic emulsions. Laser photogrammetry and 3D laser scanning are different technologies suited to different project purposes: in 3D laser scanning, a laser obtains each individual measurement directly, whereas in photogrammetry, 3D information is extracted from a series of photographs with overlapping pixels. Qualitative observations include the identification of deciduous versus coniferous trees, delineation of geologic landforms, and inventories of existing land use, whereas quantitative observations include size, orientation, and position. Identification and description of objects are performed by observing the shape, tone, and texture of the photographic image. Vertical photographs, exposed with the optical axis vertical or as nearly vertical as possible, are the principal kind of photographs used for mapping; the geometry of a single vertical aerial photograph is illustrated in Figure 1. In a vertical aerial photograph, the exposure station of the photograph is the front nodal point of the camera lens. The nodal points are points in the camera lens system such that any light ray entering the lens and passing through the front nodal point emerges from the rear nodal point travelling parallel to the incident ray. The positive photograph lies on the object side of the camera lens, placed such that the object point, the image point, and the exposure station all lie on the same straight line. The line through the lens nodal points and perpendicular to the image plane intersects the image plane at the principal point.
The distance measured from the rear nodal point to the principal point of the negative, or from the front nodal point to the principal point of the positive, is equal to the focal length f of the camera lens.
The scale of an aerial photograph is the ratio between an image distance on the photograph and the corresponding horizontal ground distance. For a correct photographic scale ratio, the image distance and the ground distance must be measured in parallel horizontal planes. This condition rarely holds, however, because most photographs are tilted and ground surfaces are not flat horizontal planes. As a result, scale varies throughout the format of a photograph, and the photographic scale can be defined only at a point. It is given by Equation 1, which is used to calculate scale on vertical photographs and is exact for truly vertical photographs.
Equation 1: S = f / (H - h)

S = photographic scale at a point
f = camera focal length
H = flying height above datum
h = elevation above datum of the point
Figure 1: Geometry of a single vertical aerial photograph
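As a small numerical check of Equation 1 (the lens, flying height, and elevation values below are illustrative, not taken from this paper):

```python
def photo_scale(f, H, h):
    """Photographic scale at a point (Equation 1): S = f / (H - h)."""
    return f / (H - h)

# Illustrative values: 152 mm lens, flying 1500 m above datum,
# ground point at 300 m elevation (all lengths in metres)
S = photo_scale(f=0.152, H=1500.0, h=300.0)
print(f"Scale = 1:{1 / S:.0f}")
```

Higher terrain (larger h) shrinks the denominator and therefore enlarges the scale, which is why scale varies from point to point across a single photograph.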
For flight planning calculations, approximate scaled distances are adequate for direct measurement of ground distances. Average scale is found by using Equation 2:

Equation 2: S_avg = f / (H - h_avg)

where h_avg is the average ground elevation in the photo. Referring to the vertical photograph shown in Figure 2 below, the approximate horizontal length of the line AB is given by Equation 3:

Equation 3: D = d / S_avg

D = horizontal ground distance
d = photograph image distance
Figure 2: Horizontal ground coordinates from single vertical photograph 
Again, to obtain accurate measurements of horizontal distances and angles, the scale variations caused by elevation differences between points must be considered.
Horizontal ground coordinates are calculated by dividing each photocoordinate by the true photographic scale at the image point. In equation form, the horizontal ground coordinates of any point p are given by Equation 4:

Equation 4: X_p = x_p (H - h_p) / f,  Y_p = y_p (H - h_p) / f

X_p, Y_p = ground coordinates of point p
x_p, y_p = photocoordinates of point p
h_p = ground elevation of point p
Equation 4 uses a coordinate system defined by the photocoordinate axes, having its origin at the photo principal point and the x-axis typically through the midside fiducial in the direction of flight. The local ground coordinate axes are then placed parallel to the photocoordinate axes, with origin at the ground principal point. These equations are exact for truly vertical photographs and are typically used for near-vertical photographs. After the horizontal ground coordinates of points A and B in Figure 2 are computed, the horizontal distance is given by Equation 5:

Equation 5: AB = sqrt((X_B - X_A)^2 + (Y_B - Y_A)^2)
The elevations of the points must be known before the horizontal ground coordinates can be calculated; if a stereo solution is used, the elevations need not be known in advance. The solution given by Equation 5 is not an approximation, because the effect of scale variation caused by unequal elevations is included in the computation of the ground coordinates.
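A minimal sketch of Equations 4 and 5, assuming hypothetical photocoordinates and elevations for points A and B (none of these numbers come from the paper):

```python
from math import hypot

def ground_coords(x_p, y_p, f, H, h_p):
    """Equation 4: divide each photocoordinate by the photo scale f / (H - h_p)."""
    scale = f / (H - h_p)
    return x_p / scale, y_p / scale

# Hypothetical photocoordinates (metres on the film) and ground elevations (metres)
f, H = 0.152, 2000.0
XA, YA = ground_coords(0.040, 0.030, f, H, h_p=500.0)
XB, YB = ground_coords(-0.050, 0.025, f, H, h_p=350.0)

# Equation 5: horizontal ground distance between A and B
AB = hypot(XB - XA, YB - YA)
```

Because each point is divided by the scale at its own elevation, the unequal elevations of A and B are handled exactly, which is why Equation 5 is not an approximation.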
Another characteristic of the perspective geometry recorded by an aerial photograph is relief displacement. Relief displacement is evaluated when analyzing or planning mosaic or orthophoto projects. It can also be exploited in photo interpretation to obtain the heights of vertical objects. This displacement is shown in Figure 3 and is calculated by Equation 6:

Equation 6: d = r h / H

d = image displacement
r = radial distance from the principal point to the image point
h = height of the object above the ground
H = flying height above ground
Since the image displacement of a vertical object can be measured on the photograph, Equation 6 can be solved for the height of the object, which is given by Equation 7:

Equation 7: h = d H / r

When the flying height is specified above datum, H is obtained by subtracting the elevation at the object base above datum from the flying height above datum.
Figure 3: Relief displacement on a vertical photograph
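Equations 6 and 7 can be sketched numerically; the displacement, radial distance, and flying height below are hypothetical values chosen for illustration:

```python
def object_height(d, r, H):
    """Equation 7, rearranged from Equation 6 (d = r*h/H): h = d*H/r."""
    return d * H / r

# Hypothetical tower: image displaced 2.1 mm at a radial distance of 85 mm,
# with the aircraft 1200 m above the base of the tower
h = object_height(d=0.0021, r=0.085, H=1200.0)
print(f"Object height: {h:.1f} m")
```

Note that the displacement is radial from the principal point and grows with r, which is why tall objects near the photo edges lean outward more than those near the centre.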
All photogrammetric procedures are composed of two basic problems: resection and intersection. These problems are solved by analog or analytical solutions. Resection is the process of recovering the exterior orientation of a single photograph from image measurements of ground control points. In a spatial resection, the image rays from full ground control points (horizontal position and elevation known) are made to resect through the lens nodal point (exposure station) to their image positions on the photograph. The resection process restores the spatial position and angular orientation that the photograph had at the instant of exposure. Intersection is the process of photogrammetrically determining the spatial position of ground points by intersecting image rays from two or more photographs. If the interior and exterior orientation parameters of the photographs are known, then conjugate image rays can be projected from the photograph through the lens nodal point (exposure station) to the ground space. Two or more image rays intersecting at a common point determine the horizontal position and elevation of the point. Map positions of points are determined by the intersection principle from correctly oriented photographs. The analog solution is one method of solving these fundamental photogrammetric problems; it uses optical or mechanical instruments to form a scale model of the image rays recorded by the camera. However, the physical constraints of the analog mechanism, the calibration, and unmodeled systematic errors limit the function and accuracy of the solution. The analytical solution instead employs a mathematical model to represent the image rays recorded by the camera. The collinearity condition equations include all interior and exterior orientation parameters required to solve the resection and intersection problems accurately.
Analytical solutions consist of systems of collinearity equations relating measured image photocoordinates to the known and unknown parameters of the photogrammetric problem.
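As a sketch of the analytical approach, the collinearity condition can be evaluated numerically. The omega-phi-kappa rotation convention used here is one common choice, and all numeric values are illustrative assumptions, not values from this paper:

```python
import numpy as np

def collinearity(f, omega, phi, kappa, XL, YL, ZL, X, Y, Z):
    """Project ground point (X, Y, Z) into photo coordinates (x, y) using the
    collinearity condition, given exposure station (XL, YL, ZL) and rotation
    angles omega, phi, kappa (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    # Sequential omega-phi-kappa rotation matrix (one common convention)
    M = np.array([
        [ cp * ck, co * sk + so * sp * ck, so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk, so * ck + co * sp * sk],
        [ sp,      -so * cp,               co * cp               ],
    ])
    u, v, w = M @ np.array([X - XL, Y - YL, Z - ZL])
    return -f * u / w, -f * v / w

# For a truly vertical photograph (all angles zero) the collinearity equations
# reduce to the simple scale relation of Equation 4
x, y = collinearity(0.152, 0.0, 0.0, 0.0,
                    XL=0.0, YL=0.0, ZL=1500.0,
                    X=394.7, Y=296.1, Z=300.0)
```

In resection, equations of this form are linearized and solved for the six unknown exterior orientation parameters; in intersection, they are solved for the unknown ground coordinates.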
Working Principles of Photogrammetry - Aerotriangulation
Aerial triangulation is defined as the process of determining the X, Y, and Z ground coordinates of individual points from measurements on photographs. The aerotriangulation geometry along a strip of photography is illustrated in Figure 6. Photogrammetric control extension requires that a group of photographs be oriented with respect to one another in a continuous strip or block configuration. A pass point is an image point shared by three consecutive photographs (two consecutive stereomodels) along a strip. The exterior orientation of any photograph that does not contain ground control is determined entirely by the orientation of the adjacent photographs. Benefits of aerial triangulation include: minimizing delays and hardships due to adverse weather conditions; not requiring access to much of the property within the project area; and minimizing field surveying in difficult areas such as marshes, extreme slopes, and hazardous rock formations. Aerial triangulation is classified into three categories according to:
- Photogrammetric projection method (analog or analytical).
- Strip or block formation and adjustment method (sequential or simultaneous).
- Basic unit of adjustment (strip, stereomodel, or image rays).
Figure 6: Aerotriangulation geometry
Application of Photogrammetry
Robot vision systems are an important part of modern robots, as they enable the machine to interact with and understand the environment and to take necessary measurements. The instantaneous feedback from the vision system, which is the main requirement of most robots, is achieved by applying very simple vision processing functions and/or through hardware implementation of algorithms. One example of this application is close-range photogrammetry, which is used in time-constrained modes in robotics and target tracking.
Photogrammetry and Remote Sensing Applications
Remote sensing collects information about objects and features from imagery without touching them. It is mainly used to collect and derive 2D data, for instance slope, from all types of imagery. Photogrammetry is associated with the production of topographic mapping, generally from conventional aerial stereo photography. Today photographs are taken with high-precision aerial cameras, and most maps are compiled by stereophotogrammetric methods. The advantage of aerial photogrammetry and topographic mapping is cost effectiveness where ground survey methods cannot economically cover large areas. The resulting maps show land contours, site conditions, and details for large areas. Conventional aerial photography can produce accurate mapping at scales as large as 1:200, an accuracy achieved by employing improved cameras and photogrammetric instrumentation.
After an area has been authorized for mapping, the planning and procurement of photography are the first steps in the mapping process. The necessary calculations are made on a flight design worksheet. The flight planner chooses the best available base map on which to delineate the designed flight lines. The final plan gives the location, length, and spacing of flight strips.
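The flight-design calculations mentioned above can be sketched as follows. These are the standard coverage and overlap relations; the function name and all parameter values are illustrative assumptions, and a real flight design worksheet contains many more entries:

```python
def flight_plan(f, H, photo_size, overlap, sidelap, ground_speed):
    """Basic flight-design quantities (all lengths in metres, speed in m/s)."""
    scale = f / H                 # average photo scale (Equation 2 with h_avg folded into H)
    G = photo_size / scale        # ground coverage of one photo side
    B = G * (1 - overlap)         # air base: ground distance between exposures
    W = G * (1 - sidelap)         # spacing between adjacent flight lines
    interval = B / ground_speed   # time between exposures
    return G, B, W, interval

# 152 mm lens, 1500 m above average terrain, 230 mm film format,
# 60% forward overlap, 30% sidelap, 70 m/s ground speed
G, B, W, t = flight_plan(0.152, 1500.0, 0.23, 0.60, 0.30, 70.0)
```

The forward overlap of about 60% is what makes consecutive photographs form stereomodels, and the sidelap ties adjacent strips together into a block.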
The goals of computer vision are object recognition, navigation, and object modeling. Today's object recognition algorithms function according to the data flow shown in Figure 7 below. Image features are extracted from the image intensity data, such as regions of uniform intensity, boundaries along high image intensity gradients, curves of local intensity maxima or minima (line features), and other image intensity events defined by specific filters (corners) [4,6]. To obtain higher-level measurements, these features are processed further. For instance, part of a step intensity boundary may be approximated by a straight-line segment, and the properties of the resulting line are used to define the boundary segment. Formation of a model for each class is the next step in recognition, in which the algorithms store the feature measurements for a particular object, or a set of object instances for a given class, and then use statistical classification methods to classify a set of features in a new image according to the stored feature measurements [4,6]. The second goal of computer vision is navigation, which aims to provide guidance to an autonomous vehicle so that it maintains accurate following along a defined path. On a road, it is desired to maintain a smooth path with the vehicle staying safely within the defined lanes. In off-road travel, the vehicle must maintain a given route, and navigation is carried out with respect to landmarks. The third goal of computer vision is object modeling, in which a complete and accurate 3D model of an object is recovered. The model can then be used for different applications, such as supporting object recognition and image simulation. In image simulation, the image intensity data is projected onto the surface of the object to provide realistic images of the object from any desired viewpoint.
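As a minimal sketch of the low-level feature extraction step described above, a Sobel gradient filter responds strongly along intensity boundaries; the 8x8 test image below is an illustrative synthetic example:

```python
import numpy as np

def sobel_gradients(img):
    """Gradient magnitude via 3x3 Sobel filters -- the kind of low-level
    boundary-extraction step used at the start of a recognition pipeline."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)  # horizontal gradient
            gy[i, j] = np.sum(patch * ky)  # vertical gradient
    return np.hypot(gx, gy)

# A synthetic image with a vertical step edge: the filter responds only
# in the columns straddling the boundary
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_gradients(img)
```

The resulting high-gradient pixels are then linked and approximated by straight-line segments, the higher-level measurements fed to the classification stage.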
Computer vision methods are also used for defect detection and assessment, as illustrated in Figure 8. The top part of Figure 8 shows the general computer vision pipeline from low-level to high-level processing. Correspondingly, the bottom part of Figure 8 groups specific methods for the detection, classification, and assessment of defects on civil infrastructure into pre-processing methods, feature-based methods, model-based methods, pattern-based methods, and 3D reconstruction. These methods cannot be considered fully separately; rather, they build on top of each other. For example, extracted features are learned to support the classification process in pattern-based methods.
Figure 7: The operational structure of object recognition algorithms.
Figure 8: Categorization of general computer vision methods (top) and specific methods for defect detection, classification and assessment of civil infrastructure (bottom).
Future Innovations and Developments
These days, close-range photogrammetry uses digital cameras whose capabilities yield moderate to very high measurement accuracies. To improve robots' vision capabilities, two alternatives are suggested for future study: (a) hardware implementation of more complex image analysis functions, taking photogrammetric methodology into consideration, or (b) design of a robot system on the "insect-level" intelligence principle, based on a great variety of different, simultaneous, but simple sensor functions. In computer vision, the long-term goal with respect to aerial reconnaissance applications is change detection. In this context, the changes of interest from one observation to the next are significant changes, that is, significant from the human point of view. Thus, in order to detect only significant change, it is essential to be able to characterize human perceptual organization and representation.
When deciding which technology to deploy for a given project purpose, the questions are how large an area must be collected and how accurately it needs to be collected. Photogrammetry can easily acquire large-scale data, can record dynamic scenes, records images that document the measuring process, and can process data automatically, possibly in real time. Its disadvantages are the necessity of a light source, flaws in measurement accuracy, and occlusion and visibility constraints. The performance of photogrammetry can be improved by computer simulations, which allow more automated deployment in places that are difficult to operate in. Its contribution to heritage conservation cannot be overstated, since photogrammetry is particularly well suited to monitoring applications such as construction sites.
- Hamilton Research Group. "Chapter 10: Principles of Photogrammetry." In Physical Principles of Remote Sensing, 3rd ed., Cambridge University Press, New York, 2013, 441 pp.
- Lillesand, Thomas M., et al. Remote Sensing and Image Interpretation. 6th ed., John Wiley & Sons, 2008.
- Gruen, Armin. "Recent Advances of Photogrammetry in Robot Vision." ISPRS Journal of Photogrammetry and Remote Sensing, vol. 47, 1992, pp. 307-323. doi:10.1016/0924-2716(92)90021-Z.
- Linder, Wilfried. Digital Photogrammetry: A Practical Course. Springer Berlin Heidelberg, 2009.
- CICES. "Photogrammetry and Remote Sensing." Chartered Institution of Civil Engineering Surveyors, www.cices.org/.
- Heller, A., and J.L. Mundy. "The Evolution and Testing of a Model-Based Object Recognition System." In Computer Vision and Applications, R. Kasturi and R. Jain, eds., IEEE Computer Society Press, 1991.