Drone Mapper UAS Systems and Photogrammetric Image Processing

The purpose of this document is to provide an overview of camera sensors, UAS systems, and the photogrammetric image processing workflow. It compiles our own material together with information from around the web.

We are currently updating and adding additional information to this page.

Sources:
  1. DroneMapper, Pierre Stoermer, Jon-Pierre Stoermer, https://dronemapper.com
  2. Geospatial Applications of Unmanned Aerial Systems (UAS), Qassim A. Abdullah, Ph.D., CP, PLS, Instructor, MGIS program, The Pennsylvania State University, https://www.e-education.psu.edu/geog597g/node/3

Sensor Characteristics[2][1]

Focal Plane and CCD Array

The focal plane of an aerial camera is the plane where all incident rays coming from the object are focused. With the introduction of digital cameras, the focal plane is occupied by the CCD array, which produces a digital frame.

The sensor, mounted at the focal plane of the camera, is a two-dimensional array of Charge-Coupled Device (CCD) elements, or pixels. When an image is taken, all pixels of the sensor are exposed simultaneously, thus producing a digital frame. Figure 4.3 (from Wolf, page 75) illustrates how a digital camera captures the area on the ground that falls within the lens' field of view (FOV).

The size or format of a digital camera is measured by the pixel array size of its sensor. If a camera has a sensor with 4,000 pixels by 4,000 pixels, it is called a 16-megapixel camera (16,000,000 pixels).
An excellent source of information and specifications for your selected camera can be found here: dpreview.com

Lens Cone

The lens for a mapping camera usually contains compound lenses put together to form the lens cone. The lens cone also contains the shutter and diaphragm.

Compound Lens

The lens is the most important and most expensive part of a mapping aerial camera. Cameras on board the UAS are not of that level of quality, as they were not manufactured to be used as mapping cameras. Mapping cameras are called metric cameras, and are built so that the internal geometry of the camera holds its characteristics despite harsh working conditions and changing operational environments. Lenses for cameras on board the UAS are typically much smaller in size and lighter in weight. They are also much less expensive than standard metric mapping camera lenses.

Shutters

Shutters are used to limit the passage of light to the focal plane. The shutter speed of aerial cameras typically ranges between 1/100 and 1/2000 of a second. The shutter speed setting depends on the UAS platform ground speed and the pixel size on the ground, in order to completely eliminate blur in the images – more discussion will follow. Shutters are of two types: focal-plane shutters and between-the-lens shutters; the latter is the most common shutter used for aerial cameras. More information on different types of shutter mechanisms can be found on Wikipedia's Shutter Photography page.
 

Geometry of Vertical Image[2][1]

In photogrammetry we usually deal with three types of imagery (photography):

  1. Vertical photography: within ±3º of nadir – the most used;
  2. Tilted photography: more than ±3º but less than ±30º off nadir;
  3. Oblique photography: between 35º and 55º off nadir.

We will focus only on the first type, and that is “vertical photography.”

Figure 4.3 illustrates the basic geometry of a vertical photograph or image. By vertical photograph or image, we mean an image taken with a camera that is looking down at the ground. As the aircraft moves, so does the camera, and this makes it impossible to take a truly vertical image. Therefore, the definition of a vertical image allows a few degrees of deviation from the nadir (the line connecting the lens frontal point and the point on the ground that is exactly beneath the aircraft). In summary, a vertical image is an image that is either looking straight down to the ground or is looking a few degrees to either side of the aircraft.


Figure 4.3 Geometry of vertical image
Source: Elements of Photogrammetry with application in GIS, 4th edition, 2014 McGraw Hill

Imagery Overlap

Imagery acquired for photogrammetric processing is flown with two types of overlap: Forward Lap and Side Lap. The following two subsections will describe each type of imagery overlap.

Forward Lap/In-Track Overlap

Forward lap, which is also called end lap or in-track overlap, is a term used in Photogrammetry to describe the amount of image overlap intentionally introduced between successive photos along a flight line (see Figure 4.5). The figure illustrates an aircraft equipped with a mapping aerial camera taking two overlapping photographs. The centers of the two photographs are separated in the air by a distance B, also called the air base. Each photograph in Figure 4.5 covers a distance on the ground equal to G. The overlapping coverage of the two photographs on the ground is what we call forward lap.

This type of overlap is used to form stereo-pairs for stereo viewing and processing. The forward lap is measured as a percentage of the total image coverage. Typical values of forward lap for photogrammetric work are 60-75%. Because of the light weight of a UAS, we expect substantial air dynamics and therefore substantial rotations of the camera (i.e. crab), so we recommend a forward lap of at least 70%.


Figure 4.5 Imagery forward lap
Source: Elements of Photogrammetry with application in GIS, 4th edition, 2014 McGraw Hill 

Side Lap/Cross-Track Overlap

Side lap is a term used in Photogrammetry to describe the amount of overlap between images from adjacent flight lines (see Figure 4.6). Figure 4.6 illustrates an aircraft taking two overlapping photographs from two adjacent flight lines. The distance in the air between the two flight lines (W) is called the line spacing.

This type of overlap is needed to make sure that there are no gaps in the coverage. The side lap is measured as a percentage of the total image coverage. Typical values of side lap for photogrammetric work are 30-60%. However, because of the light weight of the UAS and pointing stability concerns, we recommend using at least 50% side lap.


Figure 4.6 Imagery Side Lap
Source: Elements of Photogrammetry with application in GIS, 4th edition, 2014 McGraw Hill

Image Ground Coverage

Ground coverage of an image is the area on the ground (the area within ABCD of Figure 4.3) covered by the four corners of the photograph a'b'c'd' of Figure 4.3. Ground coverage of a photograph is determined by the camera internal geometry (focal length and the size of the CCD array) and the flying altitude above ground elevation.

Example on Image Ground Coverage:

A digital camera has an array size of 12,000 pixels by 6,000 pixels (Figure 4.7). If the resulting ground resolution (GSD) of a pixel is 1 foot, the image ground size can be calculated by multiplying the number of pixels by the GSD, i.e. 12,000 pixels X 1 foot/pixel = 12,000 feet and 6,000 pixels X 1 foot/pixel = 6,000 feet. So the image size on the ground is 12,000 feet wide by 6,000 feet high.


Figure 4.7 CCD Array
Source: Dr. Qassim Abdullah

Designing Flight Route[2]

The first step in the design of a flight route is to determine what GSD and accuracy are required. Ask yourself: what am I trying to image or identify, and at what geo-spatial or positional accuracy? GSD is computed using the following: GSD = (pixel size X flying height) / focal length. Positional accuracy is dependent on ground resolution (GSD), overlaps, GPS/IMU photo geo-tagging accuracy, and/or the use of ground control. If ground control is utilized, then one could expect absolute geo-spatial accuracies of one pixel (GSD) horizontal and one to three pixels vertical. Once those two requirements are known, the following processes follow (a short code sketch of the GSD relationship follows the list below):

  1. Planning the aerial photography (developing the flight plan);
  2. Planning the ground controls;
  3. Selecting software, instruments and procedures necessary to produce the final products;
  4. For the flight plan, the planner needs to know the following information:
    1. Focal length of the camera lens;
    2. Flying height above the ground;
    3. Size of the pixel;
    4. Size of the CCD array (how many pixels);
    5. Size and shape of the area to be photographed;
    6. The amount of end lap and side lap;
    7. Scale of flight map;
    8. Ground speed of aircraft;
    9. Other quantities as needed.
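
As a quick check of the GSD relationship above, here is a minimal Python sketch (the function names are ours, and the numeric values simply reuse the Canon SX 260 HS figures from the worked example later in this document):

    # Minimal sketch of GSD = pixel size x flying height / focal length.
    def gsd_from_height(pixel_size_mm, focal_length_mm, flying_height_m):
        """Ground sample distance (m/pixel) at a given flying height above ground."""
        return pixel_size_mm * flying_height_m / focal_length_mm

    def height_for_gsd(pixel_size_mm, focal_length_mm, target_gsd_m):
        """Flying height above ground (m) needed to achieve a target GSD."""
        return target_gsd_m * focal_length_mm / pixel_size_mm

    # Example: 1.54 micron pixel, 4.5 mm lens, 10 cm target GSD
    print(height_for_gsd(0.00154, 4.5, 0.10))    # ~292 m
    print(gsd_from_height(0.00154, 4.5, 292.0))  # ~0.10 m/pixel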

Geometry of Photogrammetric Block

Figure 4.8 shows three overlapping squares with light rays entering the camera at the lens focal point. Successive overlapping images form a strip of imagery we usually call a "strip" or "flight line." A photogrammetric strip (Figure 4.8) is therefore formed from multiple overlapping images along a flight line, while a photogrammetric block (Figure 4.9) consists of multiple overlapping strips (or flight lines).


Figure 4.8 Geometry of photogrammetric strip.
Source: Dr. Qassim Abdullah


Figure 4.9 Geometry of photogrammetric block with two strips
Source: Dr. Qassim Abdullah

Flight Plan Design and Layout

Once we compute the ground coverage of the image and decide on overlaps, as discussed in section 4.3, we can compute the number of flight lines, the number of images, aircraft speed, flying altitude, etc. and draw them on the project map (Figure 4.10).


Figure 4.10 The project map
Source: Dr. Qassim Abdullah

Before we start the computations of the flight lines and image numbers, here are some general guidelines:

  • For a rectangular-shaped project, always use the smallest dimension of the project area to lay out your flight lines. This results in fewer flight lines and fewer turns between flight lines (Figure 4.11). In Figure 4.11, the red lines with arrow heads represent flight lines or strips, while the black dashed lines represent the project boundary.


    Figure 4.11 Correct flight lines orientation
    Source: Dr. Qassim Abdullah

  • If you have a digital camera with a rectangular CCD array, always choose the largest dimension of the CCD array to be perpendicular to the flight direction (Figure 4.12). In Figure 4.12, the blue rectangles represent images taken by a camera with a rectangular CCD array. The wider dimension of the array is always configured to be perpendicular to the flight direction (which is the East-West direction in this figure).


    Figure 4.12 Correct camera orientation
    Source: Dr. Qassim Abdullah

Flight Lines Computations


Figure 4.13 Flight line layouts
Source: Dr. Qassim Abdullah

Next we compute how many flight lines we need for the project area illustrated in Figure 4.13. Figure 4.13 shows rectangular project boundaries (in black dashed lines) with LENGTH and WIDTH. To establish the number of flight lines needed to cover the project area, we go through the following computations (a short code sketch follows the list):

  1. Compute the coverage on the ground of one image (along the width of the camera CCD array (or W)) as we discussed in section 4.3.
  2. Compute the flight line spacing as follows:
    Line spacing or distance between flight lines (SP) = Image coverage (W) x (100 – amount of side lap)/100
  3. Number of flight lines (NFL) = (WIDTH / SP) + 1
  4. Always round up the number of flight lines, i.e. 6.2 becomes 7; the 7th flight line should cover the east boundary for full coverage
  5. Start the first flight line at the east or west boundary of the project
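
A minimal Python sketch of steps 2 through 4, using the same numbers as the worked example later in this document (the function names are ours; math.ceil performs the rounding up):

    import math

    def flight_line_spacing(image_width_on_ground_m, side_lap_pct):
        """SP = W x (100 - side lap) / 100"""
        return image_width_on_ground_m * (100.0 - side_lap_pct) / 100.0

    def number_of_flight_lines(project_width_m, spacing_m):
        """NFL = (WIDTH / SP) + 1, rounded up."""
        return math.ceil(project_width_m / spacing_m + 1)

    # Example: 400 m image coverage, 50% side lap, 500 m wide project area
    sp = flight_line_spacing(400.0, 50.0)           # 200 m
    print(sp, number_of_flight_lines(500.0, sp))    # 200.0 4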

Number of Images Computations

Once we determine the number of flight lines, we need to figure out how many images will cover the project area. To do so, we go through the following computations (a short code sketch follows the list):

  1. Compute the coverage on the ground of one image (along the height of the camera CCD array (or L)) as we discussed in section 4.3.
  2. Compute the distance between two consecutive images, or what we call the “airbase,” B, as follows: Airbase or distance between two consecutive images (B) = Image coverage (H) x ((100 – amount of end lap)/100)
  3. Number of images per flight line (NIM) = (LENGTH / B) + 1
  4. Always round up the number of images, i.e. 20.2 becomes 21
  5. Add two images at the beginning of the flight line before entering the project area and two images upon exiting the project area (Figure 4.14); this is needed to ensure continuous stereo coverage, i.e. a total of 4 additional images for each flight line, or Number of images per flight line = (LENGTH / B) + 1 + 4
  6. Total number of images for the project = NFL x NIM
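
The same idea for steps 2 through 6 in Python (function names are ours; the numbers again match the worked example later in this document):

    import math

    def airbase(image_height_on_ground_m, forward_lap_pct):
        """B = H x (100 - forward lap) / 100"""
        return image_height_on_ground_m * (100.0 - forward_lap_pct) / 100.0

    def images_per_flight_line(project_length_m, airbase_m, extra_images=4):
        """NIM = (LENGTH / B) + 1, rounded up, plus the images added before and after the project area."""
        return math.ceil(project_length_m / airbase_m + 1) + extra_images

    # Example: 300 m image coverage, 75% forward lap, 1,000 m long project, 4 flight lines
    b = airbase(300.0, 75.0)                 # 75 m
    nim = images_per_flight_line(1000.0, b)  # 15 + 4 = 19
    print(b, nim, 4 * nim)                   # total images = NFL x NIM = 76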

Figure 4.14 is the same as Figure 4.13 with added blue circles that represent photo centers of the designed images. The circles are shown on only one flight line, but this is repeated for each flight line.


Figure 4.14 Imagery layout
Source: Dr. Qassim Abdullah


Aircraft Speed and Image Collection[1]

Controlling the aircraft speed is important for maintaining the forward or end lap expected for the imagery. Flying the aircraft too fast results in less forward lap than anticipated, while flying it too slow results in too much overlap between successive images. Both situations are harmful to the anticipated products and/or the project budget: too little overlap reduces the capability of using the imagery for stereo viewing and processing, while too much overlap results in many unnecessary images that may affect the project budget negatively. Aircraft speed also influences the camera shutter speed needed to eliminate image blur. A good rule of thumb for minimizing or eliminating image blur is to set the shutter speed faster than the time it takes the aircraft to move one half of a pixel (GSD). As an example, if the aircraft is travelling at 10 m/sec and imaging at 10 cm GSD: 0.1 m / 2 = 0.05 m, divided by 10 m/sec = 0.005 seconds, or a shutter speed of 1/200 sec.
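
The rule of thumb translates directly into a couple of lines of Python (a sketch; the function name is ours):

    def max_shutter_time(gsd_m, ground_speed_m_per_s, blur_fraction=0.5):
        """Longest exposure (seconds) so the platform moves no more than
        blur_fraction of one pixel (GSD) during the exposure."""
        return gsd_m * blur_fraction / ground_speed_m_per_s

    # Example from the text: 10 cm GSD at 10 m/sec ground speed
    t = max_shutter_time(0.10, 10.0)
    print(t, "sec, i.e. about 1/%d sec" % round(1.0 / t))   # 0.005 sec, about 1/200 sec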

Example of Flight Plan Design and Layout

A construction project area is 1 km long and 0.5 km wide. The client has requested elevation contours at 60 cm intervals with an absolute elevation accuracy of 30 cm. We first determine what GSD is required to do this job – the absolute elevation accuracy requires that we image at 10 cm GSD (one to three GSD vertical accuracy when using ground control, to be safe). We will use a UAS platform that can fly at 10 m/sec with a Canon SX 260 HS camera. A short code sketch reproducing the seven steps follows the list below.

Canon SX 260 HS Specifications
Focal Plane Format: 4,000 x 3,000 pixels
Pixel Size: 0.00154 mm or 1.54 microns
Lens focal length range: 4.5 to 90 mm
Shutter speed: see below
 

  1. Determine UAS flight height: GSD = pixel size X flight height / focal length, so flight height = GSD X focal length /pixel size = 0.1 m X 4.5 mm / 0.00154 mm = 292 meters
  2. Determine image size on ground: 0.1 m/pixel X 4,000 pixels = 400 m (width of image – for side lap); 0.1 m/pixel X 3,000 pixels = 300 m (height of image – for forward lap)
  3. Select overlaps: 75% forward lap and 50% side lap
  4. Compute spacing and number of flight lines: SP = 400 m X (100 – 50)/100 = 200 m & NFL = (500 m / 200 m) + 1 = 3.5 or 4
  5. Compute airbase and number of images per flight line: B = 300 m X (100 – 75)/100 = 75 m; NIM = Length / B + 1 = 1,000 m / 75 m + 1 = 14.3 or 15 (+ 4 for coverage) = 19
  6. Total number of images: NFL X NIM = 4 X 19 = 76
  7. Shutter speed setting: one half GSD / aircraft ground speed = 0.05 m / 10 m/sec = 0.005 sec or 1/200 minimum (faster is better to be safe)
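
Here are the same seven steps collected into one short Python sketch so the arithmetic can be checked end to end (variable names are ours; all inputs come from the example above):

    import math

    pixel_mm, focal_mm = 0.00154, 4.5   # pixel size and focal length (mm)
    cols, rows = 4000, 3000             # CCD array (pixels)
    gsd = 0.10                          # target ground sample distance (m)
    length, width = 1000.0, 500.0       # project dimensions (m)
    forward_lap, side_lap = 75.0, 50.0  # overlaps (%)
    speed = 10.0                        # UAS ground speed (m/sec)

    height = gsd * focal_mm / pixel_mm     # 1. flight height, ~292 m
    img_w, img_h = gsd * cols, gsd * rows  # 2. ground footprint, 400 m x 300 m
    sp = img_w * (100 - side_lap) / 100    # 4. line spacing, 200 m
    nfl = math.ceil(width / sp + 1)        #    number of flight lines, 4
    b = img_h * (100 - forward_lap) / 100  # 5. airbase, 75 m
    nim = math.ceil(length / b + 1) + 4    #    images per flight line, 19
    total = nfl * nim                      # 6. total images, 76
    shutter = (gsd / 2) / speed            # 7. maximum exposure, 0.005 sec (1/200)

    print(height, (img_w, img_h), sp, nfl, b, nim, total, shutter)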

Sensor Calibration and Boresighting[2]

Camera Calibration

Most existing UAS that are dedicated to photogrammetric imaging carry on board less expensive cameras that we call nonmetric cameras. Nonmetric cameras are cameras with variable interior geometry (i.e. unknown focal length) and with relatively large lens distortion. In order to conduct photogrammetric mapping from the resulting imagery from such cameras, we need to determine to a known accuracy all interior camera parameters such as the focal length and the coordinates of the principal point, and to model the lens distortion.

The principal point of a camera is the point where the lines from opposite corners of the CCD array, or the lines connecting the opposite mid-way points of the CCD array sides, intersect (Figure 4.18). However, when the lens is fitted on the camera body, it is impossible to perfectly align the center of the lens with the principal point described above, resulting in the offset distances xp and yp illustrated in Figure 4.18. Those two values are determined in the process of camera calibration and need to be represented in the photogrammetric mathematical model during computations.

Camera calibration is usually performed in special laboratories dedicated to this task, such as the USGS calibration lab for film cameras. However, with the advancements in the computational analytical model in photogrammetry, we can determine the camera parameters analytically through a process called camera self-calibration from within the aerial triangulation process. Most UAS data processing software, such as that used in this course, supports camera self-calibration.


Figure 4.18 Internal camera geometry
Source: Dr. Qassim Abdullah
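
As an aside, the general idea of recovering focal length, principal point, and distortion coefficients can be illustrated with OpenCV's standard checkerboard calibration. This is a laboratory-style calibration sketch, not the in-flight self-calibration performed inside aerial triangulation, and the image path and board size are placeholders:

    import glob
    import cv2
    import numpy as np

    board = (9, 6)   # inner corners of the checkerboard (placeholder)
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib_images/*.jpg"):   # placeholder path
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]

    # K holds the focal length and principal point; dist holds the lens distortion coefficients.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    print("camera matrix:", K, "distortion:", dist.ravel())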

Sensor Boresighting

The term “boresighting” is usually used to describe the process of determining the differences between the rotational axes of the sensor (such as a camera) and the rotational axes of the Inertial Measurement Unit (IMU), which is usually bolted to the camera body. The IMU is a device that contains gyros and accelerometers used in Photogrammetry and LiDAR to sense and measure sensor rotations and accelerations. In Photogrammetry, where the IMU is used on an imaging camera, the boresight parameters are determined by flying over a well-controlled site (a site with accurate ground controls) and then conducting aerial triangulation on the resulting imagery.

The aerial triangulation process will compute the six exterior orientation parameters (X, Y, Z, omega, phi, kappa), while the IMU measures the three orientation parameters roll, pitch, and heading (or yaw). By comparing the two sets of orientation angles of the camera, as computed by the aerial triangulation and as measured by the IMU, one can establish the differences in the rotations of the camera in reference to the inertial system (from the IMU). These differences (or offset values) are then used to correct all future IMU-derived orientations, converting the rotation angles from the inertial system to the photogrammetric system so they can be utilized in the mapping process.
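
A minimal sketch of the comparison step described above: given attitude angles for the same exposures from aerial triangulation (AT) and from the IMU, the boresight offsets can be estimated as mean angular differences. This per-axis averaging is a simplification of the full rotation-matrix comparison, and the numbers are made up for illustration:

    import numpy as np

    # Attitude angles (degrees) for the same exposures: omega/phi/kappa from AT,
    # roll/pitch/heading from the IMU. Values are illustrative only.
    at_angles  = np.array([[0.12, -0.34, 89.95],
                           [0.10, -0.31, 90.02],
                           [0.14, -0.36, 89.98]])
    imu_angles = np.array([[0.02, -0.20, 89.60],
                           [0.01, -0.18, 89.70],
                           [0.03, -0.22, 89.66]])

    # Per-axis boresight offsets = mean(AT - IMU); applied later to correct every
    # IMU-derived orientation before it is used in mapping.
    boresight = (at_angles - imu_angles).mean(axis=0)
    print("boresight offsets (deg):", boresight)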

A similar process is followed for determining the offset values for the IMU used in a LiDAR system. For LiDAR offset determination, no aerial triangulation is used, as LiDAR follows different processing steps. To determine the boresight offset values in LiDAR, the system has to be flown in a certain configuration over a well-controlled site. Figure 4.19 represents an ideal design for LiDAR boresight determination. In the figure, there are two lines flown in the east-west direction (one flight line flown due east and the other flown in the opposite direction, due west) from a certain altitude, and two flight lines flown in the perpendicular direction (north-south) from an altitude that is nearly double that of the east-west flight lines.


Figure 4.19 Lidar boresight determination flight design
Source: Qassim Abdullah

Imagery Geo-Location[2][1]

In order to utilize the photogrammetric mathematical model, i.e. the collinearity condition, for the production of any mapping products, the following information needs to be made available:

  1. The Exterior orientation parameters for every image: Six parameters which represent the camera attitude or orientation represented by the three rotational angles omega, phi, and kappa, and camera position, which is represented by the three coordinates Easting, Northing, and Elevation at the moment of image exposure.
  2. The camera Interior Geometry Parameters: The calibrated lens focal length, the principal point coordinates, and the lens distortion.
  3. The Size of the CCD Array: The number of pixels contained in the CCD array along the width and the height of the array.
  4. The Physical Size of the CCD Pixel: Usually provided in microns, such as 14 µm (1 mm is equal to 1,000 µm).
  5. Ground Controls: A ground control is a feature in the imagery with known accurately surveyed coordinates. Depending on the required accuracy of the final products, ground controls can be omitted in some situations.

In this section we will focus on the process of determining the six exterior orientation parameters. The camera position can be measured accurately with the airborne GPS technique, using a GPS antenna on board the UAS. The three camera position coordinates can also be computed through the process of aerial triangulation, as we will discuss soon. However, there are two methods for determining the camera attitude or orientation: the aerial triangulation process and direct measurement from the IMU.

Aerial Triangulation and Bundle Block Adjustment

Aerial triangulation is usually performed on a photogrammetric block (Figure 7.2), which consists of all the imagery acquired over the project area. Figure 7.2 illustrates a photogrammetric block of imagery consisting of three strips, each of which has multiple overlapping images. Also shown are the different types of image overlaps. The top and middle strips contain images with 60% forward lap, while the bottom strip contains imagery with 80% forward lap. You may also notice in the figure that the middle and bottom strips overlap by 30%.


Figure 7.2 The photogrammetric block
Source: Dr. Qassim Abdullah

In section 7.1, we mentioned a few terms related to aerial triangulation. We will briefly describe these terms in the following sub-sections:

Relative Orientation

Relative Orientation is the process of orienting images relative to one another (i.e. it recreates the “relative” position and attitude of the images at the instants of exposure), as illustrated below. Figure 7.3 shows four images that are connected to each other in space through the aircraft/GPS trajectory but are not necessarily connected to the ground datum (i.e. they are floating in space).


Figure 7.3 Relative Orientation
Source: Dr. Qassim Abdullah

Relative orientation is an important process that must be performed before we scale the imagery to the ground datum through the process of absolute orientation, which will be discussed in the next section. To form a cohesive block, all images in the block should be relatively oriented with respect to each other through the process of relative orientation.

Absolute Orientation

Absolute orientation is the process of leveling and scaling the stereo model (formed from two images) with respect to a reference plane or datum using ground control points, as shown in Figure 7.4. Figure 7.4 represents the same four images as Figure 7.3, but this time the block was tied to the ground datum through the use of seven ground control points (represented by the black stars).


Figure 7.4 Absolute Orientation
Source: Dr. Qassim Abdullah

Without performing the absolute orientation process, the generated map would not be associated with a specific location in space. Generating maps that have geo-location information, such as a datum and coordinate system, can only happen after the process of absolute orientation is performed following relative orientation.

Exterior Orientation

Exterior orientation of a photograph defines its position and orientation in the object space. There are six elements of exterior orientation: X, Y, and Z of the exposure station position, and the three angles that define the angular orientation: ω, φ, and κ. The six elements of exterior orientation are not known and must be computed through a process called space resection within the aerial triangulation process. Here are the definitions of the three orientation angles illustrated in Figure 7.5 (a short sketch of building the orientation matrix from these angles follows Figure 7.5):

  • Omega (ω): Rotation about the x axis. It is equivalent to the angle Roll of the navigation system.

  • Phi (φ): Rotation about the y axis. It is equivalent to the angle Pitch of the navigation system.

  • Kappa (κ): Rotation about the z axis. It is equivalent to the angle Yaw of the navigation system.


Figure 7.5 Sensor (camera) Orientation Angles
Source: Dr. Qassim Abdullah
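
A short numpy sketch of turning the three angles into the photo orientation matrix M used later in the collinearity equations. We assume the sequential convention M = Rz(κ) Ry(φ) Rx(ω); the exact axis order and sign convention depend on the software being used:

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Photo orientation matrix from omega, phi, kappa (radians).
        Assumes M = Rz(kappa) @ Ry(phi) @ Rx(omega); other packages may differ."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    print(rotation_matrix(0.0, 0.0, np.pi / 2))   # a pure 90-degree kappa (yaw) rotation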

Knowing the six exterior orientation parameters for an image is necessary for any photogrammetric processing aimed at creating products from such an image. Whether you perform map compilation on a stereo plotter or generate an ortho image, the six exterior orientation parameters need to be computed before you start the production process.

Space Resection

Space resection is the process of determining the intersection of rays in space to establish the camera position (see Figure 7.6). The method of space resection is a purely numerical method using the collinearity equations to simultaneously yield all six elements of exterior orientation (X, Y, Z, omega, phi, and kappa). Once these elements are known, a stereo plotter can measure the photo coordinates of any point in a photo (x, y), and the ground coordinates can be computed. Ortho-rectification software also utilizes space resection for ortho-rectifying an image. Figure 7.6 illustrates six images, each of which has rays from the ground entering the camera through the lens. The intersection of the rays entering the camera at point "O" represents the photo center location, which is important for the determination of the exterior orientation parameters described earlier.


Figure 7.6 Space Resection
Source: Dr. Qassim Abdullah

Aerial triangulation

Aerial triangulation can be defined as the process of densification of a sparsely distributed horizontal and vertical control network through:

  1. Measurements performed on overlapping aerial photographs,
  2. Known ground control points coordinates on the ground, and
  3. Mathematical Modeling and Solution.

Numerical Computation of Aerial Triangulation: Here is a summary of the steps taken within the processing software:

  1. Processing numerical observations of individual photographs to build a cohesive block.
  2. Forming individual photos into strips by successive, relative orientations, using the common primary pass points between overlapping photos.
  3. Computing Horizontal and vertical coordinates for each strip.
  4. Converting strip coordinates to ground coordinates using the ground control contained within a given strip.
  5. Applying simultaneous polynomial equations (horizontal and vertical) to produce final adjusted values for all points.
  6. Calculating exterior orientation elements for each photo to be used as input to a bundle adjustment program.

Unlike the aerial triangulation of the past, which was performed on film-based imagery using optical-mechanical instruments, aerial triangulation today is performed on digital imagery using a complete softcopy approach called softcopy aerial triangulation. In softcopy aerial triangulation, all the manual work of point marking and measurement is left to the automation of the software. It is more efficient and more accurate.

Mathematical Model for Aerial Triangulation

The backbone of the computational model in Photogrammetry is a pair of equations called the collinearity equations, which express the collinearity condition (the exposure station, a ground point, and its image on the photo all lie on one straight line). The two collinearity equations are:

x = x0 - f [ m11(X - Xc) + m12(Y - Yc) + m13(Z - Zc) ] / [ m31(X - Xc) + m32(Y - Yc) + m33(Z - Zc) ]

y = y0 - f [ m21(X - Xc) + m22(Y - Yc) + m23(Z - Zc) ] / [ m31(X - Xc) + m32(Y - Yc) + m33(Z - Zc) ]

Where,

Xc, Yc, Zc = camera perspective center (exposure station) position

X, Y, Z = ground point position

x, y = point position on the image

mij = elements of the photo orientation (rotation) matrix

f = camera lens focal length

x0, y0 = principal point of autocollimation
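
A minimal numpy sketch of applying the two equations to project a ground point into photo coordinates; the orientation matrix, camera station, and interior parameters below are illustrative placeholders, not values from a real project:

    import numpy as np

    def collinearity_project(ground_pt, cam_center, M, f, x0=0.0, y0=0.0):
        """Project ground point (X, Y, Z) to photo coordinates (x, y) with the
        collinearity equations. M is the 3x3 photo orientation matrix, f the
        focal length, (x0, y0) the principal point."""
        d = M @ (np.asarray(ground_pt, float) - np.asarray(cam_center, float))
        x = x0 - f * d[0] / d[2]
        y = y0 - f * d[1] / d[2]
        return x, y

    # Example: nadir-looking camera (M = identity) 292 m above the ground point
    print(collinearity_project((105.0, 240.0, 0.0),     # ground point (m)
                               (100.0, 250.0, 292.0),   # camera perspective center (m)
                               np.eye(3), f=4.5))       # focal length (mm)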

Direct Geo-referencing

In the last two decades, navigation technologies have advanced to the point that manufacturers of Inertial Navigation Systems (INS), usually used for missile and submarine navigation, can produce an Inertial Measurement Unit (IMU) that accurately measures the orientation of airborne sensors such as cameras and LiDAR. The IMU is used either to replace the process of aerial triangulation or to assist its solution. Most UAS, including the small ones, carry on board a GPS unit and an IMU unit. Unfortunately, most of the miniaturized, low-cost IMUs used on UAS are not accurate enough to replace aerial triangulation. Such a low-accuracy IMU is usually used to navigate the UAS but not to support the aerial triangulation. On the other hand, the GPS antenna on most UAS is of survey-grade quality and can receive signals from both GPS and GLONASS. Some UAS can receive signals from OMNISTAR with real-time corrections.

Ground Control Requirement[2][1]

A ground control, which we introduced in section 7.2, is a target in the project area with known coordinates (X, Y, Z). Accurate, well-placed ground controls are essential elements for any photogrammetric project utilizing aerial triangulation.

There are two standard types of ground control points (Figure 7.9):

  1. Photo Identifiable (Photo ID): This could be any feature on the ground such as a manhole, parking stripe, etc. (the right two images of Figure 7.9). This type of control does not need to be surveyed before the UAS flies the project as it can be surveyed later on.
  2. Pre-marked (Panels): This type is generated by marking or painting certain figures or symbols on the ground before the UAS flies the project (the left two images of Figure 7.9).

Many projects make use of one type or the other or a combination of the two.


Figure 7.9 Different types of ground control points.
Source: Dr. Qassim Abdullah

The leftmost image in Figure 7.9 represents a pre-marked control point set on black and white fabric, while the image next to it represents a pre-marked control point that is spray painted on a sidewalk. The rightmost images represent different types of photo-identifiable ground control points. On these images, the user can pick any visible ground feature (such as a parking stripe or the edge where the concrete meets the asphalt pavement on a bridge) to use as a control point.

Ground control requirements vary from one project to another depending on the project specifications and its geographic extent. Projects with high geometrical accuracy requirements require more ground controls. Figure 7.10 illustrates typical distribution of ground controls in a rectangular shaped project when the aircraft does not carry on board a GPS antenna, resulting in a non-GPS supported aerial triangulation, or what is usually called “conventional aerial triangulation.”


Figure 7.10 Ground control distribution (ground controls are represented by white circles and red triangles)
Source: Dr. Qassim Abdullah

However, most aerial triangulation today is solved with airborne GPS data. Having GPS data in the aerial triangulation process saves a tremendous number of ground controls. Figure 7.11 illustrates the low density of ground controls required for GPS-based aerial triangulation.


Figure 7.11 Ground controls distribution in GPS-based aerial triangulation (ground controls are represented by white ovals and red triangles)
Source: Dr. Qassim Abdullah

Despite having ground controls only at the edges of the flight lines as shown in Figure 7.11, having a few additional controls along the interior of the block (see Figure 7.12) is a wise strategy, especially when high accuracy is expected from the aerial triangulation. Savings can be made in the control survey by replacing most of the ground control points at the edges of the flight lines with imagery taken along a flight line perpendicular to the project flight lines at each end of the block (see Figure 7.13). Such additional flight lines that are perpendicular to the normal project flight lines are called “cross flight lines.”


Figure 7.12 Alternate ground control distribution in a GPS-based aerial triangulation (ground controls are represented by white ovals and red triangles)
Source: Dr. Qassim Abdullah

Adding the two cross flights (strips), one at each end of the photogrammetric block, not only saves on the number and cost of the ground control points but also provides strength to the mathematical model within the bundle block adjustment computations. It helps in modeling and solving GPS and IMU problems.


Figure 7.13 Cross flight lines (ground controls are represented by white ovals and red triangles).
Source:Dr. Qassim Abdullah

To summarize the subject of ground control requirements for a block, we start with Figure 7.10, which represents the most control-consuming case: conventional aerial triangulation, where we do not use GPS on the camera during imagery acquisition. Then comes the most efficient method of aerial triangulation, GPS-based aerial triangulation. Figures 7.11 through 7.13 represent different distributions of ground controls for GPS-based aerial triangulation. Each case has its strengths and weaknesses; however, the configuration in Figure 7.13 represents the most economical way to reduce the ground control requirement.

Products Generation[2]

Digital Ortho Photo (Ortho Map)

Digital ortho, ortho photo, orthographic image, and ortho map are different names for the same thing. An ortho photo is an image that has been corrected (through the process of ortho-rectification) for the effects of terrain relief and sensor tilt, converting it to a map of unified scale. Raw images taken over variable terrain will have different scales at different locations on the image. A pixel covering the ridge of a mountain will cover a smaller spot on the ground, as it is closer to the sensor (aircraft), compared to a pixel covering a valley.

Performing the process of ortho-rectification resamples all these pixels so that each pixel covers exactly the same ground resolution or GSD regardless of where it falls in the image or from which terrain it originated. In other words, ortho-rectification means reprocessing the raw digital image to eliminate the scale variation and image displacement resulting from terrain relief and sensor (camera) tilt.

Because ortho photos are geometrically corrected, they can be used as map layers in GIS for overlaying, management, update, analysis, or display operations. This is a great advantage offered by the ortho photo as compared to the raw imagery.

The five primary ingredients for ortho photo generation are the following (a simplified per-pixel sketch appears after the list):

  1. Digital imagery;
  2. Digital elevation model or topographic dataset;
  3. Exterior orientation parameters from aerial triangulation or IMU;
  4. Camera calibration report;
  5. Photogrammetric processing software that utilizes collinearity equations.
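
A deliberately simplified per-pixel sketch of what ortho-rectification does with those ingredients: walk the output ground grid, look up the terrain height in the DEM, project that ground point into the raw image with the collinearity equations, and copy the pixel. Nearest-neighbor sampling, an identity orientation matrix, and made-up camera values keep the sketch short; real software additionally handles interpolation, lens distortion, and tiling:

    import numpy as np

    def simple_ortho(raw_img, dem, gsd, cam_center, M, f, pixel_mm, cols, rows):
        """Toy ortho-rectification: for every output ground cell, project the ground
        point (X, Y, Z from the DEM) into the raw image via collinearity and copy
        the nearest pixel. All camera values are illustrative."""
        out = np.zeros_like(dem, dtype=raw_img.dtype)
        for i in range(dem.shape[0]):
            for j in range(dem.shape[1]):
                X, Y, Z = j * gsd, -i * gsd, dem[i, j]      # ground coordinates of the cell
                d = M @ (np.array([X, Y, Z]) - cam_center)
                x = -f * d[0] / d[2]                        # photo coordinates (mm), principal point at 0
                y = -f * d[1] / d[2]
                c = int(round(cols / 2 + x / pixel_mm))     # photo coordinates -> raw pixel indices
                r = int(round(rows / 2 - y / pixel_mm))
                if 0 <= r < raw_img.shape[0] and 0 <= c < raw_img.shape[1]:
                    out[i, j] = raw_img[r, c]
        return out

    # Usage with made-up data: a flat DEM and a nadir-looking camera 292 m above the area
    raw = np.random.randint(0, 255, (3000, 4000), dtype=np.uint8)
    dem = np.zeros((50, 50))                                # flat terrain at 0 m elevation
    ortho = simple_ortho(raw, dem, gsd=0.10,
                         cam_center=np.array([2.5, -2.5, 292.0]),
                         M=np.eye(3), f=4.5, pixel_mm=0.00154,
                         cols=4000, rows=3000)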

An ortho photo produced using a digital elevation model of the bare earth (no buildings or trees in it) is usually called a “ground ortho.” In a ground ortho, building lean is not removed in the process of ortho-rectification, and buildings appear to lean radially away from the center of the image, as you can see in the image of the World Trade Center in Baltimore on the left side of Figure 7.14. On the other hand, a "true ortho" is an ortho where the buildings look as if they are standing straight up, or as if you are looking at them from directly above the roofs, as illustrated in the right image of Figure 7.14. True ortho is very useful in urban areas, such as downtowns with tall buildings, as it reveals all the information in the streets and pathways surrounding the buildings. True ortho is computationally intensive and needs three-dimensional models of all buildings in the image, which makes it more costly than ground ortho.


Figure 7.14 True ortho (right) versus ground ortho (left)
Source: Dr. Qassim Abdullah

It is very important to evaluate the quality of the ortho-rectification, as the process may introduce defects. Examples of such common defects are the following:

  • Image Completeness:

    • Root cause: Image not adequately covered by DEM.
  • Image Smearing:

    • Root cause:

      • Anomalies or spike error in DEM.
      • Excessive relief.
  • Double image on adjacent ortho sheets

    • Root cause:

      • Improper camera orientation.
      • Inaccurate DEMs.
  • Missing Image

    • Root cause:

      • Improper camera orientation.
      • Inaccurate DEMs.
  • Mismatch of two adjacent orthos

    • Root cause:

      • Inaccurate camera position and orientation.
      • Inaccurate DEM

Digital Terrain Data

Similar to LiDAR, stereo imagery can be used to generate accurate digital elevation models. Most software used for UAS data processing includes an image matching capability that produces fine-quality elevation models, which can be used for the ortho-rectification process and other terrain modeling purposes. The main ingredients for digital terrain data generation are:

  1. Digital imagery;
  2. Exterior orientation parameters from aerial triangulation or IMU;
  3. Camera calibration report;
  4. Photogrammetric processing software that utilizes the image matching technique.

Until recently, users did not trust auto-correlated digital terrain data because of its poor quality. However, in the last couple of years, software development companies adopted a new algorithm called “Semi-Global Matching,” or SGM, which results in fine-quality elevation data that in some ways competes with the elevation models generated by LiDAR. This made users excited again about using imagery for the development of fine-quality digital elevation data. The SGM algorithm is an image matching approach that originated in the computer vision community. It utilizes an auto-correlation matching technique based on aggregated per-pixel matching that was not possible with the older auto-correlation algorithms.
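
For a feel of semi-global matching, OpenCV ships a derived implementation (StereoSGBM). The sketch below computes a disparity map from an epipolar-rectified stereo pair, which is the dense-matching step that image-based terrain extraction builds on; the file paths are placeholders, and this is not the specific SGM implementation used by any particular UAS package:

    import cv2

    # Assumes the two images are already epipolar-rectified; paths are placeholders.
    left  = cv2.imread("left_rectified.tif",  cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_rectified.tif", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,   # must be a multiple of 16
                                    blockSize=5)
    disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point to pixels

    # Disparity is inversely related to depth; with the camera geometry it can be
    # converted to heights and gridded into a DSM.
    print(disparity.min(), disparity.max())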

As with ortho photo production, digital elevation data needs to be evaluated to verify its quality.

There are a couple of terms used in the geospatial community to describe digital terrain data:

  • Digital Surface Model (DSM): It is also called the reflective surface. Such a surface represents the original LiDAR data before any features such as buildings and trees are removed from it. It also represents the elevation model generated from the image auto-correlation process in Photogrammetry. Both LiDAR and image auto-correlation collect data on top of natural ground surfaces such as terrain and trees and man-made materials such as buildings and other structures (Figures 7.15 and 7.16 below).

Figure 7.15 Digital Surface Model (Left) and Digital Terrain Model (Right)
Source: Dr. Qassim Abdullah

Figure 7.16 LiDAR Digital Surface Model (DSM)
Source: Dr. Qassim Abdullah

  • Digital Terrain Model (DTM): DTM is a term usually associated with digital elevation models of just the bare ground (trees and man-made structures are removed). A DTM is sometimes augmented with 3-D modeling of abrupt changes in the terrain using 3-D lines called break lines. A DTM usually contains arbitrarily distributed elevation points (not at equal spacing or on a grid), called mass points, plus break lines.
  • Digital Elevation Model (DEM): DEM is a term usually associated with a gridded digital terrain model, where points are distributed at equal intervals on a grid.
  • Triangulated Irregular Network (TIN): The term TIN is used to describe the method that most software uses to model digital terrain data and to present it on the screen. A TIN surface represents a set of adjacent, non-overlapping triangles computed from irregularly spaced data points, with x, y horizontal coordinates and z vertical elevations (Figure 7.17 below). A short code sketch of building a TIN follows Figure 7.17.


Figure 7.17 Triangulated Irregular Network (TIN)
Source: Dr. Qassim Abdullah
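
As referenced in the TIN definition above, here is a minimal sketch of building a TIN from irregularly spaced mass points using a 2-D Delaunay triangulation in scipy; the points and elevations are random placeholders:

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    # Irregularly spaced mass points: x, y horizontal coordinates and z elevations (synthetic).
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 100, size=(200, 2))     # horizontal positions (m)
    z = 50 + 5 * np.sin(xy[:, 0] / 10)          # synthetic elevations (m)

    tin = Delaunay(xy)                          # adjacent, non-overlapping triangles
    print("triangles:", len(tin.simplices))

    # Linear interpolation of an elevation anywhere inside the TIN:
    interp = LinearNDInterpolator(tin, z)
    print("elevation at (50, 50):", float(interp(50.0, 50.0)))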