
2.4: Phase 3- Data Capture


    Introduction

    When you have exhausted your contacts and the Internet, it is time to capture the data yourself. In this phase, you create new GIS datasets from both digital data that are not currently in a GIS format and from non-digital, hard-copy data sources. Examples of digital and non-digital data sources include maps (hard-copy and digital), aerial photographs (hard-copy and digital), questionnaires, field observations, digital satellite imagery, survey data, and Global Positioning System (GPS) coordinates.

    The data capture phase is often tedious, laborious, and frustrating, but it is necessary. Figure 2.10 breaks the phase into three main steps, which are described in the text below:


    Figure 2.10: The key steps in the data capture phase. Here you digitize hard-copy maps and data directly into your GIS or transform existing digital data into a format your GIS reads.

    Converting Digital Data

    This section looks at digital datasets that are currently not in a GIS format, but that are often manipulated to create GIS layers. These sources include automated surveying, photogrammetry, GPS, Light Detection and Ranging (LIDAR), non-GIS mapping programs like Computer Aided Design (CAD), and remote sensing imagery. This chapter emphasizes the conversion of GPS data and remote sensing imagery. Importing spreadsheet data, a frequent way of importing attribute data, into a GIS is covered in Chapter 4.

    Automated surveying uses electronic data-capturing instruments like theodolites, electronic distance measurement (EDM) systems, and total stations to capture spatial and attribute data. The most sophisticated of these instruments is the total station, which combines the theodolite’s angle-measuring capabilities with the EDM’s distance calculations. Surveyors can download the distance and direction data from their instruments directly into many vector-based GIS programs. The data, however, usually require pre-processing before they can be used to make a map.


    Figure 2.11: A total station, a surveying instrument that measures distance, direction, and location. Image from Illinois State University, Geography Dept.

    Photogrammetry obtains accurate measurements from aerial photographs. Photogrammetric techniques determine ground distances and directions, heights of features, and terrain elevations. Photogrammetry creates GIS data through 3-D stereo digitizing and by producing spatially rectified aerial photographs that can be entered into the GIS as a layer.

    GPS (Global Positioning System) is a radio-based navigation system that uses GPS receivers to compute accurate locations on the Earth’s surface from a series of orbiting satellites. With a small, inexpensive, hand-held GPS receiver (Figure 2.12), you can determine your location usually within about three meters. Using two GPS receivers simultaneously (called Differential GPS or DGPS) or using a Wide Area Augmentation System (WAAS)-enabled GPS receiver, which uses satellites and ground stations that provide GPS signal corrections, you can get sub-meter accuracy.


    Figure 2.12: Typical low-end GPS device.

    The U.S. GPS network, called NAVSTAR (Navigation Satellite Timing and Ranging), has at least 24 satellites that orbit the Earth in six planes (see Figure 2.13). The network’s configuration ensures that at least four satellites, the minimum number needed to compute a location, are above the horizon at every point on Earth.


    Figure 2.13: NAVSTAR GPS constellation of satellites.

    The U.S. Department of Defense (DoD) developed and controls NAVSTAR, and DoD can turn the whole system off, as it briefly did immediately following the terrorist attacks of September 11, 2001. DoD monitors and tracks the satellites (which are equipped with radio transmitters/receivers and a set of atomic clocks) from five stations across the globe, where it computes precise satellite orbital and clock corrections. These corrections are transmitted from the master control station at Schriever Air Force Base in Colorado to the satellites, which make the adjustments. All of the satellites’ locations are therefore precisely known, and by knowing their exact locations, we can determine the location of any point on Earth with a GPS receiver.

    This is possible because each satellite transmits a unique radio signal, which GPS receivers can pick up. Using this signal, your GPS receiver calculates the distance to each of the four satellites it is tracking from the amount of time it takes the signals to travel from the satellites to your receiver. This is a high-tech version of triangulation (see Figure 2.14), called trilateration. The first satellite locates you somewhere on a sphere (top left of Figure 2.14). The second satellite narrows your location to the circle created by the intersection of the two satellite spheres (top right). The third satellite reduces the choice to two possible points (bottom left). Finally, the fourth satellite helps calculate a timing and location correction and selects one of the remaining two points as your position (bottom right).
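    To make trilateration concrete, here is a minimal two-dimensional sketch in Python. Real GPS works in three dimensions and also solves for the receiver’s clock error, but the core idea is the same: subtracting the circle (sphere) equations cancels the squared unknowns and leaves a linear system that pins down the position. The anchor coordinates and distances below are invented for illustration.

```python
import math

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Locate a point from three known anchors and three measured distances.

    A 2-D analogue of GPS trilateration: each (anchor, distance) pair
    defines a circle, and the position is their common intersection.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle equations removes the x^2 and y^2 terms,
    # leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # solve the 2x2 system (Cramer's rule)
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Anchors at known positions; distances measured to an unknown point at (3, 4).
pos = trilaterate_2d((0, 0), (10, 0), (0, 10), 5.0, math.sqrt(65), math.sqrt(45))
print(pos)  # approximately (3.0, 4.0)
```

The same algebra extends to three dimensions with four satellites, which is why four is the minimum number needed for a fix.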


    Figure 2.14: GPS satellite trilateration.

    When dealing with satellites thousands of miles above the Earth’s surface, GPS’s accuracy is amazing. Still, the technique is not without error. Small errors in the receiver’s clock, variations in the satellites’ orbits, atmospheric conditions that slow radio waves, and radio signals that bounce off buildings and cliffs (called “multipath” or “ghosting”) are all possible sources of distortion. In addition, GPS signals have difficulty penetrating thick forests and the urban canyons created by tall buildings.

    Another source of error is Geometric Dilution of Precision (GDOP), which describes the spatial relationship between the GPS receiver and the satellites it is tracking. In general, the fewer the satellites available, and the closer together they are clustered, the less accurate your readings. GPS receivers try to minimize GDOP by selecting the set of satellites that provides the least error: satellites that are well above the horizon, minimizing atmospheric thickness and interference from buildings, yet not so high that they are clustered together.

    GPS is a major data input tool for GIS. Most receivers, even inexpensive units, contain internal memory where you can log your positions. Each logged position is called a waypoint, and together, waypoints depict the location and extent of the features you record in the field. They can be downloaded from GPS receivers (sometimes with the help of a separate software program) and imported into many GIS programs.
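    Many receivers and their download utilities export waypoints as GPX, an XML format that most GIS programs can import. As a sketch of what that import involves, the following Python snippet reads waypoints from a small, made-up GPX fragment using only the standard library:

```python
import xml.etree.ElementTree as ET

# A made-up fragment of a GPX file (the XML format many GPS receivers export).
gpx = """<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <wpt lat="40.5123" lon="-88.9876"><name>WELL-01</name></wpt>
  <wpt lat="40.5150" lon="-88.9901"><name>WELL-02</name></wpt>
</gpx>"""

ns = {"gpx": "http://www.topografix.com/GPX/1/1"}
root = ET.fromstring(gpx)
waypoints = [
    {
        "name": wpt.findtext("gpx:name", namespaces=ns),
        "lat": float(wpt.get("lat")),   # latitude/longitude are attributes
        "lon": float(wpt.get("lon")),
    }
    for wpt in root.findall("gpx:wpt", ns)
]
print(waypoints[0])  # {'name': 'WELL-01', 'lat': 40.5123, 'lon': -88.9876}
```

In practice a GIS or conversion tool does this parsing for you, but the underlying structure of a waypoint file is no more complicated than this.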

    Some of the more expensive GPS units have “feature lists” that streamline the data entry process. Feature lists are databases that you define to contain a list of the possible features you will locate. These lists have associated attributes for each feature type, and drop-down lists of common attribute values can also be predetermined for each attribute to save time in the field. Both the feature locations and their associated attributes can be downloaded into your GIS.

    LIDAR (Light Detection and Ranging) is a remote sensing technology that uses laser light pulses to measure the distance to a surface. It works on the same principle as radar but uses light instead of radio waves. Airborne LIDAR systems depict the tops of ground-based features better than traditional remote sensing and radar methods, producing topographic layers that portray the shape of our cities (including the widths and heights of buildings) and forest canopies more accurately. Figure 2.15 is an example of a LIDAR image of an urban area.


    Figure 2.15: LIDAR image depicting feature heights.

    Both vector and raster data models use LIDAR data. Many GIS programs have automated routines that convert the x, y, and z (elevation or altitude) points first into vector point files, which can then be converted into raster layers (rasterization is described in Chapter 3) where additional processing and analysis can take place. In vector systems, LIDAR points are frequently used to generate topographic layers called Triangulated Irregular Networks (TINs), which represent terrain in vector-based systems.
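    A minimal sketch of the point-to-raster step, assuming invented (x, y, z) returns: each point is binned into a grid cell, and the highest elevation per cell is kept (roughly a first-return surface). Production tools interpolate and filter far more carefully than this.

```python
def rasterize_max(points, cell_size, x_min, y_min, ncols, nrows, nodata=-9999.0):
    """Bin LIDAR (x, y, z) points into a raster grid, keeping the highest
    elevation that falls in each cell; empty cells keep the nodata value."""
    grid = [[nodata] * ncols for _ in range(nrows)]
    for x, y, z in points:
        col = int((x - x_min) / cell_size)   # which column the point lands in
        row = int((y - y_min) / cell_size)   # which row the point lands in
        if 0 <= col < ncols and 0 <= row < nrows:
            grid[row][col] = max(grid[row][col], z)
    return grid

points = [(0.2, 0.3, 101.0), (0.7, 0.1, 115.5),  # two returns in cell (0, 0)
          (1.4, 0.6, 98.2)]                      # one return in cell (0, 1)
grid = rasterize_max(points, cell_size=1.0, x_min=0.0, y_min=0.0, ncols=2, nrows=2)
print(grid[0])  # [115.5, 98.2]
```

Keeping the maximum z per cell is what lets LIDAR-derived rasters capture building tops and canopy heights rather than the bare ground.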

    Conversion of Spatial Data from Non-GIS Programs is a common way to acquire data. Besides GIS, other computer programs display, create, and edit vector spatial data. A brief list includes Computer Aided Design (CAD) systems (like AutoCAD, MicroStation, and ArchiCAD) and vector drawing programs (like Adobe Illustrator, CorelDRAW, and OpenOffice Draw). The conversion process varies based on the format of the dataset and the import/export capabilities of both the host software and the GIS program. Capturing and importing CAD and drawing data into a vector-based GIS often involves converting the digital data from the host format into the desired GIS format or into a format that your GIS can easily read. Sometimes the conversion process is that easy. Sometimes it’s not; the data may need to be exported to a “third party” format that both programs read. AutoCAD files are an example: they usually need to be saved in DXF format within the CAD software before many GIS programs can read them.

    Remote Sensing imagery is an important source of raster data. This section briefly describes satellite remote sensing, digital image processing, and the conversion of remotely-sensed imagery into GIS.

    The word “remote” means “from a distance.” Sensing, in this case, means “to record.” So a basic definition of remote sensing is to record from a distance.

    To understand remote sensing, we need to understand some important core concepts. First, all features on the Earth’s surface absorb and reflect the sun’s radiant energy, and they also emit energy of their own. The amount and type of radiation that is emitted and reflected by the Earth’s features depends on the properties of those features (see Figure 2.16).


    Figure 2.16: Bands of light, including infrared, red, green, and blue, are emitted from the sun. A green leaf absorbs the red and blue bands but largely reflects the infrared and green bands, thus appearing green to our eyes.

    What our eyes see is the reflected radiation within a very small part of the electromagnetic spectrum (see Figure 2.17). We see the blue, green, and red bands of light, known as the visible spectrum, but other wavelengths are present that we do not sense. Satellites, however, have sensors that record not only the visible spectrum but also infrared, near-infrared, and thermal-infrared bands. Satellite sensors vary by maker, and so different satellites record different wavelengths. The precise bands (wavelengths) a satellite captures are referred to as its spectral resolution.


    Figure 2.17: The electromagnetic spectrum with the visible waves highlighted. Primary image is by Louis Keiner, Coastal Carolina University. See licensing at http://commons.wikimedia.org/wiki/File:Electromagnetic-Spectrum.png.

    Through telemetry, satellites transmit the recorded wavelengths to ground stations as remotely sensed data files, called images, which measure the reflected and emitted electromagnetic energy from the Earth’s features. The images are raw, meaning they are unprocessed, and they need to be digitally enhanced and combined to highlight particular features (land use, climate changes, agricultural productivity, and environmental properties).

    Like any raster image, remote sensing images consist of pixels. Each pixel in a data file records a particular wavelength (band of light) of the electromagnetic spectrum for a specific chunk of the Earth’s surface. The pixel size is the image’s spatial resolution; it differs from spectral resolution in that it describes the ground area each pixel covers rather than a particular wavelength or band of light recorded by the satellite. Like spectral resolution, spatial resolution differs from satellite to satellite.

    Different types of features emit and reflect back different intensities of each wavelength of light, so remote sensing images are combined and manipulated to highlight specific features. This procedure of combining images to isolate features is called digital image processing. Picking the proper images comes with experience and knowledge, but it is based on the feature’s spectral signature, which depicts the percent of each band of light that is emitted and reflected back by the feature. A spectral signature acts as a feature’s unique fingerprint.
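    A classic example of exploiting spectral signatures is the Normalized Difference Vegetation Index (NDVI), which contrasts the near-infrared and red bands: healthy vegetation reflects strongly in near-infrared and absorbs red, so NDVI approaches +1 over dense plant cover. A minimal Python sketch with invented reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    nir and red are reflectance values (0-1) in the near-infrared
    and red bands; the result ranges from -1 to +1."""
    return (nir - red) / (nir + red)

# Invented reflectance values for two pixels:
print(round(ndvi(nir=0.50, red=0.08), 2))  # 0.72 -> vegetated
print(round(ndvi(nir=0.20, red=0.18), 2))  # 0.05 -> bare soil or built-up
```

In real digital image processing this ratio is computed for every pixel of the image, turning two raw bands into a single layer that isolates vegetation.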

    Digital image processing is done in one of many specialized remote sensing software packages like ERDAS Imagine or ER Mapper (though some raster-based GIS programs, like Idrisi, are capable of many image-processing functions). Once processed, the images can be entered into your GIS. Some GIS programs read some of the major remote sensing image formats, but you may need to export the image to a format your GIS program reads. The processed images are usually used in raster-based GIS as a continuous data surface, such as climate data. In vector-based systems, the images can be used simply as a reference background image or as a base over which to trace vector features (a process called heads-up digitizing, described in the next section).

    Converting Non-Digital Data

    Existing, hard-copy maps and aerial photographs (physical paper documents) are a major source of spatial data for GIS. Different processes, including digitizing, scanning, and “heads up” digitizing, exist to input these hard-copy sources into GIS. For most projects, you will need to capture both the spatial location of the feature and some of its attributes.

    Data input is usually the major bottleneck in the development of a GIS database, and converting hard-copy data from maps, aerial photographs, printed reports, and field notebooks is often the least desirable option because it is tedious and time-consuming. Still, it is a way of ensuring a certain level of accuracy and precision for your project.

    Scanning is a popular way to convert hard-copy maps and aerial photographs into digital images. The resultant scanned image is a raster file, arranged as an array of pixels in columns and rows. Scanners capture what is on the original document by assigning a color or grayscale value to each pixel in the array. The pixel value is based on the intensity of the color or gray shade on the original document.

    Scanners come in a range of types, sizes, and degrees of sophistication. Scanner types include flatbed, sheet-fed, drum, and video. Flatbed (or desktop) scanners are the most common and consist of a glass board where you lay the documents you want to scan. They work like a copier, but the output is saved as an image file. Usually, they scan areas no larger than a legal-sized document, but a couple of brands make flatbeds up to 24" x 36".

    For hard-copy documents larger than 24" x 36", you can choose a sheet-fed (see Figure 2.18), drum, or video scanner. Sheet-fed scanners work like a fax machine: the document moves past a scanning head, and the image is captured. They scan only loose sheets, and the image is sometimes distorted if the document is not handled properly, though a few varieties can handle rigid maps on poster board. Drum scanners are a great alternative due to their size and accuracy, but they are expensive, which generally limits their use to large commercial firms and government agencies. You feed maps into a drum scanner, and its rotating drum systematically scans the data. Video scanners use a high-resolution camera system to scan portions of the map incrementally. They have become an alternative to drum scanners because they are cheaper and their accuracy is improving (still, there are geometric distortions and uneven brightness values across their scanned images). They are, however, extremely fast; most scans from a video scanner take about a second.


    Figure 2.18: Wide-format, sheet-fed scanner.

    For many GIS applications, you also need to consider the scanner’s resolution and its bit depth. Resolution is the level of detail the scanner reads. Scanners support resolutions from 72 to 1200 dpi (dots per inch), but most scanning for GIS applications is done at resolutions between 200 and 600 dpi. Higher resolutions create higher quality images, but they also create substantially larger file sizes. Also consider that, at the same resolution, color images are larger than grayscale images, which in turn are larger than black-and-white images.

    Bit depth is the number of bits used to represent each pixel. The greater the scanner’s bit depth, the more colors and shades of gray that can be represented in your digital image. For example, a 24-bit color scanner can represent 2 to the 24th power or 16.7 million colors. Greater bit depth, however, creates larger files.
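    The combined effect of resolution and bit depth is easy to estimate: an uncompressed scan needs (width x dpi) x (height x dpi) pixels at bit depth / 8 bytes per pixel. A quick Python sketch for a 24" x 36" map:

```python
def scan_size_mb(width_in, height_in, dpi, bit_depth):
    """Approximate uncompressed file size of a scanned image, in megabytes."""
    pixels = (width_in * dpi) * (height_in * dpi)   # total pixel count
    return pixels * bit_depth / 8 / 1_000_000       # bytes -> MB

# A 24" x 36" map scanned at 300 dpi:
print(round(scan_size_mb(24, 36, 300, 1), 1))    # 9.7   -> 1-bit black and white
print(round(scan_size_mb(24, 36, 300, 8), 1))    # 77.8  -> 8-bit grayscale
print(round(scan_size_mb(24, 36, 300, 24), 1))   # 233.3 -> 24-bit color
```

The same map at the same resolution grows by a factor of 24 between black-and-white and full color, which is why choosing an appropriate bit depth matters as much as choosing an appropriate dpi.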

    Before scanning any document, remember to clean it: smooth out folds and tape any tears. This reduces spatial inaccuracies. Also, erase any marks you do not want to appear on the resultant image, and make sure that any notes and highlighted features you do want to see on the scanned image are legible on the hard-copy document.

    Once scanned, remember that you have an essentially “dumb” image; its pixels do not have meaningful values (just a gray or color value taken from the scanned document), which limits its use to being a pretty picture. Although scanning technologies that generate vector features directly from scanned images (called auto-vectorization) are being developed, they are not yet mature. Scanning, however, remains beneficial because of a second input process called “heads-up” digitizing.

    Heads Up Digitizing – After you create a scanned image, you georeference it (see Chapter 3) and use it as a background image within your vector system. Then, with the image at its proper geographic location, you trace the features that appear on the scanned image. This process, called “heads-up” digitizing (or on-screen digitizing), is like manual digitizing (described below) but without a physical digitizing board. Instead, you see on the screen a scanned image in its correct geographic position, and, with your mouse, you trace the position of features into new or existing point, line, and polygon layers. In the example below, building footprint polygons are traced over an image to create a vector polygon layer of buildings.


    Figure 2.19: Heads-up digitizing. By eye, trace the features on your georeferenced image.

    After you trace each feature, enter the feature’s key identifier into the layer’s attribute table (see Figure 2.20). How you enter key identifiers depends on whether the feature’s layer is brand new or whether you are updating an existing feature layer. If you are updating an existing layer, the field will already be present, and you simply need to fill in the feature’s value in the appropriate field. If you are creating a new layer, you must define at least the key identifier’s attribute field before digitizing. This is important because each feature needs to be identified uniquely. The rest of the attributes are usually created in a separate file (perhaps in Access or Excel) and “joined” to the feature’s minimal attribute table after digitizing is complete. The joining process is covered in the second half of Chapter 4.
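    The digitize-now, join-later workflow can be sketched with a GeoJSON-style feature in Python. The building ID, coordinates, and attribute values below are invented for illustration:

```python
# A digitized building footprint as a minimal GeoJSON-style feature:
# the traced geometry, plus only the key identifier as an attribute.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[0, 0], [20, 0], [20, 15], [0, 15], [0, 0]]],
    },
    "properties": {"building_id": "B-0042"},  # key identifier only
}

# The remaining attributes live in a separate table (e.g. a spreadsheet),
# joined to the feature later on the key identifier:
attributes = {"B-0042": {"use": "library", "floors": 3}}
record = {**feature["properties"], **attributes[feature["properties"]["building_id"]]}
print(record)  # {'building_id': 'B-0042', 'use': 'library', 'floors': 3}
```

Because the join happens on the key identifier, each traced feature must carry a unique value for that field, which is exactly why the identifier is entered during digitizing while everything else can wait.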


    Figure 2.20: Entering key (unique) identifiers into the new layer’s attribute table.

    The procedure for digitizing features is similar to that used in “manual” digitizing (see below). Due to the increased popularity of heads-up digitizing, manual digitizing is less frequently used, but for certain applications it is an important way to input map data.

    Digitizing involves tracing by hand the extent of features directly from a hard-copy map or photograph that is mounted onto a digitizer (see Figure 2.21), a large table or board with an embedded electronic grid that senses the position of a pointer called a puck (a mouse-like device).


    Figure 2.21: Digitizer with puck.

    All GIS packages have a specific procedure for manual digitizing. Generally, it involves three steps: mounting the map on the digitizer, establishing control points, and adding map features.

    In the first step, you mount the map in the middle of the digitizer’s surface with its corners held by masking tape. Smooth out any map creases, but try to obtain maps that are in good shape, because creases and tears introduce spatial errors. If you are working with the same map over a series of days, you need to remount the map periodically, because humidity can expand paper maps and make them sag slightly.

    Step two involves establishing your map’s control points. Although software packages vary, establishing control points involves a routine where you identify at least four points that are common to both the mounted map on the digitizer and the map on your screen. To minimize error, pick control points that surround the area to be digitized, plus one within it. Then point to the four control points on the digitizer’s map with the puck, and, in the same order, identify the same four locations on the screen’s map with the mouse. This establishes a spatial connection between the map on the digitizer and the map on the screen.
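    Mathematically, establishing control points amounts to fitting a coordinate transformation from digitizer coordinates to map coordinates. The Python sketch below fits a simple affine transformation exactly through three invented control-point pairs; real GIS packages typically use four or more points with a least-squares fit so they can also report the registration error:

```python
def solve3(m, v):
    """Solve a 3x3 linear system m * s = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    sols = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]        # replace one column with the RHS
        sols.append(det(mc) / d)
    return sols

def affine_from_control_points(src, dst):
    """Fit X = a*x + b*y + c, Y = d*x + e*y + f through three
    control-point pairs (digitizer coords -> map coords)."""
    m = [[x, y, 1.0] for x, y in src]
    a, b, c = solve3(m, [X for X, _ in dst])
    d, e, f = solve3(m, [Y for _, Y in dst])
    return lambda x, y: (a * x + b * y + c, d * x + e * y + f)

# Three control points read with the puck (inches on the digitizer)
# and their known map coordinates (meters) -- invented values:
src = [(1.0, 1.0), (11.0, 1.0), (1.0, 9.0)]
dst = [(500.0, 4000.0), (1500.0, 4000.0), (500.0, 4800.0)]
to_map = affine_from_control_points(src, dst)
print(to_map(6.0, 5.0))  # approximately (1000.0, 4400.0)
```

Once this transformation is established, every subsequent puck position can be converted on the fly into map coordinates, which is exactly the "spatial connection" the registration step creates.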

    Once this connection is established, you can trace the map features on the digitizer (Step 3). Take the puck and position it directly over one of the feature’s vertices (corners). Various buttons on the puck allow you to create vertices, delete vertices, and close polygons. As in “heads-up” digitizing, you will want to enter at least the feature’s key identifier as you digitize.

    GPS digitizing involves using a GPS receiver to record feature data in the field. Using GPS for point locations (waypoints) was described above, but mapping-grade GPS units, like Trimble’s GeoXT, are capable of recording the nodes and locations of points, lines, and polygons by following a feature’s extent and registering a waypoint at each of its vertices.

    Pilot Project

    A pilot project is a rehearsal. Here you collect a small subset of the GIS datasets you require for the larger project. Then you input the data into the GIS, preprocess the datasets, analyze them, and create some output. When something goes wrong, you tweak the project’s parameters until the process works smoothly.

    Pilot projects give you the opportunity to “ground truth” your secondary data. Remember, it is foolish to believe these datasets are without flaws. You need to ground truth your GIS data to ensure that the datasets are representative of what’s on the ground. This is done by traveling to your study area and using your eyes to verify your datasets. Normally, you do not need to generate statistics or check every feature; checking an informal sample is good enough. Likewise, if any of your datasets are old or some features are oddly shaped, this is a good opportunity to check them too. If ground truthing is difficult due to access or the aggregation of attributes (as in census data), cross-validate your GIS data with other sources like aerial photography, satellite imagery, assessor data, or demographic figures.


    This page titled 2.4: Phase 3- Data Capture is shared under a CC BY-SA 3.0 license and was authored, remixed, and/or curated via source content that was edited to the style and standards of the LibreTexts platform.