Tuesday, September 9, 2025

Special Topics in GIS - Module 2 - Data Quality - Standards

The second module in Special Topics in GIS focused on data quality standards, with an exercise on determining the horizontal positional accuracy of two road networks in the city of Albuquerque, New Mexico. Our findings were to be reported in accordance with the National Standard for Spatial Data Accuracy (NSSDA). We were provided two polyline shapefiles representing road centerlines, one from the city of Albuquerque and one from StreetMap USA, as well as a mosaic of orthophotos of the study area.

The NSSDA guidelines call for at least 20 test points within the study area, with points spaced at least one-tenth of the study area's diagonal distance apart and at least 20% of the points falling in each quadrant. The NSSDA value reports the accuracy of the data, derived from the Root Mean Square Error, at the 95% confidence level. To help meet these requirements, I used the Split tool to divide the study area into four quadrants and bookmarked each quadrant, which reduced the need to zoom in and out to check the spacing of my points.
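
Checking the spacing rule by hand is tedious, so it can also be scripted. Below is a minimal sketch in Python, assuming the test points have already been read into a list of projected X/Y coordinates; the coordinate values shown are placeholders, not my actual points:

    import math

    # Placeholder test-point coordinates in projected feet; in practice these
    # would be read from the exported attribute table (20+ points).
    points = [(1513200.5, 1486310.2), (1518450.1, 1490875.9), (1509875.3, 1492440.8)]

    # Diagonal of the study area's bounding box.
    xs, ys = zip(*points)
    diagonal = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    min_spacing = diagonal / 10  # NSSDA guideline: spacing >= 1/10 of the diagonal

    # Flag any pair of points that sits closer than the minimum spacing.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.hypot(points[i][0] - points[j][0], points[i][1] - points[j][1])
            if d < min_spacing:
                print(f"Points {i} and {j} are {d:.1f} ft apart (minimum {min_spacing:.1f} ft)")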

The next step was to find "good" intersections that contained data from both the Albuquerque and StreetMap USA datasets. After identifying my "good" intersections, it was time to determine what I judged to be the "true" reference points at those intersections using the orthophotos. Below is a screenshot showing my reference, or "true", locations of the intersections according to the orthophotos.




The next step was to use the Add XY Coordinates tool to determine X and Y coordinates for the points in all three datasets. I then exported the three attribute tables to Excel files using the Table To Excel tool. Following the NSSDA horizontal accuracy statistic worksheet, the X and Y coordinates of the independent (true) points from the orthophotos were compared to those of the test datasets. Calculations were then completed to determine the accuracy statistics for the two test datasets. Below are the results from my worksheet for the StreetMap USA NSSDA value calculations:


The final column of the table shows the calculated squared error distance for each point. These values are summed and then averaged, and the square root of that average gives the Root Mean Square Error (RMSE). The NSSDA horizontal accuracy is then calculated by multiplying the RMSE by 1.7308. A short sketch of this calculation follows, and my final accuracy statements for the two datasets appear after it.
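
To make the worksheet arithmetic concrete, here is a minimal sketch of the same calculation in Python. The coordinate pairs are placeholders, not my actual worksheet values:

    import math

    # Placeholder (x_test, y_test, x_ref, y_ref) tuples; one row per test point.
    pairs = [
        (1513200.5, 1486310.2, 1513198.0, 1486307.5),
        (1518450.1, 1490875.9, 1518455.6, 1490871.3),
        # ... remaining test points
    ]

    # Squared error distance for each point: (x_t - x_r)^2 + (y_t - y_r)^2
    sq_errors = [(xt - xr) ** 2 + (yt - yr) ** 2 for xt, yt, xr, yr in pairs]

    mean_sq_error = sum(sq_errors) / len(sq_errors)   # average of squared errors
    rmse = math.sqrt(mean_sq_error)                   # Root Mean Square Error
    nssda = rmse * 1.7308                             # NSSDA horizontal accuracy at 95%

    print(f"RMSE: {rmse:.2f} ft  NSSDA horizontal accuracy: {nssda:.2f} ft")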

Tested 13.01 ft (3.96 m) horizontal accuracy at the 95% confidence level for the Albuquerque Streets dataset.

Using the National Standard for Spatial Data Accuracy, the Albuquerque Streets dataset tested to 13.01 ft (3.96 m) horizontal accuracy at the 95% confidence level.

Tested 312.95 ft (95.38 m) horizontal accuracy at the 95% confidence level for the StreetMap USA dataset.

Using the National Standard for Spatial Data Accuracy, the StreetMap USA dataset tested to 312.95 ft (95.38 m) horizontal accuracy at the 95% confidence level.



Tuesday, September 2, 2025

Special Topics in GIS - Module 1 - Calculating Metrics for Spatial Data Quality

The first module for Special Topics in GIS covered aspects of spatial data quality, with a focus on defining and understanding the difference between precision and accuracy. According to the International Organization for Standardization's (ISO) document 3534-1, accuracy can be defined as the "closeness of agreement between a test result and the accepted reference value". The same document defines precision as the "closeness of agreement between independent test results obtained under stipulated conditions" (ISO, 2007).

In Part A of the lab assignment, the precision and accuracy metrics of provided data were determined. For precision, I calculated the distance (in meters) that accounts for 68% of the repeated observations. For accuracy, the distance from the average waypoint to an accepted reference point was measured. My map product below shows the projected waypoints, the average location, and circular buffers corresponding to the 50%, 68%, and 95% precision estimates; a "true" reference point was later added to determine the horizontal distance to the established average waypoint location. A sketch of how such percentile radii can be derived appears just below, before the map.
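
This is a minimal sketch of that derivation, assuming the repeated observations are already projected to meters (the waypoint values are placeholders): compute each waypoint's distance from the average location, then take percentiles of those distances.

    import numpy as np

    # Hypothetical repeated GPS observations of a single location (projected meters).
    waypoints = np.array([
        [672315.2, 3456108.7],
        [672318.9, 3456104.1],
        [672312.4, 3456111.5],
        # ... remaining observations
    ])

    mean_xy = waypoints.mean(axis=0)            # average waypoint location
    dx, dy = (waypoints - mean_xy).T
    dists = np.hypot(dx, dy)                    # distance of each point from the mean

    # Radii containing 50%, 68%, and 95% of the repeated observations.
    for pct in (50, 68, 95):
        print(f"{pct}% precision radius: {np.percentile(dists, pct):.2f} m")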



Horizontal accuracy refers to how close a measured GPS position (or the mean of many positions) is to the true location on the ground. It is typically reported as the distance between the GPS-derived position and a known reference point. 

Horizontal precision, on the other hand, describes how tightly repeated GPS measurements cluster together, regardless of whether they are centered on the true location. Precision is often expressed as the radius within which a certain percentage of positions (e.g., 68% or 95%) fall.

My horizontal precision (68%) was 4.5 m and my horizontal accuracy was 3.25 m, a difference of 1.25 m. I would say this is not a significant difference because the accuracy value sits within the 68% precision radius. For vertical accuracy, my mean waypoint elevation came in at 28.54 m, while the mean elevation of the "true" reference point was 22.58 m. This is roughly a 5.96 m difference, which I would think is significant, at least in some cases.


In Part B of the lab assignment, the RMSE metric was calculated, along with a cumulative distribution function (CDF). The CDF describes the probability of a random variable taking on a given value or less, showing the complete error distribution instead of selected metrics. For this portion we were provided another dataset, which we analyzed in Excel. Here we calculated the minimum, maximum, mean, median, root mean square error, and the 68th, 90th, and 95th percentiles. The final portion of the lab consisted of displaying the dataset as a CDF graph, which appears below the following sketch.
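
The same statistics and graph can be reproduced outside of Excel. Here is a minimal sketch with numpy and matplotlib, using placeholder error values rather than the lab's dataset:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical positional error distances (meters).
    errors = np.array([1.2, 2.8, 3.1, 0.9, 4.4, 2.2, 5.6, 1.7])

    print("min:", errors.min(), "max:", errors.max())
    print("mean:", errors.mean(), "median:", np.median(errors))
    print("RMSE:", np.sqrt(np.mean(errors ** 2)))
    for pct in (68, 90, 95):
        print(f"{pct}th percentile:", np.percentile(errors, pct))

    # Empirical CDF: sort the errors and plot the cumulative proportion.
    sorted_err = np.sort(errors)
    cum_prop = np.arange(1, len(sorted_err) + 1) / len(sorted_err)
    plt.plot(sorted_err, cum_prop)
    plt.xlabel("Error distance (m)")
    plt.ylabel("Cumulative probability")
    plt.title("CDF of positional error")
    plt.show()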



Overall, I learned a lot in this lab and had the opportunity to brush up on my Excel skills, which I had not used for a while. I am looking forward to building upon what I learned in this module.



Wednesday, August 7, 2024

Applications in GIS - Module 6 - Suitability & Least Cost Analysis

In Module 6, we learned about Suitability and Least Cost Path Analysis. We were introduced to performing suitability analysis using both vector and raster analysis tools. We prepared our data for suitability analysis using different approaches, such as Boolean and scoring methods, and adjusted specific parameters using scoring and weighting. Additionally, we performed least-cost path and corridor analysis using cost surfaces.

In Scenario 2, our task was to perform a suitability analysis for a land developer. We analyzed several variables, including proximity to roads, slope, proximity to rivers, and land cover type. These variables were reclassified and ranked based on the value of each cell in the raster, and the raster layers were then combined with an overlay tool. Finally, we were required to create a map layout comparing the results from the two alternatives. A rough sketch of the raster workflow follows, and below it is my final product.
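
Here is that raster workflow sketched as Python geoprocessing calls. The layer names, class breaks, and weights are hypothetical stand-ins, not the lab's actual parameters, and the Spatial Analyst extension is assumed:

    import arcpy
    from arcpy.sa import Reclassify, RemapRange

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\suitability.gdb"  # hypothetical workspace

    # Rank each criterion on a 1-5 suitability scale (hypothetical class breaks).
    slope_rank = Reclassify("slope", "VALUE",
                            RemapRange([[0, 2, 5], [2, 5, 4], [5, 10, 3],
                                        [10, 20, 2], [20, 90, 1]]))
    road_rank = Reclassify("road_distance", "VALUE",
                           RemapRange([[0, 500, 5], [500, 1000, 4], [1000, 2000, 3],
                                       [2000, 4000, 2], [4000, 100000, 1]]))

    # Weighted overlay via map algebra (hypothetical weights summing to 1).
    suitability = 0.6 * slope_rank + 0.4 * road_rank
    suitability.save("suitability_weighted")

Swapping in equal weights would produce the equal-weight alternative, so both map layouts can come from the same script.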




Saturday, August 3, 2024

Applications in GIS - Module 5 - Damage Assessment

In this module, we delved deeper into the impact of Hurricane Sandy, focusing on damage assessment. To complete this task, we learned how to create raster mosaics of pre- and post-hurricane imagery, create attribute domains to categorize damage, and decide how best to summarize the data from the damage assessment.

To begin, we used meteorological data to create a storm track map showing Sandy's progression from south of the Caribbean islands on October 22, 2012 until it made landfall on the northeast coast of the United States as a Category 1 hurricane on October 29, 2012.



To complete our damage assessment, we used the pre- and post-storm imagery to assess the damage to structures in our study area. Each property in the study area had to be identified and categorized according to the attribute domains we created, with the domain values describing the level of damage each property appeared to have sustained based on the imagery. After assessing the damage and symbolizing the data, we were tasked with creating a polyline feature class representing the pre-storm coastline. I then created a multi-ring buffer with 100 m, 200 m, and 300 m rings, which was used to summarize the data and help identify patterns. A rough sketch of that step follows, and below it are my results:
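
As a rough sketch of the buffering and summarizing step in Python (the workspace path and field names are hypothetical):

    import arcpy

    arcpy.env.workspace = r"C:\data\damage_assessment.gdb"  # hypothetical

    # Non-overlapping 100 m, 200 m, and 300 m rings around the pre-storm coastline.
    arcpy.analysis.MultipleRingBuffer("coastline_pre_storm", "coastline_rings",
                                      [100, 200, 300], "Meters", "distance", "ALL")

    # Tag each structure with the ring it falls in, then count structures
    # by distance band and damage category (hypothetical field names).
    arcpy.analysis.SpatialJoin("structures", "coastline_rings", "structures_banded")
    arcpy.analysis.Statistics("structures_banded", "damage_by_band",
                              [["OBJECTID", "COUNT"]], ["distance", "DamageType"])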








Sunday, July 28, 2024

Applications in GIS - Module 4 - Coastal Flooding

In this week's module for Applications in GIS, we were tasked with conducting three different coastal flood analyses. Using digital elevation models, we delineated coastal flood zones. These analyses utilized both raster and vector data to achieve the desired results. This lab proved to be very challenging for me, as I had some issues early on that prevented me from completing all the steps.

The initial analysis consisted of creating a raster of coastal New Jersey after Hurricane Sandy showing erosion and buildup. Below is my map for this analysis, where the darkest red shading represents the greatest erosion and the darkest blue represents the areas of the most buildup.


For the final portion of the lab, we were tasked with analyzing storm surge in Collier County, Florida. Two DEMs were used, one from the USGS and another derived from LiDAR, to create 1-meter storm surge models. These were then used to determine the number of buildings that would be affected by a 1-meter storm surge. Unfortunately, I was unable to complete this objective in its entirety; I did not understand the Region Group tool, so I skipped it, and as a result my data is not as accurate as it should be. A sketch of how I understand that step is meant to work follows, and below it is a map showing my two storm surge models.
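
For reference, here is a hedged sketch of that step, including the Region Group portion I skipped. Region Group labels each contiguous patch of flooded cells, which allows isolated inland depressions, low spots the surge could never actually reach, to be filtered out. All names and the cell-count threshold are hypothetical:

    import arcpy
    from arcpy.sa import Con, Raster, RegionGroup

    arcpy.CheckOutExtension("Spatial")

    dem = Raster(r"C:\data\collier_dem")  # hypothetical DEM path

    # Cells at or below 1 m elevation are potentially inundated.
    surge = Con(dem <= 1, 1)

    # Label contiguous flooded regions, then keep only the large connected
    # ones; small isolated depressions are dropped.
    regions = RegionGroup(surge, "EIGHT")
    surge_connected = Con(regions, 1, "", "COUNT > 10000")
    surge_connected.save(r"C:\data\surge_1m_connected")

    # Convert to polygons and select the buildings they intersect
    # (assumes "buildings" is a layer in the active map).
    arcpy.conversion.RasterToPolygon(surge_connected, "surge_poly", "NO_SIMPLIFY")
    arcpy.management.SelectLayerByLocation("buildings", "INTERSECT", "surge_poly")
    print(arcpy.management.GetCount("buildings"))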


Overall, this was a challenging module for me because I had trouble understanding some of the instructions beginning with analysis 2. After getting behind, I struggled to regroup after becoming frustrated. That said, I was able to get back on track and make some progress with analysis 3, even though I could not complete it in its entirety.

Thursday, July 18, 2024

Applications in GIS - Module 3 - Visibility Analysis

In this week's module for Applications in GIS, we were introduced to numerous 3D visualization techniques in ArcGIS Pro. In ArcGIS Pro, you can visualize your data in 3D by using a 3D scene. 3D scenes can be a powerful way to enhance the visualization of your data, adding realistic environmental effects that make it more attractive to your audience.

Our assignment for this week was to complete a series of ESRI training courses that introduced us to numerous 3D data visualization techniques. We were assigned the following courses:


1. Introduction to 3D Visualization

2. Performing Line of Sight Analysis

3. Performing Viewshed Analysis in ArcGIS Pro

4. Sharing 3D Content Using Scene Layer Packages 


In the first course, Introduction to 3D Visualization, we completed four exercises that taught us about visualizing 3D data in three different views: a map view, a local scene view, or a global scene view. We learned why and when the different views should be utilized to most effectively display data. We also learned how to determine elevation types, cartographic offset, vertical exaggeration, and extrusion types and methods. The results of a couple of the exercises can be viewed below:


In the above image, we applied realistic enhancements, such as illumination effects, enhanced tree symbology, and detailed buildings, to local and global scenes of downtown San Diego, California.


In the above image, we extruded 2D features by attribute to create a 3D scene. The building parcels shown are extruded not by actual height but by property value, with yellow representing residential properties, pink commercial properties, and the two gray parcels public housing.


In the second course, Performing Line of Sight Analysis, we performed a line of sight analysis using a DEM of Philadelphia, Pennsylvania, a parade route, and locations for observers to determine which stretches along the route could be observed by security personnel from two observation points. Using ArcGIS 3D Analyst tools, we were able to determine the optimal lines of sight, comparing sight lines greater than 1,100 ft and 600 ft in length. A rough sketch of the geoprocessing calls follows, and below it is a screenshot taken during the analysis.
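
This is a hedged sketch of the core 3D Analyst calls behind that exercise; the course itself works through the tool dialogs, and the dataset names here are hypothetical:

    import arcpy

    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\data\philadelphia.gdb"  # hypothetical

    # Build sight lines from each observer point to the parade route features.
    arcpy.ddd.ConstructSightLines("observers", "parade_route", "sight_lines")

    # Test each sight line against the elevation surface; the output segments
    # carry a visibility code separating visible from obstructed stretches.
    arcpy.ddd.LineOfSight("philly_dem", "sight_lines", "los_results")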


In the third course, Performing Viewshed Analysis in ArcGIS Pro, we created a local scene depicting the hypothetical placement of a new lighting system for a campground in eastern New York. Using the Viewshed tool, we modeled the range of visibility with lights placed 3 meters and 10 meters above the surface. Below are the results of the analysis.

The above image shows the results of the 3-meter analysis.

The above image shows the results of the 10-meter analysis.
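
As a rough sketch, the same analysis can be driven from Python with Spatial Analyst. The classic Viewshed tool reads each observer's height above the surface from an OFFSETA field on the observer points; the names and paths here are hypothetical:

    import arcpy
    from arcpy.sa import Viewshed

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.workspace = r"C:\data\campground.gdb"  # hypothetical

    # OFFSETA holds the height of each proposed light above the surface;
    # set it to 3 for the first run, then 10 for the second.
    arcpy.management.AddField("light_locations", "OFFSETA", "DOUBLE")
    arcpy.management.CalculateField("light_locations", "OFFSETA", "3")

    # Cells visible from at least one light at the given offset.
    vis = Viewshed("campground_dem", "light_locations")
    vis.save("visibility_3m")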


In the final course, Sharing 3D Content Using Scene Layer Packages, we learned how to author a 3D scene using a set of data from the city of Portland, Oregon. Using methods similar to those in the previous courses, we converted 2D buildings and trees to 3D features. Our extruded building polygons were converted to multipatch features, and the trees were converted to 3D symbology. Finally, we created a scene layer package from our multipatch feature data (the buildings) and published it as a hosted scene layer using ArcGIS Online. Below are two screenshots from the exercises.

The image above depicts the city of Portland global scene with multipatch building features and symbolized trees.

The image above depicts a scene from the published hosted scene layer from ArcGIS Online.


Overall, I really enjoyed these courses and learned many new tools and methods that have given me a base knowledge of the 3D visualization capabilities of ArcGIS Pro.





Thursday, July 11, 2024

Applications in GIS - Module 2 - Forestry and Lidar

In the second module for Applications in GIS, we took a more in-depth look at LiDAR. LiDAR stands for "Light Detection and Ranging", and while it is a remote sensing method traditionally used in forestry science, I was first introduced to it while working in the field of archaeology. This week's module focused on how LiDAR is applied in forest management and earth science in general.

In this week's lab, we worked with LiDAR data from Shenandoah National Park, Virginia, to examine tree height and tree canopy density. The LiDAR data we used was obtained from the USGS in the form of an .las file and then converted to a digital elevation model (DEM) using the LAS Dataset to Raster tool in ArcGIS Pro. Next, we created a forest height layer from the DEM and made a map with an accompanying chart showing the tree heights. To finish, we created a canopy density layer using the LAS to MultiPoint tool, the Point to Raster tool, the Is Null tool, the Con tool, the Plus tool, and finally the Divide tool.
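
Here is a hedged sketch of that tool chain in Python. The file names, cell size, and LAS class codes are assumptions (class 2 is the standard ground code; the vegetation codes used in the lab may differ):

    import arcpy
    from arcpy.sa import Con, Divide, Float, IsNull, Plus, Raster

    arcpy.CheckOutExtension("Spatial")
    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\data\shenandoah"  # hypothetical

    # Multipoint features for ground returns (class 2) and vegetation returns.
    arcpy.ddd.LASToMultipoint("points.las", "ground_pts", 1, class_code=[2])
    arcpy.ddd.LASToMultipoint("points.las", "veg_pts", 1, class_code=[3, 4, 5])

    # Point counts per cell (20 m cells, hypothetical).
    arcpy.conversion.PointToRaster("ground_pts", "OBJECTID", "ground_cnt", "COUNT", "NONE", 20)
    arcpy.conversion.PointToRaster("veg_pts", "OBJECTID", "veg_cnt", "COUNT", "NONE", 20)

    # Is Null / Con: replace NoData counts with 0 so empty cells participate.
    ground0 = Con(IsNull(Raster("ground_cnt")), 0, Raster("ground_cnt"))
    veg0 = Con(IsNull(Raster("veg_cnt")), 0, Raster("veg_cnt"))

    # Plus, then Divide: canopy density = vegetation returns / all returns.
    total = Plus(veg0, ground0)
    density = Divide(Float(veg0), Con(total == 0, 1, total))  # avoid divide by zero
    density.save("canopy_density")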

Below are the maps I created from the data generated over the course of the lab.













