The first module for Special Topics in GIS covered aspects of spatial data quality, with a focus on defining and understanding the difference between precision and accuracy. According to the International Organization for Standardization's (ISO) document 3534-1, accuracy is defined as the "closeness of agreement between a test result and the accepted reference value". The same document defines precision as the "closeness of agreement between independent test results obtained under stipulated conditions" (ISO, 2007).
In Part A of the lab assignment, we determined the precision and accuracy of a provided dataset. To measure precision, we calculated the distance (in meters) that accounts for 68% of the repeated observations. To measure accuracy, we measured the distance from the average waypoint to an accepted reference point. Below is my map product showing the projected waypoints, the average location, and circular buffers corresponding to the 50%, 68%, and 95% precision estimates. A "true" reference point was later added to determine the horizontal distance to the established average waypoint location.

Horizontal accuracy refers to how close a measured GPS position (or the mean of many positions) is to the true location on the ground. It is typically reported as the distance between the GPS-derived position and a known reference point.
Horizontal precision, on the other hand, describes how tightly repeated GPS measurements cluster together, regardless of whether they are centered on the true location. Precision is often expressed as the radius within which a certain percentage of positions (e.g., 68% or 95%) fall.
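The two metrics described above can be sketched in a few lines of Python. This is a minimal illustration, not the lab's actual workflow: the waypoint coordinates and reference point below are made-up numbers, the 68% precision radius is taken as the 68th-percentile (nearest-rank) distance from the mean position, and accuracy is the distance from the mean position to the reference point.

```python
import math

# Hypothetical repeated waypoint observations (easting, northing) in meters;
# these coordinates are made up purely for illustration.
points = [(100.0, 200.0), (103.0, 198.0), (99.0, 203.0),
          (101.5, 201.0), (97.5, 199.5), (102.0, 202.5)]

# Hypothetical known "true" reference point.
reference = (102.0, 199.0)

# Mean (average) waypoint location.
mean_x = sum(p[0] for p in points) / len(points)
mean_y = sum(p[1] for p in points) / len(points)

# Horizontal precision: radius containing 68% of the points around the mean,
# here estimated as the nearest-rank 68th percentile of the distances.
dists = sorted(math.hypot(x - mean_x, y - mean_y) for x, y in points)
k = max(0, math.ceil(0.68 * len(dists)) - 1)
precision_68 = dists[k]

# Horizontal accuracy: distance from the mean waypoint to the reference point.
accuracy = math.hypot(mean_x - reference[0], mean_y - reference[1])

print(f"68% precision radius: {precision_68:.2f} m")
print(f"Horizontal accuracy:  {accuracy:.2f} m")
```

Note that the two numbers answer different questions: the precision radius ignores the reference point entirely, while the accuracy figure ignores how scattered the individual fixes are.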
My horizontal precision (68%) was 4.5 m and my horizontal accuracy was 3.25 m, a difference of 1.25 m. I would not call this a significant difference because the accuracy value sits within the 68% precision radius. For vertical accuracy, my mean waypoint elevation came in at 28.54 m, while the mean elevation of the "true" reference point was 22.58 m. This is roughly a 5.96 m difference, which I would consider significant in at least some applications.
In Part B of the lab assignment, the root mean square error (RMSE) metric was calculated, along with a cumulative distribution function (CDF). The CDF gives the probability that a random variable takes on a value less than or equal to a given value, showing the complete error distribution rather than a few selected metrics. For this portion we were provided another dataset and used Excel for the analysis, calculating the minimum, maximum, mean, median, RMSE, and the 68th, 90th, and 95th percentiles. The final portion of the lab consisted of displaying the dataset as a CDF graph, which is shown below.
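For readers who prefer code to spreadsheet formulas, the Part B summary statistics can be reproduced in Python. This is a sketch under assumptions: the error values below are invented sample data, and percentiles use the nearest-rank method (Excel's PERCENTILE.INC interpolates, so its results can differ slightly).

```python
import math
import statistics

# Hypothetical horizontal error distances in meters (made-up sample data).
errors = [1.2, 0.8, 2.5, 3.1, 1.9, 0.5, 4.2, 2.2, 1.4, 3.8]
n = len(errors)

# Root mean square error: square root of the mean of the squared errors.
rmse = math.sqrt(sum(e * e for e in errors) / n)

def percentile(data, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    s = sorted(data)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

print(f"min={min(errors):.2f}  max={max(errors):.2f}")
print(f"mean={statistics.mean(errors):.2f}  median={statistics.median(errors):.2f}")
print(f"RMSE={rmse:.2f}")
for p in (68, 90, 95):
    print(f"{p}th percentile: {percentile(errors, p):.2f}")

# Empirical CDF: fraction of observations at or below each sorted error value.
# Plotting these (error, fraction) pairs produces the CDF graph.
cdf = [(e, (i + 1) / n) for i, e in enumerate(sorted(errors))]
```

The `cdf` list is exactly what the graph displays: each sorted error paired with the cumulative fraction of observations it covers, rising from 1/n to 1.0.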
Overall, I really learned a lot in this lab and had the opportunity to brush up on my Excel skills which I have not utilized for a while. I am looking forward to building upon what I learned in this module.