The first module for Special Topics in GIS covered aspects of spatial data quality, with a focus on defining and understanding the difference between precision and accuracy. According to the International Organization for Standardization's (ISO) document 3534-1, accuracy can be defined as the "closeness of agreement between a test result and the accepted reference value". The same document defines precision as the "closeness of agreement between independent test results obtained under stipulated conditions" (ISO, 2007).
In Part A of the lab assignment, the precision and accuracy metrics of the provided data were determined. To determine precision, the horizontal distance (in meters) within which 68% of the repeated observations fall was calculated. To determine accuracy, the distance from the average waypoint location to an accepted reference point was measured. Below is my map product showing the projected waypoints, the average location, and circular buffers corresponding to the 50%, 68%, and 95% precision estimates. A "true" reference point was later added to determine the horizontal distance to the established average waypoint location.
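For anyone curious how the precision buffers could be reproduced outside of Excel, here is a minimal Python sketch. The file and column names are hypothetical, and it assumes the waypoints have already been projected into a meters-based coordinate system:

import numpy as np
import pandas as pd

# Hypothetical input: one row per repeated GPS observation,
# with projected coordinates in meters.
waypoints = pd.read_csv("waypoints_projected.csv")  # columns: easting, northing

# Average (mean) waypoint location
mean_e = waypoints["easting"].mean()
mean_n = waypoints["northing"].mean()

# Horizontal distance of each observation from the average waypoint
distances = np.hypot(waypoints["easting"] - mean_e,
                     waypoints["northing"] - mean_n)

# Precision estimates: radii containing 50%, 68%, and 95% of the observations
for pct in (50, 68, 95):
    radius = np.percentile(distances, pct)
    print(f"{pct}% precision radius: {radius:.2f} m")

Each printed radius corresponds to one of the circular buffers drawn around the average waypoint on the map.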
Horizontal accuracy refers to how close a measured GPS position (or the mean of many positions) is to the true location on the ground. It is typically reported as the distance between the GPS-derived position and a known reference point.
Horizontal precision, on the other hand, describes how tightly repeated GPS measurements cluster together, regardless of whether they are centered on the true location. Precision is often expressed as the radius within which a certain percentage of positions (e.g., 68% or 95%) fall.
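To make the distinction concrete, the accuracy side can be added by continuing the sketch above: accuracy reduces to a single distance between the average waypoint (mean_e, mean_n) and the known reference point. The reference coordinates below are placeholders, not values from the lab:

# Hypothetical "true" reference point, in the same projected coordinate system
ref_e, ref_n = 563250.0, 3424180.0  # placeholder values

# Horizontal accuracy: distance from the average waypoint to the reference point
accuracy = np.hypot(mean_e - ref_e, mean_n - ref_n)
print(f"Horizontal accuracy: {accuracy:.2f} m")

A dataset can therefore be precise (small 68% radius) yet inaccurate (large distance to the reference point), or vice versa.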
Overall, I really learned a lot in this lab and had the opportunity to brush up on my Excel skills, which I had not used in a while. I am looking forward to building upon what I learned in this module.

