Module 4: Crime Analysis
The purpose of this week's lab was to explore three selected hotspot mapping techniques for crime analysis, applied to 2017 homicides in the Chicago area. The results of each technique were then compared against 2018 homicide data to assess each technique's reliability for predicting crime.
The first technique was Grid Overlay Hotspot Analysis. The goal was to determine the number of 2017 homicides in each grid cell and select the cells with the highest counts. This was accomplished by first performing a spatial join between the 1/2-mile grid cells and the 2017 homicide data, which added a field representing the number of homicides in each grid cell. I then used the Select By Attributes tool to select all counts greater than 0 and saved the selection as a separate feature class. Next, I manually selected the top 20% of grid cells by homicide count in the attribute table (the number of cells to select was calculated by dividing the total cell count by 5) and saved that selection as a separate feature class. To dissolve this feature class into a single hotspot polygon, I added a new field, populated it with a constant value of 1 using the Calculate Field tool, and then ran the Dissolve tool on that field. Below is the resulting map from this analysis.
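For reference, here is a minimal ArcPy sketch of the same grid overlay workflow. The lab itself was done interactively in ArcGIS Pro, so the workspace path and dataset names (grid_half_mile, homicides_2017, and so on) are hypothetical placeholders:

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis.gdb"  # hypothetical workspace
arcpy.env.overwriteOutput = True

# Spatial join: count 2017 homicides per 1/2-mile grid cell (adds Join_Count)
arcpy.analysis.SpatialJoin("grid_half_mile", "homicides_2017", "grid_joined",
                           join_operation="JOIN_ONE_TO_ONE")

# Keep only the cells that contain at least one homicide
cells = arcpy.management.MakeFeatureLayer("grid_joined", "cells_lyr")
arcpy.management.SelectLayerByAttribute(cells, "NEW_SELECTION", "Join_Count > 0")
arcpy.management.CopyFeatures(cells, "grid_nonzero")

# Find the homicide-count cutoff for the top 20% of non-zero cells
counts = sorted((row[0] for row in
                 arcpy.da.SearchCursor("grid_nonzero", ["Join_Count"])),
                reverse=True)
threshold = counts[len(counts) // 5 - 1]  # total divided by 5 = top 20%

top = arcpy.management.MakeFeatureLayer("grid_nonzero", "top_lyr")
arcpy.management.SelectLayerByAttribute(top, "NEW_SELECTION",
                                        f"Join_Count >= {threshold}")
arcpy.management.CopyFeatures(top, "grid_top20")

# Dissolve into one hotspot polygon via a constant-valued field
arcpy.management.AddField("grid_top20", "diss", "SHORT")
arcpy.management.CalculateField("grid_top20", "diss", "1", "PYTHON3")
arcpy.management.Dissolve("grid_top20", "grid_hotspot", "diss")
```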
The second technique was Kernel Density Hotspot Analysis. The goal of this analysis was to create a density surface of 2017 homicides. Although this technique does not provide statistical evidence of clustering, it does display hotspots in a way that is easy to interpret. I changed the Environment settings for the Processing Extent and the Raster Analysis mask to the Chicago city boundary feature class. I then ran the Kernel Density tool on the 2017 homicide data with a cell size of 100 and a search radius of 2,630 feet. I changed the symbology to exclude all 0 values and to use two class breaks: 3 × the mean, and the maximum value. I used the Reclassify tool to reclassify these two classes as 1 and 2, respectively, and then used the Raster to Polygon tool to convert the result to a polygon feature class. Finally, I used Select By Attributes to select the polygons where gridcode equaled 2 (the areas with the highest homicide density) and saved the selection as the final feature class for the analysis. Below is the resulting map from this analysis.
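A rough ArcPy equivalent of this kernel density workflow is sketched below, again with hypothetical dataset names (chicago_boundary, homicides_2017) standing in for the lab data; it assumes a Spatial Analyst license:

```python
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GIS\CrimeAnalysis.gdb"  # hypothetical workspace
arcpy.env.extent = "chicago_boundary"              # processing extent
arcpy.env.mask = "chicago_boundary"                # raster analysis mask

# Kernel density surface: 100-ft cells, 2,630-ft search radius
density = KernelDensity("homicides_2017", "NONE",
                        cell_size=100, search_radius=2630)

# Break values used in the lab: 3 * mean, and the raster maximum
mean_val = density.mean
max_val = density.maximum

# Two classes: 1 = below 3*mean, 2 = 3*mean up to the maximum
# (zeros fall into class 1 and are discarded when only class 2 is kept)
remap = RemapRange([[0, 3 * mean_val, 1], [3 * mean_val, max_val, 2]])
reclass = Reclassify(density, "VALUE", remap)
reclass.save("kd_reclass")

# Convert to polygons and keep only the high-density class (gridcode = 2)
arcpy.conversion.RasterToPolygon(reclass, "kd_polygons", "NO_SIMPLIFY", "Value")
lyr = arcpy.management.MakeFeatureLayer("kd_polygons", "kd_lyr")
arcpy.management.SelectLayerByAttribute(lyr, "NEW_SELECTION", "gridcode = 2")
arcpy.management.CopyFeatures(lyr, "kd_hotspot")
```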
The third and final technique was Local Moran's I Hotspot Analysis. This technique uses crime rates aggregated by meaningful boundaries, in this case census tracts. The goal of this analysis was to create a map showing the areas with the highest homicides per 1,000 housing units. This was accomplished by first performing a spatial join between the census tracts and the 2017 homicide data, which added a field representing the number of homicides in each tract. I added a new crime-rate field to the resulting feature class and used the Calculate Field tool to calculate the number of homicides per 1,000 housing units with the expression Join_Count / total_households * 1000. I then used the Cluster and Outlier Analysis (Anselin Local Moran's I) tool on the crime-rate field to create the hotspots. I used the Select By Attributes tool to select the tracts where the cluster/outlier type field (COType_IDW_8990) equaled HH, the High-High clusters representing the areas with the highest homicides per 1,000 housing units, and saved the selection as a separate feature class. Finally, I used the Dissolve tool to dissolve on that field and create the final hotspot feature class. Below is the resulting map from this analysis.
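The Local Moran's I workflow could be scripted roughly as follows; the dataset and field names (census_tracts, homicides_2017, total_households) are hypothetical placeholders, and the exact name of the output cluster/outlier field may carry a suffix such as COType_IDW_8990 depending on the tool parameters:

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis.gdb"  # hypothetical workspace

# Count 2017 homicides per census tract (adds Join_Count)
arcpy.analysis.SpatialJoin("census_tracts", "homicides_2017", "tracts_joined")

# Crime rate: homicides per 1,000 housing units
arcpy.management.AddField("tracts_joined", "crime_rate", "DOUBLE")
arcpy.management.CalculateField(
    "tracts_joined", "crime_rate",
    "!Join_Count! / !total_households! * 1000", "PYTHON3")

# Anselin Local Moran's I on the crime rate, inverse-distance weighting
arcpy.stats.ClustersOutliers(
    "tracts_joined", "crime_rate", "tracts_lmi",
    "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "NONE")

# Keep only the High-High clusters and dissolve them into one hotspot
lyr = arcpy.management.MakeFeatureLayer("tracts_lmi", "lmi_lyr")
arcpy.management.SelectLayerByAttribute(lyr, "NEW_SELECTION", "COType = 'HH'")
arcpy.management.CopyFeatures(lyr, "lmi_hh")
arcpy.management.Dissolve("lmi_hh", "lmi_hotspot", "COType")
```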
Based on comparing the 2017 hotspot analyses to the actual 2018 homicides, Kernel Density and Local Moran's I appear to be the best techniques for predicting future homicides. Both analyses captured close to the same percentage of 2018 homicides within their respective hotspot areas (43% and 44%, respectively, compared to only 27% for the Grid Overlay analysis). Given the similarity of these results, and although the Local Moran's I hotspot captured a slightly higher share of 2018 homicides, the most useful map for a police chief allocating policing resources within the city would be the Kernel Density analysis. This is because the Kernel Density hotspot covers a smaller total area with a higher crime density than the Local Moran's I hotspot, which would save policing costs and time by concentrating patrols in a smaller area while achieving a similarly accurate result.
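The capture percentages above can be reproduced by selecting the 2018 homicide points that fall inside each 2017 hotspot polygon and dividing by the total point count. A minimal sketch, assuming the hypothetical hotspot feature classes from the earlier snippets and a homicides_2018 point layer:

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis.gdb"  # hypothetical workspace

total = int(arcpy.management.GetCount("homicides_2018").getOutput(0))
pts = arcpy.management.MakeFeatureLayer("homicides_2018", "pts_lyr")

# Percent of 2018 homicides falling inside each 2017 hotspot polygon
for hotspot in ("grid_hotspot", "kd_hotspot", "lmi_hotspot"):
    arcpy.management.SelectLayerByLocation(pts, "INTERSECT", hotspot)
    inside = int(arcpy.management.GetCount(pts).getOutput(0))
    print(f"{hotspot}: {inside / total:.0%} of 2018 homicides captured")
```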