Module 3: Visibility Analysis

The purpose of this week's lab was to complete four ESRI trainings exploring the concepts of line of sight analysis, viewshed analysis, 3D visualization, and sharing 3D content.

The first training, 3D Visualization Using ArcGIS Pro, was concerned with creating and navigating 3D scenes. 3D maps and scenes can be helpful for visualizing and analyzing data in a more realistic setting. Investigating data in 3D offers a different perspective and can reveal insights that are difficult to obtain in a 2D setting. Applications for 3D maps are many and include showing the impact of a new building in an area, displaying a transportation route through an area, or visualizing subsurface features such as wells, pipelines, or fault lines. Although 3D maps have wide-ranging applications, navigating them can be initially cumbersome, and they can be difficult to interpret depending on the data being displayed. In this training, I explored data for Crater Lake in Oregon and San Diego, California. For the Crater Lake data, I linked a 2D map view and a 3D scene in ArcGIS Pro. As seen below, the features of Crater Lake were more impactful in the 3D scene than in the 2D map.


The second training, Performing Line of Sight Analysis, focused on just that: performing line of sight analysis between an observer point and a target point using the line of sight analysis tools in ArcGIS Pro. A line of sight calculates intervisibility between an observer and a target along a straight line between the two points, taking into account any obstructions provided by a surface or multipatch feature class. The steps in this process are straightforward: (1) determine observers and targets, (2) construct sight lines, and (3) determine line of sight. The goal of the exercise was to determine which locations along a parade route in Philadelphia, PA could be seen by security personnel, using inputs including a DEM of the city, 3D multipatch buildings, a parade route, and observer points. For the exercise, I used the Construct Sight Lines tool to create the sight lines between the observer points and the parade route. These were then used in the Line of Sight tool to generate the lines of sight, which produced green lines (visible) and red lines (not visible). The Add Z Information tool was used to add the 3D length (Length3D) to each line of sight. The Select By Attributes tool was then used to select lines of sight with a Length3D greater than 1,100 feet and a TarIsVis value of 0 (target not visible), and the selected lines were removed with the Delete Features tool. The TarIsVis field was added when the Line of Sight tool was run. ModelBuilder was also used to select and remove lines of sight greater than 600 feet. The scene below is from the first analysis, removing lines greater than 1,100 feet.
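For reference, a minimal arcpy sketch of this workflow is below. The exercise itself was run through the geoprocessing tools and ModelBuilder in ArcGIS Pro, so the workspace, layer, and feature class names here are placeholders, and the multipatch-obstruction parameter name may vary slightly by Pro version.

    import arcpy

    # Placeholder workspace and dataset names; the exercise used a Philadelphia
    # DEM, multipatch buildings, observer points, and the parade route.
    arcpy.env.workspace = r"C:\GIS\Philadelphia\ParadeSecurity.gdb"
    arcpy.CheckOutExtension("3D")

    # (1)-(2) Construct sight lines from each observer point to the parade route.
    arcpy.ddd.ConstructSightLines("ObserverPoints", "ParadeRoute", "SightLines")

    # (3) Determine line of sight against the DEM, with the building multipatches
    #     as obstructions. The output gains a TarIsVis field (1 = visible, 0 = not).
    arcpy.ddd.LineOfSight("Philly_DEM", "SightLines", "LinesOfSight",
                          in_features="Buildings_Multipatch")

    # Add the 3D length of each sight line (creates a Length3D field).
    arcpy.ddd.AddZInformation("LinesOfSight", "LENGTH_3D")

    # Select and delete the long, non-visible sight lines.
    arcpy.management.MakeFeatureLayer("LinesOfSight", "los_lyr")
    arcpy.management.SelectLayerByAttribute(
        "los_lyr", "NEW_SELECTION", "Length3D > 1100 AND TarIsVis = 0")
    arcpy.management.DeleteFeatures("los_lyr")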



The third training, Performing Viewshed Analysis, used the Viewshed tool to identify and interpret visible areas on a surface using a DEM and observer data. The Viewshed tool creates an output raster that models the areas visible from given vantage points. In the exercise, I learned how to modify the input features to model the visibility from a known vantage point. The exercise was concerned with modeling a new lighting scheme for a campground in eastern New York. New fields were added to the attribute table of the Light Locations layer to store the height from which the viewshed was created, the illumination distance, and azimuth settings defining a 100-degree swath. The Viewshed tool was run with the NY DEM and the light locations as inputs to create the area illuminated by all four lights. Raster functions were then used to model which part of the campground was illuminated by more than two lights. Because the result did not cover more than half of the campground, the height of the lights had to be increased using the Calculate Field tool and the Viewshed tool re-run. The result of this last analysis is below, showing the portions of the campground (red boundary) illuminated by more than two lights at a height of 10 meters (white features). The Viewshed tool works on points and polylines to model visibility and is controlled through fields added to the input data. These fields control the observation point elevation values, vertical offsets, horizontal and vertical scanning angles, and scanning distances. By adding these fields to the input feature class, the output of the Viewshed tool can be modeled for known vantage points.
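A rough arcpy sketch of this viewshed workflow, assuming the standard observer fields (OFFSETA for light height, RADIUS2 for illumination distance, AZIMUTH1/AZIMUTH2 for the swath); the dataset names, distances, and azimuth values below are placeholders rather than the exercise's actual settings:

    import arcpy
    from arcpy.sa import Viewshed

    arcpy.env.workspace = r"C:\GIS\Campground\Lighting.gdb"   # placeholder
    arcpy.CheckOutExtension("Spatial")

    # Observer fields read by the Viewshed tool: OFFSETA = light height,
    # RADIUS2 = illumination distance, AZIMUTH1/AZIMUTH2 = 100-degree swath.
    for field in ("OFFSETA", "RADIUS2", "AZIMUTH1", "AZIMUTH2"):
        arcpy.management.AddField("LightLocations", field, "DOUBLE")

    arcpy.management.CalculateField("LightLocations", "OFFSETA", "10", "PYTHON3")    # height in meters
    arcpy.management.CalculateField("LightLocations", "RADIUS2", "100", "PYTHON3")   # illumination distance
    arcpy.management.CalculateField("LightLocations", "AZIMUTH1", "130", "PYTHON3")  # swath start (per-light in practice)
    arcpy.management.CalculateField("LightLocations", "AZIMUTH2", "230", "PYTHON3")  # swath end = start + 100 degrees

    # The output raster counts how many lights can "see" each cell.
    visibility = Viewshed("NY_DEM", "LightLocations")

    # Raster-function step: keep only cells illuminated by more than two lights.
    well_lit = visibility > 2
    well_lit.save("WellLit")

If the resulting coverage is insufficient, recalculating OFFSETA and re-running the Viewshed tool reproduces the height adjustment made in the exercise.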

The fourth and final training, Sharing 3D Content Using Scene Layer Packages, was concerned with authoring a 3D scene and publishing it as a scene layer package. A 3D scene can use the same data as a 2D map; the only difference is that the data must have an elevation associated with it. The elevation value can be stored in the data, or the data may be draped over an elevation surface. Authoring a scene is achieved by loading the data, displaying 2D data as 3D layers by dragging it to the 3D Layers group and then extruding it to the correct height, and converting 2D data to 3D data. Some best practices for 3D scenes include keeping all the data in the same coordinate system; structuring content around what the user needs to see; defining an area of interest; and understanding that 3D symbology is required for feature layers to be published and shared. To share a scene, the user must have sharing privileges and permissions. Scenes can be shared with users, groups, organizations, and the public. To publish, the following steps are required: sign in to ArcGIS Online, open My Content, add the scene layer package, add a title and tags, publish as a hosted layer, and add the item (a scripted equivalent is sketched after the tips below). A few tips for publishing:

  • If the box for publishing a hosted layer is unchecked, the scene layer package will be added to My Content, but no scene layer will be published; the layer can still be published later.
  • The input layer must be a scene layer with multipatch feature data. 
  • The layer must use absolute heights for feature elevation; it cannot be defined as on the ground or relative to the ground. 
  • If the multipatch content is projected, the layer's x,y units should match the z units. 
  • Only the layer's visible fields will be included in the scene layer package.
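The exercise published through the ArcGIS Online interface; as a hedged illustration, the same add-and-publish steps can also be scripted with the ArcGIS API for Python. The portal sign-in, item properties, and .slpk path below are all placeholders.

    from arcgis.gis import GIS

    # Placeholder credentials; publishing requires sharing privileges on the portal.
    gis = GIS("https://www.arcgis.com", "my_username", "my_password")

    # Add the scene layer package to My Content, then publish a hosted scene layer.
    slpk_item = gis.content.add(
        {"title": "Portland 3D Buildings", "tags": "3D, scene layer, Portland"},
        data=r"C:\GIS\Portland\Portland_Buildings.slpk")
    hosted_scene_layer = slpk_item.publish()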

In the exercise for this training, I authored a scene of Portland, Oregon. The buildings were extruded using an extrusion expression in the Extrusion group under the Appearance tab. The trees were displayed as 3D points by running the Add Surface Information tool with a DTM as the input surface. In order to share extruded polygons, they must be multipatch features, so I used the Layer 3D To Feature Class tool to convert the Portland buildings. Although the tree points had Z information stored as an attribute, to Z-enable the points I used the Feature To 3D By Attribute tool to create a 3D point layer that aligned with the surface of the scene. The scene below was then shared as a scene layer package on ArcGIS Online.
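The geoprocessing side of that authoring workflow could be sketched in arcpy roughly as follows. The workspace, layer, and field names are placeholders, and the building extrusion itself was defined interactively on the layer's Appearance tab rather than in code.

    import arcpy

    arcpy.env.workspace = r"C:\GIS\Portland\Portland.gdb"   # placeholder
    arcpy.CheckOutExtension("3D")

    # Attach ground elevations from the DTM to the tree points (adds a Z field).
    arcpy.ddd.AddSurfaceInformation("Trees", "Portland_DTM", "Z")

    # Z-enable the tree points from the stored elevation attribute so they align
    # with the surface of the scene.
    arcpy.ddd.FeatureTo3DByAttribute("Trees", "Trees_3D", "Z")

    # Convert the extruded building layer (extrusion defined on the layer) into
    # multipatch features so it can be packaged and shared as a scene layer.
    arcpy.ddd.Layer3DToFeatureClass("Buildings_Extruded", "Buildings_Multipatch")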


