DEM visualisation techniques: Openness

Several DEM visualisation techniques are based on some sort of simulated terrain illumination: Shaded Relief simulates directional illumination from a point light source, while Sky-View Factor simulates diffuse illumination from a homogeneously bright hemisphere.

Openness is a visualisation technique similar to Sky-View Factor. However, instead of considering a homogeneously bright hemisphere centered above each pixel, the computation of Openness considers a full sphere centered on each pixel. The Openness algorithm examines the surrounding area within a specified radius and assesses whether there are terrain points which would obstruct illumination from each direction. In practice, this is achieved by finding, along each radial line, the smallest zenith angle to the terrain. These angles are then aggregated for n radial lines (usually 8 or 16) by computing their average. As a result, higher/lower Openness values are assigned to more/less exposed terrain points.
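
As a rough illustration of the principle, here is a naive single-pixel sketch in Python/NumPy. It is not LiVT's actual implementation: radial lines are crudely rounded to grid cells, and the stepping along each line is simplified.

```python
import numpy as np

def positive_openness(dem, row, col, radius, cellsize=1.0, n_directions=8):
    """Positive Openness for one DEM cell: the mean, over n radial lines,
    of the smallest zenith angle to the terrain within `radius` steps."""
    zenith_angles = []
    for k in range(n_directions):
        az = 2.0 * np.pi * k / n_directions
        dr, dc = -np.cos(az), np.sin(az)   # unit step in grid coordinates
        max_elev = -np.inf                 # steepest view up along this line
        for step in range(1, radius + 1):
            r = int(round(row + dr * step))
            c = int(round(col + dc * step))
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                break
            dist = cellsize * np.hypot(r - row, c - col)
            elev = np.arctan2(dem[r, c] - dem[row, col], dist)
            max_elev = max(max_elev, elev)
        if np.isfinite(max_elev):
            # smallest zenith angle = 90 deg minus the steepest elevation angle
            zenith_angles.append(np.pi / 2.0 - max_elev)
    return np.degrees(np.mean(zenith_angles))
```

On a flat plane every radial line yields 90°; a pit gives values below 90°, an exposed peak values above 90°, matching the higher-values-for-exposed-points behaviour described above.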

Charcoal burning platforms in the southern Black Forest. Positive Openness visualisation.

When you think about this principle for a while you might wonder what would happen if you calculated Openness as the average of the smallest nadir angles (instead of zenith angles). And yes, this can be used as a visualisation technique as well. To distinguish between the two approaches, the one based on zenith angles is called Positive Openness while the one based on nadir angles is called Negative Openness. Negative Openness has high/low values for strongly/weakly incised terrain points.

It is worth noting that Negative Openness is not simply Positive Openness with a '-' sign but is based on a different computation, resulting in different values for the same point. Negative Openness calculated as described above has a positive range of values. However, to make the resulting visualisations more intuitively readable, Negative Openness values calculated by LiVT are multiplied by -1. As a result, positive terrain features are characterised by higher values of both Positive and Negative Openness than negative terrain features. Positive and Negative Openness visualisations can be combined by computing a (weighted) average of the respective greyscale images.
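
The combination step itself is a simple weighted average; a minimal sketch, assuming both Openness rasters have already been stretched to greyscale values in the same range:

```python
import numpy as np

def combined_openness_image(pos_grey, neg_grey, w_pos=0.5):
    """Weighted average of Positive and Negative Openness greyscale images.
    Both inputs are assumed to be arrays stretched to the same value range."""
    pos = np.asarray(pos_grey, dtype=float)
    neg = np.asarray(neg_grey, dtype=float)
    return w_pos * pos + (1.0 - w_pos) * neg
```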

Charcoal burning platforms in the southern Black Forest. Negative Openness visualisation.

Charcoal burning platforms in the southern Black Forest. Greyscale average of Positive and Negative Openness.

Openness visualisations can be very useful for interpreting lidar-based DEMs as they clearly show small-scale relief features. At first sight, they appear somewhat similar to Sky-View Factor visualisations; however, the visual impression of the overall landscape forms is lost. An advantage is that small-scale features are visualised equally well on flat terrain and on slopes.

References

Yokoyama, R., Shirasawa, M., Pike, R.J., 2002. Visualizing topography by openness: a new application of image processing to digital elevation models. Photogrammetric Engineering & Remote Sensing 68(3), 257-265.

Doneus, M., 2013. Openness as visualization technique for interpretative mapping of airborne lidar derived digital terrain models. Remote Sensing 5(12), 6427-6442. [open access]

Working with LiVT: file size limits and performance

It’s been a bit more than half a year since I first published LiVT on Sourceforge. Since then, I have been able to add a few more algorithms, but there are still a few bugs waiting to be fixed.

All in all, feedback so far has been positive, but I am still hoping that someone will offer help to improve the project. One thing that has been mentioned repeatedly is the need to know the limits of the software regarding maximum file sizes.

Another relevant issue is the performance of LiVT, i.e. the time needed per unit area. This differs greatly from algorithm to algorithm. Furthermore, different settings in each algorithm will strongly influence processing times. Therefore, I have run all tests using the default settings, with the exception of Cumulative Visibility, where I used an angular resolution of 10° (instead of the 1° default). When changing processing parameters, processing times can change proportionally (e.g. for the maximum radius or number of directions in the radial Sky-View Factor algorithm), quadratically (e.g. for the filter radius in the filter algorithms) or even faster (e.g. for the number of scales in Exaggerated Relief or Multi-Scale Integral Invariants). The test data set had a resolution of 1 m. Note that for the same total area, file size and processing times quadruple when the resolution is doubled.
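
The last point is easy to verify: the pixel count grows with the square of the linear resolution. A small sanity check:

```python
def pixels(area_km2, resolution_m):
    """Number of DEM pixels covering `area_km2` at grid spacing `resolution_m`."""
    cells_per_km = 1000.0 / resolution_m
    return area_km2 * cells_per_km ** 2

# halving the cell size (1 m -> 0.5 m) quadruples the pixel count,
# and with it the file size and the processing time per unit area
assert pixels(10, 0.5) == 4 * pixels(10, 1.0)
```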

These are the results of the tests I have run:

Algorithm                          max. DTM file size    performance, Intel Xeon
                                   [million pixels]      3.2 GHz, 64 bit [km2/min]
---------------------------------------------------------------------------------
Filter (Laplacian of Gaussian)     132                   30
Shaded Relief                       30                   15
Exaggerated Relief                  30                    0.48
Sky-View Factor                    131                    0.96
Trend Removal                      132                    5.22
Local Relief Model                  56                    0.09
Local Dominance                     90                    2.22
Cumulative Visibility               90                    0.25
Accessibility                      132                    1.45
Multi-Scale Integral Invariants    144                    0.57
Openness                           132                    1.92

These tests were run on a 64-bit Intel Xeon at 3.2 GHz under Windows Vista. As a single instance of LiVT uses only one processor core anyway, the number of processors and cores does not play a role. Running the performance tests on other computers showed that a 64-bit system has some advantage over a 32-bit system: on a slightly higher-clocked 32-bit AMD Phenom at 3.4 GHz (also under Windows Vista), performance was on average 87% of that on the 64-bit computer. Finally, just for fun, I also tested a 32-bit Intel Atom processor (in a four or five year old EeePC) at 1.6 GHz under Windows XP. On that computer, performance was on average 18% of that on the 64-bit machine.

Lidar visualisation and interpretation workshop 2014 in Esslingen, Germany

Registration is now open for the Lidar visualisation and interpretation workshop 2014 in Esslingen, Germany. The workshop will be a four-day event (including one field day) for students, graduate students and young professionals who are looking for theoretical background as well as hands-on experience with visualisation techniques for high-resolution digital elevation models (mainly but not exclusively based on airborne lidar) in the field of archaeology.

Date: 08-11 July 2014

Location: Esslingen, Germany

Contact: ralf.hesse@rps.bwl.de

The aims of the workshop are to bring together students and young professionals to learn about lidar visualisation techniques and archaeological interpretation. The programme will include presentations, visualisation and mapping exercises and a field day.

The maximum number of participants will be 20; please register early. Participation fee will be 50 €. A limited number of Archaeolandscapes travel grants will be available.

Tentative programme

• Monday, July 7:
o arrival
• Tuesday, July 8:
o morning: introduction to workshop; presentations
o afternoon: visualisation and mapping exercises
• Wednesday, July 9:
o full day: field trip
• Thursday, July 10:
o morning: visualisation and mapping exercises
o afternoon: visualisation and mapping exercises
• Friday, July 11:
o morning: combining lidar and aerial photography
o departure

Venue

The workshop will take place at the State Office for Cultural Heritage Baden-Württemberg (Landesamt für Denkmalpflege) in Esslingen, Germany. Esslingen is located 15 minutes by train from the city of Stuttgart and 30 minutes by bus from Stuttgart airport. The State Office for Cultural Heritage is located a short walk from the train station and bus terminal as well as from Esslingen’s historic city centre.

Important deadlines

• 28 October 2013: registration for workshop begins
• 28 February 2014: registration for workshop ends
• 04 April 2014: application deadline for Archaeolandscapes travel grants

Registration

If you are interested in taking part in the workshop, please send an e-mail to ralf.hesse@rps.bwl.de.

The e-mail should contain the following information:

• name, surname
• status (student / graduate student, post-doc…)
• institution, country
• reason for wishing to take part in the workshop
• previous experience with interpretation of airborne lidar (if any)

DEM visualisation techniques: Multi-scale integral invariants

Earlier this year, I was impressed by Hubert Mara's presentation at the CAA workshop in Berlin. He had used a method called multi-scale integral invariants (MSII) to extract the incised characters from cuneiform tablets and inscriptions from old tombstones. This was surely something that could be useful for visualising LIDAR-based DEMs: back in the office, I implemented it as an additional algorithm in LiVT.

Now, how does it work? To begin with, it is a multi-scale approach (hence the name). The algorithm places multiple spheres of different diameters on each pixel in the DEM and computes how much of the volume of the spheres is above/below the DEM surface. As a result, you get a number of values (volume fractions above surface) for each DEM pixel. These sets of n values are interpreted as n-dimensional vectors.

By computing the distance of these n-dimensional vectors from a reference vector, the data can be reduced to a raster map containing a single value for each pixel. Low values (low vector distance) indicate high similarity with the reference vector and vice versa. Using an appropriate greyscale histogram stretch, this raster map can be displayed as an image. The reference vector can, for example, be determined by extracting the vector values for a specific relief feature or a point within a cuneiform character or simply by choosing the origin of the n-dimensional coordinate system (i.e. zero).
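
A naive single-pixel sketch of the idea follows. It approximates each sphere's volume fraction column by column on the raster; the radii and the reference vector are free parameters, and this is not the GigaMesh or LiVT implementation.

```python
import numpy as np

def msii_vector(dem, row, col, radii, cellsize=1.0):
    """Approximate MSII feature vector: for each sphere radius, the fraction
    of the sphere's volume lying above the DEM surface, estimated by
    intersecting the sphere's vertical chords with the terrain column-wise."""
    h0 = dem[row, col]
    fractions = []
    for r in radii:
        span = int(np.ceil(r / cellsize))
        total = below = 0.0
        for di in range(-span, span + 1):
            for dj in range(-span, span + 1):
                i, j = row + di, col + dj
                if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                    continue
                d = cellsize * np.hypot(di, dj)
                if d >= r:
                    continue
                half = np.sqrt(r * r - d * d)        # half chord length at offset d
                lower, upper = h0 - half, h0 + half  # vertical extent of the sphere
                total += upper - lower
                below += np.clip(dem[i, j] - lower, 0.0, upper - lower)
        fractions.append(1.0 - below / total)
    return np.array(fractions)

def msii_distance(dem, row, col, radii, reference=None):
    """Euclidean distance of the feature vector from a reference vector
    (default: the origin of the n-dimensional coordinate system)."""
    v = msii_vector(dem, row, col, radii)
    ref = np.zeros_like(v) if reference is None else np.asarray(reference)
    return float(np.linalg.norm(v - ref))
```

On a flat plane every sphere is exactly half above the surface, so each vector component is 0.5; a mound pushes the fractions above 0.5 and thus increases the distance from the origin.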

MSII visualisation (distance from origin, 8 scales starting with radius=2 and scale-to-scale factor 1.414; greyscale histogram stretch 1.35…1.55) of the same area as in the post about local dominance. LIDAR data (c) LGL/LAD.

References

Mara, H., Krömker, S., Jakob, S., Breuckmann, B., 2010. GigaMesh and Gilgamesh – 3D multiscale integral invariant cuneiform character extraction, in: A. Artusi, M. Joly-Parvex, G. Lucet, A. Ribes and D. Pitzalis (eds.), The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage VAST (Paris, France, 2010), 131–138.

DEM visualisation techniques: Local dominance

Local dominance visualisation of a DEM is based on computing, for every pixel of the DEM, how dominant an observer standing on that point would be for a local surrounding area. Dominance as used here is the average angle below the horizontal at which the observer looks down at the surrounding land surface. It is higher for points on local elevations and lower for points in local depressions.

Local dominance is computed for pixels within a specified maximum radius and a specified observer height above the surface. To reduce the noisy appearance of the resulting image due to small-scale surface roughness, a minimum radius can be specified. Pixel brightness is derived from the local dominance values by applying an appropriate greyscale histogram stretch.
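
The principle can be sketched for a single pixel as follows (plain NumPy; the observer height default is an assumption for illustration, and LiVT's actual implementation differs in detail):

```python
import numpy as np

def local_dominance(dem, row, col, min_radius, max_radius,
                    observer_height=1.7, cellsize=1.0):
    """Average angle at which an observer of `observer_height` standing on
    (row, col) looks down at cells between min_radius and max_radius."""
    h_obs = dem[row, col] + observer_height
    angles = []
    span = int(np.ceil(max_radius / cellsize))
    for di in range(-span, span + 1):
        for dj in range(-span, span + 1):
            d = cellsize * np.hypot(di, dj)
            if not (min_radius <= d <= max_radius):
                continue
            i, j = row + di, col + dj
            if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                continue
            # positive angle = looking down, negative = looking up
            angles.append(np.arctan2(h_obs - dem[i, j], d))
    return np.degrees(np.mean(angles))
```

As expected, a point on a mound yields a higher value than the same point on flat ground, and a point in a depression a lower one.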

Local dominance visualisation is well suited for very subtle positive relief features such as former field boundaries or strongly eroded burial mounds, but also delivers very good results for topographic depressions such as dolines, mining traces or hollow ways.

Shaded relief image (illumination azimuth 45°, elevation 30°, no vertical exaggeration) of a low-relief area. Grey and black lines are present-day roads and field boundaries, respectively. Former field boundaries are faintly visible depending on illumination direction. LIDAR data (c) LGL/LAD.

Local dominance visualisation of the same area. Former field boundaries and a former road are clearly visible. LIDAR data (c) LGL/LAD.

DEM visualisation techniques: Cumulative visibility

A viewshed is the area visible from a given vantage point. The viewshed area depends on the topographic position of the vantage point and the surrounding topography, but also on the height of the observer standing on the vantage point, the height of the objects that should be visible to the observer (e.g. the ground surface or a second person) and the radius under consideration.

Cumulative visibility, on the other hand, specifies the size of the area from which a point in the DEM (or an object on that point) is visible to observers of a certain height. Essentially, it’s the same principle of intervisibility, just don’t get observer and object height mixed up.

DEM visualisation by cumulative visibility is based on computing, for each pixel of the DEM, the size of the area within a specified radius from which an object is visible. Because the surrounding topography plays a dominant role for intervisibility, the resulting raster map can also be a suitable technique to visualise that topography. Besides this, such a visualisation can be used as a tool for analysing for example the locations of archaeological sites.

The resulting raster map contains percentage values (0…100) of the size of the cumulative visibility area relative to the entire area within the specified radius.
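
A naive sketch for a single target pixel follows, using a coarsely sampled line-of-sight test; observer and object heights are assumed parameters, and this is not LiVT's actual code.

```python
import numpy as np

def cumulative_visibility(dem, row, col, radius, observer_height=1.7,
                          object_height=0.0, cellsize=1.0):
    """Percentage of the cells within `radius` from which an object of
    `object_height` at (row, col) is visible to an observer of
    `observer_height`."""
    target_h = dem[row, col] + object_height
    visible = total = 0
    span = int(np.ceil(radius / cellsize))
    for di in range(-span, span + 1):
        for dj in range(-span, span + 1):
            if di == 0 and dj == 0:
                continue
            i, j = row + di, col + dj
            d = cellsize * np.hypot(di, dj)
            if d > radius or not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                continue
            total += 1
            eye_h = dem[i, j] + observer_height
            # sample the sight line between observer eye and object top
            n = 2 * int(np.hypot(di, dj))
            blocked = False
            for s in range(1, n):
                t = s / n
                r = int(round(i + (row - i) * t))
                c = int(round(j + (col - j) * t))
                los_h = eye_h + (target_h - eye_h) * t
                if dem[r, c] > los_h:
                    blocked = True
                    break
            if not blocked:
                visible += 1
    return 100.0 * visible / total
```

On flat terrain the result is 100%; surrounding the target cell with a high ring wall blocks most observers and drives the percentage down.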

Visualisation of cumulative visibility (radius 100 m, greyscale histogram stretch to 10…75%) of the same area as in the previous post on accessibility visualisation. LIDAR data (c) LGL/LAD.

DEM visualisation techniques: Accessibility

DEM data can be visualised by computing surface accessibility. This means that an algorithm determines (for every pixel of the DEM) the maximum radius of a sphere that could be placed on the surface at this position without being impeded by the heights of surrounding pixels.

To reduce computation time, the algorithm only takes into account surrounding pixels within a pre-defined radius. Computation time can be further reduced by taking into account only surrounding pixels along a small number of radial lines rather than all pixels.

The range of values in the resulting accessibility raster map corresponds to the range of sphere radii. Greyscale or colour mapping is used to display the results as an image.
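
The principle can be sketched as follows. This simplified Python version tests candidate sphere radii in fixed increments, models the sphere as resting with its centre vertically above the pixel, and checks all surrounding cells rather than a few radial lines; it is not LiVT's actual code.

```python
import numpy as np

def accessibility(dem, row, col, max_radius, cellsize=1.0, step=0.5):
    """Largest sphere radius (up to max_radius) that can rest on the surface
    at (row, col) without intersecting surrounding cells; candidate radii
    are tested in increments of `step`."""
    h0 = dem[row, col]

    def fits(r):
        span = int(np.ceil(r / cellsize))
        for di in range(-span, span + 1):
            for dj in range(-span, span + 1):
                i, j = row + di, col + dj
                if (di == 0 and dj == 0) or not (
                        0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                    continue
                d = cellsize * np.hypot(di, dj)
                if d >= r:
                    continue
                # height of the sphere's lower surface above this cell
                sphere_h = h0 + r - np.sqrt(r * r - d * d)
                if dem[i, j] > sphere_h:
                    return False
        return True

    best = 0.0
    r = step
    while r <= max_radius + 1e-9:
        if fits(r):
            best = r
        r += step
    return best
```

On flat terrain the full maximum radius fits; in a narrow pit only a small sphere does, which is why the technique highlights negative relief features so well.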

Shaded relief image (illumination azimuth 315°, elevation 45°, no vertical exaggeration) of an area on the edge of the Upper Rhine Valley. LIDAR data (c) LGL/LAD.

Accessibility visualisation (greyscale histogram stretch for accessibility radii 1…15 m) of the same area. LIDAR data (c) LGL/LAD.

Accessibility is particularly suitable for visualising negative relief features (e.g. pits, hollow ways) and features on slopes (e.g. agricultural terraces). Subtle relief features on more or less horizontal surfaces show up only poorly or not at all.

As an additional modification of the algorithm, the implementation in the Lidar Visualisation Toolbox LiVT also allows the computation of surface accessibility for directional cones.

References

Miller, G., 1994. Efficient algorithm for local and global accessibility shading. Computer Graphics Proceedings, Annual Conference Series SIGGRAPH, 319–325.