Working with LiVT: file size limits and performance

It’s been a bit more than half a year since I first published LiVT on Sourceforge. Since then, I have been able to add a few more algorithms, but there are still a few bugs waiting to be fixed.

All in all, feedback so far has been positive, but I am still hoping that someone will offer to help improve the project. One thing that has been mentioned repeatedly is the need to know the limits of the software regarding maximum file sizes.

Another relevant issue is the performance of LiVT, i.e. the time needed per unit area. This differs greatly from algorithm to algorithm, and different settings within each algorithm also strongly influence processing times. I have therefore run all tests using the default settings, with the exception of Cumulative Visibility, where I used an angular resolution of 10° (instead of the 1° default). When processing parameters are changed, processing times can change proportionally (e.g. with the maximum radius or the number of directions in the radial Sky-View Factor algorithm), quadratically (e.g. with the filter radius in the filter algorithms) or even faster (e.g. with the number of scales in Exaggerated Relief or Multi-Scale Integral Invariants). The test data set had a resolution of 1 m. Note that for the same total area, file size and processing times quadruple when the resolution is doubled.
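To make these rules of thumb concrete, here is a minimal back-of-the-envelope sketch in Python (not LiVT's actual code; LiVT is written in VB). The function and its parameters are my own illustrative construction, using the throughput figures from the table below as baselines:

```python
def estimated_minutes(area_km2, baseline_km2_per_min, resolution_m=1.0,
                      param_ratio=1.0, param_exponent=1):
    """Rough processing-time estimate from a measured baseline throughput.

    baseline_km2_per_min -- throughput at 1 m resolution with default settings
    resolution_m         -- grid resolution; halving it quadruples the pixel count
    param_ratio          -- parameter value relative to its default
    param_exponent       -- 1 for roughly linear parameters (e.g. maximum radius,
                            number of directions), 2 for quadratic ones (e.g.
                            filter radius), higher for faster-growing ones
    """
    pixel_factor = (1.0 / resolution_m) ** 2   # pixels per km2 grow quadratically
    return area_km2 * pixel_factor * param_ratio ** param_exponent / baseline_km2_per_min

# Sky-View Factor (0.96 km2/min baseline) on 50 km2:
print(estimated_minutes(50, 0.96))                    # defaults, 1 m data: ~52 min
print(estimated_minutes(50, 0.96, resolution_m=0.5))  # 0.5 m data: ~208 min
print(estimated_minutes(50, 0.96, param_ratio=2))     # double the max radius: ~104 min
```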

These are the results of the tests I have run:

| Algorithm | Maximum DTM file size [million pixels] | Performance, Intel Xeon 3.2 GHz, 64 bit [km²/min] |
|---|---|---|
| Filter (Laplacian of Gaussian) | 132 | 30 |
| Shaded Relief | 30 | 15 |
| Exaggerated Relief | 30 | 0.48 |
| Sky-View Factor | 131 | 0.96 |
| Trend Removal | 132 | 5.22 |
| Local Relief Model | 56 | 0.09 |
| Local Dominance | 90 | 2.22 |
| Cumulative Visibility | 90 | 0.25 |
| Accessibility | 132 | 1.45 |
| Multi-Scale Integral Invariants | 144 | 0.57 |
| Openness | 132 | 1.92 |

These tests were run on a 64-bit Intel Xeon at 3.2 GHz under Windows Vista. As a single instance of LiVT uses only one processor core, the number of processors and cores does not play a role. Running the performance tests on other computers showed that 64 bit has some advantage over a 32-bit system: on a slightly faster clocked 32-bit AMD Phenom at 3.4 GHz (also under Windows Vista), performance was on average 87% of that on the 64-bit computer. Finally, just for fun, I also tested a 32-bit Intel Atom at 1.6 GHz (in a four or five year old EeePC) under Windows XP. On that computer, performance was on average 18% of that on the 64-bit machine.


Rotation is the enemy

Last week I published a simple tool that calculates (among a few other things) the motion blur resulting from camera movement relative to the photographed object. Looking at the results of those calculations, one could say that motion blur is a very minor issue in UAV photography: at a platform speed of 30 km/h and a shutter speed of 1/1000 s, motion blur is as low as 0.8 cm. Flying a Canon G12 at the wide angle limit (28 mm) and 200 m above ground, this amounts to only 0.25 image pixels. Judging from the calculation results of UAVphoto, motion blur does not appear to be a relevant issue; the need to take images at short intervals to achieve sufficient overlap appears to be much more important when using a UAV. But why do I get blurred images even when using a kite that is almost immobile relative to the ground?
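Before getting to the answer, the translation blur calculation behind those numbers is simple enough to sketch in a few lines of Python. This is my re-derivation of what UAVphoto computes, not its actual code, and the sensor values are placeholders you would substitute for your own camera:

```python
def motion_blur_m(speed_kmh, shutter_s):
    """Ground distance the camera travels during the exposure, in metres."""
    return speed_kmh / 3.6 * shutter_s

def ground_pixel_m(altitude_m, sensor_width_mm, focal_length_mm, pixels_across):
    """Ground pixel size (GSD) for a vertical camera view, in metres."""
    return altitude_m * (sensor_width_mm / pixels_across) / focal_length_mm

blur = motion_blur_m(30, 1 / 1000)   # 30 km/h at 1/1000 s
print(f"{blur * 100:.2f} cm")        # -> 0.83 cm
```

The blur expressed in pixels is simply the blur distance divided by the ground pixel size, so the exact pixel figure depends on the sensor geometry assumed for the camera.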

The point is that motion blur due to translation (i.e. linear movement of the camera relative to the object) is only one cause of blurred images. Another (and much more relevant) cause is rotation of the camera. Unfortunately, rotation is also much harder to measure and to control. To show how important rotation is for image blur, I have added the calculation of rotation blur to the new version of UAVphoto. Two types of rotation have to be distinguished: rotation about the lens axis, and rotation about an axis through the focal point but perpendicular to the lens axis. I am not using the terms pitch, roll and yaw here because the relation of platform pitch, roll and yaw to rotation about the different camera axes depends on how the camera is mounted on the platform.

Rotation about the lens axis results in rotation blur that is zero at the image centre and reaches a maximum at the image corners. Rotation about an axis orthogonal to the lens axis results in rotation blur that is at first sight indistinguishable from motion blur due to high-speed translational movement. Of course, all types of blur combine into the total image blur. Rotation blur about the lens axis is independent of focal length; orthogonal rotation blur, on the other hand, increases with increasing focal length. In both cases, a shorter exposure time results in a proportionally smaller image blur.
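Both blur components can be written down compactly. The following Python sketch assumes small rotation angles during the exposure; the frame size and the example shutter time are my assumptions, not taken from UAVphoto, so the resulting rates are of the same order as, but not identical to, the figures quoted below:

```python
import math

def axial_rotation_blur_px(omega_deg_s, shutter_s, width_px, height_px):
    """Blur at the image corner for rotation about the lens axis, in pixels."""
    corner_px = math.hypot(width_px / 2, height_px / 2)  # distance centre -> corner
    return math.radians(omega_deg_s) * shutter_s * corner_px

def orthogonal_rotation_blur_px(omega_deg_s, shutter_s,
                                focal_length_mm, pixel_pitch_mm):
    """Blur for rotation about an axis perpendicular to the lens axis, in pixels."""
    return math.radians(omega_deg_s) * shutter_s * focal_length_mm / pixel_pitch_mm

# Maximum rotation rate for a 0.5 px blur threshold at 1/1000 s (3648 x 2736 frame):
corner_px = math.hypot(3648 / 2, 2736 / 2)
print(math.degrees(0.5 / ((1 / 1000) * corner_px)))  # ~12.6 deg/s about the lens axis
```

Note that the axial formula contains no focal length, while the orthogonal one scales with it, which is exactly the behaviour described above.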

Most UAV rotation movements are due to short-term deflections by wind gusts or steering. Wind gusts are also the main source of rotation movements of kite-carried cameras. Let’s say we’re using a Canon G12 at the wide angle limit (28 mm). The maximum rotation rate that will not result in image blur (using a 0.5 pixel threshold) is 12.4 °/s (or 29 s for a full circle) for rotation about the lens axis and 8.1 °/s (or 44 s for a full circle) for rotation orthogonal to the lens axis. At a focal length of 140 mm, the maximum rotation rate orthogonal to the lens axis is only 1.9 °/s (or 189 s for a full circle). If all this sounds very slow to you, you’ve got the point: even slow rotation of the camera during image capture is a serious issue for UAV photography, in most cases much more important than flying speed.

UAVphoto – a simple calculation tool for aerial photography

I have to admit that I am sometimes a bit lazy. Rather than solving a problem once and working with the solution, in some cases I keep fiddling with the same problem again and again. Calculating things like viewing angles, ground resolution, motion blur or image overlap for aerial photography is a case in point. There must be a dozen or so spreadsheet files on my various computers that I have used for such calculations. I kept re-inventing the wheel, for myself and whenever others asked me for help.

[Screenshot of UAVphoto 1.0.0.0]

Now I have finally got around to writing a small piece of software for this specific task. It is a simple tool for calculating parameters like ground pixel size, motion blur and sequential image overlap from UAV flight parameters (velocity and altitude) and camera parameters (focal length, shutter time, image interval etc.). For simplicity, the calculation assumes a vertical camera view. Image y dimensions are those in the flight direction; image x dimensions are those perpendicular to it. The default camera values are for a Canon G12 at the wide angle limit. Five to six seconds is the approximate minimum image interval using a CHDK interval script; in continuous shooting mode, a minimum interval of approximately one second can be achieved.
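As an illustration of the kind of calculation involved, here is a minimal sketch of the along-track overlap computation in Python. The function names are mine, and the sensor height and focal length in the example are nominal values for a G12 at the wide angle limit, so treat the result as illustrative rather than as UAVphoto's output:

```python
def footprint_y_m(altitude_m, sensor_height_mm, focal_length_mm):
    """Ground footprint in flight (y) direction for a vertical view, in metres."""
    return altitude_m * sensor_height_mm / focal_length_mm

def sequential_overlap(speed_kmh, interval_s, altitude_m,
                       sensor_height_mm, focal_length_mm):
    """Fraction of one image covered by the next (negative means a gap)."""
    advance_m = speed_kmh / 3.6 * interval_s  # ground distance between exposures
    return 1 - advance_m / footprint_y_m(altitude_m, sensor_height_mm, focal_length_mm)

# 30 km/h, 5 s CHDK interval, 200 m altitude, ~5.58 mm sensor height, 6.1 mm focal length:
print(sequential_overlap(30, 5, 200, 5.58, 6.1))  # ~0.77, i.e. ~77 % forward overlap
```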

Now that I have created this tool, why not share it? UAVphoto is published under the GNU General Public License and can be downloaded from Sourceforge.

7500 lines of code: the open source Lidar Visualisation Toolbox LiVT

One of the first things I did when I began working with lidar-based elevation models back in 2006/2007 was to think about how best to visualise these data in order to discern subtle surface morphology. The “standard” shaded relief just didn’t convince me, and of course I wanted to get as much as possible out of the data. So I started developing algorithms and writing simple software tools to implement them. When I started working full time with lidar data in 2009, I had to scale this up a bit and add some data management and automation. Still, these tools were meant to be used for one particular project (the archaeological prospection of Germany’s federal state Baden-Württemberg; since 2010 with support by the European Commission’s Culture Programme through the multinational project Archaeolandscapes Europe), and I was the only person working on the project. It didn’t matter that I had to make changes to the code every now and then. The amount of code grew step by step: I saw a poster about sky-view factor visualisation at the AARG conference in Bucharest in 2010, came back and added it to the other algorithms. I had other ideas or found interesting algorithms in the literature and added them. And so on.

Once in a while people would ask me if I could process data for them or if they could get the software I was using. What software? The software I use for the project was (and still is) a makeshift collection of tools implemented in VBA under MS Excel, with a dedicated user interface showing a map of Baden-Württemberg, all the data directories and many of the parameters written directly into the code, and the geocoordinates encoded in the file names. It works fine for me, but it isn’t portable at all: it would not work on a different computer, and it would be a lot of work to adapt it to a different project.

With the obvious demand for software that I could share with my colleagues, I finally decided to create a portable stand-alone application in which all the visualisation algorithms would be implemented. Easy enough: just take the code from VBA, adapt it to VB2010, tidy it up and, voilà, create an executable. Well… it wasn’t quite that easy. Creating software that others could simply download and use turned out to be quite different from just writing some code to implement one or another algorithm, and it took much more time than I had expected. Finally, there was some pressure to finish and release at least a beta version, because I had promised to give a software/visualisation tutorial (using my software) at the CAA Workshop in Berlin in January.

[LiVT screenshot]

Coming back from that workshop, I was inspired by Hubert Mara’s talk to add yet another visualisation technique: multi-scale integral invariants. And finally, it was decided that what had by then been named the Lidar Visualisation Toolbox (LiVT) was more or less ready to be published at http://sourceforge.net/projects/livt/ as open source software under the GNU General Public License. It’s still very much a beta version, and the code is certainly not as tidy as I would want it to be. But finally it’s out there, and if anyone is willing to help improve it, just let me know!