GENERATION OF ENVIRONMENT DATABASES
There are several broad approaches:
By hand - every point specified manually.
Using some form of CAD tool - points still specified by hand, but with computer assistance.
Surveyed data - mainly landscapes; information taken from aerial or satellite surveys and compiled into a form usable by flight simulators etc.
Synthesized data - generated by a procedural mechanism, possibly in real time.
Hand generation is for early prototype work only; it is very time consuming and error prone. It provides the ultimate in control but requires a lot of painstaking work. You do, however, inevitably end up with an efficient database (the minimum number of polygons for the structure represented). All automatic tools tempt the user into creating 'lazy' structures with more polygons than are strictly needed in places. With a 'language based' file format such as VRML it is possible to construct quite complex models by hand because of the built-in replication and re-use features.
With CAD tools the problem is to engage computer and human skills in appropriate ways: humans know qualitatively what type of structure is required and where this should match or join smoothly; computers can actually work out the numbers required to achieve the desired effect. This approach is most appropriate for building "objects" rather than landscapes. It really needs a dedicated 'VR' tool, as general CAD systems tend to produce 'inefficient' models and may not be equipped to insert the particular features required by VR systems, such as splitting planes for BSP hidden surface removal or texture maps. Historically such tools have tended to be expensive (eg Multigen at £100,000+).
Two main kinds of data can be incorporated automatically into a database.
Height Data - gathered by radar.
Texture data - gathered by aerial/satellite photography.
Height data on a regular grid can be used crudely by triangulating the grid to form a polygon mesh. The heights can then be inserted directly as z coordinates. However, this is a wasteful procedure since large flat areas will use as many polygons as more rugged terrain, and in the latter areas polygon shapes may be very distorted.
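The crude approach can be sketched as follows; the function name and the tiny 3x3 height sample are illustrative, not real survey data:

```python
def triangulate_height_grid(heights):
    """heights[i][j] is the sampled height at grid point (i, j).
    Returns a vertex list and a list of triangles (vertex-index triples).
    Each grid cell is split along one diagonal into two triangles."""
    rows, cols = len(heights), len(heights[0])
    # Grid point (i, j) becomes vertex (x=j, y=i, z=height).
    vertices = [(j, i, heights[i][j]) for i in range(rows) for j in range(cols)]
    idx = lambda i, j: i * cols + j
    triangles = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            triangles.append((idx(i, j), idx(i, j + 1), idx(i + 1, j)))
            triangles.append((idx(i, j + 1), idx(i + 1, j + 1), idx(i + 1, j)))
    return vertices, triangles

heights = [[0, 0, 0],
           [0, 5, 0],
           [0, 0, 0]]
verts, tris = triangulate_height_grid(heights)
print(len(verts), len(tris))  # 9 vertices, 8 triangles
```

Note the wastefulness: a completely flat 3x3 grid would produce exactly the same 8 triangles as a rugged one.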
We can use automatic processes to detect areas of high curvature and vary the polygon count accordingly. Texture Data can then be mapped directly onto these polygons. Note that polygon edges and texture feature edges (like river banks) will not necessarily coincide.
Note also that 3D objects like buildings will need to be treated separately. Further problems are caused by (lack of) consistency in time of day, time of year etc (light intensity, tides, shadows, crop types etc). Note the loss of control in all this.
If we do not care about the exact details of the environment then it is possible to synthesize a scene.
What are the inputs to this process?
Large scale information, eg low resolution aerial photography, or qualitative information about the type of landscape (pasture, arable, scrub, urban etc). But what do these terms mean?
We may have samples of the type of landscape but these cannot be repeated endlessly. One approach is to extract some statistical properties of a sample landscape and try to reproduce them in our generated version.
The most prominent of these statistical measures is what is called 'Fractal Dimension'.
Consider a 2D example.
This is called a 'Koch Curve'. The interesting case is the mathematical limit when the individual lines get infinitely small. The question is: 'How long is it?' This depends on how we measure it.
Suppose we measure it with a ruler of a certain length L.
The length of the curve is just nL, where n is the number of times that we can place the ruler up against the curve. e.g.
Measured with this ruler the length of the curve is just 4L.
HOW LONG IS THE KOCH CURVE?
With a ruler of length L/3, however, we achieve a length of 16 x (L/3) = (16/3)L, approximately 5.33L, which is larger than before.
Clearly the shorter we make our measuring stick the longer the line appears to be. This behaviour is mirrored by natural features like rivers and coastlines. The way to understand the problem with the length of the Koch Curve is to consider a related problem.
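This growth can be checked numerically. A minimal sketch (the function name is ours, not from the notes), using the counts above: a ruler of length L fits 4 times, and every division of the ruler length by 3 quadruples the count:

```python
def measured_length(ruler_level, L=1.0):
    """Length obtained when measuring the Koch curve with a ruler of
    length L / 3**ruler_level. At level 0 the ruler fits 4 times
    (length 4L, as in the notes); each shrinking of the ruler by a
    factor of 3 multiplies the placement count by 4."""
    n = 4 ** (ruler_level + 1)        # number of ruler placements
    ruler = L / 3 ** ruler_level      # ruler length
    return n * ruler

print(measured_length(0))  # 4.0
print(measured_length(1))  # 16/3, approximately 5.333
print(measured_length(5))  # keeps growing without bound
```

The measured length grows by a factor of 4/3 at every level, so it diverges as the ruler shrinks.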
USE THE SAME METHOD ON A SQUARE
Using a little square of side L we arrive at a measurement of 16L, whilst using a square of side L/2 we arrive at a measurement of 64 x (L/2) = 32L. Exactly the same problem as with the Koch curve.
Of course with the square we know the answer. We should be calculating with areas, not lengths, and hence instead of multiplying by (L/2) we should multiply by (L/2)^2, to get the same answer (16L^2) in each case. If we generalise this situation for the Koch curve, we use the fact that for an object of dimension x, measured with a 'stick' of length l, the total measure L is given by L = n l^x, where n is the number of sticks required. For the Koch curve, n multiplies by 4 each time l divides by 3, so constancy of the measure requires 4 = 3^x, ie x = log 4 / log 3, approximately 1.26.
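Solving n1 l1^x = n2 l2^x for x gives the dimension directly from two measurements. A small illustrative helper (the name is ours):

```python
import math

def dimension_from_counts(n1, l1, n2, l2):
    """Solve n1 * l1**x == n2 * l2**x for x, the (fractal) dimension:
    n * l**x is the measure that must stay constant as the stick shrinks."""
    return math.log(n2 / n1) / math.log(l1 / l2)

# Koch curve: 4 sticks of length 1 become 16 sticks of length 1/3.
print(dimension_from_counts(4, 1.0, 16, 1 / 3))   # log 4 / log 3 = 1.2618...
# Ordinary square: 16 tiles of side 1 become 64 tiles of side 1/2.
print(dimension_from_counts(16, 1.0, 64, 1 / 2))  # 2.0
```

The square comes out with dimension exactly 2, confirming that areas were the right measure, while the Koch curve sits strictly between 1 and 2.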
USE OF FRACTAL DIMENSION
This kind of curve is too regular for our purposes - but we can avoid this problem by randomising the process. The classic two dimensional version is the 'Fractal Mountain'. We start with a triangular grid that follows any large scale structure that is known to exist.
This is then recursively subdivided as shown:
The new vertices are then randomly displaced, with the displacement controlled to produce the correct fractal dimension - which could have been obtained by statistical analysis of some sample data. The fractal dimension turns out to give quite a good guide to the qualitative appearance of the landscape.
The appearance does also depend on the method of generation, however. In this particular example the effect can be improved by displacing the original vertices as well as the new ones at each 'generation'.
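The construction is easiest to see in its 1D profile form. A minimal midpoint-displacement sketch, assuming a simple rule where the displacement amplitude shrinks by 2^-roughness per generation (real terrain tools offer finer control over the resulting fractal dimension):

```python
import random

def midpoint_displacement(levels, roughness=1.0, seed=42):
    """1D analogue of the fractal-mountain construction: start with a
    flat profile, then repeatedly insert midpoints and displace them by
    a random amount whose range shrinks by 2**-roughness each generation.
    The roughness parameter controls the fractal dimension of the result."""
    random.seed(seed)
    profile = [0.0, 0.0]
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for a, b in zip(profile, profile[1:]):
            mid = (a + b) / 2 + random.uniform(-amplitude, amplitude)
            refined += [a, mid]
        refined.append(profile[-1])
        profile = refined
        amplitude *= 2 ** -roughness
    return profile

heights = midpoint_displacement(4)
print(len(heights))  # 17 points after four subdivision generations
```

Only the newly inserted midpoints are displaced here; as the notes observe, also re-displacing the existing vertices at each generation can improve the appearance.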
SOME TOPICS IN GEOMETRY
This section will give a summary of some mathematical terms, their definitions and uses. Detail of the theorems, algorithms etc. is beyond the scope of this course - but it helps to know what the words mean.
VORONOI DIAGRAM / DIRICHLET TESSELLATION
These two terms mean the same thing - a subdivision of a plane containing points into polygons such that: each polygon contains only one point; every location within the polygon is closer to that point than to any other.
The technique is applicable in a wide range of geographical/ecological/biological applications and may form the basis for the visualization of data.
There is also a 3D version of this construct, defining a kind of honeycomb of cells around the points.
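The definition can be made concrete with a brute-force sketch: label every cell of a small grid with the index of its nearest site. This is for illustration only; practical implementations use more efficient methods such as Fortune's sweep-line algorithm:

```python
def voronoi_cells(sites, width, height):
    """Brute-force Voronoi diagram on a width x height grid: each cell is
    labelled with the index of the nearest site (squared distances suffice
    for comparison). Contiguous regions of equal labels are the tiles."""
    def nearest(x, y):
        return min(range(len(sites)),
                   key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

sites = [(1, 1), (6, 2), (3, 6)]
grid = voronoi_cells(sites, 8, 8)
for row in grid:
    print(''.join(str(c) for c in row))
```

Each site's own location is trivially labelled with its own index; the boundaries between the labelled regions approximate the Voronoi edges.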
DELAUNAY TRIANGULATION
This construction is closely related to the Voronoi diagram.
The connections between the points shown create a mesh of triangles.
Note the choice of the connection x rather than the alternative y (shown dotted). This IS a Delaunay triangulation.
The alternative connection y links two Voronoi tiles that do not share an edge or vertex and hence cannot form part of a Delaunay triangulation.
However, even with this constraint, the Delaunay Triangulation is not necessarily unique.
UNIQUE DELAUNAY TRIANGULATION
An alternative definition of the Delaunay Triangulation works in terms of circles.
The smaller circle passes through the vertices of the triangle containing the edge x. The larger circle passes through the vertices of the triangle containing the edge y. The smaller circle contains no other points, whilst the larger circle contains the point v. The test of "a triangle whose circumscribing circle does not contain any other points" provides an alternative definition.
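The empty-circumcircle test can be written as the standard 3x3 determinant; the sign convention below assumes the triangle's vertices are given in counter-clockwise order:

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circle through a, b, c.
    Standard determinant test; a, b, c must be in counter-clockwise
    order (otherwise the sign of the determinant flips)."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# The unit circle passes through these three points (counter-clockwise);
# the origin is inside it, while (2, 0) lies outside.
a, b, c = (1, 0), (0, 1), (-1, 0)
print(in_circumcircle(a, b, c, (0, 0)))  # True
print(in_circumcircle(a, b, c, (2, 0)))  # False
```

A triangle belongs to the Delaunay triangulation exactly when this test fails for every other point in the set.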
NO UNIQUE DELAUNAY TRIANGULATION
In this case the four way vertex in the Voronoi diagram at p creates an ambiguity between the two possible triangulations using the lines x and y. This happens because the four points involved lie on a single circle.
CONVEX HULL
The convex hull is the minimal convex polygon/polyhedron that contains a given set of points. There is also a 3D version of this object, using planes rather than lines.
Algorithms for convex hulls.
As with sorting problems, the construction of convex hulls requires some subtlety if the time taken is not to be proportional to the square of the number of points.
Good algorithms will find a way of discarding groups of points at a stroke; bad ones will test all combinations.
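As an example of a good algorithm in this sense, Andrew's monotone chain sorts the points once (the O(n log n) part) and then discards interior points with a single cross-product test per step, rather than testing all combinations:

```python
def convex_hull(points):
    """Andrew's monotone chain: after the initial sort, every point is
    pushed and popped at most once, so the scan itself is linear."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # > 0 means o->a->b turns left (counter-clockwise).
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        hull = []
        for p in seq:
            # Pop points that would make a clockwise (or straight) turn.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # counter-clockwise, no duplicate endpoints

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)]
print(convex_hull(pts))  # the four corners of the square
```

The interior point (1, 1) and the collinear point (1, 0) are both discarded by the cross-product test without any exhaustive pairwise checking.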