Fundamentals of Texture Mapping and Image Warping
Master's Thesis
under the direction of Carlo Séquin
Paul S. Heckbert
Dept. of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720
© 1989 Paul S. Heckbert
June 17, 1989
Abstract
The applications of texture mapping in computer graphics and image distortion (warping) in image processing share a core of fundamental techniques. We explore two of these techniques: the two-dimensional geometric mappings that arise in the parameterization and projection of textures onto surfaces, and the filters necessary to eliminate aliasing when an image is resampled during texture mapping or warping. With respect to mappings, this work presents a tutorial on three common classes of mapping: the affine, bilinear, and projective. For resampling, this work develops a new theory describing the ideal, space variant antialiasing filter for signals warped and resampled according to an arbitrary mapping. Efficient implementations of the mapping and filtering techniques are discussed and demonstrated.
Contents

1 Introduction
  1.1 Texture Mapping
  1.2 Image Warping
  1.3 Organization of this Thesis

2 Two-Dimensional Mappings
  2.1 Naive Texture Mapping
    2.1.1 Coordinate Systems
    2.1.2 A Naive Implementation of Texture Mapping
  2.2 Simple Two-Dimensional Mappings
    2.2.1 Affine Mappings
    2.2.2 Bilinear Mappings
    2.2.3 Projective Mappings
    2.2.4 Comparison of Simple Mappings
  2.3 Two-Dimensional Mappings in Texture Mapping
    2.3.1 Parameterizing Planar Patches
    2.3.2 Deducing Projective Mappings
    2.3.3 Other Parameterizations
    2.3.4 Scanning Textures and Image Warps
    2.3.5 Incremental Techniques for Screen Order Scanning
  2.4 Example: Photomosaics by Image Warping
  2.5 Summary

3 Resampling Filters
  3.1 Is Filtering Needed?
  3.2 Sampling: The Cause of Aliasing
    3.2.1 Frequency Analysis of Aliasing
  3.3 Antialiasing
    3.3.1 Sampling at Higher Frequency
    3.3.2 Prefiltering
    3.3.3 Some Low Pass Filters
  3.4 Ideal Resampling Filters
    3.4.1 General, Ideal Resampling Filters
    3.4.2 Affine Mappings and Space Invariant Resampling
    3.4.3 Two-Dimensional Affine Mappings
    3.4.4 Summary: Resampling Affine Mappings
  3.5 Image Resampling in Practice
    3.5.1 Resampling by Postfiltering
    3.5.2 Image Processing Considerations
    3.5.3 Filter Shape
    3.5.4 Resampling Filter Shape
    3.5.5 Local Affine Approximation
    3.5.6 Magnification and Decimation Filters
    3.5.7 Linear Filter Warps
    3.5.8 Elliptical Gaussian Filters
    3.5.9 An Implementation of Elliptical Gaussian Filters
  3.6 Future Work

A Source Code for Mapping Inference
  A.1 Basic Data Structures and Routines
    A.1.1 poly.h
    A.1.2 pmap.h
    A.1.3 mx3.h
    A.1.4 mx3.c
  A.2 Inferring Projective Mappings from 2-D Quadrilaterals
    A.2.1 pmap_poly.c
    A.2.2 pmap_gen.c
    A.2.3 mapping_example.c
  A.3 Inferring Affine Parameterizations from 3-D Polygons
    A.3.1 pmap_param.c
    A.3.2 param_example.c

B Ellipses
  B.1 Implicit Ellipse
  B.2 Transforming an Implicit Ellipse
  B.3 Ellipse Analysis by Rotation
  B.4 Ellipse Synthesis by Transforming a Circle
  B.5 Parametric Ellipse
  B.6 Converting Parametric to Implicit
  B.7 Converting Implicit to Parametric

Acknowledgements
Thanks to Charlie Gunn for clarifying projective geometry, Rick Sayre for his careful reading, and Jules Bloomenthal for detailed critiques between runs on Granite Chief. My readers, Professors Carlo Séquin and Brian Barsky, and Ed Catmull, provided helpful comments. Carlo Séquin and a grant from MICRO generously supported this research. Reference recommendations by Professor Avideh Zakhor and Ken Turkowski, formatting help from Ziv Gigus, and photographic assistance from Greg Ward were also invaluable. The guys of 508-7, John Hartman, Mike Hohmeyer, and Ken Shirriff, created the crucial camaraderie.
This paper is based on research begun at the New York Institute of Technology Computer Graphics Lab, where Lance Williams suggested numerous texture applications, Ned Greene and Dick Lundin demanded faster Omnimax resampling programs, and Ephraim Cohen showed me the 8 x 8 system for projective mapping inference. Kevin Hunter suggested the decomposition into two square-to-quadrilateral mappings. Discussions of "resize bugs" with Pat Hanrahan and Tom Porter at Pixar underscored the importance of both reconstruction and prefiltering during resampling.
If only there were more time, we could do it all.
Chapter 1
Introduction
In this thesis, we are interested in the modeling and rendering problems common to texture mapping and image warping. We begin by reviewing the goals and applications of texture mapping and image warping.
1.1 Texture Mapping
Texture mapping is a shading technique for image synthesis in which a texture image is mapped onto a surface in a three-dimensional scene, much as wallpaper is applied to a wall [Catmull74], [Blinn-Newell76], [Heckbert86b]. If we were modeling a table, for example, we might use a rectangular box for the table top, and four cylinders for the legs. Unadorned, this table model would look quite dull when rendered. The realism of the rendered image can be enhanced immensely by mapping a wood grain pattern onto the table top, using the values in the texture to define the color at each point of the surface. The advantage of texture mapping is that it adds much detail to a scene while requiring only a modest increase in rendering time. Texture mapping does not affect hidden surface elimination, but merely adds a small incremental cost to the shading process. The technique generalizes easily to curved surfaces [Catmull74].
Texture mapping can be used to define many surface parameters besides color. These include the perturbation of surface normal vectors to simulate bumpy surfaces (bump mapping), transparency mapping to modulate the opacity of a translucent surface, specularity mapping to vary the glossiness of a surface, and illumination mapping to model the distribution of incoming light in all directions. These applications are surveyed, and their original literature is cited, in [Carey-Greenberg85].
In all of the varieties of texture mapping mentioned above, geometric mappings are fundamental. Two-dimensional mappings are used to define the parameterization of a surface and to describe the transformation between the texture coordinate system and the screen coordinate system. In texture mapping applications the latter mapping is usually fully determined by the 3-D transformation defined by the camera, the modeling transformations describing the geometry of the scene, and the parameterization that maps a texture onto a surface. There are many texture representations, but in this work we restrict ourselves to the most common: discrete 2-D images.
When rendering a textured surface it is necessary to sample the texture image to produce the screen image. Except for scenes viewed with parallel projections, texture mappings are non-affine, so they have nonuniform resampling grids, as shown in figure 1.1 (affine mappings are those composed of rotations, scales, and translations). Antialiasing non-affine texture mappings requires a space variant texture filter. Such filters change shape as a function of position, and are much more difficult to implement than space invariant filters. Space variant filters are particularly important for texture mapping applications because of the extreme range of scales involved in many 3-D scenes. (These issues are explored in detail in chapter 3.)
1.2 Image Warping
Image warping is the act of distorting a source image into a destination image according to a mapping between source space (u, v) and destination space (x, y). The mapping is usually specified by the functions x(u, v) and y(u, v). (We use the term "warp" instead of its synonyms "distortion" and "transformation" because of its conciseness and specificity: "warp" specifically suggests a mapping of the domain of an image, while "transformation" can mean a mapping of the image range as well.)
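To make the direction of the mapping concrete, the fragment below sketches a warp implemented in destination order, which requires the inverse mapping u(x, y), v(x, y). The Image type and the function signatures are illustrative assumptions, not code from this thesis, and the nearest-neighbor point sampling deliberately ignores the filtering issues taken up in chapter 3.

    /* A minimal destination-order image warp (an illustrative sketch;
     * the Image type and mapping signatures are assumptions).
     * Scans the destination and point samples the source at the
     * inverse-mapped position; no antialiasing filter is applied. */
    typedef struct { int width, height; unsigned char *pix; } Image;

    void warp(const Image *src, Image *dst,
              double (*u)(double x, double y),   /* inverse map, (x,y) -> u */
              double (*v)(double x, double y))   /* inverse map, (x,y) -> v */
    {
        for (int y = 0; y < dst->height; y++)
            for (int x = 0; x < dst->width; x++) {
                int su = (int)(u(x, y) + 0.5);   /* nearest source sample */
                int sv = (int)(v(x, y) + 0.5);
                if (su >= 0 && su < src->width && sv >= 0 && sv < src->height)
                    dst->pix[y * dst->width + x] = src->pix[sv * src->width + su];
            }
    }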
Image warping is used in image processing primarily for the correction of geometric distortions introduced by imperfect imaging systems [Gonzalez-Wintz87]. Camera lenses sometimes introduce pincushion or barrel distortions, perspective views introduce a projective distortion, and other nonlinear optical components can create more complex distortions. In image processing we typically warp an image to remove its distortions, while in computer graphics we are usually introducing them. Image warps are also used for artistic purposes and special effects in interactive paint programs. For image processing applications, the mapping may be derived from a model of the geometric distortions of a system, but more typically the mapping is inferred from a set of corresponding points in the source and destination images. The point correspondence can be automatic, as for stereo matching, or manual, as in paint programs. Most geometric correction systems support a limited set of mapping types, such as piecewise affine, bilinear, biquadratic, or bicubic mappings. Such mappings are usually parameterized by a grid of control points. A survey of mapping and filtering techniques for image processing is found in [Wolberg88].
The appropriate antialiasing filter for non-affine mappings in image warping is, in general, space variant, just as for texture mapping. Filter quality and efficiency are less of a problem for image processing, however, since the mappings used there usually have a more uniform sampling grid than those for texture mapping. When it is known a priori that the sampling grid is nearly uniform, a fixed "interpolation" filter can be used with good results [Gonzalez-Wintz87]. Uniform resampling (often known as "sampling rate conversion") is a fairly common task for 1-D signals such as audio [Crochiere-Rabiner83]. Nonuniform sampling has received much less attention to date.
1.3 Organization of this Thesis
This thesis is split into two halves: chapter 2 contains a discussion of basic 2-D mappings, and chapter 3 develops and applies a theory for resampling filters. There are also two appendices: the first contains source code for several mapping tasks, and the second summarizes the mathematics of ellipses relevant to filtering.
Figure 1.1: a) A textured surface viewed in perspective. b) Nonuniform grid of texture samples in the screen.
Chapter 2
Two-Dimensional Mappings
In our discussion of mappings we restrict ourselves, for the most part, to 2-D images mapped to planar surfaces. The simplest classes of 2-D mappings are the affine, bilinear, and projective transformations. These mappings are well known in mathematics and have been used for years in computer graphics. Despite their fundamental importance, however, the latter two mappings have received little attention in the computer graphics literature. Other aspects of texture mapping and image warping, such as texture synthesis and filtering, have received much more attention, probably because they have more apparent impact on image quality than simple 2-D mappings.
We believe that texture mapping would be more common in rendering systems if basic 2-D mappings were more widely understood. In this work we hope to revive appreciation for one class of mappings in particular, the projective mapping.
2.1 Naive Texture Mapping
2.1.1 Coordinate Systems
To discuss texture mapping, we need to define several coordinate systems. Texture space is the 2-D space of surface textures and object space is the 3-D coordinate system in which 3-D geometry such as polygons and patches are defined. Typically, a polygon is defined by listing the object space coordinates of each of its vertices. For the classic form of texture mapping with which we are concerned, texture coordinates (u, v) are assigned to each vertex. World space is a global coordinate system that is related to each object's local object space using 3-D modeling transformations (translations, rotations, and scales). 3-D screen space is the 3-D coordinate system of the display, a perspective space with pixel coordinates (x, y) and depth z (used for z-buffering). It is related to world space by the camera parameters (position, orientation, and field of view). Finally, 2-D screen space is the 2-D subset of 3-D screen space without z. When we use the phrase "screen space" by itself we mean 2-D screen space.
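As a concrete illustration of these assignments, texture coordinates can simply be stored alongside each vertex. The C sketch below is only illustrative; the data structures actually used in this work appear in appendix A (poly.h).

    /* Illustrative vertex and polygon records for classic texture
     * mapping (a sketch; see poly.h in appendix A for the real ones).
     * Each vertex carries its object space position and its assigned
     * texture coordinates. */
    typedef struct {
        double x, y, z;   /* object space position */
        double u, v;      /* assigned texture coordinates */
    } TexturedVertex;

    typedef struct {
        int n;                 /* number of vertices */
        TexturedVertex *vert;  /* vertex list */
    } TexturedPolygon;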
The correspondence between 2-D texture space and 3-D object space is called the parameterization of the surface, and the mapping from 3-D object space to 2-D screen space is the projection defined by the camera and the modeling transformations (figure 2.1). Note that when we are rendering a particular view of a textured surface, it is the compound mapping from 2-D texture space to 2-D screen space that is of interest. For resampling purposes, once the 2-D to 2-D compound mapping is known, the intermediate 3-D space can be ignored. The compound mapping in texture mapping is an example of an image warp, the resampling of a source image to produce a destination image according to a 2-D geometric mapping.

Figure 2.1: The compound mapping from 2-D texture space to 2-D screen space is the composition of the surface parameterization (texture space to 3-D object space) and the viewing projection (object space to screen space).
2.1.2 A Naive Implementation of Texture Mapping
To demonstrate the subtleties of 2-D mappings, we outline here an algorithm for texture mapping within a z-buffer polygon renderer [Rogers85]. This simple algorithm will produce a number of visual flaws that are quite instructive.
We begin by defining the object space coordinates (x, y, z) and texture coordinates (u, v) at each vertex of each polygon. The polygons are transformed to screen space, yielding x, y, and z coordinates for each vertex. We will need to compute z, u, and v at each pixel, and we would like to do this with fast, incremental techniques. It is well known that the perspective transformation maps lines to lines and planes to planes [Foley-van Dam82], so linear interpolation is appropriate for computing z. Linear interpolation is also used in Gouraud and Phong shading, so it might seem reasonable to interpolate the texture coordinates (u, v) linearly as well. Following this reasoning, we scan convert the polygon into pixels by linearly interpolating x, z, u, and v along the left and right sides of the polygon and linearly interpolating z, u, and v across each scan line. Having texture coordinates (u, v) at each pixel, we can sample the texture array and use the resulting color as the screen pixel value.
The inner loop of this primitive texture mapper will then be:
    for x = xleft to xright
        if z < ZBUF[x,y] then        -- is new point closer?
            ZBUF[x,y] = z            -- update z-buffer
            (r,g,b) = TEX[u,v]       -- sample texture
            SCR[x,y] = (r,g,b)       -- write screen
        z = z + dz
        (u,v) = (u,v) + (du,dv)
where (a, b, c) denotes a vector. The increment values dz, du, and dv are calculated once per scan line. This code uses the texture as the output pixel color (i.e. no lighting effects) and it point samples the texture (i.e. no filtering).
Close examination of figure 2.2 reveals some of the flaws in the algorithm above. Aliasing (not visible here) would result from a high frequency texture. This can be eliminated by filtering an area surrounding the texture point (u, v) rather than sampling just a single pixel [Feibush-Levoy-Cook80], [Heckbert86b]. More fundamentally, however, these images do not exhibit the foreshortening we expect from perspective (compare with figure 2.3). The textured polygon also shows disturbing discontinuities along horizontal lines passing through each vertex. These discontinuities are artifacts of the linear interpolation of u and v. In animation, the horizontal ripples will move distractingly as the camera rolls, since the ripples are rotation variant, and the lack of foreshortening will make the texture appear to swim across a surface. As we will see, this naive texture mapper is correct only for affine mappings.
These problems occur because the texture transformation effected by our linear interpolation of u and v is inconsistent with the geometry transformation used to transform the vertices to screen space. Linear interpolation of (u, v) in a scan line algorithm computes a bilinear mapping, while the actual image warp defined by perspective is a projective mapping. One ad hoc solution to this inconsistency is to continue with linear interpolation, but to finely subdivide the polygon. If the texture coordinates are correctly computed for the vertices of the new polygons, the resulting picture will exhibit less rippling. It is hard to know how finely to subdivide the polygon, however. We can find more satisfactory, theoretically correct, solutions to these problems by studying simple, generic 2-D mappings. Later we will apply the generic techniques to texture mapping.
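Figure 2.3 was produced by the rational linear interpolation of §2.3.5. As a preview of that technique, the sketch below exploits the fact that for a perspective view the quantities u/w, v/w, and 1/w (w being the homogeneous screen coordinate) are linear in screen space, so they can be interpolated with cheap increments and divided at each pixel; the names and per-scan-line setup here are illustrative, not the incremental code of §2.3.5 itself.

    /* Perspective-correct texture coordinates across one scan line
     * (a sketch of rational linear interpolation; names illustrative).
     * u/w, v/w, and 1/w are linear in screen space, so linear
     * increments are exact; a division per pixel recovers (u, v). */
    void scanline(int xleft, int xright,
                  double uw, double vw, double w1,     /* u/w, v/w, 1/w at xleft */
                  double duw, double dvw, double dw1)  /* per-pixel increments */
    {
        for (int x = xleft; x <= xright; x++) {
            double u = uw / w1;   /* recover true texture coordinates */
            double v = vw / w1;
            /* ... sample texture at (u, v) and shade pixel x ... */
            uw += duw; vw += dvw; w1 += dw1;
        }
    }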
2.2 Simple Two-Dimensional Mappings
There is a limitless variety of possible 2-D mappings. Mappings for applications of texture mapping or image warping must satisfy the needs of both the user and the implementer of the system. For a designer modeling a 3-D scene or the user of image distortion software, one of the most important criteria for selecting a mapping is predictability. One form of predictability present in the simplest mappings is the preservation of straight lines. If a texture mapping or image warp bends lines then the results are less predictable than those of a line-preserving mapping. Other desirable properties are the preservation of equispaced points, angles [Fiume-Fournier-Canale87], and areas. The implementer of such a system seeks efficient, incremental computations and well-behaved functions (single-valued and invertible). We will see how well the simplest 2-D mappings meet these criteria.
Figure 2.2: Left: A checkerboard texture. Right: Image produced by naive texture mapper using linear interpolation of u, v. Note horizontal lines of discontinuity passing through vertices (indicated with dashed lines).
Figure 2.3: Image produced by projective warp algorithm of §2.3.5 using rational linear interpolation of texture coordinates. Note proper foreshortening.
A two-dimensional mapping is a mapping (transformation) that distorts a 2-D source space into a 2-D destination space. It maps a source point (u, v) to a destination point (x, y), according to the functions x(u,v) and y(u,v). We will discuss three classes of mappings: affine, bilinear, and projective.
Homogeneous Notation
The homogeneous representation for points provides a consistent notation for affine and projective mappings. Homogeneous notation was used in projective geometry [Maxwell46], [Coxeter78] long before its introduction to computer graphics [Roberts66]. The homogeneous notation is often misunderstood so we will take a moment to clarify its use and properties.
In familiar Euclidean geometry we represent points of the real plane R² by vectors of the form (x, y). Projective geometry deals with the projective plane, a superset of the real plane, whose homogeneous coordinates are (x′, y′, w). In projective geometry the 2-D real point (x, y) is represented by the homogeneous vector p = (x′, y′, w) = (xw, yw, w), where w is an arbitrary nonzero number. Vectors of the form (xw, yw, w) for w ≠ 0 form the equivalence class of homogeneous representations for the real point (x, y). To recover the actual coordinates from a homogeneous vector, we simply divide by the homogeneous component; e.g., the homogeneous vector p = (xw, yw, w) = (x′, y′, w) represents the actual point (x, y) = (x′/w, y′/w). This division, a projection onto the w = 1 plane, cancels the effect of scalar multiplication by w. When representing real points with homogeneous notation we could use any nonzero w, but it is usually most convenient to choose w = 1 so that the real coordinates can be recovered without division. Projective space also includes the points at infinity: vectors of the form (x′, y′, 0), excluding (0, 0, 0). The points at infinity lie on the line at infinity. We will see later how augmentation of the real plane by these points simplifies projective geometry.
In homogeneous notation, 2-D points are represented by 3-vectors and 3-D points are represented by 4-vectors. For affine and projective mappings, we denote points in source space by ps = (u′, v′, q) and points in the destination space by pd = (x′, y′, w).
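In code, recovering a real point from a homogeneous vector is a single division by the homogeneous component; the helper below is a minimal sketch with illustrative names.

    /* Project a homogeneous 2-D point onto the w = 1 plane to recover
     * the real point (x, y). A minimal sketch; assumes w != 0, i.e.
     * the vector is not a point at infinity. */
    typedef struct { double x, y, w; } Homo2;

    void homo_to_real(Homo2 p, double *x, double *y)
    {
        *x = p.x / p.w;
        *y = p.y / p.w;
    }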
2.2.1 Affine Mappings
Affine mappings include scales, rotations, translations, and shears; they are linear mappings plus a translation [Foley-van Dam82]. Formally, a mapping T(x) is linear iff T(x + y) = T(x) + T(y) and T(ax) = aT(x) for any scalar a, and a mapping T(x) is affine iff there exists a constant c and a linear mapping L(x) such that T(x) = L(x) + c for all x. Obviously, linear mappings are a subset of affine mappings.
A general 2-D affine mapping may be written algebraically as:
$$ p_d = p_s M_{sd} $$

$$ \begin{pmatrix} x & y & 1 \end{pmatrix} = \begin{pmatrix} u & v & 1 \end{pmatrix} \begin{pmatrix} a & d & 0 \\ b & e & 0 \\ c & f & 1 \end{pmatrix} $$
Transform matrices are denoted Mab, where a is the initial coordinate system and b is the final coordinate system. Notice that we have chosen w = q = 1 without loss of generality. The 3 × 3 matrix Msd has 6 degrees of freedom. Geometrically, the vectors (a, d) and (b, e) are basis vectors of the destination space, and (c, f) is the origin.

Figure 2.4: Affine warps of image at left.
As shown in figure 2.4, affine mappings preserve parallel lines and preserve equispaced points along lines (meaning that equispaced points along a line in the source space transform to equispaced points along a line in the destination space, although the spacing in the two coordinate systems may be different). We can invert an affine mapping to find the destination-to-source transformation simply by inverting the mapping matrix (if Msd has an inverse). Similarly, affine mappings may be composed by concatenating their matrices.
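Under the row-vector convention pd = ps Msd used above, concatenation is an ordinary 3 × 3 matrix product, with the first mapping on the left. The routine below is a minimal sketch; the matrix code actually used in this work is mx3.c in appendix A.

    /* Concatenate 3x3 transform matrices: ab applies a, then b, under
     * the row-vector convention p' = p M, since (p a) b = p (a b).
     * A sketch; see mx3.c in appendix A for the routines used here. */
    void mx3_mul(const double a[3][3], const double b[3][3], double ab[3][3])
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                ab[i][j] = 0.0;
                for (int k = 0; k < 3; k++)
                    ab[i][j] += a[i][k] * b[k][j];
            }
    }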
Since an affine mapping has 6 degrees of freedom, it may be defined geometrically by specify- ing the source and destination coordinates of three points. The mapping and its inverse will be nondegenerate if the three points are noncollinear in both spaces. To infer the affine transform matrix from a three point correspondence that maps (uk, vk) to (xk, yk) for k = 0, 1, 2, we solve the following matrix equation for Msd:
$$ M_d = M_s M_{sd} $$

$$ \begin{pmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \end{pmatrix} = \begin{pmatrix} u_0 & v_0 & 1 \\ u_1 & v_1 & 1 \\ u_2 & v_2 & 1 \end{pmatrix} \begin{pmatrix} a & d & 0 \\ b & e & 0 \\ c & f & 1 \end{pmatrix} $$
The above relation is 6 scalar equations in 6 unknowns a-f, but the system simplifies easily into two 3 × 3 systems, one for x and one for y.
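For illustration, the routine below solves those two 3 × 3 systems directly by Cramer's rule; it is a self-contained sketch with illustrative names, not the appendix A code, and it reports failure when the three source points are collinear (zero determinant).

    #include <math.h>

    /* Infer affine coefficients a..f mapping (u[k],v[k]) -> (x[k],y[k]),
     * k = 0,1,2, by Cramer's rule (a sketch, not the appendix A code).
     * m = {a, b, c, d, e, f} with x = a u + b v + c, y = d u + e v + f.
     * Returns 0 if the source points are collinear. */
    int affine_from_3pts(const double u[3], const double v[3],
                         const double x[3], const double y[3], double m[6])
    {
        double det = u[0]*(v[1]-v[2]) - v[0]*(u[1]-u[2]) + (u[1]*v[2]-u[2]*v[1]);
        if (fabs(det) < 1e-12) return 0;
        m[0] = (x[0]*(v[1]-v[2]) - v[0]*(x[1]-x[2]) + (x[1]*v[2]-x[2]*v[1])) / det;
        m[1] = (u[0]*(x[1]-x[2]) - x[0]*(u[1]-u[2]) + (u[1]*x[2]-u[2]*x[1])) / det;
        m[2] = (u[0]*(v[1]*x[2]-v[2]*x[1]) - v[0]*(u[1]*x[2]-u[2]*x[1])
                + x[0]*(u[1]*v[2]-u[2]*v[1])) / det;
        m[3] = (y[0]*(v[1]-v[2]) - v[0]*(y[1]-y[2]) + (y[1]*v[2]-y[2]*v[1])) / det;
        m[4] = (u[0]*(y[1]-y[2]) - y[0]*(u[1]-u[2]) + (u[1]*y[2]-u[2]*y[1])) / det;
        m[5] = (u[0]*(v[1]*y[2]-v[2]*y[1]) - v[0]*(u[1]*y[2]-u[2]*y[1])
                + y[0]*(u[1]*v[2]-u[2]*v[1])) / det;
        return 1;
    }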
Affine mappings can map any triangle in source space into any triangle in destination space, or map a source rectangle into a destination parallelogram, but no more general distortions are possible. To warp a rectangle into a general quadrilateral, we will need a bilinear, projective, or some other more complex mapping.
2.2.2 Bilinear Mappings
Bilinear mappings are most commonly defined as a mapping of a square into a quadrilateral. As shown in figure 2.5, this mapping can be computed by linearly interpolating by fraction u along the top and bottom edges of the quadrilateral, and then linearly interpolating by fraction v between the two interpolated points.
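Expanding the two interpolation steps yields a constant term, terms linear in u and in v, and a uv cross term, which is what makes the mapping bilinear. The evaluation below is a minimal C sketch; the corner names are illustrative, with pij denoting the image of (u, v) = (i, j) on the unit square.

    /* Evaluate a bilinear mapping of the unit square onto the
     * quadrilateral with corners p00, p10, p01, p11 (illustrative
     * names; pij is the image of (u,v) = (i,j)). */
    typedef struct { double x, y; } Point2;

    Point2 bilinear_map(Point2 p00, Point2 p10, Point2 p01, Point2 p11,
                        double u, double v)
    {
        Point2 q;
        /* interpolate by u along the v=0 and v=1 edges, then by v */
        q.x = (1-v)*((1-u)*p00.x + u*p10.x) + v*((1-u)*p01.x + u*p11.x);
        q.y = (1-v)*((1-u)*p00.y + u*p10.y) + v*((1-u)*p01.y + u*p11.y);
        return q;
    }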