1 Introduction

OpenGL is widely used for visualization in computer graphics and in many other fields. In this project, an icosahedron whose facets are painted with different colors is created using OpenGL, and an animation is generated to exhibit the 3-D structure of the icosahedron.
figure Icosahedron0_2.png
(a) Icosahedron, viewed from location 1.
figure Icosahedron0_4.png
(b) Icosahedron, viewed from location 2.
figure Icosahedron0_5.png
(c) Icosahedron, viewed from location 3.
figure Icosahedron0_6.png
(d) Icosahedron, viewed from location 4.

figure Icosahedron1_1.png
(e) Geometric solid obtained when each facet of an icosahedron is subdivided into four congruent equilateral triangles.
figure Icosahedron1_3.png
(f) The geometric solid in Fig. e↑ observed from a different angle.
figure Icosahedron2_3.png
(g) Geometric solid obtained when each facet of the solid in Fig. f↑ is subdivided into four congruent equilateral triangles.
figure Icosahedron3_5.png
(h) Geometric solid obtained when each facet of the solid in Fig. g↑ is subdivided into four congruent equilateral triangles.
Figure 1 Icosahedron and derived geometric solids
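To make the drawing and animation steps concrete, the following minimal sketch uses legacy OpenGL with GLUT to render two colored facets of an icosahedron and spin them. The vertex coordinates, colors, window size, and frustum values are illustrative assumptions rather than the exact settings used in this project.

/* Minimal sketch: draw two colored facets of an icosahedron and spin them.
 * Legacy OpenGL + GLUT; compile with e.g. gcc demo.c -lGL -lGLU -lglut. */
#include <GL/glut.h>

#define PHI 1.618034f                       /* golden ratio */
static float angle = 0.0f;                  /* current rotation angle (degrees) */

/* Four of the 12 icosahedron vertices, from the standard (0, +-1, +-phi) family. */
static const GLfloat v0[3] = { 0.0f,  1.0f,  PHI };
static const GLfloat v1[3] = { 0.0f, -1.0f,  PHI };
static const GLfloat v2[3] = {  PHI,  0.0f,  1.0f };
static const GLfloat v3[3] = { -PHI,  0.0f,  1.0f };

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -6.0f);        /* move the solid into the frustum */
    glRotatef(angle, 0.0f, 1.0f, 0.0f);     /* spin about the y axis */

    glBegin(GL_TRIANGLES);                  /* each facet gets its own color */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex3fv(v0); glVertex3fv(v1); glVertex3fv(v2);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex3fv(v0); glVertex3fv(v3); glVertex3fv(v1);
    glEnd();

    glutSwapBuffers();
}

static void idle(void)
{
    angle += 0.2f;                          /* advance the animation */
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(600, 600);
    glutCreateWindow("Icosahedron facets");
    glEnable(GL_DEPTH_TEST);

    glMatrixMode(GL_PROJECTION);            /* simple symmetric frustum */
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -1.0, 1.0, 2.0, 20.0);

    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}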

2 Coordinate transformations in OpenGL

An icosahedron has 12 vertices, 20 facets, and 30 edges, and it can be displayed by OpenGL once the 3-D coordinates of the 12 vertices are given.
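One convenient choice of such coordinates (a standard construction; the project may use a scaled or rotated variant) places the 12 vertices at

\[
(0,\ \pm 1,\ \pm\varphi),\qquad (\pm 1,\ \pm\varphi,\ 0),\qquad (\pm\varphi,\ 0,\ \pm 1),\qquad \varphi = \frac{1+\sqrt{5}}{2},
\]

which yields an icosahedron with edge length 2 centered at the origin.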
OpenGL uses two coordinate systems to locate 3-D points: the object coordinate system, (xo, yo, zo)T, and the world coordinate system, (xw, yw, zw)T (as shown in Fig. a↓). The relationship between the two coordinate systems is [xw, yw, zw, ww]T = Mmodel[xo, yo, zo, wo]T, in which [xo, yo, zo, wo]T and [xw, yw, zw, ww]T are the homogeneous coordinates of [xo, yo, zo]T and [xw, yw, zw]T, respectively. Mmodel, a four-by-four matrix, facilitates the creation of objects at various poses. Imagine that one wants to use OpenGL to create the model of a gull diving into the surface of a lake. Instead of directly generating a gull diving into the water at a certain angle, one can generate a flying gull at the origin of the world coordinate system (possibly by importing points obtained from a laser scanner) and then use Mmodel to move it to the appropriate position and orientation.
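In the fixed-function pipeline, Mmodel is typically composed from translations and rotations on the matrix stack. The fragment below is a minimal sketch of the gull example; drawGull() is a hypothetical placeholder and the pose values are arbitrary.

#include <GL/gl.h>

/* Hypothetical placeholder: in the real project this would emit the gull
 * geometry modeled at the origin (e.g. points imported from a laser scan). */
static void drawGull(void) { }

/* Build Mmodel on the modelview stack by composing a translation and a
 * rotation; the numeric pose values below are arbitrary. */
void placeGull(void)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(3.0f, 0.5f, -2.0f);       /* move to the dive location */
    glRotatef(-60.0f, 1.0f, 0.0f, 0.0f);   /* pitch nose-down into the water */
    drawGull();                            /* geometry is emitted in object space */
    glPopMatrix();
}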
figure vision_graphics_conversion(orthographic_frustum).png
(a) Orthographic frustum.
figure bunny_in_ndc.png
(b) Normalized device coordinates (NDC).
figure Dissertation/experiments/bunny_in_window.png
(c) Rendering.
Figure 2 Fig. a↑ shows the view frustum. In OpenGL, only the points located inside the frustum are rendered on the viewport. The frustum is determined when 1) the parameters l, r, t, b, f, and n are specified and 2) the origin and the orientation of the eye coordinate system are fixed. The coordinates of any point expressed in the eye coordinate system are converted to the normalized device coordinates (NDC) in Fig. b↑. The points whose coordinates are located in the frustum are individually converted to values between  − 1 and 1 in NDC. If we denote the height and the width of the viewport in Fig. c↑ by h and w, respectively, the origin of the x component of NDC is mapped to w/2 (the center of the viewport along x) and the origin of the y component of NDC is mapped to h/2 (the center of the viewport along y). In addition, xn is the ratio of the horizontal displacement from the viewport center to the half width (w/2), and yn is the ratio of the vertical displacement from the viewport center to the half height (h/2).
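In other words, ignoring any viewport offset, the mapping from NDC to window coordinates described above is

\[
x_{win} = \frac{w}{2}\,(x_n + 1), \qquad y_{win} = \frac{h}{2}\,(y_n + 1),
\]

so xn = 0 lands at the horizontal center w/2 and xn =  ± 1 at the left and right edges of the viewport, and similarly for yn.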
The coordinate system of the observer in OpenGL is usually called the eye coordinate system, [xe, ye, ze]T (as shown in Fig. a↑). The transformation from the world coordinates [xw, yw, zw, ww]T to the eye coordinates [xe, ye, ze, we]T is [xe, ye, ze, we]T = Mview[xw, yw, zw, ww]T. Therefore,
[xe, ye, ze, we]T = Mmodelview [xo, yo, zo, wo]T, 
where Mmodelview = MviewMmodel. In OpenGL, Mmodel and Mview cannot be accessed separately through built-in APIs; only their product, Mmodelview, can be accessed through specific APIs.
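For example, in the fixed-function API the view part is usually specified with gluLookAt and the model part with glTranslatef and glRotatef, yet only their product can be read back with glGetFloatv. The sketch below assumes a current GL context; the camera and pose values are arbitrary.

#include <GL/gl.h>
#include <GL/glu.h>
#include <stdio.h>

/* Compose Mview (via gluLookAt) and Mmodel (via glTranslatef/glRotatef),
 * then read back their product; no API returns either factor alone. */
void printModelview(void)
{
    GLfloat m[16];                               /* column-major 4x4 matrix */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,                     /* eye position   */
              0.0, 0.0, 0.0,                     /* look-at point  */
              0.0, 1.0, 0.0);                    /* up direction   */
    glTranslatef(1.0f, 0.0f, 0.0f);              /* model transform */
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);

    glGetFloatv(GL_MODELVIEW_MATRIX, m);         /* returns Mview * Mmodel */
    for (int i = 0; i < 4; ++i)
        printf("%8.3f %8.3f %8.3f %8.3f\n", m[i], m[i+4], m[i+8], m[i+12]);
}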
In OpenGL, not all points in space are rendered for users to visualize; only the points that lie inside the view frustum (shown in Fig. a↑) are visible, and the other points are culled. The view frustum is controlled by six parameters: l, r, t, b, f, and n. The first four parameters can be positive or negative numbers, but f and n are restricted to positive numbers given the coordinate configuration shown in Fig. a↑, in which the positive direction of ze points away from the bottom of the frustum.
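With the legacy API, these six parameters are passed directly to glFrustum on the projection matrix stack. A minimal sketch with arbitrary values follows; note that the argument order is l, r, b, t, n, f.

#include <GL/gl.h>

/* Specify the view frustum of Fig. 2a: l, r, t, b may have either sign,
 * while n and f must be positive distances in front of the eye. */
void setFrustum(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0,        /* l, r */
              -0.75, 0.75,      /* b, t */
               2.0, 50.0);      /* n, f */
    glMatrixMode(GL_MODELVIEW); /* switch back for subsequent model/view calls */
}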
Coordinate readings in the eye space ([xe, ye, ze, we]T) are converted to the clip coordinates (denoted by [xc, yc, zc, wc]T) through multiplication with the projection matrix, Mproj. That is, [xc, yc, zc, wc]T = Mproj[xe, ye, ze, we]T. Next, the clip coordinates are converted to the normalized device coordinates (NDC, shown in Fig. b↑) through normalization, so [xn, yn, zn]T = [xc ⁄ wc, yc ⁄ wc, zc ⁄ wc]T. xn, yn, and zn all range within [ − 1, 1] because l, b, and  − n in the eye space are mapped to  − 1 and r, t, and  − f are mapped to 1. Multiplied by the half width and half height of the viewport, respectively, xn and yn specify the projected location of a point P on the viewport, as shown in Fig. c↑. Note that although, in theory, wc can be any nonzero value, OpenGL chooses to set wc =  − ze, which preserves the depth information of a point even after projection.
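For reference, the matrix that glFrustum builds from the six parameters is

\[
M_{proj} =
\begin{bmatrix}
\dfrac{2n}{r-l} & 0 & \dfrac{r+l}{r-l} & 0 \\[4pt]
0 & \dfrac{2n}{t-b} & \dfrac{t+b}{t-b} & 0 \\[4pt]
0 & 0 & -\dfrac{f+n}{f-n} & -\dfrac{2fn}{f-n} \\[4pt]
0 & 0 & -1 & 0
\end{bmatrix},
\]

whose last row explains the choice wc =  − ze: multiplying the fourth row with [xe, ye, ze, we]T (with we = 1) yields wc =  − ze, which is then used as the divisor in the perspective division above.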