Relativistic Visualization
~The State of the Art~
07/05/04

1.0 Introduction

At the time of Einstein's discovery of relativity, Maxwell's (electromagnetic) equations were known and the speed of light (c) had been measured with some accuracy. The theoretical physicists of the day were considering the physical implications of such a speed limit on the laws of the Universe.

This is when Einstein (see Timeline) departed from convention and suggested that the speed of light was constant while time varied (rather than vice versa). This concept had profound implications for the meaning of causality and simultaneity. The 'future' was no longer easily distinguishable from the 'past': it was possible for a space-time event to be neither future nor past. These theories have implications for computer graphics, as will be discussed below.

Furthermore, Einstein's Theory of Relativity incorporated the concept of Time as another Dimension for our heretofore 3D Universe.

1.1 Einstein's Two Theories of Relativity

Einstein's Theory of Relativity was developed in two stages: Special Relativity and General Relativity. As the names suggest, Special Relativity (or SR) deals with the special case of relativity where the Universe (locally) is empty of mass and space is 'flat'. In SR there is no gravitational curvature and light always travels in a straight line. General Relativity (or GR) on the other hand deals with the general case where space may be flat or curved. In GR there may be gravitational curvature. In other words, light rays can bend in this warped space. The Special Theory (SR) is a special case or subset of the General Theory (GR). (Do not confuse acceleration, per se, which can exist in SR, with acceleration due to gravitational curvature which can exist only in GR).

The common premise of these two theories is that: (1) the laws of physics are the same in all reference frames; and (2) the speed of light is finite and invariant. Consequently, all observers must measure the same value for lightspeed no matter what their velocity. If that is so, then their subjective clocks must change to compensate, since the lightspeed will not. Furthermore, since all distance measurements ultimately rely on timing the passage of light between known points in space, the lengths of objects must appear to change, depending on their velocity. Experiment has shown this to be true.

It has been mathematically demonstrated that there can be only one such limiting velocity. If this is accepted, and the speed of light is the limiting velocity, then the propagation of all causality (in our Universe) must be limited to lightspeed. This includes all information about the local state of our Universe, such as gravity, energy, force, and mass. If so, the speed of gravity must likewise be equal to c. Experimentation is in progress to determine whether this is indeed true.

This is all very different from our day-to-day experience, i.e., it is nonintuitive.

1.2 Extra Dimensions

We are all familiar with 3D space, as well as Einstein's introduction of a time dimension, thus yielding a 4D space-time. This time dimension has different characteristics than our 3 space dimensions. This space-time is sometimes referred to as having 3+1 dimensionality: 3 isotropic spatial dimensions plus 1 anisotropic temporal dimension.

In 1914, still before General Relativity, Nordström attempted to unify gravity and electromagnetism in a 5D flat space-time. In the wake of Einstein's General Relativity, Theodor Kaluza extended GR into 5D and attempted to extract ordinary 4D Einstein gravity and Maxwell electromagnetism. Einstein, who refereed Kaluza's paper, delayed its publication until 1921. In 1926, Oskar Klein proposed that the extra dimension's size was on the order of the Planck scale (10^-35 m). Since that time, String Theories have built on the Kaluza-Klein theories to suggest 10, 11, and even 26 dimensions of various sizes and characteristics.

The latest theories suggest that our 3D Universe is but one 3D (or 4D) membrane (brane) in a higher dimensional bulk universe, similar to one of the many 2D walls in a 3D house, or if you prefer, the 2D pages in a 3D book.

2.0 Visualization - A Brief Overview

Let us now discuss the history of visualizing relativistically moving objects, and how relativity affects computer graphics applications when they are applied to such rapidly moving objects.

2.1 Pencil & Paper Analysis

Mathematical analysis suggested that objects are Lorentz contracted [Lorentz] in the direction of relative motion. This implied that a basketball moving with sufficient velocity relative to an observer would look like a pancake to that observer, with its symmetry axis collinear with the velocity vector.

In 1959, Penrose [penrose] and Terrell [online reference] showed that a relativistically moving object would appear to rotate about the object's axis that is perpendicular to both (1) the velocity vector and (2) the line-of-sight vector from the observer to the object in the laboratory (or stationary) frame.

Further work was performed by V. F. Weisskopf in 1960 and Mary Boas in 1961 [boas] to show that the silhouette of a relativistically moving sphere appears circular, due to the coincident contributions of Terrell rotation and Lorentz contraction.

This seeming contradiction between contraction and rotation was resolved with visualization software at the end of the 20th century, as shown in these animations.

2.2 Early 4D Visualization

In the early 1980s, when commercial workstations were first powerful enough to generate real-time animation, the author of this discussion independently created a real-time 4D visualization of a rotating shaded hypercube as a test fixture for the Calcomp Prisma CAD Workstation [hyperd link]. Developing hypercube visualizers has become a popular pastime in recent years, as is evidenced by their proliferation on the Internet. [Link to 4D site list]

These visualizations treat this hypothetical 4th dimension as simply another space dimension, isotropic and orthogonal to our 3 space axes. Conventional rotations and translations are applied via matrix transforms and projected onto a 2D screen (or two 2D windows for stereo viewing).
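To make this isotropic technique concrete, the following is a minimal Python sketch of such a hypercube visualizer: a rotation in one of the six coordinate planes of 4-space, followed by two successive perspective projections (4D to 3D, then 3D to 2D). The function names and viewer distances are illustrative choices, not taken from any of the implementations cited here.

    import numpy as np

    def rotation_4d(i, j, theta):
        """Rotation by angle theta in the plane spanned by axes i and j (0-3).
        In 4D, rotations occur in a plane rather than about an axis; there
        are six such coordinate planes."""
        m = np.eye(4)
        c, s = np.cos(theta), np.sin(theta)
        m[i, i], m[j, j] = c, c
        m[i, j], m[j, i] = -s, s
        return m

    def project_4d_to_2d(points, d4=3.0, d3=3.0):
        """Two successive perspective projections, 4D -> 3D -> 2D.
        d4 and d3 are (hypothetical) viewer distances along the w and z
        axes; points is an (N, 4) array of (x, y, z, w) vertices."""
        w = points[:, 3]
        p3 = points[:, :3] / (d4 - w)[:, None]   # project out the 4th axis
        z = p3[:, 2]
        return p3[:, :2] / (d3 - z)[:, None]     # then project out z

    # The 16 vertices of a hypercube, rotated in the x-w plane.
    verts = np.array([[x, y, z, w] for x in (-1, 1) for y in (-1, 1)
                      for z in (-1, 1) for w in (-1, 1)], dtype=float)
    screen = project_4d_to_2d(verts @ rotation_4d(0, 3, 0.3).T)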

In 2001, Mike d'Zmura of UCI [mike] created such a Virtual 4D World [link to lab] using this isotropic 4D technique that was successfully navigated by test subjects wearing real-time headsets.

A list of published papers, indicative of the activity in the field of flat 4D Visualization, is provided below.


2.3 Relativistic Visualization

Relativistic Visualization can be implemented as a 4D application and addressed by these 4D technologies.

Einstein's SR and GR theories will be addressed independently. The Special Theory of Relativity (SR) will be addressed first since it is simpler, requiring simpler algorithms.

2.3.1 Special Relativity: Flat Space Visualization

The simplest implementation is a 4D object representation, where each vertex of the object is a 4-component tuple: (t, x, y, z) [0123 order]. In addition to the usual 3 spatial coordinates, there is a 4th, t or time, coordinate. As with a 3D raytracer, a ray from each screen pixel is intersected, not with a 3D object space, but with the 4D space-time object space. It can be demonstrated that a straight line will intersect a 4D [flat] space-time (R4) object in the same way a line intersects a 3D space object (R3). If we define a scalar distance in the usual way, i.e., as the dot product of a vector with itself, then we can solve for the shortest intersection of a light ray from a pixel into the 4D object space. The exact form of the dot product is a function of the metric that is assumed for the particular space-time under consideration. For our flat space we shall explore several [i.e. - Minkowski, Lorentz, ...?] metrics.

In conventional terms, each vertex of the object is represented as a 4-component coordinate or 4-tuple, (t,x,y,z), where the t component represents the particular vertex's location at time t. Thus a stationary sphere would trace out a hyper-cylinder along the t axis in this 4-space (or an ordinary cylinder, if we ignore one of the 3 spatial dimensions). A moving sphere would trace out a cylinder at an angle to the t axis; the angle would increase with the object's velocity.

This is the strategy selected by the author [DVBlack] to represent Einstein's spacetime model.
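A minimal Python sketch of this representation, assuming units where c = 1 and a sphere moving at constant velocity (so that its worldline is the tilted cylinder just described): requiring a backward-in-time light ray to meet the sphere's surface reduces to a quadratic, which is precisely a straight-line/cylinder intersection in 4-space. All names here are illustrative, not the author's actual code.

    import numpy as np

    C = 1.0  # units in which the speed of light is 1

    def intersect_moving_sphere(cam_pos, cam_time, ray_dir,
                                center0, velocity, radius):
        """Intersect a backward-in-time light ray with a moving sphere.
        The sphere's worldline is center(t) = center0 + velocity * t, a
        tilted (hyper)cylinder in 4-space. The ray leaves the camera event
        (cam_time, cam_pos) in unit spatial direction ray_dir; after a
        distance s it is at cam_pos + s*ray_dir at time cam_time - s/C."""
        w = cam_pos - (center0 + velocity * cam_time)  # offset at t = cam_time
        e = ray_dir + velocity / C                     # effective ray direction
        a = np.dot(e, e)
        b = 2.0 * np.dot(w, e)
        c = np.dot(w, w) - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                                # ray misses the worldline
        roots = sorted([(-b - np.sqrt(disc)) / (2 * a),
                        (-b + np.sqrt(disc)) / (2 * a)])
        s = next((r for r in roots if r > 1e-9), None)  # nearest = most recent
        return None if s is None else (s, cam_time - s / C)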

A list of published papers, indicative of the state-of-the-art activity in the field of SR Visualization, is provided below.

2.3.2 General Relativity: Curved Space Visualization

The difference between SR and GR is that in GR the space is not only warped, but warps dynamically. The local warp is a function of the local energy. This warp bends the light rays, and the light rays (being energy) contribute in turn to the warp. Consequently, intersecting a light ray with a 4D object is no longer a case of intersecting a straight line with an object in a 4D object-space-time. The light ray can bend, as is evidenced by gravitational lensing. [conjecture 1][conjecture 2]

A list of published papers, indicative of the state-of-the-art activity in the field of GR Visualization, is provided below.

2.4 Multidimensional Models

Multidimensional visualization is in itself a young field worthy of research. [IEEE Visualization Proceedings 98] String theorists calmly refer to 10, 11, and even 26 dimensional bulk universes. [implementation] Large Extra Dimension (LXD) models are appearing in contemporary physics literature [Randall-Sundrum].

Exploring higher dimensional representations of empirical physical phenomena may yet lead to a new and profound understanding of the nature of our Universe beyond the Standard Model.

2.5 Spacetime Visualization

Spacetime visualization is a new field that has not been addressed on its own merits. Spacetime visualization is Euclidean 4D visualization with constraints. These constraints are: (1) a finite limit on lightspeed; and (2) each inertial frame has its own time axis, collinear with its velocity 4-vector.

3.0 Visual Effects

There are certain visual effects that must be accounted for by any relativistic visualization technology. These effects include geometric distortions, changes in brightness, viewing directions, and color. The optimal algorithm would accommodate all these phenomena without special consideration. The best technique may also yield new and unexpected results.

Rather than treating the effects of relativistic velocities individually, the fundamental nature of spacetime that causes them will be addressed. The visual effects of relativistic motion can be broadly divided into two categories: physical phenomena that are a result of the fundamental nature of spacetime, and illusory visual effects due to viewing with a finite lightspeed. In the phenomena category are length contraction and time dilation. In the latter, illusory category are visual effects such as retarded time, aberration, Terrell rotation, and Doppler shift.

In the following discussion, it is assumed that we are at rest in our own 'rest-frame' and that moving objects move relative to us (relative to our rest-frame).

3.1 Geometric Distortion

There is a distinction between apparent distortion and real distortion. Apparent distortion is due to a finite light speed. Any finite light speed, e.g. 30 km/h or 3*10^5 km/s, would cause visual distortion, due to the delay in delivering information from different parts of the object. However, certain distortions are fundamental characteristics of our spacetime continuum, not merely 'apparent'. In this category are length contraction and time dilation.

3.1.1 Length Contraction

Length contraction in the direction of travel is a fundamental property of our spacetime. This is not an apparent effect resulting from the finite speed of light, as are many of the other effects. Length contraction of a moving object is a 4D effect: for an object with a constant velocity, the object's dimensions in 4-space remain constant. [conjecture 3] The length contraction can be considered to be the projection of a cross section of the 4D object perpendicular to the observer's time axis. While it can be considered an 'apparent' effect in 4-space, it is a 'real' and measurable physical phenomenon in 3-space.

The effect becomes apparent as the relative velocity of an object approaches the speed of light. For example, at 86.6% of the speed of light, an object shrinks to 1/2 its rest length. Is this apparent or real? The length contraction is theoretically measurable.
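The arithmetic is a direct application of the standard Lorentz factor, gamma = 1/sqrt(1 - beta^2), with the measured length L = L0/gamma, as in this small Python sketch:

    import math

    def lorentz_gamma(beta):
        """Lorentz factor for speed v = beta * c."""
        return 1.0 / math.sqrt(1.0 - beta * beta)

    def contracted_length(rest_length, beta):
        """Length measured in a frame where the object moves at beta * c."""
        return rest_length / lorentz_gamma(beta)

    print(contracted_length(1.0, 0.866))  # ~0.50: half the rest length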

3.1.2 Terrell Rotation

Terrell rotation is an effect of light's finite speed. The proximal surface of a relativistically moving object appears to rotate so that the surface faces the direction of motion, and the distal surfaces become visible. This occurs because the light from the distal surfaces of the object left the object before the light from the proximal surfaces. Hence the light from the distal surfaces carries information about the object's past, i.e. where it used to be, not where it is now.

Consequently, when a spherical object moves past an observer, the observer can 'see' both the front and the back (the side facing away from the observer) of the relativistic sphere. At a relative velocity near the speed of light the sphere will appear to have rotated nearly 90 degrees, such that the trailing quarter of the sphere opposite the observer is visible to the observer. This effect can be seen in the accompanying animation. [conjecture 4]
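The geometry behind the effect can be sketched directly: the apparent position of a uniformly moving point is its position at the retarded time, found by solving a quadratic for the light-travel delay. Assembling an object's outline from such retarded positions is what yields the Terrell rotation. A minimal sketch in units where c = 1; the names are illustrative.

    import numpy as np

    C = 1.0  # units in which the speed of light is 1

    def apparent_position(center0, velocity, t_obs, observer):
        """Where a uniformly moving point appears to be at time t_obs:
        its position at the retarded time t_r, which satisfies
        |center(t_r) - observer| = C * (t_obs - t_r)."""
        w = center0 + velocity * t_obs - observer  # true offset at t_obs
        a = np.dot(velocity, velocity) - C * C     # negative for v < C
        b = -2.0 * np.dot(w, velocity)
        c = np.dot(w, w)
        disc = b * b - 4.0 * a * c
        dt = (-b - np.sqrt(disc)) / (2.0 * a)      # the positive root
        return center0 + velocity * (t_obs - dt)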

3.2 Aberration

Just as when you speed along in your car, the falling rain streams from front to back, so light's photons seem to stream from front to back, as you move along relativistically. This is 'aberration'. All the light from every location, except that directly behind you, migrates to the front in your direction of motion.

Since the speed of light is the same in every frame, your motion cannot change how fast the photons pass you; what changes is the direction from which they appear to arrive. At 90% of the speed of light, the apparent positions of light sources are crowded toward your direction of motion, so that most of the light seems to come from directly before you.
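The standard aberration formula, cos(theta') = (cos(theta) + beta) / (1 + beta*cos(theta)), makes this quantitative. A small Python sketch with an illustrative example:

    import math

    def aberrated_angle(theta, beta):
        """Apparent angle from the direction of motion, as seen by an
        observer moving at beta * c, of a source at rest-frame angle theta."""
        c = math.cos(theta)
        return math.acos((c + beta) / (1.0 + beta * c))

    # A source 90 degrees off to the side appears only ~26 degrees ahead
    # of an observer moving at 0.9c.
    print(math.degrees(aberrated_angle(math.pi / 2, 0.9)))  # ~25.8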

3.3 Intensity & Color

3.3.1 Searchlight Effect

The increased intensity looking forward along the velocity vector at relativistic speeds is called the searchlight effect. Intensity is the radiant energy per unit area per unit time. The effect is a combination of time dilation, length contraction, and aberration. As the observer's time dilates, more radiant energy (more photons) can impact the observer's surface (camera lens) per unit time. Aberration also increases the intensity, since there appear to be more light sources ahead of the camera as the forward photons are concentrated into a smaller source area. The wavelength of the incident photons also decreases (higher frequency, hence greater energy per photon); however, this effect is accounted for in the Doppler effect, just as the light sources are in the aberration effect.
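These contributions combine into the standard relativistic Doppler factor D = 1/(gamma*(1 - beta*cos(theta))), with bolometric intensity scaling as D^4: one power each from photon energy and photon arrival rate, and two from the solid-angle compression of aberration. A hedged sketch; angle conventions vary between texts, and here theta is measured in the observer's frame from the direction of motion.

    import math

    def doppler_factor(beta, theta):
        """Relativistic Doppler factor D for light arriving at angle theta
        (observer frame, theta = 0 dead ahead)."""
        gamma = 1.0 / math.sqrt(1.0 - beta * beta)
        return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

    def beamed_intensity(i_rest, beta, theta):
        """Bolometric intensity scales as D**4."""
        return i_rest * doppler_factor(beta, theta) ** 4

    print(beamed_intensity(1.0, 0.9, 0.0))  # light dead ahead at 0.9c: ~361x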

3.3.2 Doppler Shift

The change in apparent frequency of a lightray is attributed to the Doppler shift. Length contraction, time dilation, and finite light speed all contribute to the relativistic Doppler effect. As with sound in air, a sensor converging with a source detects a higher frequency. However, there are also counter-contributions from length contraction and time dilation.
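The same Doppler factor as in the searchlight sketch gives the observed frequency directly, and it exhibits the counter-contribution mentioned above: a purely transverse ray (theta = 90 degrees in the observer's frame) is still redshifted by 1/gamma, the time-dilation term with no classical analogue. The example frequency is illustrative.

    import math

    def doppler_frequency(f_source, beta, theta):
        """Observed frequency for light emitted at f_source, arriving at
        angle theta (observer frame, theta = 0 dead ahead) while closing
        at speed beta * c."""
        gamma = 1.0 / math.sqrt(1.0 - beta * beta)
        return f_source / (gamma * (1.0 - beta * math.cos(theta)))

    f = 5.4e14  # green light, Hz
    print(doppler_frequency(f, 0.5, 0.0))          # head-on: blueshifted ~1.73x
    print(doppler_frequency(f, 0.5, math.pi / 2))  # transverse: redshift 1/gamma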

3.4 Unruh Radiation

Similar to the Hawking radiation emitted by a black hole, Unruh radiation is the result of the constant acceleration of a particle, as per Einstein's Equivalence Principle (EEP). A constantly accelerating particle creates a hyperbolic event horizon in the direction opposite its direction of acceleration. The distance from the particle to its event horizon decreases monotonically as the particle's acceleration increases. Effectively this means that a photon beyond the particle's event horizon could never catch up to the particle. Unruh radiation is similar to Hawking radiation in that it is created at the particle's event horizon by the same mechanism that creates Hawking radiation at a black hole's event horizon. [conjecture 5]
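For scale, the standard results for uniform acceleration are the Unruh temperature T = hbar*a/(2*pi*c*k_B) and the Rindler horizon distance d = c^2/a; a short sketch evaluates both (the constants are CODATA values):

    import math

    HBAR = 1.054571817e-34  # J s
    C    = 2.99792458e8     # m/s
    K_B  = 1.380649e-23     # J/K

    def unruh_temperature(a):
        """Unruh temperature for proper acceleration a (m/s^2)."""
        return HBAR * a / (2.0 * math.pi * C * K_B)

    def rindler_horizon_distance(a):
        """Distance to the event horizon, c**2 / a: it shrinks as the
        acceleration grows."""
        return C * C / a

    print(unruh_temperature(9.81))         # ~4e-20 K at 1 g: utterly tiny
    print(rindler_horizon_distance(9.81))  # ~9.2e15 m, about one light-year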

4.0 Visualization Technologies

As with any software algorithm, we seek a technique that has no special cases - an algorithmic strategy, a model, from which the empirical evidence will naturally and transparently emerge. Optimization strategies may encompass shortcuts and assumptions, but our research model must not be so compromised. Let us first examine the various relativistic raytracing technologies explored to date, then select the one that transparently and most simply represents empirical phenomena.

Conventional 3D visualization techniques have been adapted to relativistic visualization applications. These techniques include polygon rendering, ray tracing, radiosity, texture-based rendering, and image-based rendering. A short discussion and examples are provided for each technique. The following is extracted from Section 5 of Daniel Weiskopf's 2001 Ph.D. dissertation [2.01.05]. An in-depth description of each of these techniques can be found in the above-referenced document.

4.1 Five Contemporary Visualization Techniques

In his dissertation Dr. Weiskopf describes five visualization strategies.

4.1.1 SR Polygon Rendering

Hsiung and Dunn [1.89.01] applied image shading to fast-moving objects in 1989. In 1990, Hsiung, Thibadeau & Wu introduced the time-buffer, similar to the traditional z-buffer. This strategy allows the application to use a scanline algorithm and z-buffer hardware to accelerate relativistic visualization.

Conventional computer graphics techniques can be used for hidden-surface removal and projection onto an image plane. Z-buffer and other hidden-surface removal techniques still work, since the chronological distance (back in time) to the object (or vertex) corresponds to Z distance. Hence more recent events hide older events.

Resulting color and brightness can be found by applying Doppler and searchlight models to the object's color.
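Lacking the original source code (see the caveat below), here is only a schematic Python sketch of the time-buffer idea: per pixel, the fragment with the most recent emission time wins, exactly as a z-buffer keeps the nearest fragment. The fragment format is an assumption for illustration, not Hsiung's or Weiskopf's actual data layout.

    import numpy as np

    def render_with_time_buffer(width, height, fragments):
        """Minimal time-buffer: a z-buffer keyed on emission time.
        Each fragment is (x, y, emission_time, color); more recent
        emission events hide older ones at the same pixel."""
        t_buf = np.full((height, width), -np.inf)
        image = np.zeros((height, width, 3))
        for x, y, t_emit, color in fragments:
            if t_emit > t_buf[y, x]:        # more recent event wins
                t_buf[y, x] = t_emit
                image[y, x] = color
        return image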

Since I have been, as yet, unable to acquire a copy of Dr. Weiskopf's source code, the following interpretation of Weiskopf's techniques is extracted from the text of his dissertation. I need the source code to do this right.

Traditional rendering techniques are applied to render a series of still frames. Moving objects with identical velocities (including the camera or observer) are grouped into inertial reference frames. Conventional R3 transforms are applied to the frames. "Photo-surfaces" are then produced. Shadowing and conventional lighting models are selected and then applied to the photo-surfaces. Lorentz contraction is applied to the geometry of all the objects (polygons via their vertices) in each of the frames. Accelerated observers and objects can also be handled via the technique of Momentary Comoving Reference Frames (MCRF). Shadowing and moving light sources can be similarly handled.

For our purposes, this technique must be considered a model of a model - i.e., the visualization models the mathematical model of the physical event. The technique improves performance by taking advantage of commercial-off-the-shelf (COTS) videogame hardware. However, assumptions are made about the physical phenomena in order to exploit optimization. These assumptions, while providing reasonable mathematical approximations, may be physically invalid. Optimization and such assumptions are problematic at this stage of research, since we really don't know what we are looking for, and in many cases have no empirical data with which to compare the visualization's results.

4.1.2 SR Ray Tracing

Hsiung and Dunn [1.89.01] also suggested a 3D raytracing static solution to display the apparent geometry of a relativistic object, and also the inclusion of a Doppler shift model [1.90.06].

First, a lightray is projected back in time from each screen pixel, in sequence. The camera parameters provide the starting 3D point and direction vector for the ray.

Second, for each Momentary Comoving Reference Frame, the ray is transformed from the camera frame (Scam) into the object's frame (Sobj) via a Lorentz Transform, and intersected with the 3D object space. The event closest to Scam (most recent) is selected as the appropriate pixel color.

Third, conventional 3D raytracing lighting models can be implemented along with the addition of the Doppler and searchlight effects. In all cases, the object contains lighting model parameters (color, reflectivity, etc.).

Fourth, as with 3D raytracing, lighting is calculated recursively to a specified depth. The deeper the recursion, the more photo-realistic the image. Each reflected (or refracted) lightray is recursively transformed from Sobj's rest frame into the reference frame of subsequent Sobj's as with the Second step, above.
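The heart of the second and fourth steps is the Lorentz boost of events between frames. A minimal sketch of the general 3-velocity boost, in units where c = 1; a ray can then be carried between frames by boosting two events that lie on it and re-deriving its direction. The function name is illustrative.

    import numpy as np

    C = 1.0  # units in which the speed of light is 1

    def lorentz_boost(event, v):
        """Boost the event (t, x, y, z) from the camera frame Scam into a
        frame Sobj moving at 3-velocity v relative to Scam."""
        t, x = event[0], np.asarray(event[1:], dtype=float)
        v = np.asarray(v, dtype=float)
        v2 = np.dot(v, v)
        if v2 == 0.0:
            return np.concatenate(([t], x))
        gamma = 1.0 / np.sqrt(1.0 - v2 / C**2)
        t_p = gamma * (t - np.dot(v, x) / C**2)
        x_p = x + ((gamma - 1.0) * np.dot(v, x) / v2 - gamma * t) * v
        return np.concatenate(([t_p], x_p))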

4.1.3 SR Radiosity

Radiosity is based on global energy conservation, and works well for diffuse shadowing. Radiosity first determines the radiant energy at surface patches independent of the viewer's position. A renderer then computes the view from a particular position. Daniel Weiskopf, et al, developed a relativistic extension of radiosity that allows rendering of diffusely reflecting scenes [1.99.02].

The technique is well suited to relativistic fly-throughs of stationary scenes, since the rendering phase can be performed by conventional graphics hardware. However, the researcher is limited to stationary scenes.

4.1.4 SR Texture-based Rendering

"Texture based relativistic rendering utilizes contemporary computer graphics hardware in order to visualize relativistic effects on geometry and illumination in real-time." [2.01.05] Weiskopf developed an OpenGL implementation and demonstration of this technique.

Daniel Weiskopf proposed techniques to use texture mapping hardware to view apparent geometry [1.99.03] and relativistic illumination [2.00.10]. His technique simulates Searchlight, Aberration, and Doppler effects, which can be combined in a plenoptic function. The technique allows interactive frame rates by judicious use of contemporary graphics texture mapping hardware.

However, the researcher is limited to the Searchlight, Aberration, Doppler model.

4.1.5 SR Image-based Rendering

Image-based relativistic rendering utilizes the techniques developed for 3D image-based rendering, and has all the advantages of image-based rendering: no 3D modeling is needed, rendering is quick, and photo-realism is easy. Weiskopf also implemented a relativistic panorama viewer, Imagine (IMAge-based special relativistic rendering enGINE).

Panoramas and movies based on data acquired by standard cameras at non-relativistic velocities can thus easily produce photo-realistic relativistic images. Of course, photo-realistic is still largely undefined, since we do not have, as of this writing, any empirical relativistic images.

A major limitation here is that the sampled image does not allow for relative motion of the objects, only the camera (Scam) can move relativistically.

4.2 Proposed Visualization Strategies

Of the above described strategies, SR Raytracing provides the closest simulation of the true behavior of light. Dr. Weiskopf mentions 4D Raytracing in his dissertation, but does not provide any details. I propose to explore 4D Raytracing as the strategy that holds the greatest promise. [alternative implementation]

4.2.1 SR 4D Raytracing

Since space is 'flat' in Special Relativity, a lightray is a straight line in 4D space, just as it is in 3D space. A 4D lightray can thus be intersected with the 4D objects in 4D space just as a 3D line can be with objects in 3D space. So the heretofore 3D coordinates (x,y,z) of each object are maintained as 4D coordinates (t,x,y,z). The procedure shall be as follows.

First, a 4D straight lightray is projected (back in time) from each screen pixel, in sequence. The camera parameters provide the starting 4D point and 4D direction vector for the ray. However, since the 4th dimensional axis in this implementation is the time axis, certain constraints are placed upon the ray in 4D. Specifically, with the axes scaled so that c = 1, the ray must maintain a 45 degree angle with the time axis (a sketch of this constraint follows the fifth step below). Other constraints will be discussed in Section 5.2.1.2.

Second, this 4D ray is intersected with each object in the 4D database, as is the 3D ray in conventional Raytracing. The intersection algorithm is described in Section 5.2.1.1.2.

Third, for each intersection, the ray is transformed from the camera frame (Scam) into the object's frame (Sobj) via a Lorentz Transform derived from the information in the object's momentary comoving inertial reference frame (MCRF). Each intersection corresponds to a possible emission, absorption or transmission event. The event closest to Scam (most recent) is selected as the appropriate intersection event.

Fourth, the selected lighting model, the Hsiung model [1.90.06] wherein the spectral power distribution is carried along with the lightray, is implemented in Sobj. The object contains lighting model parameters (power-spectrum or color, reflectivity, etc.).

Fifth, as with 3D raytracing, lighting is calculated recursively to a specified depth. The deeper the recursion, the more photo-realistic the image. Each reflected (or refracted) lightray is recursively transformed from Sobj's rest frame into the reference frame of subsequent Sobj's as with the Third step, above.
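A sketch of the first step above, under the stated constraint: with c = 1 and equal scaling on all four axes, the 45 degree condition simply says the ray's 4D direction is null, i.e. its time component equals (in magnitude) its unit spatial step, and is negative because the ray is traced back in time. Names are illustrative.

    import numpy as np

    def pixel_ray(cam_event, pixel_dir):
        """Build the 4D direction of a backward light ray for one pixel.
        pixel_dir is the 3D view direction through the pixel; cam_event
        is the camera's (t, x, y, z) starting event."""
        d = np.asarray(pixel_dir, dtype=float)
        d /= np.linalg.norm(d)
        direction = np.array([-1.0, d[0], d[1], d[2]])  # null: dt^2 = |dx|^2
        return np.asarray(cam_event, dtype=float), direction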

This technique appears to be the 'best of breed'. The implementation is most realistic, in the sense that it implements a simple and honest simulation of relativistic first principles, and hence, could lead to the discovery of new principles of physics.

4.2.2 Lightfield Lattice

In 1991, Steven Hollasch created a 4D Raytracer that processed a ray from a 4D POV through a 3D Grid in the way that a 3D raytracer processes a ray from a 3D POV through a 2D gridded viewplane. As with the 3D to 2D raytrace transform which projected the 3D information into a 2D array of pixels, Hollasch performed a 4D to 3D raytrace transform which projected the 4D information onto a 3D grid of voxels. Hollasch then sliced this cubic 3D grid and displayed the resultant 2D slice as an image. [Hollasch '91]

I propose a similar approach, but with the following modifications so that the 4th dimension may be treated as a time axis. First, I constrain the 4th dimension such that it represents the timeline of the imaged objects. Second, the rays used to construct the 3D light-field lattice will each be projected at a 45 degree angle to the time (4th) axis. Third, each voxel will contain sufficient information to reconstruct a low resolution snapshot of the surrounding spacetime - i.e., a low resolution (i.e., compressed) view of the stage as seen from that voxel.

The Lightfield Lattice models 3D (or 4D) space as a regular lattice of equal sized cells. Each cell is adjacent to its two neighbors along each axis of the space's dimensionality. Each cell contains all the phase and spectrum information about the light (Energy/Momentum) that is incident upon it. All light incident upon it must come from the immediately adjacent cells. Each cell 'remembers' the spectrum and angle of the incident energy, i.e. the energy changes the Cell's state. The Cell's state is then communicated to the adjacent Cells via re-radiating the energy at the appropriate rate as a function of the Cell's state. The potential to re-radiate the incident energy is a function of the azimuth and elevation at which the Cell is sampled. Hence, incoming radiation will be re-radiated on the opposite side of the Cell.

This can be thought of as a holographic representation of the state of local space[time], since phase information is encoded and can hence form a holographic image of the local space[time].

Each Cell's state will be represented as 6, 12, 18, or 24 vectors, each with an associated power-spectrum representation for the direction represented by that vector. The radiant spectral energy of the Cell can be determined by interpolating the radiant energy along the selected direction-vector normal to the cell between the adjacent aforementioned vectors. (See figure ...)
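A hedged Python sketch of such a Cell, using the minimal 6-vector variant: each stored direction carries a power spectrum, incident energy is deposited so that it re-radiates out the opposite side of the Cell (continuing along its propagation direction), and sampling interpolates between the stored directions. The class layout and band count are illustrative assumptions, not a specification.

    import numpy as np

    # Six axis-aligned directions: the minimal form of the 6/12/18/24-vector
    # cell described above.
    CELL_DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                          [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

    class LatticeCell:
        def __init__(self, n_bands=8):
            # One power spectrum (n_bands bins) per stored direction.
            self.spectra = np.zeros((len(CELL_DIRS), n_bands))

        def deposit(self, direction, spectrum):
            """Record energy propagating along `direction`; it re-radiates
            out the opposite side, continuing along the same direction."""
            d = np.asarray(direction, float) / np.linalg.norm(direction)
            w = np.clip(CELL_DIRS @ d, 0.0, None)
            self.spectra += np.outer(w / w.sum(), spectrum)

        def sample(self, direction):
            """Interpolate the re-radiated spectrum along a query direction."""
            d = np.asarray(direction, float) / np.linalg.norm(direction)
            w = np.clip(CELL_DIRS @ d, 0.0, None)
            return (w / w.sum()) @ self.spectra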

The image space can be viewed (in real-time) by intersecting an image plane with the Lattice of Cells, and copying the contents of each intersected Cell onto the image plane (as identified by the positive normal to the image plane passing through the Cell). This strategy allows a user to interactively explore a Special Relativistic scene by interactively positioning the image plane within the scene via a mouse.

Extending the 3D lattice to 4 dimensions (4D) will allow interactive exploration of a dynamic (or animated) 4D raytraced image. The user can explore the spacetime so encoded by moving back and forth in time as well as moving about in 3-space. In addition, the SR perspective for arbitrary relativistic velocities can be extracted from the 4D lattice by slicing the 4D hypercube at various angles with respect to the time axis.

4.3 Exotic Physics Models and Future Directions

The software envisioned shall incorporate the ability to visualize extra spatial dimensions (beyond 3 or 4 D). The term "exotic physics" refers to theories of physics that include extra dimensions. Likewise "exotic visualization" shall refer to the representation and visualization of extra spatial and time dimensions. The project could thus visualize the Kaluza-Klein 5D spacetime, as well as other less well known 5D & 6D exotic physics models. The exploration of an extra time dimension will also be considered.

The simulation and resultant visualization of electrodynamic particle interaction could occur via the mediation of photons. In like manner, gravitational interaction can be simulated via mediation by gravitons, whose behavior is similar to photons in some respects. It is not inconceivable that visualization at the quark and gluon level could likewise be explored with suitable modification of the Physics Model Manager.

The visualization and manipulation of the Schrödinger Wave Equation in 4D is also intriguing.
