Relativistic Visualization
Proposed Plan of Development
07/02/04

5 Proposed Plan of Development

The intent is to develop a multidimensional viewer with the capability of constraining or imposing relationships among the extra dimensions. It is envisioned that the Viewer will have both raytraced and projective aspects. The projective geometry viewer will allow interactive adjustments of the n-dimensional point-of-view for display by the compute-intensive multidimensional Raytracer.

The fundamental concept will be demonstrated via an initial proof-of-concept effort. This initial effort will adapt an extant 3D Raytracer to a 4D environment, and extend it to implement an SR 4D Raytracer as described in Section 4.2.1, above. The proof-of-concept should provide insights into the design and development of an implementation that shall be extensible to extra dimensions and lightray intersection via hyperbolic (nonlinear) curves.

5.0 Visualization Software Test Fixture (VSTF) Overview

The Visualization Software Test Fixture shall be designed (see Figure 1) to easily vary the presentation strategy and sensory modality, as well as the physical model used for the simulation. The benefit of this modular design is its versatility and simplicity.

Conceptually, the implementation shall consist of four software modules, as follows:

Exotic Physics Model Manager
Multi-Dimensional Database Manager
Relativistic Hyper-Visualization Manager
Human/Computer Interface Manager

Figure 1 - The Visualization Software Test Fixture

5.0.1 Exotic Physics Model Manager (EPMM) - Selects among the physics models (e.g. classical, standard) and implements that model by defining field parameters and transforming the DB objects by passing matrix information to the Multidimensional Database Manager. The speed-of-light parameter and Minkowski metric (E4) are also provided to both the MDBM and the HVM.

5.0.2 Multidimensional Database Manager (MDBM) - Maintains the geometry and physics of the database objects. A hyper-object's (n-dimensional) geometry must include the object's extent in each of the object's dimensions. A hyper-object's physics includes mass, charge, position, attitude, velocity, and spin, for each axis. This manager shall perform dimensional transforms (e.g. translation, rotation, Lorentz) as directed by the Physics Model Manager. More exotic physics models will require maintenance of additional states, and will require more exotic transforms.

5.0.3 Hyper-Visualization Manager (HVM) - Performs n-Dimensional raytracing using the specified metric and speed-of-light value supplied by the Physics Model Manager (the term 'hyper' refers to the extra dimensionality). This raytracing is relativistic since the lightray is not instantaneous. The lightray's path may be curved (Phase II). Optimization techniques will be implemented here to improve interactive performance. This implementation will be built on the work of Daniel Weiskopf of Germany and others, and adapted to this project.

5.0.4 Human/Computer Interface Manager (HCIM) - Manages the communication between the user (subject) and the system via various I/O devices. This includes output devices (LCD & stereo-3D displays, 3D audio, force-feedback, etc.) and input devices (keyboard, joystick, space-probes, 3D bats, knobs & dials, sliders, pedals, voice, touch screen, tonal, etc.), and their combinations.

5.1 Phase I - VSTF Implementation

The VSTF will be implemented in four phases. The first of these will visualize relativistic objects via 4D Raytracing; the second phase will simulate Energy/Momentum Transport and Particle Collision. The third phase will introduce Extra Dimensions, and the fourth phase will visualize General Relativistic phenomena.

5.1.1 Stage I - Special Relativistic Visualization

The initial phase of the VSTF development will immediately go beyond contemporary implementations to include provision for acceleration of objects. This may very well be the first implementation of an SR Visualizer that includes acceleration. As a consequence of processing acceleration, the implementation can be extended to include objects with angular velocity - another first.

The physics of Special Relativity takes place in a 'flat' or undistorted 4D Spacetime. Since space is 'flat' in Special Relativity, a lightray is a straight line in 4D space, just as it is in 3D space. A 4D lightray can thus be intersected with the 4D objects in 4D space just as a 3D line can be with objects in 3D space.

5.1.1.1 4D Visualization

The first order of business is to recreate the current state of the art in 4D Raytracing. I propose to proceed with the modification of an existing simple 3D Raytrace Engine to enable 4D Viewing. The POV and 4D objects will be defined by their 4D vertices in a text file. So the heretofore 3D coordinates (x,y,z) of each object are maintained as 4D coordinates (t,x,y,z). The procedure shall be as described in Section 4.2.1:

5.1.1.1.1 3D to 4D Vector Math

The 3D Vector mathematics package can be extended to 4D. The Vector object of (x,y,z) components is prepended with a 't' component to yield a (t,x,y,z) 4D Vector. The 't' component or fourth dimension is added as component zero to allow the upgrade to exotic physics.

All the vector operations, except for the cross (or vector) product, can be extended by operating on one additional component. For example, the dot (or scalar) product is the summation of the products of each of the four (formerly three) components. Using Dirac notation for the dot product:

<A|B> = At * Bt + Ax * Bx + Ay * By + Az * Bz
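A minimal sketch of this extension, assuming a simple Vec4 struct (the names here are illustrative, not the actual package's):

```cpp
#include <cassert>

// A minimal 4D vector: the 't' component is prepended as component zero,
// ahead of the spatial (x,y,z) components.
struct Vec4 {
    double t, x, y, z;
};

// Dot (scalar) product <A|B>: the sum of the products of all four
// (formerly three) components, per the equation above.
double dot4(const Vec4& a, const Vec4& b) {
    return a.t * b.t + a.x * b.x + a.y * b.y + a.z * b.z;
}
```

Every other component-wise vector operation (sum, difference, scaling) extends in the same mechanical way.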

The cross-product in 3 space is used to define the normal to the plane in which lie two linearly independent vectors. This 3D cross-product can be formed from the determinant of the vectors in question and a third vector consisting of the three basis vectors of the 3 space in which the plane exists. The normal thus formed can be used to find the plane equation for the plane in question.

In like manner, the equivalent to the cross-product in 4D can define the normal to the 3D hypersurface in which lie three linearly independent vectors. This 4D cross-product (wedge product) can be formed from the determinant of the three vectors in question and a fourth vector composed of the four basis vectors of the 4 space in which the hypersurface exists. The normal thus formed can be used to find the hypersurface equation for the 3D object in question. (hypersurface in this context refers to an object of (N-1) dimensions in an N dimension space. An infinite hypersurface, and only an infinite hypersurface, can divide its bulk (N dimensional) space in twain.)

The 4D cross-product is thus the determinant of the hypersurface's three spanning 4D vectors, as the 3D cross-product is the determinant of the plane's two spanning 3D vectors.

                     | et  ex  ey  ez |
Cross4D[A,B,C] = det | At  Ax  Ay  Az |
                     | Bt  Bx  By  Bz |
                     | Ct  Cx  Cy  Cz |

where et, ex, ey, ez are the four basis vectors of the 4 space.
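Expanding the determinant along its basis-vector row gives the four components of the normal. A sketch of this cofactor expansion (struct and function names are illustrative):

```cpp
#include <cassert>

struct Vec4 { double t, x, y, z; };

// 3x3 determinant, used for the cofactors of the 4x4 expansion.
static double det3(double a, double b, double c,
                   double d, double e, double f,
                   double g, double h, double i) {
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
}

// 4D cross (wedge) product: the vector normal to the hypersurface spanned
// by A, B and C, via cofactor expansion along the basis-vector row, with
// alternating signs.
Vec4 cross4(const Vec4& A, const Vec4& B, const Vec4& C) {
    Vec4 n;
    n.t =  det3(A.x, A.y, A.z,  B.x, B.y, B.z,  C.x, C.y, C.z);
    n.x = -det3(A.t, A.y, A.z,  B.t, B.y, B.z,  C.t, C.y, C.z);
    n.y =  det3(A.t, A.x, A.z,  B.t, B.x, B.z,  C.t, C.x, C.z);
    n.z = -det3(A.t, A.x, A.y,  B.t, B.x, B.y,  C.t, C.x, C.y);
    return n;
}
```

As a quick check, the normal to the hypersurface spanned by the three spatial basis vectors should be the time basis vector.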

5.1.1.1.2 4D Ray / Object Intersection

It is a common strategy for rendering engines to represent complex objects by surface polygons, often tessellated into triangles. A 4D object can likewise be represented by its hypersurfaces tessellated into tetrahedrons.

A 3D Raytracer need only know how to render a 2D triangle, if the objects to be rendered are represented as surface polygons tessellated into triangles. Likewise a 4D Raytracer need only know how to render a 3D tetrahedron, if the objects to be rendered are represented as tetrahedrons. Other basic shapes and simplices are often implemented for optimization purposes, but will not be considered for the proof-of-concept package.

A 3D Raytracer renders an object by intersecting a 1D ray with each of the 2D triangles that comprise the object to be rendered. A common method by which this is accomplished is to intersect a ray with the plane in which the triangle lies, and then to determine if the intersection point lies within the boundaries of the triangle. A point coplanar with a triangle can be determined to lie within the triangle if the Barycentric coordinates are within range. The 3D solution is well documented.

In like manner, a 4D Raytracer can intersect a 1D ray with each of the tetrahedra that form the object's hypersurface. The hypersurface equation is formed from the 4D determinant of the three vectors that describe the three edges of the tetrahedron. The resultant vector is then normalized, and used to solve for the 'perpendicular' distance of the hypersurface from the origin. This information is then used to compute the hypersurface equation. Since the ray is stored as a parametric equation, the normalized hypersurface equation can then be used to find the ray/hypersurface intersection.

The intersection is then represented in its Barycentric coordinates. These coordinates are then examined to determine if the intersection lies within the tetrahedron. If it does, it is registered as a hit, and the color of the pixel is determined in the usual way (as described in Section 5.1.1.3.4). This is costly as implemented, since it requires a matrix inversion for each intersection.
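The ray/hyperplane step above reduces to solving one linear equation in the ray parameter. A sketch, assuming the ray is stored parametrically as origin + s * dir and the hypersurface as a normalized normal n with perpendicular distance k (the Barycentric bounds test would follow this step; names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { double t, x, y, z; };

double dot4(const Vec4& a, const Vec4& b) {
    return a.t * b.t + a.x * b.x + a.y * b.y + a.z * b.z;
}

// Intersect the parametric ray p(s) = origin + s * dir with the
// hyperplane dot4(n, p) = k, where n is the normalized hypersurface
// normal and k its 'perpendicular' distance from the origin.
// Returns the ray parameter s, or -1 if the ray is parallel.
double intersectHyperplane(const Vec4& origin, const Vec4& dir,
                           const Vec4& n, double k) {
    double denom = dot4(n, dir);
    if (std::fabs(denom) < 1e-12) return -1.0;  // parallel: no hit
    return (k - dot4(n, origin)) / denom;
}
```

For example, a ray fired from the origin along +x hits the hyperplane x = 2 at s = 2.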

A tetrahedron can be created from a triangle by extruding the triangle along an axis (not coplanar with the triangle), and then decomposing this extruded triangle into three adjacent tetrahedra as shown in figure 5.0. Now, if this axis is orthogonal to the 3 Space in which the triangle lives, i.e. it is extruded along the 't' (orthonormal) axis, then the extruded axis is not coplanar with the triangle. Hence, the resultant tetrahedra define three identical hypersurfaces in 4 Space.

This strategy allows for the easy creation of 4D objects since existing 3D objects defined by tessellated surfaces, can be extruded into the fourth dimension and rendered.

In a simple 3D Raytracer a cube can be created from a sequence of six pairs of adjacent triangles. A triangle can thus be considered the basic building block of a set of 3D objects. If the Raytracer can render a triangle, it can render any of the set of objects that are formed from triangles. Likewise, a simple 4D Raytracer which can render a tetrahedron can render any object created by a 3D editor, tessellated and extruded into 4 Space.

5.1.1.1.3 4D Summary

1) A fourth component is prepended to the database vector representation. This fourth component represents the object's position at the time indicated by the position on the fourth (or time) axis.
2) Intersect the 1D ray with the 4D object's hypersurface. Confirm that it is within the bounds of the object.

5.1.1.2 4D SR Visualization - Proof of Concept

The 4D Raytracer described in Section 5.1.1.1, above, can be extended to explore Special Relativistic Visualization in simple stages. The first stage will visualize the results of a limiting velocity on the speed of light. The second stage will introduce the mechanics of relativistic velocities such as time dilation and length contraction. The third stage will add other effects such as searchlight and aberration. Finally, the effects of acceleration will be included.

The 4th dimension described above becomes a time dimension by placing suitable constraints upon the relationship between the 't' axis and the (x,y,z) axes, as described in the next section. The extrusion of a 3D object along the time axis corresponds to an extrusion in time, or the delineation of the lifetime of the object - it appears on our stage (comes into existence) at t_beg, and exits the scene (ceases existence) at t_end. The astute reader will realize that extruding an object through time to another spatial position also imparts a velocity to the extruded object. That is, the distance traveled is the spatial component of the extrusion vector, and the speed is this same Pythagorean distance traversed in the (x,y,z) space divided by the length of the 't' component.

5.1.1.2.1 The Speed of Light

Spacetime is anisotropic. That is, there is a difference between the three spatial axes (x,y,z) and the fourth temporal (t) axis. This must be reflected in the 4D SR Visualization. One anisotropy is a constraint on the ray used to visualize the 4D objects in 4 Space. The ray must traverse a route that maintains a constant ratio between the (x,y,z) spatial axes and the 't' axis wrt the observer. If the spatial axes' scale were meters, and the time axis scale was seconds, then this constant ratio would be in units of meters/second, and would be in the neighborhood of 3 × 10^8 m/s. That is, of course, the speed of light.

Likewise, since no object can exceed the speed of light, no extruded triangles or objects could equal or exceed this ratio wrt the object. With suitable selection of scale factors on the time (t) and space (x,y,z) axes, the ratio of the ray's traversal of the 4 Space can be set to a 45 degree angle. Furthermore, the above mentioned angle must remain at 45 degrees within each reference frame, so the frame must be adjusted accordingly (more on this later). Also, objects can only be extruded at angles less than 45 degrees wrt the time axis: {ct,x,y,z} such that ct > |(x,y,z)|; i.e. - all objects are timelike. (See Minkowski Diagrams.) Throughout the remainder of this document, we shall assume that the units of the time (t) axis are ct. This simplifies the math.

5.1.1.2.3 Summary of Stage One

1) A fourth component ('t' value) is added to the database representation, representing the time when the vertex is so located in 3-space.
2) The 4D ray passed through the 4D database will always have the same ratio of the space axes to the time axis wrt the observer - i.e. "-c", the negative of the speed of light.
3) Intersect the 1D ray with the 4D object's hypersurface. Confirm that it is within the bounds of one of the extruded triangles' tetrahedra. Select the closest intersection by selecting the ray with the minimum positive 4D length.
4) The 4th axis is coincident with the radial distance from the observer (POV). Hence the 4th axis is degenerate and the resulting projection will be a 2D viewplane.

Thus will the 4D Viewer have been extended into an SR Viewer by constraining the Fourth Dimension to the spacetime anisotropy as described above.

The resulting Stage One implementation will be:

1) Egocentric - The observer (camera) is at rest, and the objects have relativistic velocities wrt the observer.
2) Static - All velocities will remain constant (no acceleration).
3) Modifiable - Object velocities (and timelines) will be defined via a text file.
4) Observable - Objects moving with relativistic velocities will introduce anomalies in the images (between the 3D and 4D views) with respect to the objects' reflections and shadows.
5) Testable - Results will be comparable to prior egocentric SRV implementations described in the literature, and will thus establish a baseline for further research.

5.1.2 Stage 2 - The Lorentz Transform Effects

As an object's velocity, relative to an observer, approaches the speed of light (c), certain physical changes in the observed object are perceived by the observer. Some of these changes can be considered physical or real, some are merely perceptual, and for others this distinction is purely philosophical. In order to process these effects, the system must maintain a database of the relative velocity of every object in the scene. A list of these effects, their descriptions and visual implementations are itemized below following the definition of the reference frame and the Lorentz transform.

For this discussion, all motion (velocities) shall be, unless otherwise specified, relative to the Laboratory Frame. This shall be described in more detail, below. The Laboratory Frame can be coincident with the Center of Mass Frame, with the Observer, with one of the objects in the database, or can have an arbitrary (but well defined) velocity vector. In all cases, the laboratory frame shall be considered to have an absolute velocity 4-vector of (c, 0,0,0).

5.1.2.1 The Reference Frame

As an object's velocity, relative to an observer, approaches the speed of light (c), the apparent geometry of the object changes (Lorentz contraction) along with the object's apparent time base (time dilation) from the point of view of the observer. It is necessary to keep track of the constant relative velocity of an object in order to determine from this information what the apparent changes are in the object with respect to an observer (or lightray). This information, stored in the object's Inertial Reference Frame (IRF) datum, will be used to compute the Lorentz Transform, as described below. An IRF has constant velocity, i.e. - there is no acceleration. Multiple objects can share an IRF. The IRF datum will contain the object's relative velocity with respect to some a priori universal reference velocity to be known as the Laboratory Frame. The egocentric display (zero-velocity camera) requires that the Laboratory Frame and the observer's Rest Frame be the same. The possible inertial frames of reference include the POV (observer), the Camera, the Center Of Mass, or the IRF of one of the objects.

The IRF datum would contain the Lorentz Factor (γ) and the Lorentz Matrix (Section 5.1.1.3.2) to be applied as required for shading parameters. The IRF datum will also contain the beta value (β), and the instantaneous velocity vector for the object (γc, v). Each object must contain a link to its IRF datum. This IRF cpp object will include a Lorentz Factor method, and a Lorentz Transform method.
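A sketch of such an IRF object, assuming only that it stores β and exposes the Lorentz Factor as a method (the actual class layout is undecided):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the IRF datum: stores the object's beta (v/c) relative to the
// Laboratory Frame and computes the Lorentz Factor on demand.
class IRF {
public:
    explicit IRF(double beta) : beta_(beta) {}

    // Lorentz Factor: gamma = (1 - beta^2)^(-1/2)
    double lorentzFactor() const {
        return 1.0 / std::sqrt(1.0 - beta_ * beta_);
    }

    double beta() const { return beta_; }

private:
    double beta_;  // v/c wrt the Laboratory Frame
};
```

At β = 0 the factor is exactly 1; at β = 0.6 it is 1.25, and it grows without bound as β approaches 1.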

Note - The speed of light is constant in all frames - i.e. it is the same in the lightray's source frame, the object frame and the camera frame. Since all velocities are relative, any inertial frame can be selected as the working frame and be used for the basis of computation. This working frame shall be called the Laboratory Frame. All other frames' velocities will be specified wrt the laboratory frame. The camera frame's data will be computed wrt the laboratory frame. The reversed lightray's vector from the camera will be computed in the camera frame and then transformed into the laboratory frame in order to perform the 4D intersection with the moving object.

In order to determine the lighting in the usual way, the normal to the surface of the object at the intersection point must be determined, as well as the angle of intersection of the reversed lightray. The reflected (and refracted) vector component(s) of the reversed lightray can then be computed, as well as the contributions of the specular and diffuse components. Since the Lorentz contraction can change the object's surface normal, the object will be Lorentz contracted in the lab frame, and the normal will then be computed in the lab frame.

5.1.2.2 The Lorentz Transform

As noted above, the apparent geometry of the object changes with the relative velocity of the object wrt the observer. These changes are described by the Lorentz Transform. The only information required by the Lorentz Transform is the relative velocity of the object (wrt the observer) and the 4-Vector (t,x,y,z). By convention, beta represents the speed/lightspeed ratio:

β = v/c

and gamma (γ) is used to represent the Lorentz Factor:

γ = 1 / sqrt[1 − (v/c)²] = (1 − β²)^(−1/2)

A close examination of this equation shows that as the velocity approaches the speed of light, the denominator approaches zero, and the Lorentz factor rapidly increases towards infinity. The Lorentz Factor is used as follows to compute the change in length due to relativistic velocities. In the following two equations, it is assumed that the velocity (v) is along the positive x axis, and the L vector represents the object's length. L' is the apparent length as viewed by the relativistic observer.

(L't, L'x, L'y, L'z) = (γLt − γvLx, −γvLt + γLx, Ly, Lz)

or more elegantly in matrix notation:

     | γ    −γv   0   0 |
L' = | −γv   γ    0   0 | * L
     | 0     0    1   0 |
     | 0     0    0   1 |

Note that the first row and first column of the matrix are the time axis, and only the time and the length's x component (direction of motion) are affected. Note also the symmetry about the diagonal, as if the spatial component is being rotated into the negative time axis, and vice versa. Einstein suggested that this is exactly what is happening. We are all falling through time at the speed of light. We gain velocity by rotating our time axis into our spatial axis.
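The matrix above can be applied directly to a 4D length vector. A sketch (names illustrative), in units where the time axis is ct, so that v here is really β:

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { double t, x, y, z; };

// Apply the boost matrix above to a 4D length vector L, for motion along
// the positive x axis. The time axis is assumed to be in units of ct.
Vec4 lorentzBoostX(const Vec4& L, double v) {
    double g = 1.0 / std::sqrt(1.0 - v * v);  // Lorentz Factor
    return { g * L.t - g * v * L.x,   // row 1: time
            -g * v * L.t + g * L.x,   // row 2: x (direction of motion)
             L.y,                     // rows 3 and 4: unaffected
             L.z };
}
```

One useful sanity check for an implementation: only the t and x components mix, and the interval Lx² − Lt² is unchanged by the boost.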

5.1.2.3 Shading Parameters

When the lightray intersects an object, the Lorentz Transform must be applied to the object in order to determine the shading parameters. Length contraction will change the surface normal and hence the reflection and refraction angles, as well as the specular and diffuse contributions to the color.

5.1.2.4 Power (Intensity) Spectrum Analysis ( Pixel Color)

Inspection of the Lorentz Transform will show that the 't' component of the 4D length vector is modified, indicating that the duration (length of time axis) of the object has increased as the length decreased. This is known as time dilation. This will affect the color property (via the frequency) of the sampled pixel.

Hence, the light sources must contain the power-spectrum representing the color of the light ray emitted by that light source, and each ray must either accumulate delta-velocity and frequency filter information, or provide a linked list from each of the terminal light sources back to the camera so that the pixel color can be determined from the accumulated filtered power-spectrum. It may be possible to combine these effects into one matrix. This will be explored.

The initial implementation shall create a doubly linked list for each ray from the observer to an emitter (light source). The pixel color will then be computed by following the linked list in reverse order from the emitter (tail end) back to the observer (view plane pixel) accumulating the power spectrum of the resultant pixel by performing a Lorentz Transform upon the power spectrum and the usual lighting calculations at each intersection event.
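The accumulation step of that reverse walk can be sketched as follows, assuming (purely for illustration) that a single longitudinal Doppler factor is applied at each frame change recorded in the list; the actual implementation will transform the full power spectrum at each intersection event:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Longitudinal relativistic Doppler factor for a source receding at beta:
// f_observed = f_emitted * sqrt((1 - beta) / (1 + beta)).
double dopplerFactor(double beta) {
    return std::sqrt((1.0 - beta) / (1.0 + beta));
}

// Walk the list of frame-change betas from the emitter (tail end) back to
// the observer, accumulating the net frequency-scaling factor.
double accumulatedShift(const std::vector<double>& betas) {
    double factor = 1.0;
    for (double beta : betas) factor *= dopplerFactor(beta);
    return factor;
}
```

A chain of rest frames (all betas zero) leaves the spectrum unchanged; a receding source at β = 0.6 halves the observed frequency.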

5.1.3 Stage 3 - Searchlight and Aberration Effects

Other lighting effects such as searchlight (increased photons per unit time) and aberration (apparent position) should automatically affect the observer by the nature of the raytrace algorithm's interaction with time dilation and length contraction.

I must examine the relativistic contribution to the searchlight (or headlight) effect versus the 'doppler' contribution. Is this a relativistic phenomenon? It is the increase of intensity (number of photons per unit time) in the direction of motion. The number of photons per unit time will increase since the observer will encounter more photons in a unit time due to the observer's velocity wrt the light source, as with the Doppler shift. Relativistic effects are introduced due to the time dilation effect. How would this be dealt with via the raytrace algorithm? (How is time dilation treated effectively and accurately via raytrace?)

I believe Section 5.1.1.3.4 above, addresses the searchlight effect by transforming the intensity spectrum for the emitter, thus increasing the photon count for each frequency. I must compare the math for both cases.

In the classical case of aberration, e.g. in the case of raindrops passing an automobile on the highway, the raindrops' motion would be found by the vector addition of the observer's velocity and the velocity of the falling raindrops. In the case of adding light vectors, the constant speed of light must be considered.

Relativistic aberration is the resulting decrease in the angle (between the incoming lightray and the velocity vector) towards a limit of zero (a point) in the direction of relativistic motion . The addition of relativistic velocities is handled differently than velocities in classic Newtonian mechanics. Relativistic velocity vector addition is modulated by the Lorentz Factor (g) such that the resultant velocity is never greater than c, even if one or more of the velocity vectors is equal to c. For the addition of two parallel relativistic velocities u and v, the resultant velocity w, where U and V are oriented in the usual way, is given by:

w = (u + v) / (1 + (u * v) / c²)
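A sketch of this formula in units where c = 1 (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// Relativistic addition of two parallel velocities, per the equation
// above, in units where c = 1.
double addVelocities(double u, double v) {
    return (u + v) / (1.0 + u * v);
}
```

Adding 0.9c to 0.9c yields roughly 0.9945c, still below c, and adding any velocity to c yields exactly c, as the text requires.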

Consider three Inertial Reference Frames, U, V, W. Consider also the addition of two velocities: v - the velocity of frame V with regard to Lab Frame U; and w - the velocity of frame W with regard to frame V. Frame V is oriented in the standard way wrt U. W is oriented in the standard way wrt V. (Note - this does not mean that W is oriented in the standard way wrt U.) Determine u - the velocity of frame W wrt frame U.

ut = f(vx, w)
ux = (vx + wx) / (1 + vx wx / c²) = (vx + cx) / (1 + (vx/c)(cx/c)) = (vx + cx) * K
uy = wy / (γ (1 + vx wx / c²)) = cy / (γ (1 + (vx/c)(cx/c))) = (cy / γ) * K
uz = wz / (γ (1 + vx wx / c²)) = cz / (γ (1 + (vx/c)(cx/c))) = (cz / γ) * K

where K = 1 / (1 + (vx/c)(cx/c))

In the case of relativistic aberration, the incoming lightray is equal to c and the observer's velocity is relativistic. So the relativistic velocity vector addition will yield an incoming lightray, wrt the observer, still traveling at c, even though the incoming lightray overtook the observer from behind. The observer will see the lightray as oncoming from the front. Hence an object, say 120 degrees from the observer's velocity vector, will be observed as being in front of the observer - i.e. the angle of incidence is compressed. I suspect that there is a simple strategy (as with the searchlight effect, above) to incorporate this effect into the raytrace algorithm without resorting to a special case of relativistic velocity vector addition.
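The angle compression described above follows the standard aberration formula, sketched here in units where c = 1 (function name illustrative):

```cpp
#include <cassert>

// Relativistic aberration: cosine of the observed angle between the
// incoming lightray and the velocity vector, for observer speed beta
// (c = 1). Angles are compressed toward the direction of motion.
double aberratedCos(double cosTheta, double beta) {
    return (cosTheta + beta) / (1.0 + beta * cosTheta);
}
```

For the 120-degree example (cosθ = −0.5) at β = 0.9, the observed cosine is positive, i.e. the object appears in front of the observer; at β = 0 the angle is unchanged.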

5.1.4 Stage 4 - Acceleration

The treatment of acceleration in flat space has been a controversial topic in some quarters due to confusion between Special Relativity and General Relativity. It has been suggested that only GR can accurately represent acceleration. This is not true. It has been shown by experiment that SR accurately handles acceleration within the SR domain via the introduction of the Momentary Comoving Reference Frame (MCRF). Visualizing acceleration in flat space is not a problem with this technique.

Acceleration is a change in the velocity vector's magnitude and/or direction. This implies that the 4D extrusion of an accelerating object along its velocity vector is not a straight line. The changing velocity can be expressed as a sequence of short varying timelines, or a smooth gradual curve.

Representing the acceleration of relativistic objects will be implemented merely by allowing the extrusions of the 2D triangles into the 4D bulk to be non-linear. For example, where a non-accelerating object's vertex may be extruded from a (t,x,y,z) of (0,0,0,0) to (1,1,0,0) and then to (2,2,0,0); the same object may be accelerated by extruding from (0,0,0,0) to (1,1,0,0) and thence to (2,3,0,0). This is an acceleration since the distance between the first and second extrusions has increased, while the time-step stays at a value of one unit.
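The acceleration in the example above can be read directly off the extrusion steps. A sketch of the per-segment speed computation (names illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { double t, x, y, z; };

// Speed of an extruded vertex over one extrusion step: the Pythagorean
// spatial distance traversed divided by the elapsed 't', as described
// in Section 5.1.1.2.
double segmentSpeed(const Vec4& a, const Vec4& b) {
    double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) / (b.t - a.t);
}
```

For the example extrusion (0,0,0,0) → (1,1,0,0) → (2,3,0,0), the speed rises from 1 to 2 units per time-step between segments; the unequal speeds are what mark the object as accelerating.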

A more sophisticated and natural method would be to implement an animation editor that would apply acceleration to move objects along a b-spline. The editor would then divide the b-spline into many linear steps at the specified stepsize, and output these to the 4D database.

In either case, the 4D SR Raytracer would intersect the 4D representation of the accelerated objects with no changes to the raytrace engine and faithfully render the relativistic scene.

As noted in Section 5.1.1.3.1, there must be an Inertial Reference Frame linked to each unique velocity vector in order to perform relevant Lorentz Transforms. In addition, there is need for a Non-inertial Reference Frame (NRF) from which the MCRF and IRF data can be extracted. The NRF cpp object must contain a getMCRF() method, and a getIRF() method, that return the MCRF and IRF, respectively, for a specific ray/object intersection event. Thus a 4D intersection will getMCRF(), getIRF() and apply the Lorentz Transform to the lighting model in order to determine the new lightray power-spectrum.

An interesting conundrum can be raised when considering the intersection of a ray with an accelerated object. Assume, prior to acceleration, a cube with an edge of length 'a' with two adjacent vertices at (x,y,z) and (x+a,y,z). Following acceleration it has an edge of length b = a/γ, and the same vertices are at (x',y,z) and (x'+b,y,z). Now, the question arises, where is the object? Since it is now 'shorter' than it was prior to acceleration, in which volume of space does the object lie? What is the value of x'? The answer depends on where on the object the force that caused the acceleration was applied. Using the principle of relativity, consider the same system from the point of view of an accelerated observer (by applying the accelerated observer principle to both frames, extracting the MCRF, and considering only the relationship between the MCRF in one frame and the MCRF in the other frame). In the latter case, from the point of view of the observer, the entire Universe has shrunk, and so every position of the object (specifically, x') has repositioned itself linearly with respect to its distance from the observer by the scale factor 1/γ.

Note that if constant 'proper' acceleration 'a' were applied to an object, a photon emitted a distance c²/a behind the object while the object was at rest would never reach the object.
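This claim can be checked numerically. In units c = 1, with proper acceleration a = 1, the object follows the standard hyperbolic worldline x(t) = sqrt(1 + t²) − 1, while the photon, emitted from x = −c²/a = −1 at t = 0, follows x(t) = t − 1. A sketch of the gap between them:

```cpp
#include <cassert>
#include <cmath>

// Gap between a uniformly accelerated object (hyperbolic worldline,
// c = 1, proper acceleration a = 1) and a photon launched from
// x = -c^2/a = -1 at t = 0. The gap stays positive for all t.
double gap(double t) {
    double xObject = std::sqrt(1.0 + t * t) - 1.0;
    double xPhoton = t - 1.0;
    return xObject - xPhoton;
}
```

The gap shrinks asymptotically toward zero but never closes: the photon sits exactly on the object's event horizon.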

A future version of the VSTF could upgrade the ray/object intersection algorithm to allow ray/b-spline and ray/b-patch intersection, and hence treat the b-spline extrusion directly without resorting to linear steps.

(Physics note 1: All electromagnetic radiation is due to accelerating charges. The ability to visualize acceleration is necessary to simulate the visualization of electrodynamic phenomena.)

(Physics note 2: There are interesting effects due to 'large' accelerations, notably Unruh radiation, Hawking radiation, event-horizon effects, etc.)

5.1.4.1 Angular Velocities

An angular velocity (rotation) requires an acceleration orthogonal to the direction of motion, i.e. an acceleration along the radial axis towards the center of rotation. An object rotating at a constant rate, like a child on a merry-go-round, undergoes a constant acceleration.

Relativistic rotation can thus be visualized by attaching an MCRF at each intersection point of the lightray and the rotating object, and processing the MCRF as if it were an IRF.

5.2. Phase II - Visualization of Non-Intuitive Physical Phenomena

5.2.1 The Pole-Barn Paradox

Provide views of the event from various Inertial Frames to view the pole from the Pole's frame, the Barn's frame, and from any arbitrary frame (thus demonstrating the relativity of simultaneity).

5.2.2 Energy/Momentum Transport

Bosons (or force particles), such as photons, transport Energy and Momentum between subatomic particles. It will be possible to simulate this exchange of forces at the subatomic level with the VSTF tools developed in Phase I.

5.2.2.1 Particle Collision

5.3 Phase III - Visualizing Extra Dimensions

5.3.1 Kaluza-Klein Dimensions

5.3.2 Spin

5.3.3 Large Extra Dimensions

5.4 Phase IV - General Relativistic Visualization

5.4.1 Spinning Charged Sphere

5.4.2 Ultra High Energy Particle Collisions

5.4.3 Blackhole Formation

Appendix A - Acronyms

3+1D - 4D spacetime: three spatial plus one time dimension
3D - Three Dimensions
4D - Four Dimensions
GR - General Relativity
IRF - Inertial Reference Frame
MCRF - Momentary Comoving Reference Frame
QCD - Quantum Chromo Dynamics
QED - Quantum Electro Dynamics
SR - Special Relativity
SRV - Special Relativistic Visualization


Appendix B - Glossary

Aberration of light
Bulk
Center of Mass Frame
Doppler effect
Emergent
Emergent Magnetism
Emergent Physics
Emergent Relativity
Extrude
Extrusion
Gravitomagnetic
Gravitational Thomas Precession (GTP)
Gravitational Lienard-Wiechert Potential
Headlight effect
Inertial Reference Frame (IRF)
Laboratory Frame
Lattice - a regular 3D (or 4D) array.
Lienard-Wiechert Potential
Lorentz Contraction
Lorentz Metric
Minkowski Diagram
Minkowski Metric
Metric
Momentary Comoving Reference Frame (MCRF)
n-bulk
Non-inertial Reference Frame
Quaternions
Rest Frame
Retarded Time
Searchlight effect
Space-time
Terrell Rotation
Thomas Precession

Site & Bandwidth Provided Courtesy of
Don V Black
Digital ChoreoGraphics

copyright 2003
Digital ChoreoGraphics
