Visualizing The Future
(Scientific Visualization)
Scientific Visualization: An Overview
Visualization, in its broadest sense, is any technique for creating images to represent abstract data. Scientific visualization has grown to encompass many other areas, such as business (information visualization), computing (process visualization), medicine, chemical engineering, flight simulation, and architecture. Indeed, few areas of human endeavor do not draw on scientific visualization in one form or another.
In crude terms, scientific visualization was born out of the conversion of text into graphics: describing an apple with words, for instance. Bar graphs, charts, and diagrams were two-dimensional forerunners in converting data into visual representations. Words and two-dimensional representations can only go so far, however; more mathematically accurate datasets were needed to describe an object's exterior, interior, and functioning processes.
Such datasets were huge, and it wasn't until the development of supercomputers with immense processing power, combined with sophisticated digital graphics workstations, that converting data into more dynamic, 3-D graphical representations became possible. From the early days of computer graphics, users saw the potential of computer visualization to investigate and explain physical phenomena and processes, from repairing space vehicles to chaining molecules together.
In general, the term "scientific visualization" is used to
refer to any technique involving the transformation of data
into visual information. It characterizes the technology of
using computer graphics techniques to explore results from
numerical analysis and extract meaning from complex, mostly
multi-dimensional data sets.
Traditionally, the visualization process consists of filtering
raw data to select a desired resolution and region of interest,
mapping that result into a graphical form, and producing an
image, animation, or other visual product. The result is evaluated,
the visualization parameters modified, and the process run
again.
Three-dimensional imaging of medical datasets was introduced after clinical CT (computed tomography) scanning became a reality in the 1970s. A CT scanner images the internals of an object by obtaining a series of two-dimensional x-ray axial images. The individual axial slice images are taken using an x-ray tube that rotates around the object, taking many scans as the object is gradually passed through the scanner. The multiple scans from each 360-degree sweep are then processed to produce a single cross-section. See MRI and CAT scanning in the Optics section.
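As a rough illustration of how many one-dimensional scans combine into a single two-dimensional cross-section, here is a minimal sketch of unfiltered backprojection in Python using numpy. It assumes a precomputed sinogram (one 1D projection per angle); clinical scanners use filtered backprojection or iterative methods, so this shows only the core idea, not a practical reconstructor.

```python
import numpy as np

def backproject(sinogram, angles_deg):
    """Combine 1D x-ray projections into one 2D cross-section.

    sinogram: array of shape (n_angles, n_detectors), one row per angle.
    Unfiltered backprojection produces a blurry result; it only
    illustrates how a 360-degree sweep of scans forms a single slice.
    """
    n_angles, n_det = sinogram.shape
    xs = np.arange(n_det) - n_det / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # position of each image pixel along the detector for this angle
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return recon / n_angles
```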
The goal of the visualization process is to generate visually understandable images from abstract data. Several steps must be performed during the generation process; these steps are arranged in the so-called visualization pipeline.
Visualization Methods
Data is obtained either by sampling or measuring, or by executing a computational model. Filtering is a step that pre-processes the raw data and extracts the information to be used in the mapping step. Filtering includes operations like interpolating missing data or reducing the amount of data; it can also involve smoothing the data and removing errors from the data set.
Mapping is the core of the visualization process. It transforms the pre-processed, filtered data into 2D or 3D geometric primitives with appropriate attributes such as color or opacity, and is very important for the later visual representation of the data. Rendering then generates the output image from the geometric primitives produced by the mapping step. There are a number of different filtering, mapping, and rendering methods used in the visualization process.
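A minimal sketch of the filter-map-render pipeline, using Python with numpy and matplotlib (tools chosen for illustration; the article does not name a specific package). The scalar field, noise level, and colormap are all assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Data: execute a computational model, here a synthetic 2D scalar field,
# and add some "measurement" noise.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
raw = np.sin(x**2 + y**2) / (x**2 + y**2 + 1)
raw += 0.05 * np.random.default_rng(0).normal(size=raw.shape)

# Filter: smooth the raw data with a crude 3x3 box average
# (edges wrap around, which is acceptable for a sketch).
filtered = sum(np.roll(np.roll(raw, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

# Map: assign graphical attributes (here, colors via a colormap).
# Render: produce the output image.
plt.imshow(filtered, cmap="viridis", origin="lower", extent=(-3, 3, -3, 3))
plt.colorbar(label="field value")
plt.savefig("field.png")
```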
Some of the earliest medical visualizations created 3D representations from CT scans with help from electron microscopy. Images were geometric shapes such as polygons and lines forming a wireframe that represented three-dimensional volumetric objects. Similar techniques are used in creating animation for Hollywood films. With sophisticated rendering capability, motion could be added to the wireframe model to illustrate such processes as blood flow, or fluid dynamics in chemical and physical engineering.
The development of integrated software environments took visualization to new levels. Some of the systems developed during the 1980s include IBM's Data Explorer, Ohio State University's apE,
Wavefront's Advanced Visualizer, SGI's IRIS Explorer, Stardent's
AVS and Wavefront's Data Visualizer, Khoros (University of
New Mexico), and PV-WAVE (Precision Visuals' Workstation Analysis
and Visualization Environment).
These visualization systems were designed to help scientists, who often knew little about how graphics are generated. The most usable systems offered a visual interface: software modules were developed independently, with standardized inputs and outputs, and were visually linked together in a pipeline. Such systems are sometimes called modular visualization environments (MVEs).
MVEs allowed the user to create visualizations by selecting
program modules from a library and specifying the flow of
data between modules using an interactive graphical networking
or mapping environment. Maps or networks could be saved for
later recall.
General classes of modules included:
- data readers - input the data from the data source
- data filters - convert the data from a simulation or other source into another form which is more informative or less voluminous
- data mappers - convert information into another domain, such as 2D or 3D geometry or sound
- viewers or renderers - render the 2D and 3D data as images
- control structures - display devices, recording devices, open graphics windows
- data writers - output the original or filtered data
MVEs required no graphics expertise, allowed rapid prototyping and interactive modification, promoted code reuse, supported the creation of new modules, and let computations be distributed across machines, networks, and platforms.
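A minimal sketch of the dataflow idea behind MVEs, in Python. The module names and the toy dataset are hypothetical, not taken from any of the systems named above; the point is that each module exposes the same standardized interface, so independently developed modules can be wired into a pipeline.

```python
# Each module has the same standardized interface: run(input) -> output.
class Reader:
    def run(self, _):
        # a data reader would open a file; here we return a toy dataset
        return [3.0, 7.5, float("nan"), 4.2, 6.1]

class Filter:
    def run(self, data):
        # drop missing values (NaN compares unequal to itself)
        return [v for v in data if v == v]

class Mapper:
    def run(self, data):
        # map each value to a 2D geometric primitive (a bar)
        return [("bar", i, v) for i, v in enumerate(data)]

class Renderer:
    def run(self, primitives):
        # render the primitives as a crude text image
        for kind, i, v in primitives:
            print(f"{i:2d} " + "#" * int(v))

# The saved "network": data flows reader -> filter -> mapper -> renderer.
data = None
for module in (Reader(), Filter(), Mapper(), Renderer()):
    data = module.run(data)
```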
Earlier systems were not always good performers, especially on larger datasets, and their imaging quality was often poor.
Newer visualization systems came out of the commercial animation software industry. The Wavefront Advanced Visualizer was a modeling, animation, and rendering package that provided an environment for interactive construction of models, camera motion, rendering, and animation without any programming. The user could work with many supplied modeling primitives and model deformations, create surface properties, adjust lighting, create and preview model and camera motions, do high-quality rendering, and save images to video tape.
Acquiring data is accomplished in a variety of ways: CT scans, MRI scans, ultrasound, confocal microscopy, computational fluid dynamics, and remote sensing. Remote sensing involves gathering data and information about the physical world by detecting and measuring phenomena such as radiation, particles, and fields associated with objects located beyond the immediate vicinity of the sensing device. It is most often used to acquire and interpret geospatial data for features, objects, and classes on the Earth's land surface, oceans, and atmosphere, and in outer space for mapping the exteriors of planets, stars, and galaxies. Data is also obtained via aerial photography, spectroscopy, radar, radiometry, and other sensor technologies.
Another major approach to 3D visualization is volume rendering, which allows the display of information throughout a 3D data set, not just on its surface. Pixar, a spin-off from George Lucas's Industrial Light & Magic (ILM), created a volume rendering method, or algorithm, that used independent 3D cells within the volume, called "voxels".
The volume was composed of voxels that each stored a value for the same property, such as density; a surface occurs between groups of voxels with two different values. The algorithm used color and intensity values from the original scans, and gradients obtained from the density values, to compute the 3D solid. Other approaches include ray-tracing and splatting.
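A minimal sketch of a voxel volume in Python with numpy; this is not Pixar's actual algorithm, and the spherical density volume is a hypothetical stand-in for a stack of CT slices. It shows the two ingredients described above: density values stored per voxel, and gradients of those values, which locate surfaces and supply shading normals.

```python
import numpy as np

# Hypothetical density volume: a solid sphere inside a 64^3 voxel grid.
n = 64
ax = np.arange(n) - n / 2.0
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
density = (np.sqrt(X**2 + Y**2 + Z**2) < 20).astype(float)

# Gradients change sharply where neighboring voxels hold different
# values, i.e. at surfaces; volume renderers use them as shading normals.
gx, gy, gz = np.gradient(density)

# A maximum-intensity projection along one axis as a stand-in "render":
# every voxel in the volume can contribute, not just the outer surface.
image = density.max(axis=2)
print(image.shape)  # a (64, 64) cross-sectional image
```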
Scientific visualization draws from many disciplines, such as computer graphics, image processing, art, graphic design, human-computer interaction (HCI), cognition, and perception. The fine arts are also extremely useful to scientific visualization: art history can offer insights into visual form, as well as into imagining scenarios that have little or no supporting data.
Along with all these uses for the computer, an important part of computing's future was the invention of the LCD screen, which helped tie it all together. LCDs brought visual graphics to life, with better resolution, lighter weight, and faster display of data than the computer monitors of the past.
Computer simulations have become a useful part of modeling natural systems in physics, chemistry, and biology; human systems in economics and social science; and the engineering of new technology. Simulations render mathematical models into visual representations that are easier to understand. Computer models can be classified as stochastic or deterministic.
Stochastic models use random number generators to model chance or random events, such as genetic drift. A discrete event simulation (DES) manages events in time; most simulations are of this type. A continuous simulation uses differential equations (either partial or ordinary), implemented numerically: the simulation program solves all the equations periodically and uses the numbers to change the state and output of the simulation. Most flight and racing-car simulations are of this type, as are simulated electrical circuits.
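A minimal sketch of a continuous simulation in Python: an ordinary differential equation for a simple RC electrical circuit (dV/dt = (Vin - V)/RC), integrated with the explicit Euler method. The component values and step count are hypothetical.

```python
# RC circuit: a capacitor charging through a resistor toward Vin.
R, C, Vin = 1e3, 1e-6, 5.0   # ohms, farads, volts (assumed values)
dt, V = 1e-5, 0.0            # time step (s), initial capacitor voltage

# The simulation solves the equation at every time step and uses the
# result to update the state, exactly as described above.
for step in range(500):
    V += dt * (Vin - V) / (R * C)

print(f"capacitor voltage after {500 * dt * 1e3:.0f} ms: {V:.2f} V")
```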
Other methods include agent-based simulation. In agent-based
simulation, the individual entities (such as molecules, cells,
trees or consumers) in the model are represented directly
(rather than by their density or concentration) and possess
an internal state and set of behaviors or rules which determine
how the agent's state is updated from one time-step to the
next.
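A minimal agent-based sketch in Python with hypothetical agents and rules: each cell carries its own internal state and update rule, rather than being summarized by a population-level density or concentration.

```python
import random

class Cell:
    """One agent with an internal state and a simple behavioral rule."""
    def __init__(self):
        self.alive = True

    def step(self):
        # rule: a live cell dies with 5% probability each time step
        if self.alive and random.random() < 0.05:
            self.alive = False

population = [Cell() for _ in range(1000)]
for t in range(20):
    for cell in population:
        cell.step()

print(sum(c.alive for c in population), "cells alive after 20 steps")
```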
Winter Simulation Conference
The Winter Simulation Conference is an important
annual event covering leading-edge developments in simulation
analysis and modeling methodology. Areas covered include agent-based
modeling, business process reengineering, computer and communication
systems, construction engineering and project management,
education, healthcare, homeland security, logistics, transportation,
distribution, manufacturing, military operations, risk analysis,
virtual reality, web-enabled simulation, and the future of
simulation. The WSC provides educational opportunities for both
novices and experts. As of this writing, the next conference
will be held December 3-6, 2006, in Monterey, CA, at the Portola
Plaza.