Computing

On November 12, 2003, in Computing
As of last month, the world’s second (maybe third) fastest supercomputer is a
cheap (US$5 million) Mac cluster. As of next year, one of the
world’s top supercomputers could be your desktop PC, thanks to a new chip being shipped to computer manufacturers for
evaluation right about now.

Photonic computing is not yet ready for your desktop, but it is ready for business. So is grid computing for medicine, in
the form of the new National Digital Mammography Archive,
and the potential for bioinformatics breakthroughs grows rapidly as new ways
to handle petabytes and exabytes of molecular-level data are brought online through
the grid. Even camera-equipped cell phones will soon be able to handle and store
the multi-megabytes of data that make up an hour of video
footage, thanks to a new Intel chip on its way to cell phone manufacturers.

Medicine, by the way, is not the only field to go molecular: Electronics is
going that way, too, through molecular switches made of
gold atoms and nanowires grown like grass on a sod farm
to make flexible plastic computer chips.

A technological flip seems to be underway: humans are being turned into network hubs, a new branch of AI is putting instinct into machines, and NASA’s Jet Propulsion Laboratory is building it into robots.

Big Mac

We see computer price/performance ratios improving every day, but for a
really stark demonstration of the acceleration in the falling cost of computing,
consider that a new supercomputer built out of 1,100 dual-processor Power Mac
G5 computers could be the second fastest in the world — after Japan’s
Earth Simulator — if its benchmark numbers are confirmed in final tests in
mid-November. Big Mac, as it is inevitably known, cost $5.2 million,
versus $350 million for the Earth Simulator.

Reference: Kahney, Leander (2003). “Mac Supercomputer: Fast, Cheap.” Wired News, October 15.

Laptop Supercomputers

We thought AMD’s introduction last month of the Athlon 64 CPU
chip for standard PCs was a big deal. After all, it brings mainframe computing
to the desktop. Barely a month later, we learn that ClearSpeed Technologies has
introduced a 25 gigaflop parallel processing chip offered as a single chip or as
a PCI card containing four chips. A desktop PC stuffed with six of the PCI cards
would qualify as one of the 500 most powerful supercomputers in the world,
operating at 600 gigaflops. Prototypes of the chips will be supplied to computer
manufacturers by the end of this year.

The chip needs only two watts of power — low enough to run off a laptop
battery. A two-chip card would turn a laptop into a 50 gigaflop machine with the
processing power of a small Linux cluster; with next-generation chips, it
would be a 200 gigaflop low-end supercomputer. The same next-generation chip
could lead to petaflop supercomputers — equal to 25 of the current world
champion supercomputer, Japan’s Earth Simulator. And instead of taking up
an entire warehouse, such a machine would occupy only about 20 racks.
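
As a rough back-of-the-envelope check on those figures, here is the arithmetic in a few lines of Python (a sketch only; the per-chip and per-card numbers come from the article, while the roughly 40-teraflop Earth Simulator figure is an outside assumption):

```python
# Back-of-the-envelope check of the ClearSpeed figures quoted above.
GFLOPS_PER_CHIP = 25            # per the article
CHIPS_PER_PCI_CARD = 4          # per the article

card_gflops = GFLOPS_PER_CHIP * CHIPS_PER_PCI_CARD   # 100 gigaflops per card
desktop_gflops = 6 * card_gflops                     # six cards in a PC -> 600 gigaflops
laptop_gflops = 2 * GFLOPS_PER_CHIP                  # two-chip card -> 50 gigaflops

# The Earth Simulator peaks at roughly 40 teraflops (an outside figure, not from
# the article), so "25 Earth Simulators" is about 25 * 40 = 1,000 teraflops, i.e. one petaflop.
print(card_gflops, desktop_gflops, laptop_gflops)    # 100 600 50
```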

Reference: Kahney, Leander (2003). “Turn That PC Into a Supercomputer.” Wired News, October 14.

Photonic Computing Arrives

Custom-built photonic chips have been around for a while in well-financed
government labs, but the eight tera-ops Enlight chip (about a thousand
times faster than a standard electronic chip) is the first commercially
available photonic chip. Actually, it’s a hybrid photonic-electronic chip, but
owes its speed to the photonic part, which permits massively parallel
processing. It is not a general purpose processor like a Pentium. Each must be
custom-built for specific tasks, is not programmable, and costs “tens of
thousands of dollars.”

Reference: Graham-Rowe, Duncan (2003). “New processor computes at light speed.” New Scientist, October 3.

Breast Cancer and Grid Computing

The National Digital Mammography Archive (NDMA), developed at the University
of Pennsylvania and apparently now spun off to private interests, enables
facilities with digital mammography machines to share mammograms with other
digital mammography facilities via the NDMA grid computing network.
De-identified data from NDMA will be sold for pharmaceutical research and
development, and decision-support software can be applied to the NDMA data to
help doctors detect early cancers.

Currently, only 490 of 15,400 mammography machines in the United States are
digital, but accelerating adoption is expected as more competitors enter the
market and prices of the very expensive machines come down.

Grid computing has hitherto been more about processing than storage, but NDMA
turns that on its head. Digital mammogram image files are much bigger than those
of MRI and CT scans, so the storage needs are mammoth as facilities
transition from film to digital. The NDMA grid computing system is designed to
handle up to ten petabytes of data, or about 200 million mammograms.
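
Taken together, those two figures imply an average of roughly 50 megabytes per mammogram. A quick sketch of the arithmetic (the ten-petabyte capacity and the 200-million count are from the article; the decimal-petabyte convention is an assumption):

```python
# Implied average size of a digital mammogram, from the NDMA figures above.
capacity_petabytes = 10
mammograms = 200_000_000

bytes_total = capacity_petabytes * 10**15            # decimal petabytes assumed
mb_per_mammogram = bytes_total / mammograms / 10**6
print(round(mb_per_mammogram))                       # -> 50 megabytes per mammogram
```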

The first phase of a similar grid initiative, apparently using the same IBM
technology, is underway at CERN (the European Organization for Nuclear
Research). CERN’s grid will initially link 70,000 computers in 12 countries, and
eventually enable scientists around the globe to analyze the five to eight
petabytes of data to be generated annually when CERN’s massive new particle
accelerator, the Large Hadron Collider, gets into top gear in 2007. The final
phase of the grid will add the computing power of scientific centers across the
world to create a virtual supercomputer network, and will make that power
available to anyone. The IBM grid technology, codenamed Storage Tank, makes
those petabytes of data appear to the user to be on a local network file server.

Many computing technologies are mutually multiplying — a major advance in
one is turned into a gigantic advance by another. For example, a recent major
step toward using carbon nanotubes for dense data storage could, in five years,
massively magnify the power of grid computing and grid storage, which by then
will already have grown simply through wider adoption.

References: Patsuris, Penelope (2003). “Grid Computing Takes On Breast Cancer.”
Forbes, October 27; Unknown (2003). “Huge computing power goes online.” BBC
News, September 30; Delio, Michelle (2003). “A Storage Tank Like No Other.”
Wired News, October 13; Unknown (2003). “Nanotubes Boost Storage.” Technology
Research News, October 9.

Store An Hour of Video in Your Cell Phone

Early next year, Intel will introduce its StrataFlash Wireless, a
memory system designed to store up to an hour of full-motion video, as well as
still pictures and music, on camera-equipped cell phones, without increasing the
size of current phones and without dramatically increasing their cost.
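
The article gives no bitrate, but a rough sketch shows why an hour of phone-quality video is within reach of flash memory: at an assumed 192 kilobits per second (an illustrative figure, not Intel’s specification), an hour comes to well under 100 megabytes.

```python
# Rough storage estimate for an hour of low-bitrate phone video.
# The 192 kbps figure is an illustrative assumption, not from the article.
bitrate_kbps = 192
seconds_per_hour = 3600

megabytes = bitrate_kbps * 1000 * seconds_per_hour / 8 / 10**6
print(round(megabytes, 1))   # -> 86.4 MB for one hour of video
```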

Reference: Fordahl, Matthew (2003). “Intel unveils flash memory for next-generation phones.” Associated Press, October 13.

Molecular Circuits

U.S. researchers have built a memory circuit from atoms of gold. This new
field of “molecular electronics” has succeeded in making molecular and
atomic-scale switches, and is now trying to assemble them into vast arrays to
serve both as memory and processing units. So far, one of several teams pursuing
this goal has created self-assembling circuits some ten times denser (but also
slower) than those in silicon chips.

Reference: Markoff, John (2003). “Electronic Memory Research That Dwarfs the Silicon Chip.” New York Times, October 20.

Nanowires

German researchers have developed a new method for making flexible
transistors by growing vertical semiconductor nanowires, like a field of grass,
inside a thin sandwich of plastic with a metal filling. The nanowires serve as
the transistor channels, and the metal layer acts as the gate electrode. Source
and drain electrodes are then added to the top and bottom of the stack to
complete the transistors. Development is continuing to improve their performance.

Reference: Unknown (2003). “Nanowires Boost Plastic Circuits.” Technology Research News, October 20.

Human Hubs

ElectAura-Net is a ten megabits per second indoor wireless network
that uses human bodies rather than the radio waves, infrared light, or
microwaves of conventional networks. It seems we all walk around in a sort of
electric field envelope, which ElectAura-Net uses to connect with a
similar field emanating from transmitters placed at about one-meter intervals
under carpets or tiles. A user carrying — or wearing — a PDA would
automatically connect to the network just by being on the floor. The network
transmits data faster than both Bluetooth (1 Mbps) and infrared (4 Mbps).
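
To make that comparison concrete, here is a quick sketch of how long a hypothetical 5-megabyte file would take to send over each link (the 10, 4, and 1 Mbps rates are from the article; the file size is an arbitrary example):

```python
# Time to send a hypothetical 5 MB file over each link type mentioned above.
file_megabytes = 5
links_mbps = {"ElectAura-Net": 10, "infrared (IrDA)": 4, "Bluetooth": 1}

for name, mbps in links_mbps.items():
    seconds = file_megabytes * 8 / mbps       # megabits / (megabits per second)
    print(f"{name}: {seconds:.0f} s")         # 4 s, 10 s, 40 s
```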

Among the data transmitted could be the user’s indoor location, complementing
the GPS and cellular systems that provide location information outdoors. But it
would also compete with RFID tag technology, which is much further along the
development path and already being introduced commercially. The researchers
themselves admit the technology has an uncertain future.

Reference: Smalley, Eric (2003). “Body network gains speed.” Technology Research News, October 22/29.

AI Handles Real-world Complexity

There’s been no spectacular artificial intelligence breakthrough to report this
month. In fact, except for a chess match now and
then, AI has produced few headlines since the field was conceived, relative to
other fields of computer science. But that does not mean it has not made
significant progress, or that AI does not underlie many of the technologies
(robotics, for example, and diagnostic decision support systems) in common use
today.

One branch of AI, “complex agent-based dynamic networks,” is achieving some
unheralded success in explaining individual and group behaviors through the use
of selfish software agents that mimic the selfish behaviors of people in
real-world social, environmental, business, political, and other systems. The
method has been especially successful in modeling the financial markets, where
the end, if not the means, is simple and unambiguous: Money.

An agent-based network successfully predicted some of the impacts of NASDAQ’s
move to change stock price denominations from fractions to decimals. Another,
working at the level of the individual human “market-maker” rather than the
whole market, simulated a successful market-maker that accurately predicted stock
values, and in doing so helped explain what market-makers themselves can only
describe as “instinct.”
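
The articles do not describe the models themselves, but the flavor of agent-based simulation is easy to convey. The toy sketch below is purely illustrative (not the researchers’ code): each trader agent follows its own selfish rule, and a market price emerges from their combined buying and selling.

```python
import random

class Trader:
    """A selfish agent with its own idea of how optimistic to be."""
    def __init__(self, optimism):
        self.optimism = optimism

    def order(self, price, fair_value):
        # Buy (+1) if the stock looks cheap to this agent, sell (-1) otherwise.
        return 1 if fair_value * self.optimism > price else -1

def simulate(steps=50, n_agents=100, fair_value=100.0):
    agents = [Trader(random.uniform(0.9, 1.1)) for _ in range(n_agents)]
    price = fair_value
    for _ in range(steps):
        net_demand = sum(a.order(price, fair_value) for a in agents)
        price *= 1 + 0.001 * net_demand   # excess demand nudges the price up, excess supply down
    return price

if __name__ == "__main__":
    random.seed(1)
    print(round(simulate(), 2))           # emergent price after the agents interact
```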

NASA is using such agents in prototype airplane wings with hundreds of small
ailerons, each of which has its own agent. By communicating with the other
aileron agents, each decides — instinctively, as it were — whether and in what
direction (up or down) it should move. Researchers have also created agent
networks that simulate the interactions among Colombian organized crime and
paramilitary groups in a “game” of drugs, money, and politics.

Reference: Unknown (2003). “Agents of creation: Artificial ‘agents’ can model complex systems.” The Economist, October 9.

Humanizing Robots

NASA’s Jet Propulsion Laboratory is working to program robots with human-like
artificial intelligence, in order to make them more independent, capable of
learning, and able to adjust their own programming.

The traditional approach, “deliberative control,” relies on painstakingly
constructing maps and models the robot then uses to plan sequences of action
with mathematical precision. It is essentially a sequential algorithm the robot
blindly follows. The newer “reactive control” approach, on the other hand,
relies on real-time observation of the environment, without maps and
pre-planning. The JPL researchers are using a hybrid approach they call
“behavior-based control,” which uses fuzzy logic and neural networks to give the
robots a plan but enable them to react flexibly and step outside the plan when
something unexpected happens — much as we humans do. (This sounds almost
identical to “model-based reasoning.”)
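
The article includes no code, but the core idea of behavior-based control (several simple behaviors each recommending an action, with fuzzy weights deciding how much influence each gets) can be sketched in a few lines. The behaviors and membership function below are illustrative assumptions, not JPL’s implementation.

```python
# Minimal sketch of behavior-based control with fuzzy blending (illustrative only).

def follow_plan(heading_error):
    # Steer back toward the planned heading.
    return -heading_error

def avoid_obstacle(obstacle_bearing):
    # Steer away from an obstacle: positive bearing -> steer negative, and vice versa.
    return -1.0 if obstacle_bearing > 0 else 1.0

def fuzzy_obstacle_activation(distance_m):
    # Fuzzy membership: 1.0 when an obstacle is touching, fading to 0.0 beyond 2 m.
    return max(0.0, min(1.0, (2.0 - distance_m) / 2.0))

def steer(heading_error, obstacle_bearing, obstacle_distance_m):
    w_avoid = fuzzy_obstacle_activation(obstacle_distance_m)
    w_plan = 1.0 - w_avoid
    # Weighted blend of the behaviors' recommendations.
    return w_plan * follow_plan(heading_error) + w_avoid * avoid_obstacle(obstacle_bearing)

# Mostly follows the plan in open space, swerves when something is close.
print(steer(heading_error=0.2, obstacle_bearing=0.5, obstacle_distance_m=1.8))
print(steer(heading_error=0.2, obstacle_bearing=0.5, obstacle_distance_m=0.3))
```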

Neural networks are already in common use, in digital cameras, computer
programs (for handwriting recognition and other applications), dishwashers,
washing machines, car engines, mail sorters, and more.

Reference: Jet Propulsion Lab (2003). “People are robots too. Almost.” Red Nova, October 29.

 
