Computing

On December 12, 2003, in Computing
Supercomputing is a hot
topic as the Big Mac(intosh) takes third place on the World’s Fastest list and IBM
promises to blow NEC’s monster Earth Simulator off the top spot with a
petaflop machine. IBM’s Blue Gene/P will be used to explore protein
folding, injecting atomic fire-starter into the already-exploding field of
systems biology.

Europe may be outclassed by the US and Japan in supercomputers, but
not in supercomputing. How so? The answer is grid computing. The US is starting to catch
up in grid development, in part through “the biggest, fastest network ever undertaken for
scientific research,” which will lead not only to “some truly extraordinary
discoveries” but also to “extreme multimedia,” such as “real telepresence,”
enabling scientists to collaborate across the world as easily as with colleagues
across the hall.

Competition is as hot at the Lilliputian as at the giant end of the computing
devices scale. Japanese and US researchers have independently devised ways to
read and write data to individual molecules, promising massive molecular memory storage in tiny
packages. But if the promise is not kept, there will always be plastic to fall
back on. It may be big compared to molecular substrates, but plastic memory is so easy and cheap to
manufacture that size may be an acceptable sacrifice.

If you prefer something more exotic than plastic to satisfy your
computing aesthetic, consider circuits built by DNA, or crystal photonic
circuits, or “adiabatic” circuits that recycle their own electrons. You may have
to wait a few years for those, but not for something possibly even more exotic:
unbreakable quantum encryption, on
the market now for US$50,000-100,000.

Supercomputing Renaissance

The New York Times’ John Markoff notes three significant shifts in
computing this year.

  • First: While attendance at Comdex, the leading annual show for the computing
    industry, was down, attendance at the annual supercomputing expo (SC2003)
    surged, stimulated by the completion of NEC’s Earth Simulator, a
    supercomputer an order of magnitude more powerful than any other.
  • Second: The supercomputing exhibition “highlighted the extent to which the
    industry has seized on grid computing.” (See next article for recent
    developments in grid computing, which is essentially about achieving
    supercomputing power without supercomputers.)
  • And third: There is evidence of a distinct trend toward the adoption of
    “reconfigurable logic” chips, whose circuits can be reconfigured on the fly to
    run different kinds of applications. Indeed, there is speculation that
    Microsoft chose IBM chips over Intel’s for the next generation of its
    Xbox video game machine because IBM is going the reconfigurable logic
    route.

But the low end of supercomputing is generating as much buzz as the high end,
now that Virginia Tech’s “Big Mac” supercomputer has been officially declared
the world’s third-fastest. Here’s a brief run-down on supercomputers making the
headlines:
  • Big Mac: Uses 1,100 dual-processor Power Mac G5 computers. Cost
    US$5.2 million. Performs 10.28 trillion floating-point operations a second
    (teraflops). Big Mac II, to begin construction in 2006, will be shooting
    for 50 teraflops.
  • Earth Simulator: Uses 5,120 processors; would fill four tennis
    courts. Estimated cost US$350-400 million. Performs 35 teraflops.
  • Blue Gene/L: 130,000 PowerPC 440GX processors; would fill half
    a tennis court. Will perform 360 teraflops to become easily the world’s fastest
    supercomputer when it comes online next year. The machine will likely run on
    Linux. IBM is due to deliver that machine plus the 100-teraflop ASCI
    Purple supercomputer to the Lawrence Livermore National Laboratory by early 2005
    for a combined cost of US$267 million.
  • A dishwasher-sized prototype of Blue Gene/L just unveiled is the
    73rd-fastest computer in the world in its own right and may be produced for sale
    to corporate data centers. It will be air-cooled and use no more power than the
    average home. Performs about 1.4 teraflops.

The US Defense Department has set a goal of a petaflop (1,000-teraflop)
computer by the end of the decade. Battelle, the firm that manages the Oak Ridge
National Laboratory in Tennessee, is working to deliver just that by 2008 for
about US$500 million, intended in part for medical research
applications. But Battelle could be beaten to the punch by IBM, which is already
planning the petaflop Blue Gene/P for completion as early as 2006.
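
For a sense of scale, the figures quoted above reduce to per-processor throughput. The arithmetic below is illustrative only, using the machine specs as reported in this article:

```python
# Per-processor throughput of the machines discussed above
# (figures as reported in this article; illustrative arithmetic only).
machines = {
    "Big Mac": (10.28e12, 2200),        # 1,100 dual-processor nodes
    "Earth Simulator": (35e12, 5120),
    "Blue Gene/L": (360e12, 130000),
}
for name, (flops, processors) in machines.items():
    gflops_each = flops / processors / 1e9
    print(f"{name}: {gflops_each:.1f} GFLOPS per processor")
```

The numbers make the design trade-off visible: Blue Gene/L bets on a huge count of modest processors, rather than the fast vector processors that give the Earth Simulator its higher per-processor figure.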

Among the first applications IBM is exploring to harness Blue Gene’s
power is to model the “folding” of proteins in human cells, which should lead to a
better understanding of diseases and potential cures. Already in use for medical
research is Arizona State University’s TGen supercomputer, which cost
US$4.4 million and gets 1.8 teraflops out of 1,024 Intel Xeon microprocessors.
TGen started work in September, searching for genetic markers of cancer,
Alzheimer’s, and other diseases. It uses genetic data from the Human Genome
Project and from the cells of cancer patients who did and did not respond well
to specific treatments. Based on the flood of requests from researchers, demand
is expected to “saturate the system power” within a year.

Reference: Markoff, John (2003). “Switching Allegiances in Computers.” New York Times, November 24.

Reference: Unknown (2003). “Mac Supercomputer Joins Elite.” Wired News/AP, November 15.

Reference: Markoff, John (2003). “IBM Says Supercomputer to Be Suitable for Businesses.” New York Times, November 14.

Reference: Unknown (2003). “Supercomputer Could Be Used to Analyze Medical Data.” iHealthBeat, November 19.

Reference: Wichner, David (2003). “Powerful computer to aid in research.” Arizona Daily Star, November 20.

Grid Computing: US Lags EU

We reported last month on the grid-based National Digital Mammography
Archive. Now, to expedite researchers’ access to “key bioinformatics platforms,”
the National Cancer Institute (NCI) plans to build a “cancer biomedical
informatics grid (caBIG),” a “common, extensible informatics platform that
integrates diverse data types . . . supports interoperable analytic tools . . .
[and] will allow research groups to tap into the rich collection of emerging
cancer research data while supporting their individual investigations.” These
are welcome developments, if a little tardy compared to Europe, where the
potential of grid computing was recognized and acted upon much faster than in
the United States — as we have noted in recent issues of HFD.

“Europe has decided that this is a real competitive advantage, and they are
going after it,” a US National Science Foundation director told John Markoff and
Jennifer Schenker of the New York Times. The fact that Europe’s lead in
grid computing may be as much as 18 months, and that the European Union has a
five- to ten-year strategic plan in place, is “a slap in the face and a wake-up
call that things have gone global,” another US expert said. However, Europe is
going after it with US-made technology.* The US National Science Foundation is
in discussions with European Commission officials to try to bring about more US
participation in European grid activities.

The University of California at San Francisco, meanwhile, appears to be
heading in the opposite direction. Assisted by IBM — one of the chief
proponents of grid computing — UCSF plans to move all the medical data from 150
databases into one. Perhaps the US, or at least IBM, is hedging its bets.

* For example, Swiss pharmaceutical company Novartis used software from
United Devices of Austin, Texas, to link 2,700 desktop PCs for drug discovery.
The grid has already helped identify several promising new molecules, and
Novartis now plans to expand it to all 70,000 PCs in its corporate network.

Reference: caBIG website.

Reference: Markoff, John, and Jennifer L. Schenker (2003). “Europe Exceeds U.S. in Refining Grid Computing.” New York Times, November 10.

Reference: Kirby, Carrie (2003). “UCSF, IBM to team up: Medical info to go into one database.” San Francisco Chronicle, November 17, p. E-1.

Warp Speed Networks

The first leg of the US National LambdaRail — “the biggest, fastest
network ever undertaken for scientific research” — has gone live between the
TeraGrid facility in Chicago and the Pittsburgh Supercomputing Center. Hundreds
of research institutions around the United States could be linked when the NLR
is completed by the end of 2004.

It is being created from 10,000 miles of unused or “dark” fiber.* It will use
about 40 wavelengths (“lambdas”) operating at 10 Gbps each. If all wavelengths
could be combined for a single transmission, the resulting 400 Gbps of bandwidth
would do to network speeds what IBM’s Blue Gene/L is about to do to
existing supercomputers: blow them away. Meanwhile, the Oak Ridge National
Laboratory has been awarded a contract by the US Department of Energy to design
a 10-40 Gbps network called Science UltraNet. We’re not sure whether this is
intended as a leg of the NLR, but it certainly could become one.
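
The headline figure is simple multiplication, and it also shows why grid-scale datasets need a network like this. Illustrative arithmetic only, using the numbers quoted above:

```python
# Aggregate NLR bandwidth from the figures quoted above.
lambdas = 40          # wavelengths ("lambdas") lit on the fiber
gbps_per_lambda = 10  # each operating at 10 Gbps
total_gbps = lambdas * gbps_per_lambda
print(total_gbps)  # 400

# Even at 400 Gbps, a petabyte of data means hours of transfer time:
petabyte_bits = 1e15 * 8
hours = petabyte_bits / (total_gbps * 1e9) / 3600
print(f"{hours:.1f} hours")  # 5.6 hours
```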

Grid computing puts a new premium on bandwidth, since trying to accommodate
petaflops of processing power and petabytes of data over the existing Internet
would be like trying to operate a jumbo jet from a dirt airstrip. Quoting from a
US National Science Foundation report, Wired’s Leander Kahney notes: “The
amounts of calculation and the quantities of information that can be stored,
transmitted and used are exploding at a stunning, almost disruptive rate.
Powerful data-mining techniques operating across huge sets of multidimensional
data open new approaches to discovery. Global networks can link all these
together and support more interactivity and broader collaboration.”

An NLR director told her: “We’re going to have some truly extraordinary
discoveries and data-mining capabilities, but we need these kinds of network
connections to allow the scientists to trawl through these enormous amounts of
data.” As if that were not enough to revolutionize science, the new global
networks will also enable “extreme multimedia,” such as “real telepresence,”
enabling scientists to collaborate across the world as easily as with colleagues
across the hall.

* Perhaps the fiber laid by over-optimistic interexchange carriers during the
dot-com boom.

Reference: Kahney, Leander (2003). “Fast Track for Science Data.” Wired News, November 17.

Reference: Associated Press (2003). “Tenn. lab to design high-speed network.” USA Today, November 25.

Molecular Memory

Japanese researchers have found a way to write (store) and read (retrieve) a
bit of data electronically on a single molecule of photochromic diarylethene.
The method may be compatible with existing electronics, works at room
temperature, and requires very little power. Molecular memory could store very
large amounts of information in a very small space, and is inexpensive enough to
be disposable. The researchers think inexpensive disposable memory circuits
could become practical in three years, and ultra-high-density molecular memory
systems in five to ten years.

Not to be outdone, US researchers have bonded a molecule of photochromic
fulgimide with a fluorescent dye molecule to form a single molecule with two
states — the basis for binary computing. The molecule “resists accidental
erasure,” switches quickly, and could lead to terabyte floppies. This method
uses photons rather than electrons to read and write data to the molecule, and
could become practical in five to seven years.

The race is on.

Reference: Unknown (2003). “Molecular Memory is Electric.” Technology Research News, November 14.

Reference: Unknown (2003). “Paired Molecules Store Data.” Technology Research News, November 3.

See also “Memory”; “Molecular Memory.”

Plastic Memory

Princeton University and HP Labs researchers have combined a conducting
polymer — commonly applied to photographic film as an antistatic coating — with
foil and silicon diodes to make a WORM (write-once, read-many-times) memory
material. Production would be simpler, and per-megabyte costs should be lower,
than for silicon memory, because manufacturing won’t need lithography, expensive
clean rooms, vacuum chambers, or high temperatures, and because layers of the
material could be stacked.
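
What defines a WORM medium is behavioral rather than chemical: each location accepts exactly one write and can then only be read. A toy model of that contract (the class name and API are my own, purely illustrative):

```python
class WormMemory:
    """Toy model of write-once, read-many (WORM) storage:
    each cell accepts exactly one write, then becomes read-only."""

    def __init__(self, size):
        self._cells = [None] * size
        self._written = [False] * size

    def write(self, addr, value):
        if self._written[addr]:
            raise ValueError(f"cell {addr} is already written (WORM)")
        self._cells[addr] = value
        self._written[addr] = True

    def read(self, addr):
        return self._cells[addr]


mem = WormMemory(8)
mem.write(0, 0b1010)
print(mem.read(0))  # 10
try:
    mem.write(0, 0b1111)  # a second write to the same cell must fail
except ValueError as e:
    print(e)  # cell 0 is already written (WORM)
```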

Reference: Fordahl, Matthew (2003). “New Plastic Memory Technology Unveiled.” Associated Press, November 12.

Reference: Biever, Celeste (2003). “Plastic memory promises cheap, dense storage.” New Scientist, November 13.

Reference: Unknown (2003). “Layers Promise Cheap Storage.” Technology Research News, November 24.

Exotic Computing

Progress in DNA Computing

Israeli scientists have harnessed DNA to “self-assemble” a nanoscale
electronic device made of gold-plated carbon nanotubes. The DNA is not involved
in operation of the circuitry, only in its construction.

The scientists attached the nanotubes to a protein that helps construct DNA
as part of a natural biological process called “recombination,” which cells use
to repair damaged DNA and to swap genes. The protein moves to an exact location
along the DNA strand, taking its attached nanotube along for the ride, rather
like construction workers carrying building materials to a specific spot
according to the architect’s plan. The nanotubes are then connected to gold
wires at each end, enabling current to flow when an electric field is applied —
in other words, they have become transistors.

The next step will be to build an actual circuit by stretching DNA across a
surface to provide a template, a process that has worked in simulated
experiments.

Reference: Chang, Kenneth (2003). “Smaller Computer Chips Built Using DNA as Template.” New York Times, November 21.

Photonic Computing Comes a Step Closer

Stanford and MIT researchers have designed (but not yet built) a simple
optical switch and an optical transistor. To be constructed from man-made
photonic crystal, both would be smaller than a micron, require little power, and
could be manufactured in existing chip fabrication plants. The devices could
see practical use in photonic computing in two to five years.

Reference: Unknown (2003). “Switch Promises Optical Chips.” Technology Research News, November 20.

Reversible Computing

We noted in “Screeching Halt?” (in the Acceleration section) that heat
dissipation will be a show-stopper for Moore’s Law when transistor gates on
computer chips reach the five-nanometer level in about 2018 — unless a way is
found to recycle the electrons.

University of Florida researchers have found a way. They propose to
incorporate tiny oscillators that allow chips to recapture the energy
expended in one calculation and re-use it for the next — a process
known as reversible, or adiabatic, computing. A startup company, Adiabatic
Logic, has already designed a commercial reversible processor. (It is not clear
from the Wired article, but the inference is that Adiabatic Logic is
using the University of Florida method.) Yet AMD, IBM, and Intel are showing
little interest, apparently because the technology is not yet developed enough
to compete with current approaches. Adiabatic Logic thinks it is only a
matter of time before the chipmakers have no choice but to turn to
reversible computing.
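
The logical precondition for adiabatic operation is reversibility: every output state must correspond to exactly one input state, so no information (and, by Landauer’s principle, no minimum quantum of heat) need be discarded. A minimal sketch using the classic Toffoli gate (illustrative only; the chips described above achieve reversibility electrically with oscillators, not with this particular gate):

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (controlled-controlled-NOT) gate: flips c iff a and b are both 1."""
    return a, b, c ^ (a & b)

inputs = list(product((0, 1), repeat=3))

# Reversible: the gate is a bijection on the 8 possible input states...
assert len({toffoli(*bits) for bits in inputs}) == 8

# ...and it is its own inverse, so any computation built from it can be run backwards.
assert all(toffoli(*toffoli(*bits)) == bits for bits in inputs)
print("Toffoli gate is bijective and self-inverse")
```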

Reference: Asaravala, Amit (2003). “Chip Design Reverses a Hot Trend.” Wired News, November 13.

Quantum Encryption For Sale

Startup firm MagiQ Technologies has begun selling what may be the first
commercially available quantum encryption system. It uses photons to transfer
the encryption keys used to protect documents from prying eyes, and it is
unbreakable: if the photons — quantum particles — are observed, they
change, scrambling the key and alerting sender and recipient to the
attempted intrusion. The system, called Navajo after the Navajo-language
code used with great success in World War II, is contained in small boxes that
generate and read the signals over a fiber-optic line. The boxes can be up to 70
miles apart, and more boxes add more distance. Navajo costs
US$50,000-100,000, affordable to banks, insurers, government agencies,
pharmaceutical companies, and other organizations that transmit sensitive
information worth considerably more. Navajo also changes the keys ten
times a second, and so is impervious to the human carelessness with keys that
makes otherwise invulnerable systems vulnerable.
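
The detection property described above can be simulated classically. The sketch below is a toy model of BB84-style quantum key distribution, the general class of protocol such products build on (the source does not say which protocol Navajo uses): an eavesdropper who measures photons in a randomly guessed basis corrupts roughly a quarter of the bits the two parties later compare, revealing the intrusion.

```python
import random

def sifted_key_error_rate(n_photons, eavesdropper, rng):
    """Toy BB84-style model: the sender encodes each bit in one of two
    random bases; measuring in the wrong basis randomizes the bit."""
    errors = kept = 0
    for _ in range(n_photons):
        bit = rng.randrange(2)
        send_basis = rng.randrange(2)
        received = bit
        if eavesdropper and rng.randrange(2) != send_basis:
            received = rng.randrange(2)  # wrong-basis measurement scrambles the photon
        if rng.randrange(2) == send_basis:  # receiver guessed right: keep this bit
            kept += 1
            errors += received != bit
    return errors / kept

rng = random.Random(42)
print(sifted_key_error_rate(20000, False, rng))  # 0.0 -- clean channel
print(sifted_key_error_rate(20000, True, rng))   # ~0.25 -- intrusion detected
```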

The device has significance beyond encryption, because it represents a step
towards practical quantum computing. And the beauty of it is that even a quantum
computer has been shown theoretically to be incapable of decoding a
quantum-encrypted message.

Reference: Bergstein, Brian (2003). “Quantum leap for encryption.” Associated Press/Australian IT, November 17.

 
