Computing

On August 2, 2003, in Computing
Computer memory built from nanoscale
components — molecules or carbon nanotubes — might be as close as five years
hence. So too might quantum computing, with serious venture capital backing an
innovative approach. Add to the mix adaptive and photonic chips, distributed
computing, and automatic language translation, and the pace of change in
computing looks set to quicken considerably.

Memory

Researchers have created a process that adds electrical contacts to a sheet
of molecules to produce molecular memory. It may take five to ten years to apply
the process commercially, because mass production capability at nanoscale is
lacking.

So far, we have been able to continually squeeze more data onto magnetic
storage media by reducing the size of the physical, magnetic bits that store the
digital bits. But beyond a threshold point of smallness — a point we will reach
in the next five to ten years — heat overcomes magnetism and will halt
progress.

An international team of researchers has now found a way to boost the
magnetism of nanoscale magnetic particles, thus overcoming the problem and
enabling storage capacities of more than a trillion bits per square inch. The
method could be used to make practical devices soon, but that might not be
necessary until we are closer to the threshold.

Korean researchers have also made progress toward high-capacity nonvolatile
(permanent) computer memory made out of carbon nanotube transistors. They have
made nanodevices which, when fully developed in an estimated ten to 15 years,
could store 200 gigabits per square inch, about 200 times the capacity of
today’s memory chips.

Such capacity will be necessary to process the prodigious output from
Nanolabs and distributed computing initiatives discussed in this and a
preceding article (see the Acceleration section), and by the quantum
computers discussed in the next article.

References: Unknown (2003). “Material Helps Bits Beat Heat.” Technology
Research News, June 24 (citing the June 19, 2003 issue of Nature); Unknown
(2003). “Nanotube Transistors Make Memory.” Technology Research News, June 13
(citing the January 13, 2003 issue of Applied Physics Letters); Unknown
(2003). “Prefab Key to Molecular Memory.” Technology Research News, June 19
(citing the January 27, 2003 issue of Applied Physics Letters).

VC Joins QC

Few venture capitalists look as far as five years, let alone 50. So the
infusion of serious venture capital into quantum computing is a sign that
quantum computing is closer than many suppose. D-Wave, a Canadian company
co-founded by a former top Soviet physicist, has received capital from one of
Silicon Valley’s best-known venture firms to build a quantum computer. If it
works, it will have rapid and profound impacts in medicine, especially in the
design and creation of new drugs and materials.

Today’s computers can model relatively coarse-grained structures and
processes, but are not fast enough to model atomic forces inside a molecule, and
molecules are the key to new drugs and materials. If a molecule can’t be
modeled, it has to be painstakingly assembled through trial-and-error. D-Wave’s
superconducting quantum computer sidesteps constraints on conventional
approaches to quantum computing. One of its circuits “easily etched on the head
of a pin would contain more useful memory than all the (computer) memory that’s
ever been built,” according to D-Wave’s CEO, who expects to be producing circuits
good enough to beat regular supercomputers in five years.

Reference: Maney, Kevin (2003). “Quantum computer developer gets funds.” USA
Today, June 23.

Distributed Computing & Petri PCs

The Berkeley scientist who created the SETI@home distributed computing
system to analyze space noise in search of signals from aliens is providing an
easier and cheaper way to write such software. Distributed computing provides
the equivalent of supercomputing power, but writing the software has been
expensive and time-consuming. The new, open source method will open the door to
more distributed computing initiatives, of which Popular Science’s Jenny
Everett provides four current examples:

  • The Great Internet Mersenne Prime Search (GIMPS), involving 130,000
    PCs providing a total average power of 9 teraflops,* has discovered five
    Mersenne primes in six years, including one with more than 4 million digits.
  • SETI@home uses over four million PCs supplying on average 56.2
    teraflops of processing power.
  • The Smallpox Research Grid employs two million PCs with a total
    average of 180 teraflops to analyze potential smallpox drugs. 35 million drug
    candidates have been screened since February this year.
  • The new climateprediction.net plans to predict significant changes in
    Earth’s climate using a million PCs.

Distributed computing will grow in importance as the volume of data from an
accelerating number of increasingly powerful sensors and analyzers expands
exponentially. CNET’s Ed Frauenheim reports that the explosion of data coming
from Aventis’ high-tech labs led the drug-maker to apportion a whopping $3
billion to IT in its 2002 research and development budget. All pharmaceutical
companies, government research centers, and other bioscience organizations are
similarly affected by the data glut, which is partly the result of increasing
use of “in silico” experimentation, where binary 1s and 0s simulate the
biochemical compounds of traditional in-vitro petri-dish research; partly the
result of microtitre plates that hold tens of thousands of minuscule drops of
compounds and enable millions of automated experiments a day; and partly the
result of the decoding of the human genome, which every 15 months has doubled
the amount of information pouring from biomedical research. A single
proteomics lab can generate 12 terabytes of data — roughly the equivalent of
all the information stored in the Library of Congress — in a single year.

Multiply all this by the torrents about to be unleashed by Dr. Hood’s
Nanolab, and we may be in danger of losing our already tenuous
intellectual grasp on the scope and scale of things to come.
*A teraflop is a trillion calculations (strictly, floating-point mathematical
operations) per second. The world’s largest supercomputer to date operates at
35.9 teraflops. The average PC weighs in at a mere 0.0001 teraflops.
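The footnote’s figures can be sanity-checked with a little arithmetic: at 0.0001 teraflops per average PC, matching the 35.9-teraflop supercomputer takes several hundred thousand machines, which is why volunteer projects recruit millions of hosts. A minimal back-of-envelope sketch, using only the numbers quoted above:

```python
# Back-of-envelope check of the teraflop figures quoted in the text.
TERAFLOP = 1e12  # one teraflop = a trillion floating-point operations/second

supercomputer = 35.9    # teraflops: world's largest supercomputer (2003)
average_pc = 0.0001     # teraflops: the average PC, per the footnote

# How many average PCs would it take to equal the supercomputer?
pcs_needed = supercomputer / average_pc
print(f"{pcs_needed:,.0f} average PCs")  # → 359,000 average PCs

# SETI@home's 4 million hosts deliver 56.2 teraflops on average — well
# below 4 million x 0.0001 teraflops, because volunteer hosts contribute
# only their idle cycles, not sustained peak throughput.
per_host = 56.2 / 4_000_000 * TERAFLOP
print(f"{per_host:,.0f} flops per host on average")
```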

References: Everett, Jenny (2003). “Headlines: This summer new software will
help researchers tap the unlimited power of idle PCs.” Popular Science, June;
Frauenheim, Ed (2003). “Computers replace petri dishes in biological labs.”
CNET News.com, June 2.

Adaptive Computing

Most of today’s electronic gadgets require several different chips, usually
semi-integrated in the form of a “chipset,” to perform different functions. A
cell phone, for example, needs one chip to search for a local cell site, one to
verify that the caller is an authorized network user, and one to put the call
through. To add new or more efficient functions to the device means designing
new chips and chipsets. All the major chip manufacturers and not a few small
ones are now working to make a single chip whose circuitry can be reconfigured,
under software control, on the fly to perform any given function. The result
will be more flexible, functional, and longer-lived devices. Given such an
“adaptive computing” chip, adding or updating new functions could be
accomplished by automated wireless Internet download.
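The idea can be caricatured in software (a conceptual analogy only — real adaptive chips reconfigure hardware logic, and the phone functions below are invented for illustration): one unit whose behavior is swapped out on the fly, rather than several fixed-function chips.

```python
# Conceptual analogy for adaptive computing: a single reconfigurable
# unit whose function is replaced under software control, instead of
# one fixed-function chip per task.

class ReconfigurableChip:
    def __init__(self):
        self._logic = None

    def load(self, configuration):
        """Reconfigure the 'circuitry' — analogous to loading an FPGA bitstream."""
        self._logic = configuration

    def run(self, *inputs):
        return self._logic(*inputs)

chip = ReconfigurableChip()

# First, act as the cell-search chip: pick the strongest local cell site.
chip.load(lambda signal_strengths: max(signal_strengths))
print(chip.run([0.2, 0.9, 0.5]))  # → 0.9

# Later, a wireless download reconfigures the same "silicon" to do
# user authorization instead of cell search.
chip.load(lambda user, allowed: user in allowed)
print(chip.run("alice", {"alice", "bob"}))  # → True
```

The payoff named in the text — longer-lived devices — follows because new behavior arrives as data (a configuration), not as new hardware.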

Reconfigurable chips have existed for 17 years in the form of
“field-programmable gate arrays” (FPGAs), used mainly by chip designers to
design and test circuits for conventional chips before going to the expense of
etching masks. The difference between FPGAs and adaptive computing chips is
that the latter would be much easier to program. Despite skepticism and some
serious difficulties to be overcome, the measure of the likelihood of their
arrival can be discerned in a flurry of activity and acquisitions by major
players: Infineon, Philips, and Motorola have all bought or invested in
smaller companies working on reconfigurable circuits.

When it happens, adaptive computing will be the biggest shift in chips since
the invention of the microprocessor in 1971, thinks New York Times
correspondent John Markoff. Arguably, that is assuming photonic (optical)
chips don’t arrive first (see next article).

Reference: Markoff, John (2003). “Computing’s Big Shift: Flexibility in the
Chips.” New York Times, June 16.

Photonic Computers

An advance hailed as “significant” in the emerging field of integrated
optical microchips could result in high-speed photonic computers and smaller
communications devices. Optical chips are made from crystals with gaps that can
be engineered to channel specific wavelengths (colors) of light. Japanese
researchers have now built a multiplexer, a device that sends several signals
of differing wavelengths simultaneously over the same circuit, 250,000 times
smaller than a conventional one. The result is a massive increase in bandwidth,
and therefore in the amount of data an optical chip can process in a given
time. Practical devices could result from this development in two to five
years.
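A multiplexer of this kind can be modeled crudely in a few lines — a toy model of wavelength-division multiplexing, not of the Japanese device itself; the wavelength values are illustrative telecom bands:

```python
# Toy model of wavelength-division multiplexing: several data streams
# share one channel, each keyed to its own wavelength (color) of light.

def multiplex(streams):
    """Combine per-wavelength bit streams into one shared channel."""
    channel = []
    for wavelength, bits in streams.items():
        channel.extend((wavelength, bit) for bit in bits)
    return channel

def demultiplex(channel, wavelength):
    """Recover one stream by filtering the channel on its wavelength."""
    return [bit for wl, bit in channel if wl == wavelength]

# Two streams on different wavelengths (nanometers, illustrative values).
streams = {1550: [1, 0, 1], 1310: [0, 1, 1]}
channel = multiplex(streams)
print(demultiplex(channel, 1550))  # → [1, 0, 1]
```

Each added wavelength is another independent stream over the same circuit, which is why shrinking the multiplexer translates directly into more bandwidth per chip.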

Reference: Unknown (2003). “Chip Channels Multiple Colors.” Technology
Research News, June 11 (citing the June 6, 2003 issue of Science).

Automatic Language Translation and Babelfish

Globalization has created a big and growing demand for computerized automatic
language translation (ALT). Accuracy rates of current systems range between 70
and 80 percent, enough to significantly reduce overall translation time, and
with constantly refined techniques and ever more powerful computers, things
can only improve.

There are three main approaches to ALT. First is “knowledge-based machine
translation,” which relies on human programmers to codify and build rules for
all possible relationships between parts of speech, and then searches the
rules to find matching words and relationships between the two languages. An
example of this approach is IBM’s WebSphere Translation Server, which
translates e-mail, websites, and real-time chats. The second approach,
“example-based systems,” relies on brute-force searches through documents
already accurately translated into multiple languages. The third approach
refines example-based systems by applying statistical techniques to compare
previously translated text. Microsoft used this method to translate a 60
million word knowledge base from English to Spanish.
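A minimal sketch of the example-based idea — illustrative only: the sentence pairs below are invented, and real systems like those named above search millions of aligned documents. Translation reduces to finding the closest previously translated example:

```python
from difflib import SequenceMatcher

# Toy "translation memory": sentence pairs already translated by humans.
# (Hypothetical entries; a real system stores millions of aligned pairs.)
memory = {
    "good morning": "buenos días",
    "how are you": "cómo estás",
    "thank you very much": "muchas gracias",
}

def translate(sentence: str) -> str:
    """Return the stored translation of the closest matching example."""
    best = max(memory, key=lambda ex: SequenceMatcher(None, sentence, ex).ratio())
    return memory[best]

print(translate("good morning"))       # exact match → "buenos días"
print(translate("thank you so much"))  # nearest example → "muchas gracias"
```

The statistical refinement the text mentions replaces this crude string similarity with probabilities learned from the aligned corpus, so the system can recombine fragments of many examples rather than echo one.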

ALT will become a very significant force in the trend to globalization of
healthcare, as consumers and researchers around the globe seek
healthcare-related documents from all sources to aid in their care or research.

Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is funding a
crash project to create what appears to be a demonstration or proof-of-concept
program that will translate Hindi documents into English. Researchers at
multiple universities will pool data from “dictionaries, religious texts, news
sources and native speakers,” writes Wired’s Katie Dean, then build
statistical models to do the translation.

References: Walter, Chip (2003). “The Translation Challenge.” Technology
Review, June; Dean, Katie (2003). “Pick a Language, Any Language.” Wired
News, June 7.

 
