Accelerating Innovation = Accelerating Obsolescence
“Why is it that the medical device industry is growing at an estimated seven
percent a year, while some medical device companies are struggling?” asks David
McNaughton of the Atlanta Journal-Constitution. Part of the answer is
that it is no longer enough to make a good product and relax while the money
flows in. New products offering better clinical outcomes, or at least equal
outcomes at lower cost, must constantly be in the pipeline.
The struggling device manufacturers — those that have not heeded this
message — are suffering a problem endemic to healthcare: a failure of
predictive vision in an era of accelerating technological innovation. Joe
Flower, writing in Hospitals & Health Networks, says: “[W]e lack any
serious capacity to scan for future events beyond what is coming onto the market
in the next 12 months. We must develop that capacity, or suffer chaos.”
References: McNaughton, David (2003). “The
downside of innovation: Georgia’s makers of medical devices try to find cures
for ailing balance sheets.” Atlanta Journal-Constitution, November 5;
Flower, Joe (2003). “Defeating
Major Diseases–Part 3.” Hospitals & Health Networks, November 18.
In the Beginning Was the Word . . .
. . . and the word was twice 5,386 letters long. It was the genetic
code of the phiX virus, which infects bacteria and was the first organism ever
to have its DNA decoded, in 1977. And on the 14th day of experimentation in the
year 2003, Dr. Craig Venter (the scientist who led the private-sector effort to
decode the human genome in the late 1990s and early 2000s) created phiX.
Though not the first artificial virus (that distinction belongs to polio), it
was the first to be produced so quickly and without genetic errors — that is,
to be an exact emulation of the real thing.* And making it was just the first
step towards making artificial micro-organisms for “many vital energy and
environmental purposes,” said Venter, and conceivably even more complex
organisms. Early applications under study are artificial bacteria to produce
hydrogen for fuel cells or absorb carbon dioxide to mitigate the effects of
global warming. US Energy Secretary Spencer Abraham imagines “in the
not-too-distant future, a colony of specially designed microbes living within
the emission-control system of a coal-fired plant, consuming its pollution and
its carbon dioxide, or employing microbes to radically reduce water pollution or
to reduce the toxic effects of radioactive waste.”
* An article in this month’s Therapeutics
section concerns a synthetic version of Ebola, now being tested. However, it is
missing the disease-causing components and is a simulation, not an
emulation.
Reference: Unknown (2003). “Scientists
use DNA to make virus.” BBC News, November 13.
Systems Biology
Systems biology embodies both reductionist and holistic biological
aspirations: “to connect the dots of all of the body’s RNA, DNA,
genes, proteins, cells, and tissues, elucidating how they interact with each
other to create a breathing, blood-pumping, disease-fighting, food-processing,
problem-solving human,” writes Jon Cohen in Technology Review. It marks a
radical shift for biology made possible by the “explosion” of computer-assisted
molecular analysis, a “torrent” of new information from the Human Genome and
related Projects, and “ever increasing computational muscle.”
The roots of the formal discipline go back only as far as 1992, when Leroy
Hood, developer of the automated gene sequencer, started an interdisciplinary
molecular biotechnology program at the University of Washington. He and others
subsequently (in 1999) founded the independent Institute for Systems Biology,
which seriously believes it can “move medicine toward an era in which our life
spans increase by 10 to 30 years” — a claim Cohen thinks “may be more than just
a daydream.”
And that’s just one systems biology institution. Cohen describes several
others, and it is clear the field is blossoming, suggesting not only that the
discipline has academic credibility and traction, but also that its fruits will
be rapid, exotic, and abundant. The explosion will grow combinatorial as
the institutions begin to collaborate — as some have already begun to do
through the NanoSystems Biology Alliance (between the Systems Biology Institute,
nanotechnologists at Caltech, and medical researchers at UCLA), which is
developing the “nanolab,” a
chip with “the power to outperform entire laboratories.”
Systems biology comes at considerable computational cost. The Institute has
its own supercomputing center and 300+ terabytes of data storage capacity. Such
horsepower is needed to reduce the complexity of data by converting it to visual
diagrams simple enough for the human mind to grasp, such as when predicting a medicine’s
side effects on the human system: if a drug interferes with a specific protein,
researchers can see its effect on a connected pathway that controls critical
functions such as respiration or sugar metabolism, a capability that would take
“15 years and billions of dollars . . . in the terms of standard drug
development,” one of the Institute’s chief scientists told Cohen.
Now that’s acceleration.
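The kind of question such diagrams answer can be sketched in miniature. In the toy model below (every protein and pathway name is an invented placeholder, nothing like the Institute’s real models), a pathway is a directed graph of interactions, and a breadth-first search from an inhibited protein lists every downstream function the drug could perturb:

```python
from collections import deque

# Toy pathway (names invented): edges point from each protein to what
# it regulates. A real systems-biology model has tens of thousands of
# nodes and quantitative dynamics; this only shows reachability.
pathway = {
    "drug_target": ["kinase_A"],
    "kinase_A": ["enzyme_B", "receptor_C"],
    "enzyme_B": ["sugar_metabolism"],
    "receptor_C": ["respiration"],
}

def downstream(graph, start):
    """Return every node reachable from `start` (potentially affected)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Inhibit the drug's target and see which critical functions lie downstream.
print(sorted(downstream(pathway, "drug_target")))
```

The sketch only shows the shape of the computation: interfering with one node propagates to functions like respiration or sugar metabolism several hops away, which is precisely what the Institute’s visual diagrams let researchers see at scale.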
Reference: Cohen, Jon (2003). “Big-Picture
Biotech: Systems biology aims to provide a clearer picture of how diseases
work-and how to prevent them.” Technology Review, December 2003/January
2004.
RNAi Gets Noticed
The rest of the world seems to be waking up to the astonishing potential of
RNAi (RNA interference) as a blockbuster biotech, which HFD readers knew about
at least as early as April (see “The
Next Big Thing” in the April issue). RNA delivers blueprints from DNA to
proteins, the machines that actually make the insulin, red blood cells, or
whatever else the DNA instructs — including defective components, ordered by
defective DNA, that result in disease. But RNA can also interfere with the
process, effectively preventing defective DNA in specific genes from causing
disease, by almost literally throwing a custom-tailored spanner in the protein
machine, and genetic engineers know how to create such custom-tailored RNAi. The
main problem now being addressed is how to keep engineered RNAi, which is very
frail, from deteriorating before it can complete its therapeutic mission.
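The spanner-in-the-machine idea can be sketched as string matching. In this toy model (sequences invented for illustration; real RNAi involves cellular machinery far beyond this), an siRNA guide strand is the reverse complement of its target site, and any transcript carrying that site is silenced before it reaches the protein machine:

```python
# Toy RNAi model -- all sequences are invented placeholders.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_for(target: str) -> str:
    """An siRNA guide strand: the reverse complement of its target site."""
    return "".join(COMPLEMENT[b] for b in reversed(target))

def surviving(transcripts, guide):
    """Transcripts that escape silencing (they lack the guide's site)."""
    site = sirna_for(guide)  # reverse-complementing twice recovers the site
    return [t for t in transcripts if site not in t]

defective = "AUGGCCUAAGGC"   # invented 'bad' transcript from defective DNA
healthy   = "AUGCCGGAUACG"   # invented 'good' transcript
guide = sirna_for("GCCUAA")  # tailor the spanner to the defective stretch

print(surviving([defective, healthy], guide))  # only the healthy one remains
```

The deliberately simplified point: a guide tailored to a stretch unique to the defective transcript knocks that transcript out while leaving the healthy one untouched, which is why the engineering challenge is delivery and stability rather than specificity.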
Investors, the public, and even companies involved in RNAi research have
perhaps been burned too often by premature publicity and hype to trumpet RNAi
just yet. Even so, reports Wired’s Kristen Philipkoski, “scientists
and venture capitalists are comparing RNA interference, or RNAi, to the
recombinant DNA revolution that launched the entire biotechnology industry in
1976.” Fortune, she notes, has dubbed RNAi biotech’s next “billion dollar
breakthrough,” and Science named it the “breakthrough of the year” for 2002. And
while savvy investors and scientists are saying it will be a decade before we
know if the technology works, some are quietly backing RNAi startups.
Reference: Philipkoski, Kristen (2003). “Next Big Thing in
Biotech: RNAi.” Wired News, November 20.
Pharmacogenomics Speed Bump?
The US Food and Drug Administration has issued draft guidelines concerning
the use of genetic data in new drug applications. Though it may be years before
it becomes final, the introduction of such a draft is evidence that
pharmacogenomics — the application of genomic data to the creation of drugs to
fit the precise genome of the individual, popularly known as “personalized
medicine” — is going mainstream.
The Wall Street Journal’s Anna Wilde Matthews describes
pharmacogenomic tests that already exist for the breast cancer drug
Herceptin and attention deficit disorder drug Strattera, and notes
there are plans for a genetic test to determine the best treatment for irregular
heartbeat and an acute myelogenous leukemia drug under development that would
target patients with a specific genetic mutation.
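At its simplest, such a test is a lookup from a patient’s genetic marker to the therapy likeliest to work. The sketch below is a hypothetical illustration in that spirit (HER2 is the marker the Herceptin test actually reads, but the therapy labels here are generic placeholders, not clinical guidance):

```python
# Hypothetical pharmacogenomic 'test': map a genetic marker to a therapy.
# Marker granularity and therapy names are illustrative placeholders.
GUIDANCE = {
    "HER2_overexpressed": "targeted antibody",
    "HER2_normal": "standard chemotherapy",
}

def recommend(marker: str) -> str:
    """Return the genotype-matched therapy, or a fallback if none exists."""
    return GUIDANCE.get(marker, "no genotype-specific guidance")

print(recommend("HER2_overexpressed"))  # targeted antibody
print(recommend("HER2_normal"))         # standard chemotherapy
```

The business implication follows directly from the shape of the table: each row of the lookup shrinks the market for any one drug, which is exactly what worries the blockbuster-model companies discussed below.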
Matthew Herper suggests in Forbes that drug companies that don’t have
big diagnostic testing businesses are concerned that pharmacogenomics will upset
the prevalent “blockbuster” business model that relies on one-size-fits-all
drugs. When one size fits only a few, the blockbuster era is over. However, a
company that makes the “gene chips” used in many pharmacogenomic drug tests
notes that pharmacogenomics will enable drug makers to dust off and market drugs
which — under “one drug must fit all” — have had to be shelved, along with
millions of dollars of sunk development costs.
Reference: Herper, Matthew (2003). “Gene Tests?
Not So Fast.” Forbes, November 4.
Reference: Matthews, Anna Wilde (2003). “FDA to
Issue Rules on New Era Of ‘Personalized Medicine.’” Wall Street Journal,
November 3.
Understanding and Enhancing the Human Mind
Once shunned as feebleminded, savants are now acknowledged to possess genius
in certain narrow domains: numerical and calendar calculation, artistic and
musical proficiency, mechanical aptitude, and feats of memorization. Their
secrets, and those of people afflicted with other abnormal mental syndromes and
diseases such as autism and frontotemporal dementia (FTD), are succumbing to the
modern microscope of genomics and PET scans, and in the process are shedding
light on the workings of everyone’s brain. Steve Silberman, writing at length
and with fascinating illustrative anecdotes in Wired, says
neuroscientists are discovering that as a result, many of their assumptions
about intelligence are wrong and that “the brain is much more capable of healing
itself than we thought.”
The research suggests our brains usually have multiple memory functions and
locations. One theorist suggests that in savants, associative memory systems
located in the higher regions of the cortex (the most recent evolutionary
development) fail, and that the brain tries to compensate by tuning up the basal
ganglia, seat of “habit memory . . . the memory that never forgets how to ride a
bike,” writes Silberman. It’s not just that savants remember everything; it’s
that they are unable to forget anything. But the most provocative part of this
theory holds that savants don’t have to learn the algorithms necessary to
reproduce a complex piece of music or calculate pi. “The software comes
[genetically] preinstalled” in everyone, but is activated only in savants. If
so, “we may all carry a savant inside us waiting to be born.” Along similar
lines, another researcher hypothesizes that dementia does not create artistic
powers in FTD patients; rather, it activates suppressed ones by switching off
inhibitory signals from the left temporal lobes to the right hemisphere, where
the suppressed talents are then expressed.
Tuning up ganglia and switching off inhibitory signals are examples of
neuroplasticity — the brain’s ability to reconfigure itself to compensate for
damage. But PET scans suggest that some people may be able to purposefully, if
subconsciously, reconfigure their brains to perform spectacular feats (of math
calculation, for example) in the same way that “computers extend the capacity of
RAM by using swap space on the hard drive to create a larger ‘virtual memory.'”
According to Silberman, “neuroscientists are discovering that the processing
centers in our heads swap resources all the time,” and are now starting to ask
“whether there might be a way for the average person to switch on these hidden
skills.” A researcher in Australia has garnered considerable press, but little
peer support, by claiming to have demonstrated that transcranial magnetic
stimulation (TMS) is one way to switch them on.
Others believe there is a difference between computation and creativity, and
that memory (data) and a set of algorithms do not an artist make. Creativity,
they say, is also “the ability to imagine the world as it is not — to make
conceptual leaps and refine the raw data of experience into abstract ideas,
meaning, and insight. That is precisely the function of the most recently
evolved areas of the brain — the regions of the prefrontal cortex that are
damaged or impaired in many savants.” Silberman comments: “The mark of real
genius is that it leaves its own domain permanently changed. After a century of
dissecting the cortex into smaller and smaller bits, we’re learning that the
highest functions of the mind — creativity and imagination — are emergent
properties of a brain working as a whole.”
We would comment that artificial intelligence software has produced original
music in the style of Mozart, and abstract paintings that sell for thousands of
dollars. While it is true that they have not permanently changed the domains of
music and art (yet), it seems that systems biology and AI are converging on an
understanding of intelligence in its fullest sense, which may one day lead to
truly intelligent machines and maybe even truly intelligent people. This notion
is of course contentious, but it is no longer conjecture; and contention of all
sorts — scientific, ethical, social — is likely to increase as our
understanding of brain and mind accelerates. It seems to us the real message in
Silberman’s article lies not in the particular theories mentioned, but in the
fact that such theories, born of science as opposed to science fiction, are
emerging at all, and at an accelerating pace.
Reference: Silberman, Steve (2003). “The Key to
Genius.” Wired 11.12, December.
Toward the Universal Assembler
The Holy Grail of nanotechnology is the Universal Assembler (UA) — a
misleading name, since it implies a machine that accepts an external physical
input and assembles it into an external physical output. In fact, it refers to a
blob of nanoparticles that receives information input and can configure itself
into the desired physical output. In other words, it is a material object that
can assemble (and ultimately re-assemble) itself on command, rather as cartoon
characters do in movies. For example, you might turn your UA pickup truck into a
sports car and back again, as the need arises or the fancy takes you.
Even we are prepared to admit that that scenario is quite some way
off, but it takes little imagination to see that even at a very rudimentary
level of development, the UA would be fantastically useful — if it is possible.
Computer simulations at the University of Michigan show that indeed it is
possible. They show that structures such as sheets, wires, tubes, spheres, rods,
plates, and prisms can be made to spontaneously self-assemble from nanoparticles
with “long, tether-like” molecules attached, by causing the tethers and
nanoparticles to part.
It is not the first UA method to be devised, but it offers a wider variety of
self-assembled structures than previous methods. Two to five years from now,
it could be in practical use in electronic and photonic device manufacturing.
The UM researchers are already at work on a prototype.
Reference: Unknown (2003). “Model Leverages
Nano Tethers.” Technology Research News, November 17.
Thinking Outside the (FedEx) Box
Shipping giant FedEx has opened its own Institute of Technology to conduct
research not just in transportation, but also in the life sciences and other
subjects not usually, or at least not directly, associated with shipping. The
Institute is collaborating with the U.S. military, universities, medical
centers, and other corporations in researching and developing a broad array of
technologies.
For instance, computer engineers and psychologists are developing an
artificially intelligent “conscious software agent,” presumably to replace or
augment customer service agents; and AI routines for the cockpits of FedEx’s
large fleet of cargo planes, suggesting to Wired‘s Kristen Philipkoski
that FedEx pilots “might be in trouble, too.”
Other researchers at the Institute are developing frictionless “biosurfaces”
to reduce airplane drag coefficient — technology that also has applications for
missiles and human joint replacements. They are studying ways to improve upon
RFID tags, and to lessen the environmental impact of delivery vehicles by
finding a way to keep vehicles cool or warm (as the weather dictates) while
drivers leave their cabs (often leaving the motor running, at a high cost in
diesel consumption and pollutant emissions) to make a pickup or delivery. They
are also conducting research into groundwater decontamination, “advanced
educational approaches,” genomics, and cybersecurity.
One is bound to admire the vision of FedEx’s leadership, which clearly
recognizes that to win in the future, it is smart to help create it, and not be
merely a passive recipient of what others dole out. The only thing lacking, at
least as can be discerned from the Wired story, is any sign of concern
over what happens to the employees put out of work when the “conscious software
agents” and AI pilots take over. In the apparent absence of concern by political
leaders, it would be an act of beneficent corporate citizenship to factor the
unemployment issue into technology programs designed to cut labor costs.
Reference: Philipkoski, Kristen (2003). “FedEx Delivers
New Tech Lab.” Wired News, November 19.
Screeching Halt?
Moore’s Law (which says roughly that the number of transistors on a computer
chip doubles every couple of years) will cease to hold about 20 years from
now, according to an Intel research paper. The limit will be reached soon after
2018, when the width of circuits etched in silicon is anticipated to be reduced
to 16 nanometers. Going any smaller, they say, is simply not possible.
(For reference, current chip production is just starting to use a 90-nanometer
process.)
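The arithmetic behind that date is easy to sketch. Assuming (our back-of-envelope figure, not one from the Intel paper) that each process node scales linear features by roughly 1/√2, which doubles transistor density per Moore’s Law, getting from 90 to 16 nanometers takes about five node shrinks:

```python
import math

# Back-of-envelope sketch (our assumption, not Intel's model): each
# process node shrinks linear feature size by ~1/sqrt(2), doubling
# transistor density -- the usual reading of Moore's Law.
start_nm, limit_nm = 90.0, 16.0

shrinks = math.log(start_nm / limit_nm) / math.log(math.sqrt(2))
print(f"node shrinks from 90 nm to 16 nm: {shrinks:.1f}")  # ~5.0

# At a cadence of two to three years per node, five shrinks span
# roughly 10 to 15 years from 2003 -- the 2013-2018 neighborhood.
for years_per_node in (2, 3):
    print(2003 + round(shrinks) * years_per_node)
```

With the slower three-year cadence, the arithmetic lands on 2018, consistent with the paper’s prediction that the wall arrives soon after that year.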
Not all researchers agree that 16 nanometers will mean the end of Moore’s
Law; but for now, the physics is with Intel. In 16-nanometer circuits,
transistor gates will be only about five nanometers long, at which dimension
electrons will simply tunnel their way through the gate, making it useless.
Tunneling, they say, will occur regardless of the chemistry of the transistor
materials, so it is pointless to pin hopes on changing the materials and
structures inside semiconductors (although that has to be done in order to get
down to the 16-nanometer point). Even if the tunneling problem were solved,
increasing energy consumption and heat dissipation would remain as major
barriers. And even if both the energy/heat and tunneling problems could be
solved, when the gate reaches less than 1.5 nanometers in length, electrons
could apparently not be extracted from their “wells.”
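Why shrinking gates makes tunneling explode can be seen from the standard rectangular-barrier estimate, T ≈ exp(−2d·√(2mE)/ħ), which falls off exponentially with barrier width d. The sketch below uses an assumed 0.1 eV barrier height purely as a placeholder (real device barriers depend on geometry and materials); the point is the orders-of-magnitude jump in leakage as d shrinks from 5 toward 1.5 nanometers:

```python
import math

# Rectangular-barrier tunneling estimate: T ~ exp(-2*kappa*d), with
# kappa = sqrt(2*m*E)/hbar. The 0.1 eV barrier height is an assumed
# placeholder to show the exponential sensitivity, not a device spec.
HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E  = 9.11e-31    # electron mass, kg
EV   = 1.602e-19   # joules per electron-volt

def tunneling(d_nm: float, barrier_ev: float = 0.1) -> float:
    """Transmission probability through a barrier of width d_nm."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * d_nm * 1e-9)

for d in (5.0, 1.5):
    print(f"{d} nm barrier -> T ~ {tunneling(d):.1e}")
```

Under these toy numbers, shrinking the barrier from 5 to 1.5 nanometers raises the tunneling probability by several orders of magnitude, which is why no choice of material chemistry rescues the geometry.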
Approaches that in theory would sidestep the size limit and enable faster
computing include: recycling electrons, instead of using them once and then
discarding them, as is the case in all electronics today;* and making
transistors out of carbon nanotubes or silicon nanowires. An alternative is
simply to accept the limit on size and go the other way — build bigger chips,
or layer chips together in three-dimensional structures.
* See “Reversible Computing” in the
Computing section of this issue for an exception to this
statement.
Reference: Kanellos, Michael (2003). “Intel scientists find wall
for Moore’s Law.” CNET News.com, December 1. |