Computing and Communications

On March 13, 2009, in Computing

The announcement just a few days ago of a gene therapy that, in animal trials, reversed the effects of Alzheimer’s supports our mantra that healthcare innovation is accelerating. That mantra is premised on continuing acceleration in the computing and communication power on which so much of modern healthcare relies.

 

Here, in no particular order, are some recent advances demonstrating that the acceleration continues:

 

  • Fiber Speeds Over Wireless: Twenty gigabits per second (20 Gbps) have been demonstrated using an unlicensed frequency range capable of carrying much more data than Wi-Fi and cellular networks. That is twenty times faster than the 1 Gbps fiber networks deployed today in many large organizations, and twice as fast as the current state of the fiber network art, which is 10 Gbps. One reason organizations have not flocked to 10 Gbps, besides cost, is that they simply don’t yet need more bandwidth. But as sure as eggs is eggs, they will.
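
    To put those speeds in perspective, here is a minimal Python sketch comparing transfer times. The link speeds are the ones quoted above; the 50-gigabyte payload is an arbitrary illustration of our choosing, not a figure from the announcement.

      # Rough transfer-time comparison for the link speeds quoted above.
      GIGABIT = 1e9  # bits per second

      links = {
          "1 Gbps fiber (widely deployed)": 1 * GIGABIT,
          "10 Gbps fiber (state of the art)": 10 * GIGABIT,
          "20 Gbps wireless (demonstrated)": 20 * GIGABIT,
      }

      payload_bits = 50 * 8e9  # a 50-gigabyte file, expressed in bits

      for name, bits_per_second in links.items():
          print(f"{name}: {payload_bits / bits_per_second:.0f} seconds")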

 

  • 20 Petaflops in 2012: IBM is to build the next world champion supercomputer, called Sequoia, for the Lawrence Livermore National Laboratory. When completed in 2012, it will be capable of 20 petaflops (20,000 trillion floating-point operations per second).

 

  • Microsoft in the Clouds: Microsoft is moving away from bloated Windows and towards where its rivals have been flying for some time: Cloud computing.

    A cloud is simply any number of PCs (potentially hundreds of thousands or more) networked so that they act as a single gigantic server. A cloud can serve applications (office software, for instance) to more people simultaneously, and with less chance of failure, than any single server could.

    Amazon.com and Google’s many web applications are already served up by clouds rumored to contain more than 100,000 nodes running on the open-source Linux operating system.
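
    As a toy illustration of the idea, the Python sketch below models a cloud front end that spreads requests across whichever nodes are still healthy. It is our own simplification, not Amazon’s or Google’s actual architecture.

      import random

      # Toy model of a cloud: many interchangeable nodes behind one entry
      # point behave like a single gigantic, failure-tolerant server.
      class Node:
          def __init__(self, name):
              self.name = name
              self.healthy = True

          def serve(self, request):
              return f"{self.name} handled {request!r}"

      class Cloud:
          def __init__(self, nodes):
              self.nodes = nodes

          def serve(self, request):
              # Skip failed nodes; service survives while any node is up.
              live = [n for n in self.nodes if n.healthy]
              return random.choice(live).serve(request)

      cloud = Cloud([Node(f"node-{i}") for i in range(100_000)])
      cloud.nodes[0].healthy = False  # one failure goes unnoticed by users
      print(cloud.serve("GET /spreadsheet"))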

 

  • Spintronics: Recent success in directly measuring the spin of electrons in a material that exhibits the quantum spin Hall effect means that the spin can be manipulated, which in turn marks another advance toward fault-tolerant quantum computing. In such a material, information can flow totally without loss and without external stimulus, even if there are slight irregularities in the material. No wonder the discovery of the quantum spin Hall effect in 2007 was hailed by the journal Science as one of the ten most important scientific breakthroughs of that year.

 

 

  • 80-processor PC: Meanwhile, back in the real world, Dell hinted late last year that an 80-core, 1.01-teraflop Intel processor is in its plans for upcoming machines. Two years ago, Intel promised to deliver such a chip within five years.

 

  • Self-Assembling Circuits: A simple way to make high-performance electronic circuits from self-assembling organic semiconductor molecules has brought us a step closer to low-cost, bendable plastic electronics. The technique has been demonstrated to produce hundreds of transistors and arrange them into complex circuits.

    Organic semiconductors are cheaper and more flexible than silicon. They will lead to large, cheap, bendable displays and a range of other inexpensive devices, such as better RFID tags.

 

The acceleration in computing and communication speed makes possible functions and features that would be difficult to achieve with less power or at slower speeds. Recent examples of accelerating functionality include:

 

  • Holovideo: A new polymer-based, dye-infused material could, when perfected, lead to life-sized holographic video. High-resolution still holographic images can be made today, but cannot be refreshed fast enough for video. The new material can be manufactured at large sizes using existing techniques, and produces very bright images that can be preserved for hours.

 

  • Intelligent Assistants: Intelligent robotic assistants are about to become more real, although in software rather than in hardware form. “Siri,” a virtual personal assistant that helps users complete tasks rather than just collect information, is expected to be released this year. Siri was developed from a US military-funded artificial-intelligence project called CALO, for “cognitive assistant that learns and organizes.”

    The initial version will work on cellphones and have limited functions, such as helping users make restaurant reservations, check flight status, or plan weekend activities. Users type or speak commands in plain English, and Siri infers their intent and then queries multiple online services, a pattern sketched in code below. MIT Technology Review gives as an example finding and reserving a table at a midpriced Chinese restaurant in a specific part of town.

    The functional limitations will disappear as cellphone processing power increases and as network bandwidth grows, enabling some of the processing to be done on powerful computers at Siri’s headquarters.

    Siri’s cofounder and CEO believes that “in five years, everyone’s going to have a virtual assistant to which they delegate a lot of the menial tasks.”
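
    Here is a crude Python sketch of that pattern: infer the user’s intent from a plain-English command, then fan the request out to online services. The keyword rules and stub services are invented for illustration; Siri’s actual machinery is proprietary.

      # Invented stand-ins for the online services Siri would query.
      def find_restaurants(query):
          return ["Golden Dragon (midpriced, downtown)"]

      def book_table(restaurant):
          return f"Table reserved at {restaurant}."

      def infer_intent(command):
          text = command.lower()
          if "restaurant" in text or "table" in text:
              return "reserve_restaurant"
          if "flight" in text:
              return "check_flight"
          return "unknown"

      def assistant(command):
          intent = infer_intent(command)
          if intent == "reserve_restaurant":
              options = find_restaurants(command)  # service 1: search
              return book_table(options[0])        # service 2: reservation
          return "Sorry, I can't help with that yet."

      print(assistant("Find a table at a midpriced Chinese restaurant downtown"))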

 

  • New Model of AI: However, AI is not as advanced as it ought to be, and the reason may be that the brain theory on which much of it is based is wrong. The classic “connectionist” theory of the brain holds that the brain has no separate controllers for higher level brain functions and that control is distributed among all the parts. A new “controller” theory holds the opposing view and extends the “computational” theory of a rules-based, top-down processing hierarchy in the brain.

    The new theory’s developer, Dr. Arun Roy, notes that “Virtually all current approaches to machine learning typically require a human supervisor to design the learning architecture, select the training examples, design the form of the representation of the training examples, choose the learning algorithm, set the learning parameters, decide when to stop learning, and choose the way in which the performance of the learning algorithm is evaluated. This strong dependence on human supervision is greatly retarding the development and ubiquitous deployment of autonomous artificial learning systems. Although we are beginning to understand some of the learning systems used by brains, many aspects of autonomous learning have not yet been identified.”

    Dr. Roy hopes “to show … that it is feasible to construct machines that can learn on their own like humans.”
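
    Dr. Roy’s list of human decisions maps directly onto an ordinary supervised-learning script. The generic perceptron below (our own example, not Dr. Roy’s work) simply flags each decision in comments.

      # Human decision 1: the training examples and their representation.
      data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR

      # Human decision 2: the learning architecture (one linear unit).
      weights = [0.0, 0.0]
      bias = 0.0

      # Human decisions 3 and 4: the learning algorithm (perceptron rule)
      # and its parameters, including when to stop.
      learning_rate = 0.1
      epochs = 20

      def predict(inputs):
          return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

      for _ in range(epochs):
          for inputs, target in data:
              error = target - predict(inputs)
              weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
              bias += learning_rate * error

      # Human decision 5: how performance is evaluated (exact accuracy here).
      correct = sum(predict(inputs) == target for inputs, target in data)
      print(f"accuracy: {correct}/{len(data)}")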

 

  • Semantic Web: Such machines will no doubt benefit from a semantic map under development for 30 years and now on the market. It enables software developers to have their programs “understand” almost every English word in the dictionary, based on tenses and sentence context. It will greatly facilitate natural language searches on the Internet and is already used in LexisNexis Concordance “e-discovery” software to sift through documents amassed during evidence phases of trials. On the developer’s website, www.cognition.com, one can use the semantic map to search MedLine.

    This takes us a step towards Sir Tim Berners-Lee’s “Semantic Web,” or Web 3.0—a Web that knows what it knows. Microsoft (again, belatedly and typically) has woken up to the possibilities and used its brute force to buy its way in (to the tune of a rumored US$100 million) with the acquisition last year of Powerset to enhance its laggard Live Search Internet search service. (To be fair, we are avid users of Microsoft’s Live Search Maps – a Google Earth-like application, still in beta, that offers pretty darn good 3D navigation of major cities.)
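
    To convey the flavor of semantic search, the sketch below matches a query to documents through shared word senses rather than literal strings. The tiny sense map is invented for illustration; Cognition’s semantic map covers nearly the whole dictionary and resolves senses from tense and context.

      # Toy word-sense map; real semantic maps are vastly larger and
      # disambiguate senses from sentence context.
      SENSES = {
          "physician": "doctor.n.medical",
          "doctor": "doctor.n.medical",
          "attorney": "lawyer.n.legal",
          "lawyer": "lawyer.n.legal",
      }

      def senses(text):
          return {SENSES.get(word, word) for word in text.lower().split()}

      def semantic_match(query, document):
          # Match on shared senses rather than on literal words.
          return bool(senses(query) & senses(document))

      print(semantic_match("find a physician", "our doctor is available"))  # True
      print("physician" in "our doctor is available")                       # False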

 

  • Augmented Reality for Cellphones: Microsoft has demonstrated augmented reality software that can superimpose computer-generated information on top of live video of the real world. For example: Point your cellphone camera at a restaurant and the software will recognize it, tell you how good it is, and show you the menu. Point the camera down the street, and the software could overlay a bus route and give the time of the next bus. If you are a civil engineer, it could instead overlay the underground utility pipes and cables.

    Most augmented-reality systems locate their position using GPS or by triangulating Wi-Fi signals, whereas Microsoft’s system relies on recognizing objects within a scene using computer-vision algorithms. Reliable recognition is a significant challenge, and cracking it is a feather in Microsoft’s cap, though the company is still working to improve it.
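
    The sketch below shows only the final overlay step, taking the recognizer’s output as given; the detections and annotations are hard-coded stand-ins of our own, since the computer-vision recognition itself is the hard part.

      # Annotations keyed by recognized object type (invented examples).
      ANNOTATIONS = {
          "restaurant": "Rated 4/5 - tap for menu",
          "bus_stop": "Route 12 - next bus in 6 minutes",
      }

      def overlay(detections):
          """detections: (label, x, y) triples from a vision recognizer."""
          for label, x, y in detections:
              note = ANNOTATIONS.get(label, "no information")
              print(f"draw at ({x},{y}): {label}: {note}")

      # Hard-coded stand-ins for what the recognizer would return.
      overlay([("restaurant", 120, 80), ("bus_stop", 340, 200)])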

 

  • DNA Computing & Drug Delivery: Molecular computers that self-assemble from strips of RNA within living cells have been demonstrated to work. Eventually, such biocomputers could form the brains of nanobots (nanoscale robots) able to manipulate biological functions within the cell, such as autonomously delivering the right amount of a drug at the right time and place. The demonstrated device was able to detect the presence or absence of molecules of the drugs tetracycline and theophylline within the cell. Animal studies are being planned.
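
    In computational terms, such a device is a molecular logic gate whose inputs are drug molecules. The AND gate below is one configuration reported for RNA devices of this kind, shown purely to illustrate the computational idea, not as the exact published circuit.

      def biocomputer(tetracycline_present, theophylline_present):
          # Release the output signal only when both drugs are detected.
          return tetracycline_present and theophylline_present

      for tet in (False, True):
          for theo in (False, True):
              print(f"tetracycline={tet}, theophylline={theo} -> "
                    f"output={biocomputer(tet, theo)}")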

 

This last item touches on a related topic: healthcare devices that build upon the accelerating power of computing and communication, and upon the accelerating functionality that power bestows. We will describe some recent developments in the (also belated) April issue, which we will send you in a few days.

 

Thanks for reading, and please send us your feedback or comments.

 
