Future of Computing?
Notes from a session held at the Luminous Green Hands-On Workshop, 4th of May 2007
Angelo Vermeulen and Patrick De Koning
DNA computing
DNA structure
The backbone of the DNA strand is made from alternating phosphate and sugar residues.[8] The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix the direction of the nucleotides in one strand is opposite to their direction in the other strand; this arrangement of the strands is called antiparallel. The asymmetric ends of a DNA strand are referred to as the 5' (five prime) and 3' (three prime) ends. One of the major differences between DNA and RNA is the sugar, with 2-deoxyribose being replaced by the alternative pentose sugar ribose in RNA.[6]
The DNA double helix is stabilized by hydrogen bonds between the bases attached to the two strands. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). Each base is attached to the sugar/phosphate backbone to form a complete nucleotide, as in adenosine monophosphate.
Very good description on http://arstechnica.com/reviews/2q00/dna/dna-1.html
Advantages:
- fast
- small
- available
The data density of DNA is impressive. Just as a string of binary data is encoded with ones and zeros, a strand of DNA is encoded with four bases, represented by the letters A, T, C, and G. The bases (also known as nucleotides) are spaced every 0.35 nanometers along the DNA molecule, giving DNA a remarkable data density of nearly 18 Mbits per inch. In two dimensions, if you assume one base per square nanometer, the data density is over one million Gbits per square inch. Compare this to the data density of a typical high-performance hard drive, which is about 7 Gbits per square inch – a factor of over 100,000 smaller.
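As a rough check on the two-dimensional figure above, the short calculation below converts one base per square nanometer into Gbits per square inch, assuming 2 bits per base (since each position can hold one of four values); the constants are only those quoted in the paragraph.

```python
# Back-of-envelope check of the 2-D data density figure quoted above.
# Assumes one base per square nanometer and 2 bits per base (4 possible bases).
NM_PER_INCH = 2.54e7                      # 1 inch = 2.54 cm = 2.54e7 nm

bases_per_sq_inch = NM_PER_INCH ** 2      # one base per nm^2
bits_per_sq_inch = 2 * bases_per_sq_inch  # 2 bits per base
gbits_per_sq_inch = bits_per_sq_inch / 1e9

print(round(gbits_per_sq_inch))           # ~1,290,000 Gbits per square inch
print(round(gbits_per_sq_inch / 7))       # ~184,000x denser than a 7 Gbit/in^2 drive
```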
Another important property of DNA is its double stranded nature. The bases A and T, and C and G, can bind together, forming base pairs. Therefore every DNA sequence has a natural complement. For example if sequence S is ATTACGTCG, its complement, S', is TAATGCAGC. Both S and S' will come together (or hybridize) to form double stranded DNA. This complementarity makes DNA a unique data structure for computation and can be exploited in many ways. Error correction is one example.
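As a minimal sketch of this complementarity, the snippet below computes the complement of the example sequence S from the paragraph above; it is plain Python with no assumptions beyond the A–T and C–G pairing rules.

```python
# A small sketch of DNA complementarity, using the example sequence from the text.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    """Return the base-wise complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in seq)

S = "ATTACGTCG"
S_prime = complement(S)
print(S_prime)                     # TAATGCAGC
assert complement(S_prime) == S    # complementing twice recovers the original strand
```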
In the cell, DNA is modified biochemically by a variety of enzymes, which are tiny protein machines that read and process DNA according to nature's design. There is a wide variety and number of these “operational” proteins, which manipulate DNA on the molecular level. For example, there are enzymes that cut DNA and enzymes that paste it back together. Other enzymes function as copiers, and others as repair units. Molecular biology, biochemistry, and biotechnology have developed techniques that allow us to perform many of these cellular functions in the test tube. It's this cellular machinery, along with some synthetic chemistry, that makes up the palette of operations available for computation. Just as a CPU has a basic suite of operations like addition, bit-shifting, and logical operators (AND, OR, NOT, NOR) that allow it to perform even the most complex calculations, DNA has cutting, copying, pasting, repairing, and many others. And note that in the test tube, enzymes do not function sequentially, working on one DNA molecule at a time. Rather, many copies of the enzyme can work on many DNA molecules simultaneously. This is the power of DNA computing: it can work in a massively parallel fashion.
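To make the cutting-and-pasting palette a little more concrete, here is a toy sketch that treats a strand as a string, models a "cutting" enzyme as something that splits the strand at a recognition site, and models "pasting" (ligation) as concatenation. The strand, the recognition site GAATTC, and the helper names cut and ligate are purely illustrative, not a real biochemical protocol.

```python
# Toy model of two DNA "operations": cutting at a recognition site and pasting (ligating).
# The strand, the site, and the function names are illustrative only.

def cut(strand, site="GAATTC"):
    """Split a strand into fragments at every occurrence of the recognition site."""
    fragments = strand.split(site)
    # keep the site on the left-hand fragment, roughly like a cut made after the site
    return [frag + site for frag in fragments[:-1]] + [fragments[-1]]

def ligate(fragments):
    """Paste fragments back together into a single strand."""
    return "".join(fragments)

strand = "ATTGAATTCGGCTAGAATTCCA"
pieces = cut(strand)
print(pieces)                      # ['ATTGAATTC', 'GGCTAGAATTC', 'CA']
assert ligate(pieces) == strand    # cutting and pasting recovers the original strand
```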
The idea of using DNA to store and process information took off in 1994, when Leonard Adleman, a scientist at the University of Southern California, first used DNA in a test tube to solve a small instance of a mathematical problem: finding a Hamiltonian path through a seven-node graph.
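A rough in-silico sketch of that generate-and-filter idea follows: in the test tube, strands encoding all candidate paths form at once, and successive separation steps discard everything that is not a valid path from start to end. Here ordinary Python does the same thing serially, on a small made-up graph rather than the one actually used in 1994.

```python
# A rough sketch of the 1994 generate-and-filter approach.
# The graph, node labels, and helper names are illustrative only.
from itertools import permutations

# illustrative directed graph: node -> set of nodes reachable from it
EDGES = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {4}, 4: set()}
START, END = 0, 4
NODES = list(EDGES)

def is_path(path):
    """Check that every consecutive pair of nodes is joined by an edge."""
    return all(b in EDGES[a] for a, b in zip(path, path[1:]))

# "Massively parallel" candidate generation: every ordering of all nodes,
# analogous to the test tube forming every strand at once.
candidates = permutations(NODES)

# Filtering steps, analogous to keeping only strands with the right start,
# the right end, and every node visited exactly once.
answers = [p for p in candidates
           if p[0] == START and p[-1] == END and is_path(p)]

print(answers)   # [(0, 1, 2, 3, 4)] for this toy graph
```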
Think of DNA as software, and enzymes as hardware. Put them together in a test tube. The way in which these molecules undergo chemical reactions with each other allows simple operations to be performed as a byproduct of the reactions. The scientists tell the devices what to do by controlling the composition of the DNA software molecules. It's a completely different approach from pushing electrons around a dry circuit in a conventional computer.
To the naked eye, the DNA computer looks like a clear water solution in a test tube. There is no mechanical device. A trillion bio-molecular devices could fit into a single drop of water. Instead of showing up on a computer screen, results are analyzed using a technique that allows scientists to see the length of the DNA output molecule.
“Once the input, software, and hardware molecules are mixed in a solution it operates to completion without intervention,” said David Hawksett, the science judge at Guinness World Records. “If you want to present the output to the naked eye, human manipulation is needed.”
In terms of speed and size, however, DNA computers surpass conventional computers. While scientists say silicon chips cannot be scaled down much further, the DNA molecule found in the nucleus of all cells can hold more information in a cubic centimeter than a trillion music CDs. A spoonful of Ehud Shapiro's “computer soup” contains 15,000 trillion computers, and its energy efficiency is more than a million times that of a PC.
While a desktop PC is designed to perform one calculation very fast, DNA strands produce billions of potential answers simultaneously. This makes the DNA computer suited to “fuzzy logic” problems that have many possible solutions, rather than the either/or logic of binary computers. In the future, some speculate, there may be hybrid machines that use traditional silicon for normal processing tasks but have DNA co-processors that take over the specific tasks for which they are better suited.
Quantum computing
- Paul Benioff is credited with first applying quantum theory to computers in 1981.
- Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
- This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer running at 10 teraflops (trillions of floating-point operations per second); today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second). A small sketch of this superposition follows after this list.
- QuantumComputing
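To illustrate the superposition and 2^n parallelism mentioned above, the NumPy sketch below builds an n-qubit register, applies a Hadamard gate to every qubit, and shows that the register then carries 2^n amplitudes at once. The helper hadamard_all exists only for this sketch; it is not part of any quantum computing library.

```python
# A minimal sketch of why n qubits carry 2^n simultaneous amplitudes.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def hadamard_all(n_qubits):
    """Return the n-qubit operator H (x) H (x) ... (x) H (Kronecker product)."""
    op = np.array([[1.0]])
    for _ in range(n_qubits):
        op = np.kron(op, H)
    return op

n = 3
state = np.zeros(2 ** n)
state[0] = 1.0                      # start in the all-zeros basis state |000>
state = hadamard_all(n) @ state     # put every qubit into superposition

# The register now holds 2^n amplitudes at once -- the "parallelism" in the notes.
print(len(state))                   # 8 amplitudes for 3 qubits
print(state)                        # each amplitude equals 1/sqrt(8)
```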
Optical computing
Advantages:
- optical data transport is faster than electrical transport
- optical transistors are faster than silicon transistors
- 3D optical data storage
Data transport:
- photons travel 10 times faster than electrons
- optical fiber has much wider bandwidth, no queuing, and several frequencies can be used in parallel
- current hybrid systems: traditional CPU, RAM and hard drives connected with fiber optics
Optical transistors:
- a transistor functions like a gate
- silicon transistors are too slow for optical computing
- new transistor concept: a tiny mirror that can be repositioned so that different optical fibers pick up the reflected light beam
- this type of transistor is about 1,000 times faster than a silicon-based transistor
3D optical data storage:
- cube form factor
- data are accessed using three lasers that intersect
- enormous storage potential: 1 terabyte in 1 cubic cm (see the back-of-envelope check below)
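As a quick back-of-envelope check on that last figure (assuming 1 terabyte = 10^12 bytes), each stored bit would get a voxel roughly 500 nm on a side, which is on the order of the wavelength of the addressing lasers:

```python
# Back-of-envelope check: voxel size implied by 1 TB per cubic centimetre.
# Assumes 1 TB = 1e12 bytes = 8e12 bits.
bits = 8e12
cm3_in_nm3 = (1e7) ** 3            # 1 cm = 1e7 nm, so 1 cm^3 = 1e21 nm^3

nm3_per_bit = cm3_in_nm3 / bits    # volume available per stored bit
voxel_edge_nm = nm3_per_bit ** (1 / 3)

print(round(voxel_edge_nm))        # ~500 nm per bit, comparable to visible wavelengths
```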