Today was the first time that I had heard of the possibility of DNA computers. I may have heard it in passing once before, but I probably figured it was more likely that I learn how to use the force or accompany a stout, boyish-looking man on a quest in middle earth to throw a ring into a volcano...I'm sure you get the idea.
According to Moore's Law, which describes a long-term trend in computing capacity, the number of transistors that can be placed on an integrated circuit doubles roughly every two years (some statements of the law put it at 18 months); equivalently, transistor feature size keeps shrinking on the same schedule.
This means that by the year 2023 (roughly and apparently), the silicon chip will hit a physical roadblock: transistors will be so small and so densely packed that they can no longer reliably pass the very electrons that breathe life (so to speak) into computers. People will have to cope with plateaued processing speed (still far beyond today's PCs, mind you), as the silicon chip will have reached its maximum potential.
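The doubling rule above is easy to play with. Here is a minimal sketch of the projection, where the baseline year and transistor count are illustrative assumptions of mine, not figures from this post:

```python
# Moore's Law sketch: transistor count doubles every two years.
# base_year and base_count are illustrative assumptions, not data.
def transistor_count(year, base_year=2011, base_count=2_600_000_000):
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# Two years out, the count has doubled once:
print(transistor_count(2013))  # 5200000000.0
```

Running the projection out to 2023 gives six doublings over the baseline, which is the kind of exponential growth that eventually collides with the physical limits described above.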
Now, aside from the simple solution of linking two chips together (which, perhaps by then or in the future of the future, will be an option), some in the tech literature dream of a world where DNA computers exist. Computers that regenerate, grow, adapt, and perform processing tasks at enzymatic rates...all without much raw material (DNA can essentially be amplified indefinitely by techniques like PCR, given enough reagents and buffer).
This prompts a very philosophical, but increasingly important question: can human beings create machines (or at this point, perhaps life itself) that are more intelligent than we are? Science fiction suggests 'maybe', while intuition and logic may suggest 'no.' Some would argue that computers are already smarter than us...and yes, it is true that the processing capacity of computers may well exceed the computational capabilities of the entire human race combined. But raw computation does not imply that the computer is 'smarter'; the argument lacks content/face validity.
Perhaps the bio-technical sciences can borrow a lesson from the physicists: we cannot use current knowledge and theory to ascertain the effect of smarter-than-human intelligence, because the conditions required for such a hypothetical world may approach a sort of social singularity. Just as the laws of physics may break down when one tries to model the singularity at the centre of a black hole, perhaps the 'laws' of intelligence, and thus human society, would break down if such a singularity were modelled.
In sum, it may be impossible to imagine a world where computers are smarter than humans, because we are currently limited by our own intelligence through our creation of theories and our understanding of knowledge.
Other improvements in computing power may be next in line, such as fibre-optic-based computers, direct brain-computer interfaces, biological augmentation of the brain, and genetic engineering. Whether these devices will be smarter than humans...I cannot be sure...perhaps I am both bound to, and limited by, my own intelligence.