
Introduction

"In some way, it may be in the nature of surgery itself to want to come to grips with the uncertainties and dilemmas of practical medicine. Surgery has become as high tech as medicine gets, but the best surgeons retain a deep recognition of the limitations of both science and human skill. Yet still they must act decisively." [1] —Atul Gawande (2002)

There are many individuals who write about future technologies that are expected to transform our societies. Many of these writings are merely speculative, failing to consider the upheaval that such advanced technologies will bring to every aspect of our societies, including the way we treat urolithiasis. Our last major historical upheaval, for instance, was the Industrial Revolution, which saw the fall of bladder stones but the rise of renal stone disease. During this time frame, we have gone from a "craftsman-oriented" economy to a "mass-produced" one. We have gone from a "community-centric" environment to a "globally centric" one [2]. We have seen the rise and fall of innumerable governmental hegemonies in favor of a more pluralistic kind of control [3]. Stone disease has proliferated, rapidly increasing in incidence and prevalence throughout much of the world.

Without question, the next major influence upon mankind, the enabling technology capable of catapulting us again to the next level, is intelligent technology (IT). In order to prepare for tomorrow, or for that matter to know which pathways to follow, it is incumbent upon us to know what is happening with technology. For life in general, for medicine more relevantly, and for urolithiasis specifically, there is a "growing mountain of research" threatening to engulf us. How can we as caring and compassionate physicians continue to be masters of it all? Socioeconomically, there is no question that the financial forces currently at work around the globe threaten to consume the gross national products of many countries.

This review aims to develop the historical background you will need to make the technologic leaps necessary to master the maelstrom to come. Starting with digital computer technologies, working through artificial intelligence, encompassing the biotechnologies that remain at the core of medical practice, and ending with nanotechnologies, the intent is to give a historical tour de force extending into the next decade, perhaps even the next century. By looking backwards at how we arrived at the precipice on which we are now perched, it is hoped that you, like Winston Churchill, might see further into your future.

Principles of Change

Someone once said that the only constant is change. Sir Karl Popper wrote elegantly about the basics of scientific change: "The critical attitude may be described as the conscious attempt to make our theories, our conjectures, suffer in our stead in the struggle for the survival of the fittest. It gives us a chance to survive the elimination of an inadequate hypothesis- when a more dogmatic attitude would eliminate it by eliminating us" [4]. But how can anyone predict the future? In fact, the literature is strewn with the flotsam of prophecies gone awry. Consider Bill Gates's 1981 classic, "640,000 bytes of memory ought to be enough for anybody," or Ken Olsen's 1977 statement, "There's no reason for individuals to have a computer in their home." Better for our purposes is Yogi Berra's quip, "It's tough to make predictions, especially about the future" [5].

No one can accurately predict the future. New developments and new discoveries are always capable of rewriting what we think we know. Chance certainly plays a role, and no one could have foreseen the impact of the World Wide Web on our society when Tim Berners-Lee wrote the code for CERN in 1991 (note: that's not that long ago!). So why spend so much time on something as fickle as the future? You can think about what has happened to us through a lack of foresight, as we have already alluded to, or you can consider the scenario of nuclear weapons as an example. Would you rather develop a monster before you consider how to make a cage? Long-term forecasting is not futile when carefully considered concerns remain the focus. To quote Drexler, "In a race toward the limits set by natural law, the finish line is predictable even if the path and pace of the runners are not" [6]. David Hume, the eighteenth-century Scottish philosopher, introduced in 1737 the notion that observations cannot logically prove a general rule; rationalism cannot be trusted [7]. By 1945, Bertrand Russell had presented the twentieth century's response to the "growth of unreason" and the death of empiricism, vaulting us into our modern era [8]. This thinking has led us to our recent predilection for speculation about the future and to Karl Popper, the scientists' favorite philosopher of science, and his evolutionary model of scientific development [4]. Technologic variations and advancements are quite often deliberate, plodding, and crude in comparison to the science that lies behind them. Edison tried virtually every material on the planet before settling upon carbonized bamboo as the filament for his incandescent light bulb [9]. Charles Goodyear tried everything to turn unworkable rubber into the moldable, durable substance we use today in tires, until a chance drop of sulfurized rubber onto a hot stove in his lab gave him vulcanization. The point is that engineering the marvels of tomorrow's technology is methodical and, in a fashion, predictable because it is evolutionary. Future breakthroughs result in rapid progress. Progress evolves through cycles of design, calculation, criticism, redesign, and construction. So, forced by competition and testing, science evolves towards more power and accuracy. Efforts to predict engineering achievements reach back to classical Greek technology. Leonardo da Vinci, in his collected works known as the Codex Atlanticus, made projections using detailed drawings regarding the ability of machines to improve upon motion and power control [10]. He designed an earth-mover to dig canals that was never built. He designed a robotic man at the age of 30, utilizing a design he had envisioned at the age of 26 to power his automated soldier or knight. This device used a front-wheel-drive, rack-and-pinion automated cart to provide both the power and the mobility that his robot would require. Leonardo designed a chain-drive system, as well as a bicycle, that would remain unbuilt for almost three centuries. He failed to build an aircraft because of his inadequate understanding of aerodynamics and lift, but this lack of scientific knowledge certainly did not stop him from designing such machines [10].

Prior to concluding this introductory section, there are three key publications worth reviewing. The first is Vannevar Bush's 1945 article in the Atlantic Monthly entitled "As We May Think" [11]. Bush was an MIT-trained engineer with a particular aptitude for mathematics. He was a young professor at Tufts when World War I broke out, and he developed a device that would use magnetic fields to detect submarines. He traveled to Washington in May of 1917 to meet with the new director of the National Research Council (NRC), a group of scientists advising the government. After the war, Bush returned to MIT's electrical engineering department. He became interested in analog computers to solve complex equations, and by 1931 he had completed the first differential analyzer. He also proposed and built a machine for the FBI, which he called a rapid selector, that could review 1,000 fingerprints per minute. In 1937, Bush became the president of the Carnegie Institution, with a then $1.5 million annual budget for research. His prestige rapidly increased, and by 1940 Roosevelt called on him to create a new national organization for scientific military research, the National Defense Research Committee (NDRC). Bush was made its first chairman and given a direct line to the White House. By 1941 the Office of Scientific Research and Development (OSRD) had been set up and Bush became its director [12]. He became intimately involved in advising Roosevelt about the Manhattan Project, and Collier's magazine hailed him as the "man who may win or lose the war." By war's end, Bush was dreaming about a national science group, and the National Science Foundation was eventually created in 1950 [12]. It was in 1945 that he published "As We May Think" in the Atlantic Monthly, an article describing a whole host of technologies that did not then exist. He describes a theoretical machine called a "memex," a multipurpose intelligence extender. The memex would be a repository of general information that a user could call upon for facts and figures [11]. His description is hauntingly close to modern hypertext and the Internet. Ted Nelson, who coined the term hypertext in the 1960s, acknowledged his debt to Bush.

The second article was first presented on December 29, 1959, at the annual meeting of the American Physical Society at the California Institute of Technology and was subsequently published in the February 1960 issue of Caltech's Engineering and Science by the Nobel laureate Richard P. Feynman. The title, typical of Feynman, was the inauspicious "There's Plenty of Room at the Bottom" [13]. He stated that he wished to talk about "the problem of manipulating and controlling things on a small scale" [13]. He went on to state that an enormous number of technical applications would arise from such a technology and that there were no fundamental reasons of physics preventing its development. He proceeded in the ensuing 11 pages to recount the possibilities of molecular engineering, which had heretofore been unheard of. He then continued with the physics of such machines, including miniaturization, lubrication, supply, and demand. He concluded with a discussion of "rearranging the atoms" themselves. He speculated about the complexities involved with quantum physics at the atomic level, but concluded, "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big" [13]. He stated that in his opinion such a nanotechnology (the word is mine, not his) could not be avoided in the future, and he created cash prizes of $1,000 for schoolkids and engineers to start this technology: "I do not expect that such prizes will have to wait very long for claimants" [13]. But Feynman never saw the emergence of this technology in his lifetime. Tom Newman, a Stanford electrical engineering graduate student, did claim the second prize in 1985, using electron-beam lithography to transcribe the first page of Charles Dickens's A Tale of Two Cities onto a square 1/160th of a millimeter on a side, small enough to fit on the head of a pin [14]. In fact, it was not until the Clinton administration was leaving office that the National Nanotechnology Initiative was enacted in order to speed basic research.

The final article, which fittingly concludes this section while also introducing the next, appeared in Electronics on April 19, 1965, written by Gordon E. Moore. The article, "Cramming more components onto integrated circuits," was by the then director of Research and Development at Fairchild Semiconductor [15]. He started with the prophetic words: "The future of integrated electronics is the future of electronics itself" [15]. Moore talked about the future of intelligent technologies as no one before had; he speculated about ubiquitous computerization. He stated that computers of the future would be distributed, not centralized. He predicted that the machines of the future would be built at lower cost and with faster turnaround because of accelerating power and capacity. He based his observations on 25 years of experience in the miniaturization of electronic components. He predicted that integrated electronics would become generally available throughout all of society, performing many functions presently done inadequately by high-cost systems. He predicted that silicon would most likely remain the basic material of semiconductors and the key to this expansion. He finally warmed to his topic in discussing the graphs and curves he had generated. For simple circuits, he said, the cost per component was nearly inversely proportional to the number of components. He demonstrated cost curves for 1962, 1965, and 1970 showing a tenfold reduction in cost, but he did not stop there. He predicted that if the trends held, and he saw no reason why they should not, then within 10 years an integrated circuit with 65,000 components could be achieved for minimum cost, on a chip of only about one-quarter of a square inch. He also addressed the heat that would be generated by tens of thousands of components packed onto a single silicon chip, and he noted that the shrinking dimensions of the integrated structure would allow operation at higher speeds for the same power per unit area [15]. Thus, in one neat and tidy paper, Moore predicted the coming era of intelligent technologies. All subsequent work on the future of these technologies has built upon Moore's prescient observations, and his doubling times have shortened, not held constant. Technology is advancing even faster than Moore conceived.
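
The arithmetic behind that projection is simple compounding. The sketch below is a minimal illustration rather than a reproduction of Moore's actual data: it assumes a chip of roughly 64 components in 1965 and a doubling every year, which is enough to reach the 65,000-component figure he quoted for 1975.

```python
# Illustrative sketch of Moore's 1965 extrapolation: if the component count
# per integrated circuit doubles every year, a ~64-component chip in 1965
# reaches the ~65,000 components he predicted for 1975.
# (The starting count and doubling period are assumptions for illustration.)

start_year, start_components = 1965, 64   # assumed 1965 baseline
doubling_period_years = 1                 # yearly doubling, per the original curve

for year in range(start_year, 1976):
    n = start_components * 2 ** ((year - start_year) // doubling_period_years)
    print(f"{year}: ~{n:,} components per chip")

# By 1975 this gives 65,536 components, i.e. the "65,000 components within
# 10 years" figure quoted in the text.
```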

So the stage is set; the future can be anticipated within the limits that natural law sets for technology. The only thing missing is the amalgamation of converging technologies, which will be the foundation of the next section.

The Law of Accelerating Returns

The goal of this section is to convince you that the historical trajectories of technology in general, and of digital intelligent technologies in particular, are converging. Convergence is the phenomenon by which the capacities of separate fields meet at a point of commonality. The commonality of all of these technologies is life itself. Lest you think that believable science is being stretched into science fiction, a wide variety of resources will be cited that are available to you, the reader (disbeliever or believer), so that you can check the validity of the sources behind anything this author writes. Ray Kurzweil, the entrepreneurial scientist who developed digital software for the music industry, has published several books on this subject. He maintains a website and technologic discussion blog with graphs charting human technologic advancement as its fundamental proposition, and he devised the law of accelerating returns. K. Eric Drexler published Engines of Creation, already mentioned above, in which he suggested that mankind was approaching a new frontier of technologic advance based upon nanotechnology and fostered by advances in intelligent technologies [6]. In June 2002 our own National Science Foundation (championed, recall, by Vannevar Bush of "As We May Think") met in Arlington, Virginia, and reported upon "Converging Technologies for Improving Human Performance" [16]. The participants focused on specific issues that are rapidly advancing in four areas of scientific research: nanotechnology, biotechnology, information technology, and cognitive science. In addition, NASA (National Aeronautics and Space Administration) now holds yearly scientific workshops and meetings specifically addressing the convergence of technologies and its applications to space exploration. By reviewing the salient pieces of this science, it is hoped that you too will feel the growing excitement that has also spawned so much negative or "conservative" response from some serious researchers. Francis Fukuyama, for instance, has published his views on the perils of converging technologies in Our Posthuman Future: Consequences of the Biotechnology Revolution [17].

Ray Kurzweil is a graduate of MIT whose many innovations include reading machines for the blind, music synthesizers used by performers such as Stevie Wonder, and speech-recognition technologies. His alma mater, MIT, named him inventor of the year in 1988. Carnegie Mellon bestowed its top science award on him in 1994, and he won the American Publishers' award for the Most Outstanding Computer Science Book of 1990 for his Age of Intelligent Machines [18]. It was in fact Kurzweil's thinking and findings that prompted Sun Microsystems cofounder Bill Joy to write his highly thought-provoking article for the technophile magazine Wired, "Why the Future Doesn't Need Us" [19]. In this excellent observational article, Joy argued that our technologies are becoming increasingly complex and that public participation in their advancement has all but been eliminated. He outlined a scenario in which our technology becomes so sophisticated that it endangers the human species, bringing shades of Terminator to our consciousness, and from a very gifted scientific insider at that. What has Joy so spooked is a well-constructed and well-illustrated march toward "massive leaps" in technologic advancement through the convergence of intelligent technologies [19]. Kurzweil, in his writings, took all of mankind's technologic advancements and attempted to do for them what Gordon Moore had done for microprocessors alone, overlaying them in graphic formats to see what is happening. He then explained that almost every aspect of modern technology is expanding at exponential growth rates. The exception is computer or intelligent technology itself, which is expanding at a double exponential rate; that is to say, its rate of growth is itself growing exponentially!

Mankind's first technologic steps, "sharp-edged tools, fire, and the wheel," took tens of thousands of years to develop and master. By 1000 AD progress was much faster. By the nineteenth century, there were more inventions than in all of previously recorded history. The first 20 years of the twentieth century saw more advancement than the entire nineteenth century. Now, huge technologic advances change the whole world in just a few years. No one looking closely at the pace of technologic development would deny that we could see a 1,000-fold advance very, very quickly.

Kurzweil uses example after example to argue that we have arrived at the precipice of a singularity, based upon the study of the exponential growth of our technologies [20]. The singularity, according to Kurzweil, is a point where technologic change is so rapid and profound that a rupture in the fabric of human history becomes probable [21]. This, folks, is what has Bill Joy and a whole host of other very smart scientists concerned about the convergence of our technologies. John von Neumann, one of the great mathematicians of the twentieth century and a founding father of artificial intelligence technologies, stated in the 1950s that "the ever accelerating progress of technology…gives the appearance of approaching some essential singularity in the history of the human race beyond which human affairs, as we know them, could not continue" [22].

This year, Oak Ridge's Titan supercomputer (a Cray machine using Nvidia processors) blasted past IBM's massively parallel supercomputer Sequoia at the Lawrence Livermore National Laboratory, running at 17.59 petaflops (to Sequoia's paltry 16.32 petaflops); a petaflop is a quadrillion calculations per second [23]. Kurzweil believes we will achieve human brain capability (roughly 1 × 10^16 cps) with our computers for just $1,000 by 2023 and for 1 cent by 2037. Pooled human brainpower from every living human being on this planet (i.e., about 1 × 10^24 cps) should be achievable for $1,000 by 2045 and for just a penny by 2059 [20]. Titan weighs in at just a little over one-tenth as smart as a single human being. The question to be answered is: what will all of this intelligence technology be used to do? To quote from Drexler's book:

"In the last century we have developed aircraft, spacecraft, nuclear power, and computers. In the next we will develop assemblers, replicators, automated engineering, cheap spaceflight, cell repair machines, and much more. This series of breakthroughs may suggest that the technology race will advance without limit. In this view, we will break through all conceivable barriers, rushing off into the infinite unknown…" (Drexler, Chap. 10, Limits to Growth) [6].

What is happening in the real world of surgical practice in these regards? Stone disease has several surgical alternatives: endoscopic approaches (retrograde ureteroscopic methods utilizing the holmium:YAG laser versus percutaneous transrenal methods that can use lasers, ultrasound, or mechanical lithotrites), shock wave lithotripsy (high-energy waves generated outside the body and focused into it to break up stones), or, in complex scenarios, the use of robots to perform what in the past were open operations. The methods we have alluded to throughout this textbook have evolved along the very trends we have just been discussing. Celsus gave way to the Marian approach, which in turn gave way to lateral perineal lithotomy, which gave way to Civiale's pioneering lithotrity, which became obsolete with Bigelow's litholapaxy, which has in turn been antiquated by the holmium:YAG laser. Bladder stones themselves became a historical oddity through improved dietary nutrition, and open renal stone surgery is rarely indicated any longer, despite the rapid increase in new stone sufferers. The literature is nearly impossible to keep up with and the newer technologies are hard to place into perspective, yet the ability of our technology to change what we are currently doing grows ever more likely [25]. But the technology itself is helping us keep track of the literature; via the Internet, a massive amount of formerly necessary library work can be done at odd hours from home. Electronic books have made the older literature available; in fact, many of the books utilized in writing this textbook are on my smartphone and iPad. One can literally be in touch with almost any older book, thanks to countless libraries that have scanned and placed these treasures on the World Wide Web.

Information Technologies

So here we are, alive today at the dawn of the Information Age. What does it mean to have computers rapidly assimilating into the fabric of humanity? Twenty-three years ago the World Wide Web came into existence; prior to that, all data had to be researched via classic book methods [26]. After Tim Berners-Lee devised methods to incorporate hypertext into computer code, everything changed. The following table lists some of the firsts from this digital media capacity; recall that in 1945 it was all predicted by Vannevar Bush [11].

Computer technology has followed Moore's Law with only minor variation since Gordon Moore, cofounder of Intel, first wrote down his observations of microcircuit trends. He stated that we could squeeze twice as many transistors onto an integrated circuit every 24 months, while the cost of that technology has roughly halved over the same period. In other words, the supercomputer of 1990 that cost $100,000 is today available in a $150 Nintendo system. Randall Tobias, former vice president of AT&T, is widely quoted as saying, "if we had similar progress in the automotive industry, a Lexus would cost $2, it would travel at the speed of sound, and go 600 miles on a thimble full of gas." In other words, things are accelerating with increasing rapidity [5].
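
A short sketch makes the compounding concrete: if the transistor count doubles every 24 months while the cost of the system roughly halves over the same period, the cost per transistor falls about fourfold per period. The starting transistor count and price below are arbitrary placeholders chosen for illustration, not historical figures.

```python
# Sketch of the compounding Moore's-Law arithmetic described in the text:
# transistor count doubles every 24 months while system cost roughly halves,
# so cost per transistor drops ~4x per period.
# All starting values are arbitrary placeholders for illustration.

transistors = 1_000_000        # placeholder starting transistor count
system_cost = 100_000.0        # placeholder starting cost in dollars
years = 20
period = 2                     # 24-month doubling period

for year in range(0, years + 1, period):
    cost_per_transistor = system_cost / transistors
    print(f"year {year:2d}: {transistors:>16,} transistors, "
          f"${cost_per_transistor:.2e} per transistor")
    transistors *= 2
    system_cost /= 2

# Over 20 years (10 periods) the transistor count grows ~1,000-fold while the
# cost per transistor falls ~1,000,000-fold -- the kind of compounding that
# turns a $100,000 supercomputer into a $150 game console.
```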

Computational prowess started many years ago, and some would estimate that we have passed through more than 20 doublings on the exponential scale. In a more obvious sense, when the Internet came out of "nowhere" during the early 1980s and grew from 20,000 to 80,000 nodes within 2 years, no one noticed. But when it went from 20 million to 80 million nodes in the late 1990s, the impact was dramatic. It has been anticipated that Moore's Law should run out of physical possibility by 2019. But in retrospect, there have been other exponential trends antecedent to Moore's observations, which were based on microchip manufacture alone. As one technology has reached the end of its physical capacity for computation, another has arisen to take its place. Chips today are flat, with no three-dimensional architecture, yet our brains process massively in parallel in three dimensions. Computational models of the human brain are also rapidly expanding, and the possibility of nearly limitless computational capacity exists with quantum effects. Research in this area is rapidly progressing [24].
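
The reason early doublings pass unnoticed while later ones feel dramatic is that each doubling adds as much as all prior growth combined; a minimal sketch using the round node counts quoted above shows the difference in absolute terms.

```python
# Why exponential growth is invisible early and overwhelming late: the same
# two doublings that once added 60,000 Internet nodes later added 60 million.
# Node counts are the round figures quoted in the text.

early = [20_000, 40_000, 80_000]             # early-1980s era (approximate)
late = [20_000_000, 40_000_000, 80_000_000]  # late-1990s era (approximate)

for label, series in (("early", early), ("late", late)):
    added = series[-1] - series[0]
    print(f"{label}: two doublings add {added:,} nodes")
```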

All of the other elements necessary for the progression of intelligent technology are also accelerating at exponential rates. Memory, for instance, which Moore did not include in his initial projections, is advancing exponentially; the amount of memory utilized in the entire Apollo space program is readily available in a $150 game console today. In fact, Oak Ridge's Titan supercomputer runs on processors originally developed for gaming. Exponential growth has been observed in communications technology as well. Fiber optics, optical switching, electromagnetic transmission, and other technologies are all converging to make communications faster and faster. The power of wireless communication is also doubling every 10-11 months; the Tokyo Tech laboratory recently set a record by wirelessly transferring data at 6.3 Gb/s. Do you think that the speed of light is the limit? How about recent observations at CERN regarding quantum entanglement? Apparently two elementary particles separated in a large accelerator can communicate with one another faster than the speed of light, and the phenomenon has now been confirmed with larger particles as well [27]. A mechanism of "instantaneous communication" might therefore someday be possible.

As mentioned previously, the Titan supercomputer is capable of 17.59 quadrillion calculations per second, whereas the human brain is conservatively estimated to perform 20 quadrillion calculations per second. Yet individual electronic circuits are already ten million times faster than the fastest neurons.
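
That "ten million times" figure is easy to sanity-check with order-of-magnitude arithmetic, assuming a roughly one-millisecond firing cycle for a fast neuron and a roughly 0.1-nanosecond switching time for a modern logic gate; both values are commonly quoted approximations, not measurements from the cited sources.

```python
# Order-of-magnitude check of the speed gap between neurons and electronics.
# Both figures below are rough, commonly quoted values chosen for
# illustration, not data from the chapter's references.

neuron_cycle_s = 1e-3        # ~1 ms per firing/reset cycle for a fast neuron
gate_switch_s = 1e-10        # ~0.1 ns switching time for a modern logic gate

ratio = neuron_cycle_s / gate_switch_s
print(f"electronic switching is roughly {ratio:,.0f}x faster than a neuron")
# -> on the order of ten million, consistent with the figure in the text.
```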

The ability of our computational machines to emulate our own biological processes is also being investigated. Ted Berger and colleagues at Hedco Neurosciences have devised integrated circuits that precisely match the digital and analog processing characteristics of neurons and clusters of neurons. One step further along is a group at Caltech that has built microprocessors emulating the digital-analog characteristics of mammalian neural circuits. Much work is ongoing in what is called "chaotic computing," which parallels the human brain's capability of extracting patterns from the frenzied activity of entire networks of firing neurons [28]. Eventually stable patterns emerge and a logical "decision" arises. All of this has been modeled mechanically. The question becomes: is the human brain really that different from our mechanical computers?

Artificial Intelligence

Artificial intelligence (AI) uses computer technology to strive towards the goal of machine intelligence and considers implementation the most important result; cybernetics uses epistemology (the limits to how we know what we know) to understand the constraints of any medium (technological, biological, or social) and considers powerful descriptions the most important result. The computer chip descends from the germanium and silicon solid-state transistor, whose invention earned John Bardeen the first of his two Nobel Prizes in physics (he remains the only physicist to have won two) [29]. ENIAC, built at the Moore School of Electrical Engineering at the University of Pennsylvania and unveiled in 1946, was the first modern electronic computer with the essential features found in current machines. By the early 1950s, the forerunners of today's microprocessors were being conceptualized, and computers began to make their way into scientific and business accounting [30]. In the summer of 1956, John McCarthy, who would later found the Stanford Artificial Intelligence Laboratory (SAIL), along with Marvin Minsky (soon to be at MIT), started a 6-week workshop at Dartmouth College on "artificial intelligence." There were 12 original participants in that prophetic group. The field of AI came into being when the concept of universal computation, the cultural view of the brain as a computer, and the availability of digital computing machines were combined [31]. The field of cybernetics came into being when the concepts of information, feedback, and control were generalized from specific applications (i.e., in engineering) to systems in general, including systems of living organisms, abstract intelligent processes, and language. Already mentioned were Vannevar Bush's vital contributions with his view of the information revolution. Ted Nelson conceived and designed hypertext and systems for storing and transferring information. Tim Berners-Lee followed by delivering the World Wide Web to his employers at CERN, building it and placing it upon the nascent Internet of the early 1990s.

The exact beginning of cybernetics is perhaps difficult to ascertain, but D.L. Stewart's 1959 article "An essay on the origins of cybernetics" is the best place to start [32]. He notes that the word cybernetics was derived from the Greek kubernetes, or steersman, and was coined by Norbert Wiener, a professor of mathematics at MIT. But as so often in history, everyone had overlooked a little-understood 1868 paper by James Clerk Maxwell in the Proceedings of the Royal Society of London, "On Governors" [33]. Wiener started meeting monthly with other young scientists at Vanderbilt Hall in the early 1940s. One of the first investigators he met there was Arturo Rosenblueth, a Harvard Medical School professor of physiology. This pair would later team up during the war years to investigate a machine's ability to predict voluntary behavior (desperately needed for wartime anti-aircraft fire-control systems). By 1943 these investigations had been published in Philosophy of Science as "Behavior, Purpose and Teleology" [32]. They specifically defined behavior as any change of an entity with respect to its surroundings, beginning the scientific understanding of mechanized actions, or the understanding of human behavior through mechanized processes. Their first classification separated active behavior, in which the object is itself the source of the energy in the output, from non-active or passive behavior, in which all the energy in the output comes from the immediate input. The essence of their theories was based upon feedback loops for control; the mathematics was just beginning at this time. They stated, "the broad classes of behavior are the same in machines and in living organisms….while the behavioristic analysis of machines and living organisms is largely uniform, their functional study reveals deep differences" [32]. Wiener and Rosenblueth's ideas began to stimulate formal scientific investigation when the Josiah Macy Jr. Foundation organized a series of scientific meetings throughout the 1940s to cross-fertilize new methods of investigation. By the 1950s the term "cybernetics" was increasingly utilized to describe much of the scientific investigation of control mechanisms, digital processing, and of course computer technologies and intelligent systems.

Artificial intelligence systems have been applied to medicine in the form of neural networks. These networks, set up using self-organizing maps, have become increasingly powerful tools for evaluating complex data inputs while eliminating the subjective biases of human evaluation. The power of this method was clearly demonstrated when a computer beat physicians at diagnosing meningitis in 1997. Artificial neural networks (ANNs) have since been utilized in a wide and spectacular array of medical applications: diagnosis (echocardiography, brain mapping, lung scans, and prostate biopsy readings), therapy (gastroesophageal reflux algorithms), treatment effects (methadone in addiction), Alzheimer disease therapies, and modeling obesity outcomes [34]. Stone disease is also complex, and this type of artificial intelligence is just beginning to be investigated, for instance, in predicting whether ureteral stones will pass with or without the aid of medications [35]. The promise of this technology in the care and management of patients with urolithiasis, and perhaps in managing the literature itself, is substantial [36].
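
To make the idea of an ANN prediction model concrete, the sketch below trains a single logistic unit, the simplest possible neural network, on an entirely invented toy dataset to "predict" spontaneous ureteral stone passage from stone size and position. The features, data, and outputs are hypothetical illustrations only and are not drawn from the studies cited above.

```python
import math, random

# Toy illustration of an artificial neural network (a single logistic unit)
# "predicting" spontaneous ureteral stone passage. The dataset, features
# (stone size in mm, distal-position flag), and labels are entirely invented
# for illustration; they are NOT from the studies cited in the text.

random.seed(0)
# each row: (stone size in mm, 1 if distal ureter else 0) -> passed (1) or not (0)
data = [((3.0, 1), 1), ((4.0, 1), 1), ((5.0, 0), 1), ((6.0, 1), 1),
        ((7.0, 0), 0), ((8.0, 1), 0), ((9.0, 0), 0), ((10.0, 0), 0)]

w = [random.uniform(-0.1, 0.1) for _ in range(2)]   # two input weights
b = 0.0                                             # bias term
lr = 0.02                                           # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))               # sigmoid activation

for epoch in range(2000):                           # simple gradient descent
    for x, y in data:
        err = predict(x) - y                        # gradient of cross-entropy loss
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

for x, y in data:
    print(f"stone {x[0]:.0f} mm, distal={x[1]}: "
          f"predicted passage probability {predict(x):.2f} (actual {y})")
```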

Biotechnology

As physicians, the bottom line for us comes from the technologies that directly impact the way we practice medicine. Biotechnology is dominated in the public mind by the processes the news media hype, the headliners. The two most dominant headline biotechnologies of recent years are the Human Genome Project and cloning. The technology behind the Human Genome Project was DNA sequencing. About 15 years ago, when DNA sequencing was in its infancy, it was estimated that it would take thousands of years to sequence every base pair on the whole of the human chromosomes. In the end, the entire sequence was completed in just under 15 years at a cost of a few billion dollars. In fact, you can now purchase your very own DNA sequencer and perform this amazing feat of biotechnology yourself at home. Another example is the 15 years it took to sequence the human immunodeficiency virus (HIV) versus the 31 days it took to unravel the SARS virus. Sequencing is following the same exponential growth pathway that applies to computers, intelligent technologies, and everything else we have used as examples (Table 32.1) [37]. Biotechnology-based gene therapies are in their infancy, but already there have been an estimated 350 spin-off products from the fruits of the Human Genome Project [38].
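
The pace of that decline can be expressed as a halving time. The sketch below uses round figures, roughly $100 million per genome around 2001 falling to roughly $10,000 around 2012, that approximate the shape of the publicly tracked NHGRI cost curve; they are illustrative stand-ins for the data of Table 32.1, not the table itself.

```python
import math

# Implied halving time of sequencing cost, using round illustrative figures
# (approximately the shape of the publicly tracked NHGRI cost-per-genome
# curve, NOT the actual data of Table 32.1).

cost_start = 100_000_000   # ~$100M per genome, circa 2001 (illustrative)
cost_end = 10_000          # ~$10K per genome, circa 2012 (illustrative)
years = 11

halvings = math.log2(cost_start / cost_end)
halving_time_months = years * 12 / halvings
print(f"{halvings:.1f} halvings in {years} years "
      f"-> cost halves roughly every {halving_time_months:.0f} months")
# A halving time around 10 months is considerably faster than the ~24-month
# doubling period of Moore's Law quoted earlier in the chapter.
```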

Table 32.1 DNA sequencing costs through the Human Genome Project till the current time [37]

Genetic manipulation itself is going to be the next major target of our advancing technologies. It is currently estimated that about 99 % of the drugs we use in medicine were found by the laborious pathway of classic drug development: manipulate one molecule and "see what happens." Genetic technologies offer far more targeted approaches. RNA interference (RNAi), discovered in 1998, is a normal biologic process used to regulate gene expression [39]. By blocking the fat insulin receptor gene in mice, researchers produced animals that ate ravenously yet remained lean; they did not develop diabetes or heart disease and lived 20-25 % longer than unblocked controls [40]. There are genes that control every aspect of our biological lives that are now open to pharmaceutical investigation. The complex genetic mechanisms associated with urolithiasis that we discussed in the chapter on modern science are beckoning to be turned off or suppressed. If the law of accelerating returns applies, and there is no reason to think that this industry will be immune, mature gene therapies will be rapidly advancing in medicine within 10-15 years. That, folks, is just one generation away!

Cloning is the next "headliner." Though manipulation of the human genome in this fashion may already have occurred, it is likely that other converging technologies will reduce the necessity of ever pursuing this capability. There are companies already synthesizing nanofactories to make chromosomes; these have been photographed and show some capacity to function in biologic systems. It is possible that with the maturation of biotechnology we might be able to dramatically alter major diseases such as atherosclerosis and malignancy, against which we have struggled for centuries. The next frontier will be aging itself, and if you think this has not attracted significant scientific and technologic interest, you would be quite wrong. Just 1,000 years ago, human life expectancy was about 23 years. By the beginning of the Industrial Revolution in England (200 years ago), it was 37. By the completion of that same revolution, again in England, in 1900, it was 50 years. Currently it is estimated to be about 79.2 years and rising [41]. What is the ultimate limit of human existence? No one actually knows, but we do know that some organisms seem to be immortal, and the genetic mechanisms that control this phenomenon are only now being unraveled. The genetic aspects of stone manipulation should be child's play compared with the prolongation of human life.

Nanotechnology

The Nobel Prize-winning physicist Richard Feynman predicted in his 1959 talk "There's Plenty of Room at the Bottom" the theoretical possibility of manipulating things on a molecular scale [13]. Even before this prophetic lecture, Albert Einstein, as part of his 1905 doctoral dissertation, had calculated that the size of a single sugar molecule was about a nanometer in diameter (for scale, a nanometer spans about ten hydrogen atoms side by side; it is one-thousandth the length of a typical bacterium and one-millionth the size of a pinhead) [42]. The first living cells, housing nanoscale biomachines, evolved 3.5 billion years ago. In about 400 BC Democritus coined the word "atom," thought to be the basis of all matter. In 1931 Max Knoll and Ernst Ruska developed the electron microscope, allowing subnanometer imaging. In 1959 Richard Feynman gave his prophetic lecture predicting the rise of nanotechnologies. In 1968 Alfred Y. Cho and John Arthur of Bell Labs invented molecular-beam epitaxy, which can deposit single atomic layers on a surface. In 1974 Norio Taniguchi coined the word "nanotechnology." In 1981 Gerd Binnig and Heinrich Rohrer created the scanning tunneling microscope, which can image individual atoms [5]. By 1985 Robert F. Curl, Jr., Harold W. Kroto, and Richard E. Smalley had discovered buckminsterfullerenes, also known as buckyballs, which measure about 1 nm in diameter (earning the 1996 Nobel Prize in Chemistry) [43]. These are cage-like molecules of 60 carbon atoms, made from the vapors of carbon dust, that form very structurally sound covalent bonds. The carbon buckminsterfullerenes provide almost 1,000 times the strength of steel and have the capacity to reorganize themselves if damaged. They might also represent a unique delivery system for encapsulated genetic material to manipulate genetic defects.
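
Those scale comparisons can be verified with simple arithmetic, assuming the commonly quoted approximate sizes of 0.1 nm for a hydrogen atom, 1 µm for a typical bacterium, and 1 mm for a pinhead.

```python
# Quick check of the scale comparisons in the text, using round,
# commonly quoted sizes (illustrative values, not measurements).

nanometer = 1e-9              # meters
hydrogen_atom = 1e-10         # ~0.1 nm diameter
bacterium = 1e-6              # ~1 micrometer long
pinhead = 1e-3                # ~1 mm across

print(f"hydrogen atoms per nanometer: {nanometer / hydrogen_atom:.0f}")
print(f"nanometer / bacterium length: 1/{bacterium / nanometer:.0f}")
print(f"nanometer / pinhead width:    1/{pinhead / nanometer:,.0f}")
# -> ~10 hydrogen atoms span a nanometer; a nanometer is ~1/1,000 of a
#    bacterium and ~1/1,000,000 of a pinhead, as stated above.
```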

K. Eric Drexler published his futuristic book Engines of Creation in 1986, popularizing nanotechnology. In 1989, Donald M. Eigler of IBM wrote the company's name using individual xenon atoms. In 1991, Sumio Iijima of NEC in Tsukuba, Japan, discovered carbon nanotubes. In 1993, Warren Robinett of the University of North Carolina and R. Stanley Williams of the University of California, Los Angeles, devised a virtual-reality system connected to a scanning tunneling microscope that lets the user see and touch atoms. In 1998, Cees Dekker's group at the Delft University of Technology created a transistor from a carbon nanotube. In 1999, James M. Tour of Rice University and Mark A. Reed of Yale University demonstrated that single molecules can act as molecular switches. In 2000, the Clinton administration announced the National Nanotechnology Initiative, which provided a big boost in funding for nanoresearch. Later that same year, Eigler and others devised a quantum mirage using a magnetic atom, demonstrating a possible means of transmitting information without wires at the molecular level [5].

Currently there are several proposals to the National Nanotechnology Initiative for medical applications. Some are for diagnostics, including the use of artificial magnetic crystals that detect particular biologic entities such as pathogens. Other applications include semiconductor nanocrystals, or quantum "dots." These dots owe their special properties to quantum mechanics and emit photons of light at only one specific wavelength. Quantum dots can be attached to DNA sequences and, when scanned, act like a genetic bar code, looking for flaws. A dendrimer is a branching molecule roughly the size of a protein with a large internal surface area. Dendrimers can be created in a variety of sizes and might be able to deliver DNA sequences into cells' nuclei much more safely than virus particles [44]; others might act as micro drug-delivery vectors. Nanoshells are small beads of glass coated with gold that absorb light, particularly near-infrared light, which can be beamed into the body; the nanoshells could then be heated by a strong extracorporeal infrared source. Buckyballs can be made from just a few dozen carbon atoms. The potential future of nanotechnology, like that of many other futuristic applications in medicine, is unknown, but it is intriguing to speculate about the possibilities. Using the artificial scaffolds that nanotechnology might provide, cancerous tumors might be identified and destroyed at the cellular level. Using synthetic scaffolds, we might be able to regenerate bone, cartilage, skin, or more complex organs such as diseased kidneys.

To Err Is Human

"To much of the public- and certainly to lawyers and the media- medical error is fundamentally a problem of bad doctors. The way that things go wrong in medicine is normally unseen and, consequently, often misunderstood. Mistakes do happen. We tend to think of them as aberrant. They are, however, anything but." [1] —Atul Gawande

Kohn and colleagues, in their report To Err Is Human, estimated that between 44,000 and 98,000 deaths annually can be attributed to medical error [45]. Because the outcome of stone surgery is almost entirely dependent upon the technology and the skill of the surgeon, there has been increasing emphasis on comparing the surgeon to the airline pilot. In aviation, the pilot is expected to perform with a risk of failure of less than 0.0001 %. Complications in surgery occur in the range of 1-5 % or more, a rate at least 10,000 times higher than in the airline industry. Many operative modifications have been instituted to control preventable errors in the operating room, but truthfully not many of these actually apply to stone patients. Wrong-site errors, for instance, are not a realistic probability in patients with stones; though patients can present with bilateral stones, there are no real reported cases of this phenomenon. The errors in stone disease management can come from lack of skill: a surgeon, for instance, elects to perform one procedure when another is likely to benefit the patient preferentially, or unexpected anatomical variation makes an approach risky. The most common scenario is a large renal stone that might best be managed with percutaneous nephrostomy and an antegrade nephroscopic approach that the surgeon may not be comfortable performing. All too often, shock wave lithotripsy is chosen because, sadly, its reimbursement is higher than that of a more applicable method. Our system of reimbursement in the United States has evolved into a nightmare of complexity, one that no longer reflects reality [46]. These financial disincentives have to be considered in any realistic discussion of error, yet more often than not they are disregarded or relegated to backroom discussion [47]. Atul Gawande invokes the notion of professionalism that would normally be considered the check on such behavior. He states, "All learned occupations have a definition of professionalism, a code of conduct. It is where they spell out their ideals and duties. The codes are sometimes stated, sometimes just understood. But they all have at least three common elements. First is an expectation of selflessness…second is an expectation of skill…third is an expectation of trustworthiness" [48]. These are truly great expectations for rules of professionalism in an era of spending cutbacks, limitations on hours, fiscal cliffs, and rising malpractice costs. But a patient is on the other end of this professionalism equation, and that can never be forgotten.
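
The gap between those two failure rates is worth making explicit; the short calculation below uses only the rates quoted above.

```python
# Ratio of the failure rates quoted in the text: surgical complications of
# 1-5 % versus an expected aviation failure rate below 0.0001 %.

aviation_rate = 0.0001 / 100          # 0.0001 % expressed as a fraction
surgical_low, surgical_high = 0.01, 0.05

print(f"aviation:  ~1 failure per {1/aviation_rate:,.0f} flights")
print(f"surgery:   1 complication per {1/surgical_high:.0f} to "
      f"{1/surgical_low:.0f} operations")
print(f"surgery fails {surgical_low/aviation_rate:,.0f}x to "
      f"{surgical_high/aviation_rate:,.0f}x more often")
```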

"Through all of human history, health caregivers have been respected individuals in society. Now with the Internet, consumerism, the Baby Boomers aging, risk adjustment, outcomes measurement, and quality metrics, blind trust in clinicians has begun to erode" [49]. It is hard to find confidence in a system that appears to be spiraling out of control. Two sentinel studies used by the Institute of Medicine to generate its pronouncements in To Err Is Human came from the 1991 results of the Harvard Medical Practice Studies I and II [50, 51]. The first study was a retrospective review of 30,121 randomly selected records from 51 randomly selected hospitals in New York State in 1984. The investigators found adverse events in 3.7 % of these hospitalizations, and 27.6 % of these were secondary to negligence. They found that 70.5 % of these adverse events resulted in disability that persisted for less than 6 months, 2.6 % resulted in permanent disability, and 13.6 % led to the patient's death [50]. Their second report immediately followed, examining the nature of these errors. Drug complications were the most common adverse event (19 %), followed by wound infections (14 %) and technical complications (13 %). The categories most often attributed to negligence were diagnostic errors (75 %) and what were called "noninvasive therapeutic mishaps," or errors of omission (77 %) [51]. The other major study utilized by the Institute of Medicine was Costs of Medical Injuries in Utah and Colorado, published in 1999 [52], also a largely Harvard School of Public Health endeavor. In this evaluation the investigators identified 459 adverse events, of which 265 were preventable, among 14,732 randomly selected discharges from 28 hospitals. The costs associated with adverse events totaled $348,081,000, with about half of that sum attributable to the preventable errors [52]. Clearly the technologies that we have presented could, in time, prevent many if not most of these errors, and the cost savings in error prevention would pay for the technology. The professionalism of those involved was not evaluated, nor is there a method for evaluating this quality in health caregivers. The Institute of Medicine followed its first publication with recommendations for fixing the troubled US healthcare system in a second, Crossing the Quality Chasm [53]. That volume begins, "The American health care delivery system is in need of fundamental change. Many patients, doctors, nurses, and health care leaders are concerned that the care delivered is not, essentially, the care we should receive." In nine chapters the authors proceed to relate how they think a new and improved health system can be created.

How does all of this apply to the history of urolithiasis, and what are the implications for the treatment of stone disease? A survey of urologic medical malpractice cases was reported in the Journal of Urology in 2001. The authors identified 259 medical malpractice claims from 1995 to 1999. The average urologist is sued for malpractice twice in his or her career, and certain parts of the country fare worse than others (Southeastern > North Central > South Central > New England > Mid-Atlantic > Western > New York) [54]. They also noted that the most common procedure-specific claims were for endoscopic procedures (22 %), most involving stone patients. In another study of malpractice litigation in a single state, New York, 469 urologic claims were filed between 1985 and 2004, a remarkably constant 22 claims annually over this period. Claims based on endourologic procedures (mostly for stones) were the second leading cause of malpractice claims in New York State, second only to oncologic operations (25 vs. 46) [55]. In a follow-up on this same group of malpractice claims, missed diagnosis led to claims in 75 cases, only two of which were stone related, both kidney stones [56]. Turning to the stone group specifically, a different group of investigators, still looking at New York State but from 2005 to 2010, provides even better information. There were 25 of 585 closed claims related to endourology (4.3 %). Sixteen of these cases involved women and nine involved men. Twenty-two of the cases involved stones; the remaining three arose from ureteral obstruction. Cystoscopy and stent placement accounted for most of the suits (52 %), followed by ureteroscopic lithotripsy (32 %), shock wave lithotripsy (8 %), and percutaneous procedures (8 %). Sixteen patients (62 %) required secondary procedures following their complications, and six (24 %) died, all from sepsis. Ureteral stones were the culprits in about 80 % of these cases [57]. Things go wrong far more commonly than malpractice cases are filed in our tort system. From the historical perspective, in fact, things go wrong far less commonly now than at any previous time in surgical history. Yet our stone patients are still dying on our watch, and complications still occur with some regularity. Communication with the patient and family, along with excellent documentation, has been widely proclaimed as the way to minimize the threat of a lawsuit, but what about preventing the errors that result in injury or unintended outcomes in the first place?

Six Sigma

"In health care, building a safer system means designing processes of care to ensure that patients are safe from accidental injury." [45] —To Err Is Human

The abilities born of our truly spectacular explosion of knowledge and technology need to come home to roost at some point, and that point is the prevention of further error. This is the point of this exercise in looking toward the future from a historical perspective. We are every day surrounded by the living palimpsest of our species' historical fight against urolithiasis, yet we blithely carry on, often oblivious to the sacrifices of the past, on our headlong journey into the future. I would like to quote Gawande once more, and not just because he is a urologist's son:

"Here, then, is our situation at the start of the twenty-first century: we have accumulated stupendous know-how. We have put it in the hands of some of the most highly trained, highly skilled, and hard working people in our society. And, with it, they have indeed accomplished extraordinary things. Nonetheless, that know-how is often unmanageable. Avoidable failures are common and persistent, not to mention demoralizing and frustrating, across many fields- from medicine to finance, business to government. And the reason is increasingly evident. The volume and complexity of what we know has exceeded our individual ability to deliver its benefits correctly, safely, or reliably. Knowledge has both saved us and burdened us." [48]

Let us look at an example of a worrisome problem in stone disease looming large on the horizon: the increasing prevalence of predominantly calcium phosphate stone formation. Over the past two decades an increase in calcium phosphate stones has been noted in the United States, more commonly in females than in males [58]. We have seen that brushite stone formation is often concurrent with apatite plugging of the papillary tubules, leading to fibrosis and permanent renal injury. This is becoming an increasingly disconcerting trend in urolithiasis and should serve as a warning to those of us who see and treat many recurrent stone formers to be ever vigilant for this possible conversion to a worse scenario [59]. In addition, it appears that our interventions in stone formers, repeated shock wave lithotripsy in particular, might themselves be damaging the kidney and contributing to the transformation from calcium oxalate stones to calcium phosphate stones [60].

One might also argue that shock wave lithotripsy itself has not actually advanced, and that, in fact, its methods and results have gotten worse. But our ability to gauge success has improved, and posttreatment CT scanning is much better at finding residual stone fragments than the KUB ever was. One thing is certain: if a shock wave lithotripsy fails to break up the stone, the odds are, given our current technologies, that another method should be utilized in subsequent interventions. Multiple, serial shock wave lithotripsies should not be considered the method of choice for dealing with non-fragmentable stones [61]. In addition, in skilled hands the holmium:YAG (yttrium-aluminum-garnet) mid-infrared laser remains unsurpassed and is currently the most effective lithotripsy source in the urologist's armamentarium. With ureteroscopes evolving toward smaller size and improved optical capacity, there are virtually no regions within the kidney or ureter that can escape these diminutive scopes. Complications can and do occur, but with systems in place to carefully monitor performance and follow up patients, one could speculate that Six Sigma (roughly one failure in a million) might not be achievable soon, but the aviation standard of 1/10,000 just might be. Anesthesia, using Six Sigma tools, has now dropped its rate of serious complications or "mishaps" to near 1/200,000 [62]. Let us focus our attention on this amazing and truly underappreciated fact. Ellison Pierce was fixated on the notion that unacceptable numbers of serious complications occurred in anesthesia, and when he was elected vice president of the American Society of Anesthesiologists in 1982 he had the opportunity to do something about it. He recruited an engineer named Jeffrey Cooper, who utilized a technique referred to as "critical incident analysis" to begin a systematic examination of every aspect of the anesthesia-patient interaction [63]. The first in-depth analysis of 359 errors broke the whole process of anesthesia down into sections, which were then attacked by developing solutions to their problems: utilizing pulse oximeters, placing end-tidal CO2 monitors on anesthesia machines, and standardizing anesthesia machines and even the dials on the gas cylinders [63]. Other specialties are just beginning to follow this pathway, but with technology evolving by quantum leaps, why should stone disease management have to wait much longer?
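
For reference, the defect rates conventionally attached to each sigma level (including the customary 1.5-sigma shift of the Six Sigma methodology) can be computed directly from the normal distribution. The sketch below is a standard calculation, offered only to put the "1/10,000," "1/200,000," and one-in-a-million figures above on a common scale.

```python
import math

# Defects per million opportunities (DPMO) at each sigma level, using the
# conventional 1.5-sigma shift of the Six Sigma methodology. A standard
# textbook calculation, shown here to put the failure rates quoted in the
# text (1/10,000 aviation, ~1/200,000 anesthesia, ~1/million "Six Sigma")
# on a common scale.

def dpmo(sigma_level, shift=1.5):
    # one-sided tail area beyond (sigma_level - shift) standard deviations
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

for level in (3, 4, 5, 6):
    d = dpmo(level)
    print(f"{level} sigma: {d:10,.1f} DPMO  (~1 in {1_000_000 / d:,.0f})")

# Six sigma works out to ~3.4 DPMO, roughly 1 failure in 300,000
# opportunities; the chapter's "1/million" shorthand is the same order
# of magnitude.
```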

The Future of Stones

Technology and microelectronics are revolutionizing every aspect of our society. Polymer science, microcomputerization, optical engineering, bioengineering, and many other technologic arenas are being focused upon advanced healthcare delivery. Surgery has not been immune to such technologic advancement. The precise extent and overall impact of the new, minimal-access urologic surgeries has almost certainly not yet reached its limits. The logarithmic growth in minimally invasive stone procedures reflects both clinicians' ability to adapt to new technology and patients' own desire to seek out centers where such methods are utilized. With the rapid dissemination of knowledge by mass media, an increasingly informed society is seeking alternative therapies for heretofore conventional open operations. "Minimal access surgery" is a redefinition of the term for technologically advanced surgery originally coined by J.E.A. Wickham in 1987, which he referred to as the "new surgery" or minimally invasive surgery [64]. These terms are simply a way of quantifying the degree of surgical trauma inflicted upon patients. As Cuschieri has correctly pointed out, laparoscopic surgery produces minimal access trauma but still imparts surgical trauma. Minimally invasive surgery is the next echelon vis-à-vis further reduction of risk and trauma for patients [65]. This type of surgery minimizes trauma by eliminating direct organ dissection and the cutting of the body wall via "classic" methods. Endoscopic surgery further reduces the trauma by proceeding through natural orifices to gain access to the stones (in our focused case) in the urinary tract. Shock wave lithotripsy eliminates any invasion of the body, whether through the body wall or via natural orifices, and utilizes high-energy shock waves to comminute the calculus. But one might correctly assume that this technologic advance is not quite finished and that newer, safer alternatives may yet be invented.

Surgical progress has been a series of "quantum leaps" based upon technological advances. Wickham identifies five eras of surgery based upon these advances: the preanesthetic era, the postanesthetic era, the era of supportive medicine, the era of conservative surgery, and the era of minimally invasive surgery [66]. Each successive era has arrived after a characteristically shorter interval than the one before, the law of accelerating returns applied to surgery. Now robotic surgery has begun to replace the methods Wickham was discussing, and it has taken less than a decade. The robotic systems will certainly evolve and have great potential. Perhaps the "human factor" of error can even be programmed out of these robotic systems [67].

Urologists, more than any other surgical specialty, should be aware of patients' demand for alternative therapies. In the earliest surgical days of perineal lithotomy, mortality was age dependent; greater than 50 % of patients died, and probably many more suffered irreversible harm. Cheselden, with his scrupulous separation of "observed" cases from his private practice (unobserved cases), reported statistics only on the former. Even with the development of transurethral lithotrity, there was an impressive mortality. Patients and surgeons alike were inured to death, suffering, and morbidity. We have fortunately evolved beyond this pain and suffering, with great expectation of advancing even further. Most of us have lived through two revolutionary eras of urologic practice: the endoscopic treatment of prostatic disease and the abandonment of open stone surgery. Wickham chose the latter to represent the traumas inflicted upon our patients and the effect of technology in reducing them. Progressing from open stone surgery through percutaneous nephrostolithotripsy to first- and second-generation shock wave lithotripsy, patients have been subjected to less and less interventional trauma [68]. The results are dramatically reduced hospital stays, faster return to normal activities, and patients who are very aware of their good fortune. In a single decade, urologists progressed further in the treatment of stone disease than any other surgical specialty, though this too is changing.

Discussion

"It is difficult to accept recurrent stone formation as incidental in any patient and allow it to continue without efforts to understand its causes and offer such treatments as seem appropriate." [69] —Frederic Coe (2005)

Morbidity and mortality conferences, known as M&Ms, began in hospitals and the practice of medicine early in the twentieth century. By 1901 a standardized method of case reporting had been developed at Johns Hopkins Medical School, an attempt by early health professionals, physicians, and nurses to investigate the outcomes of care (Osler at the blackboard). Such conferences became mandated in the United States by the Accreditation Council for Graduate Medical Education in 1983. Unexpected findings at autopsy were also historically significant in evaluating the cause of death; Lundberg noted a 40 % discrepancy between antemortem and postmortem diagnoses in 1998 [70]. Autopsies declined continuously throughout the twentieth century: in the 1940s autopsy rates were typically around 50 %, but currently autopsies are performed in less than 9 % of hospital deaths. Stone disease no longer carries the high mortality rates ascribed to it in the past, though deaths still rarely occur. It is the morbidity so frequently associated with stone disease today, together with its rising prevalence, that is of concern. The perfect storm underscoring this concern is the overuse of shock wave lithotripsy for kidney stones in some regions, possibly because the reimbursement for this modality is so much higher than for equally or, in some cases, more effective therapies [46]. When reimbursement becomes a factor in the decision for therapy, all kinds of ethical concerns come into play, and outcomes are typically not foremost in consideration. Our hope for the future of technology is that it will solve this dilemma as well as make safer, less invasive methods readily available to urolithiasis sufferers.

"The integration and synergy of the four technologies (nano-bio-info-cogno) originate from the nanoscale, where the building blocks of matter are established. This picture symbolizes the confluence of technologies that now offers the promise of improving human lives in many ways, and the realignment of traditional disciplinary boundaries that will be needed to realize this potential. New and more direct pathways towards human goals are envisioned in working habits, in economic activity, and in the humanities" [16]. Thus begins the report of the first National Science Foundation/Department of Commerce-sponsored scientific meeting on technologic convergence, held in June 2002 in Arlington, Virginia. Called "Converging Technologies for Improving Human Performance," this government-sponsored conference covered all aspects of rapidly expanding technologies [16]. We have spent a good part of this chapter discussing the exponential growth of divergent technologies: nano, bio, info, and cogno. What happens when they start to blur and combine toward the same ends? That is indeed what is happening. The old constraints of specialization are being wiped out by the supercomputing systems currently being implemented. Computers are becoming so powerful and fast that automated engineering systems are not only possible; they are the only method capable of creating the computer chips the computers themselves now use. This technology is crossing over to other design systems, to engineering, and to research and development. In other words, as intelligent technologies rapidly become more intelligent, the pace of change accelerates further. The technology that will be with us tomorrow is definitely not with us today. As with buying a computer, you can wait for the next bigger, faster, more sophisticated system to become available, and you will end up waiting forever. Or you can scratch your head, get into the technology, and become inspired to seek and discover all that the technology can offer. Welcome to the Information Age, where "business as usual" simply does not apply.