Lithography
Joseph D. Martin and Cyrus C. M. Mody
This article appears in Joseph D. Martin and Cyrus C. M. Mody, eds., Between Making and Knowing: Tools in the History of Materials Research, WSPC Encyclopedia of the Development and History of Materials Science, vol. 1. Singapore: World Scientific, 2020.
Lithography became an essential tool for materials research during the post–World War II computing revolution. Increasing computing power required shrinking circuits and packing transistors more tightly together. Lithography made it possible to write small, precise circuits on a semiconducting surface, setting the stage for modern computing and fueling Moore’s law—the observation that transistor density on chips has tended to double every eighteen months.(2,12) But lithography was by no means a postwar development. It dates to the late eighteenth century and is notable as a technique borrowed for materials research from the storied and ostensibly distant craft practices of ink-based printing. What ties these disparate applications together—aside from their name—is their close relationship to the commercial incentives of the times in which they developed.
“Lithography,” literally “stone writing” when broken into its Greek roots, has become something of a misnomer. The name originated in the practice of rendering shapes on a flat limestone surface with a substance that would repel acid, so that soaking the stone in acid would leave the image in relief. A text or image prepared in this way could be reproduced effectively ad infinitum. The practice became a popular printing technique and artistic medium in the nineteenth century (figure 1), shortly after the German playwright Alois Senefelder developed it in the 1790s. It then quickly migrated to non-lithic surfaces, such as metal. Lithography has since come to encompass a collection of related techniques that involve creating patterns by changing the properties of some part of a chemically treated surface, whether on stone or not.
Senefelder invented lithography while experimenting with limestone as a replacement for expensive copper plates as an etching surface. According to his own account of the discovery, he scribbled a laundry list—for want of a piece of paper—on a stone surface he had just prepared for etching. This made him curious what would happen if he then treated the stone surface with acid. He discovered that his ink, composed of soap, wax, and lampblack, resisted the corrosive action of the acid enough to leave the inked portions of the stone in relief, providing a raised surface that was then easily re-inked for printing.
Senefelder had stumbled upon the concept of a resist, a substance that resists chemical action on a surface to which it is applied—just as his oily ink resisted his water-based acid. All subsequent lithographic techniques relied on identifying an appropriate resist material, the truly novel feature of Senefelder’s process. Etching, a similar technique, was older, but it required creating grooves in a surface by carefully coordinating the mechanical application of acid with the lines and shapes the etcher wanted. Lithography achieved similar ends while separating the chemical process from the design process, allowing more latitude for error and greater precision. That novel feature was important for Senefelder because of its commercial potential. He wrote of his discovery that “what was the most important for me, this method of printing was entirely new, and I might hope to obtain a franchise and even financial aid.”(14 p9)
Commercial potential was similarly the catalyst for the importation of lithographic techniques into materials manufacture. The lithographic process makes clever use of the properties of materials, but it was not until the mid-twentieth century that its utility for industrial development made it an essential technique for making materials themselves. Those materials were, most prominently, semiconductor systems, which were central to electronics and computing technologies. Semiconductors offered tantalizing potential for commercial exploitation, which incentivized the development of new tools to produce them more effectively. We will consider three varieties of lithographic technique that emerged as a result: photolithography, electron beam lithography, and X-ray (and extreme ultraviolet) lithography.
Photolithography
Some substances change their chemical properties when exposed to light, a fact well known by the early twentieth century. It was, after all, the basis for photography. But early photography required cumbersome glass plates that were fragile and difficult to transport, which incentivized companies to experiment with new photographic media. This line of research opened up the possibility of photolithography, which uses a beam of light to create a pattern on a surface to which a photosensitive resist has been evenly applied. The light changes the properties of some parts of the resist, making it either more or less soluble. This allows it to be developed, leaving the pattern behind.
Historians regard the interwar period as the beginning of a golden age of American industrial research, in which corporate laboratories effectively linked fundamental research with patentable developments, often with the goal of improving materials.(4) This strategy paid off handsomely for Eastman Kodak. Decades of chemical research bore fruit in the 1950s and 1960s in the form of a plethora of patents on photoresists. Just as Senefelder’s ink resist made it possible to etch letters or images on a stone surface, a photoresist made it possible to etch circuits on a wafer of a semiconductor such as silicon, known as a substrate, by exposing portions of the resist with ultraviolet (UV) light (figure 2). Photoresists that were effective, easy to use, simple to manufacture, and, crucially, under patent, made photolithography commercially viable and encouraged its spread.
The late 1950s into the 1960s saw photolithography become a favored means by which to manufacture semiconductor devices. After a photoresist was applied to a doped semiconducting wafer, exposure, development, and acid etching could be used to create topographic patterns of hills and valleys on the wafer’s surface. The acid and resist would then be washed off and new materials, such as silicon dioxide (an insulating material) or copper (a conductor), would be added to the surface in a new pattern of hills and valleys using evaporation-condensation or other processes. Then a new layer of photoresist would be applied and the process would begin again. After several (today, several dozen) such steps, a complex three-dimensional circuit made of insulating, conducting, and semiconducting regions would take shape. Transistor effects would take place in some of these regions, and the circuit as a whole would include many such transistors (today, often more than a billion). Because the transistor is a powerful amplifier or an efficient on-off switch, circuits containing many transistors are useful in digital communications, signals processing, and computing.
The transistor had been invented in 1947, about a decade before photolithography was used to create “integrated circuits” containing many transistors. Its potential to replace the vacuum tubes in giant, unreliable, slow, power-intensive digital computers such as ENIAC was immediately obvious. That potential was difficult to realize in practice, however, because early transistors were difficult to manufacture at industrial scales.(10) The ability to easily write transistors onto a surface using photolithography changed that. Because the transistors could now be capped by an inert “passivation layer,” and because the connections between circuit components could now be written directly into the wafer (instead of soldering wires together), integrated circuits made with photolithography were much more reliable and lasted much longer than circuits made with vacuum tubes or discrete components. Reliability was paramount for the military, the most important early customer for integrated circuits.
Almost from the first, though, manufacturers realized that integrated circuits manufactured using photolithography would also have tremendous commercial advantages for civilian markets. In particular, circuits written with photolithography could be made much smaller. Miniaturized circuits are faster and can be packaged into smaller devices such as today’s mobile telephones. Smaller circuits are also cheaper, because it costs roughly the same amount of money to process a given area of silicon—so the more components that can go into that given area, the cheaper each component is.(9) Photolithography was therefore an essential prerequisite for the rise of Silicon Valley and the rapid increase in computing power through the second half of the twentieth century.(13)
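The cost logic above can be made concrete with a toy calculation. All of the figures below (wafer cost, usable area, component pitch) are illustrative assumptions, not industry data; only the quadratic scaling matters:

```python
# Toy model: wafer processing cost is roughly fixed per unit area,
# so shrinking features lowers the cost per component.
# All numbers are illustrative assumptions, not historical figures.

wafer_cost = 1000.0       # dollars to process one wafer (assumed)
wafer_area_mm2 = 70000.0  # usable wafer area in mm^2 (assumed)

def cost_per_component(pitch_um: float) -> float:
    """Cost of one component if each occupies a square of the given pitch."""
    component_area_mm2 = (pitch_um * 1e-3) ** 2
    components_per_wafer = wafer_area_mm2 / component_area_mm2
    return wafer_cost / components_per_wafer

# Halving the pitch quadruples the component count per wafer
# and so quarters the cost per component.
ratio = cost_per_component(10.0) / cost_per_component(5.0)
print(ratio)  # ≈ 4.0
```

The wafer cost cancels out of the ratio, which is why the argument holds regardless of the actual processing cost: cost per component scales with the square of the feature pitch.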
But photolithography ran up against some fundamental limits. Typically, the procedure involved an opaque mask into which the desired circuit pattern had been cut. The mask would be placed above a semiconducting surface (usually silicon) prepared with a photoresist and exposed to a burst of UV light, which would impart the pattern to the resist for developing. But the wavelength of UV light placed a lower bound of a few microns on the size of the elements photolithography could produce. Higher resolutions—that is, smaller components and denser circuitry—would require other lithographic techniques.(12 p124–125,17)
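The wavelength bound described above is conventionally expressed through the Rayleigh resolution criterion, which gives the minimum printable feature size as roughly k1 · λ / NA, where λ is the exposure wavelength, NA the numerical aperture of the optics, and k1 a process-dependent factor. A minimal sketch, with k1 and NA values assumed for illustration rather than taken from any historical system:

```python
def min_feature_nm(wavelength_nm: float, k1: float = 0.8, na: float = 0.15) -> float:
    """Rayleigh criterion: smallest printable feature ≈ k1 * wavelength / NA.

    The default k1 and na are illustrative guesses for early UV optics,
    not measured historical values.
    """
    return k1 * wavelength_nm / na

# Mercury-lamp UV at 365 nm with modest optics yields a limit on the
# order of a couple of microns, the scale the text describes.
print(min_feature_nm(365.0) / 1000)  # roughly 1.9 (microns)
```

Shrinking λ (the route taken by EBL, EUV, and XRL) or raising NA and lowering k1 (the route taken by optical lithography's incremental improvers) are the two levers the rest of this article traces.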
Electron Beam Lithography
By the 1960s, materials scientists had been putting electrons to work for some time in electron microscopes (see the contributions by Tom Vogt and Pedro Ruiz-Castell in the next section). It is therefore unsurprising that electron beam lithography (EBL), which emerged in the 1960s, did so as a spin-off of electron microscopy. EBL began, in fact, by turning a nuisance that plagued electron microscopy into an asset. Although electron beams offered unparalleled resolution for imaging samples, the electrons used to bombard those samples invariably contaminated them, changing their chemical properties and surface features. But whereas this is a problem if you are creating detailed images, it is precisely the point of a lithographic procedure.(18 p3–5) The story of how the bugbear of electron microscopy morphed into the basis for EBL takes us to Cambridge, England, and to the close university-industrial partnerships that grew after World War II.
Cambridge University in the 1960s was home to Charles Oatley (figure 3), a pioneer of the scanning electron microscope. In addition to being a virtuoso of electron beams, he was a prodigious graduate advisor. His doctoral students, known as “Oatley’s boys,” were among the first to see commercial potential in the electron microscope, which they pursued through the Cambridge Instrument Company.(13 p6,5) When Oatley’s group began to pursue electron beams as a lithographic tool, therefore, a ready commercialization pathway was already in place.
EBL was attractive because it promised to bring circuit elements smaller than one micron—around the limit of UV photolithography—within reach, and therefore to continue the miniaturization of microchips. Through the 1960s, groups in Europe, Asia, and the United States proposed procedures that could repurpose the electron beams used in electron microscopes for lithography, but they were limited by the lack of a suitable resist—photoresists did not respond well to electron bombardment. The first breakthrough came in 1967, when a group at IBM developed poly-methyl methacrylate (PMMA), the first successful electron beam resist.(17 p8) In EBL, the properties of the resist rather than the size of the electron set the resolution, since electrons, although very small, tend to scatter through any material they bombard. PMMA contained electron scattering sufficiently to allow circuit elements on the order of tens of nanometers.(1, p134)
But resist materials were not the only bottleneck. Drawing circuits accurately at the sub-micron scale required exacting control standards, which could only be reliably achieved by computerizing the instrument’s control systems. The collaboration between Oatley’s group at Cambridge University and the Cambridge Instrument Company facilitated the development of the first computer-controlled commercial electron beam lithograph in 1969.(17 p9,19) This and later commercial EBL systems were expensive, especially when compared with photolithography systems, but the computing and telecommunications industries were willing to bear the expense in order to ensure that circuitry miniaturization proceeded apace. Two of Oatley’s students, Fabian Pease at AT&T Bell Laboratories and Alec Broers at IBM, were particularly important in those two companies’ in-house development of EBL systems used for chip manufacturing. AT&T’s technology was then licensed to firms such as Perkin-Elmer, Etec, and Varian, which then made it available to the semiconductor industry.
Many proponents of EBL dreamed that it would replace optical lithography in mass production of the chips used in consumer products such as personal computers, telephones, and televisions. The problem, though, is that EBL is a serial process, in which the beam writes only one part of the circuit pattern at a time, whereas optical lithography is a parallel process in which the pattern is written all at once. EBL is therefore much slower than optical lithography, and increasingly so as chips have become more complex. EBL is therefore only used to “direct-write” very specialized chips, for instance for military or space applications. Its more common niche today is in making the masks that are then used in optical lithography. In other words, the different varieties of lithography are partly competitors, but are also mutually reinforcing parts of a single lithography complex.
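The serial-versus-parallel contrast can be sketched with a toy throughput model. The feature count, dwell time, and exposure time below are invented purely for illustration:

```python
# Toy throughput comparison (all numbers are illustrative assumptions):
# optical lithography exposes a whole field in one flash, while EBL
# must visit each pattern element in turn.

features = 1_000_000_000   # pattern elements on a hypothetical chip
ebl_dwell_ns = 100         # beam dwell time per element in ns (assumed)
optical_exposure_s = 0.5   # one flash exposes the whole field (assumed)

# EBL write time grows linearly with pattern complexity;
# the optical exposure time does not.
ebl_time_s = features * ebl_dwell_ns / 1e9
print(ebl_time_s)          # 100.0 seconds for this toy pattern
print(optical_exposure_s)  # constant, however complex the pattern
```

However small the per-element dwell time becomes, the linear term eventually dominates as feature counts climb, which is why EBL retreated to low-volume niches such as mask-making and specialized direct-write chips.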
Extreme UV and X-ray Lithography
Another way to overcome the resolution limitations of UV photolithography was to use shorter-wavelength electromagnetic radiation. This approach would lead to the advent of extreme ultraviolet (EUV) and X-ray lithography (XRL) in the 1970s (ultraviolet and X-ray frequencies lie next to each other on the electromagnetic spectrum, so extreme ultraviolet lithography converges on “soft X-ray” lithography). The path-breaking XRL research was conducted at Lincoln Laboratory, a US government-funded defense lab associated with the Massachusetts Institute of Technology. At the height of the Cold War, microelectronic components had become critical for the US military, which used them in aircraft, submarines, and missile guidance systems. The defense establishment therefore generously funded research into materials science, in the process doing a considerable amount to define the field as we know it.(11 p13–16) Indeed, defense patronage benefited materials research not just through the direct stimulation of military-relevant fields such as microelectronics, but also through indirect cross-fertilization from one military-sponsored field to another. Development of EUV, for instance, has borrowed heavily from laser technology, which the military sponsored for communications, anti-ballistic missile defense, and other applications. Naturally, once EUV and XRL moved beyond the proof-of-concept stage, they were also quickly adopted at sites like IBM and Bell Laboratories, which were always on the lookout for new ways to improve their microelectronics production processes.(16)
XRL, like EBL, required new resists. It also posed the additional challenge of developing masks that could withstand highly penetrating X-rays. But with these technical challenges solved, the technique offered several advantages. EBL was limited to negative resists, whereas XRL could be used with both positive and negative resists. Rather than relying on the precise computer control needed for EBL, XRL could be executed with a single, broad area exposure.(15) It therefore developed alongside EBL and ion beam lithography, a technique that, as the name implies, used ions in place of electrons. Each technique had its advantages and drawbacks, and preferences for one over the others were often driven by local considerations, leading to what were known as the “beam wars.”(17)
XRL would tie lithography, previously very much a lab-bench practice, to big science. It developed in parallel with synchrotron radiation sources. Synchrotron radiation occurs when charged particles are accelerated in curved trajectories, which causes them to give off radiation in the form of photons. The effect was problematic for high-energy particle accelerators, since the emitted radiation sapped beam energy. But it proved ideal for materials research applications (much like the damage caused by electron microscope beams ended up being ideal for EBL). Synchrotron sources could provide highly coherent X-ray beams—in which the photons in the beam all fall into a precise frequency range—making them ideal for lithography, where better coherence translated into finer control. In the 1970s, several countries commissioned new facilities dedicated to producing synchrotron radiation, including the United States, which invested in the National Synchrotron Light Source at Brookhaven National Laboratory (figure 4).(6)
X-ray lithography represents the transition of a mature technique into a new historical context. Robert P. Crease and Catherine Westfall have called the type of large-scale research that began to dominate in the 1980s and 1990s the New Big Science.(7) Like the big science projects of nuclear and high energy physics, synchrotron sources were large, expensive installations, but they differed in a number of ways, including being designed for outside users rather than the laboratory staff. Industry therefore played a much more prominent role in the New Big Science, and XRL was one of industry’s most prominent routes into it. In fact, in the 1980s, many observers, particularly in Germany and Japan, were hopeful that commercial synchrotrons would soon be built next to chip factories to facilitate mass-production XRL.(8)
Yet commercial synchrotrons have not become commonplace, and the optimistic predictions for EUV and XRL, just as for EBL, have not come true. Instead, optical lithography remains dominant, thanks to its continued improvement. The ability to use optical lithography to make commercial circuit components with features smaller than the diffraction limit of the wavelengths employed is truly astounding, and quite surprising to many industrial and academic scientists and engineers who have staked their careers on the eventual demise of optical lithography. No single factor explains the surprising obduracy of optical lithography. Rather, optical lithography has survived because firms have been ingenious in coming up with incremental innovations to every single aspect of this technological complex. Photoresists, for instance, are no longer the simple lacquers of the early days. Instead, firms today use “chemically amplified resists,” initially developed at IBM in the 1980s, to sharpen the edges of the patterns cut into the wafer.(3) Materials for polishing wafers, interconnecting circuit components, packaging materials, and so on have all vastly improved. EBL, EUV, and XRL are therefore unlikely to replace photolithography, or indeed to become widespread for more than specialized applications, until their technological complexes are capable of similar rapid, incremental change along multiple dimensions simultaneously.
Conclusions
Lithography for semiconductor production developed at the confluence of a remarkable diversity of materials science techniques. It required the ability to generate and control photon, electron, and ion beams. It demanded the sensitive command of crystal growing necessary to produce substrates with the right properties. It depended upon vast chemical knowledge of resists, solvents, and washes. Moreover, mature systems were coordinated by computing technologies built from the very microelectronics that earlier systems had been used to manufacture. These ingredients were mixed in the crucible of commerce. Much like Alois Senefelder, who was motivated to pursue print lithography because he recognized its potential to attract investment and ultimately make money, the pioneers of semiconductor lithography proceeded with one eye squarely on the computing industry and its relentless push to miniaturize electronic components.
References
1. Broers AN, Hoole ACF, Ryan JM. Electron beam lithography—resolution limits. Microelectronic Engineering. 1996;32:131–142.
2. Brock DC, editor. Understanding Moore’s law: four decades of innovation. Philadelphia: Chemical Heritage Foundation; 2006.
3. Brock DC. Patterning the world: the rise of chemically-amplified photoresists. Philadelphia: Chemical Heritage Foundation; 2009.
4. Cerveaux A. Taming the microworld: DuPont and the interwar rise of fundamental industrial research. Technology and Culture. 2013;54(2):262–88.
5. Cattermole MJG, Wolfe AF. Horace Darwin’s shop: a history of the Cambridge Scientific Instrument Company, 1878–1968. Bristol: Adam Hilger; 1987.
6. Crease RP. The National Synchrotron Light Source, part I: bright idea. Physics in Perspective. 2007;10(4):438–67.
7. Crease RP, Westfall C. The new big science. Physics Today. 2016;69(5):30–36.
8. Heuberger A. X-ray lithography. Microelectronic Engineering. 1986;5(1–4):3–38.
9. Lathrop JW. The Diamond Ordnance Fuze Laboratory’s photolithographic approach to microcircuits. IEEE Annals of the History of Computing. 2013;35(1):48–55.
10. Leslie SW. Blue collar science: bringing the transistor to life in the Lehigh Valley. Historical Studies in the Physical and Biological Sciences. 2001;32(1):71–113.
11. Martin JD. What’s in a name change?: solid state physics, condensed matter physics and materials science. Physics in Perspective. 2015;17(1):3–32.
12. Mody CCM. The long arm of Moore’s law: microelectronics and American science. Cambridge, MA: MIT Press, 2017.
13. Schattenburg ML. History of the “three beams” conference, the birth of the information age and the era of lithography wars. 2007. Available from: http://eipbn.org/wp-content/uploads/2015/01/EIPBN_history.pdf. [Accessed 2017 Jun 28].
14. Senefelder A. The invention of lithography. Muller JW, trans. New York: Fuchs & Lang; 1911.
15. Smith HI, Spears DL, Bernard SE. X-ray lithography: a complementary technique to electron beam lithography. Journal of Vacuum Science and Technology. 1973;10(6):913–17.
16. Spiller E. Early history of X-ray lithography at IBM. IBM Journal of Research and Development. 1993;37(4):291–97.
17. Thompson LF. An introduction to lithography. In Thompson LF, Willson CG, Bowden MJ, editors. Introduction to microlithography. Washington, DC: American Chemical Society; 1983.
18. Utke I, Moshkalev S, Russell P. Nanofabrication using focused ion and electron beams: principles and applications. Oxford: Oxford University Press; 2012.
19. Wallman BA. From microscopy to lithography. In Breton BC, Dennis M, Smith KCA, editors. Advances in imaging and electron physics, vol. 133, Sir Charles Oatley and the scanning electron microscope, 350–86. Amsterdam: Elsevier; 2004.