Saturday, October 15, 2016

Quantum Entanglement and Quantum Computing

Link: http://www.caltech.edu/news/quantum-entanglement-and-quantum-computing-39090
Watson Lecture Preview
News Writer: 
Douglas Smith

John Preskill, the Richard P. Feynman Professor of Theoretical Physics, is himself deeply entangled in the quantum world. Different rules apply there, and objects that obey them are now being made in our world, as he explains at 8:00 p.m. on Wednesday, April 3, 2013, in Caltech's Beckman Auditorium. Admission is free.

 

Q: What do you do?

A: I'm trying to understand what a quantum computer would be capable of, how we could build one, and whether it would really work. My background is in particle theory, a subject I still love, but in the spring of 1994 a mathematician at Bell Labs named Peter Shor [BS 1981] discovered an algorithm for factoring large numbers with a quantum computer. I got really excited by this, because it moved the boundary separating "easy" problems, which we can eventually expect to solve with advanced technologies, from truly hard problems that we may never be able to solve. There are problems we can solve using quantum physics that we couldn't solve otherwise. The crucial problem is protecting a quantum computer from the various kinds of "noise" that could destroy quantum entanglement, and we've made a lot of progress on that.

 

Q: OK, so what's "entanglement"?

A: It's the correlations between the parts of a system. Suppose you have a 100-page book with print on every page. If you read 10 pages, you'll know 10 percent of the contents. And if you read another 10 pages, you'll learn another 10 percent. But in a highly entangled quantum book, if you read the pages one at a time—or even 10 at a time—you'll learn almost nothing. The information isn't written on the pages. It's stored in the correlations among the pages, so you have to somehow read all of them at once.
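A standard two-qubit example (a textbook illustration, not from the interview) makes the "quantum book" concrete. For the Bell state

    |\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}, \qquad \rho_A = \mathrm{Tr}_B\,|\Phi^+\rangle\langle\Phi^+| = \tfrac{1}{2}I,

each qubit on its own is in the maximally mixed state, so examining one "page" at a time reveals nothing; the information, namely that the two bits always agree, is stored entirely in the correlations between them.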

There's another important difference: If Alice and Bob both read this morning's New York Times, they will have perfectly correlated information. And if Charlie comes along and reads the same paper later on, he will be just as strongly correlated with Alice as Alice is with Bob, and Bob will be just as correlated with Charlie as he is with Alice. But if Alice reads her quantum newspaper and Bob reads his, they will learn almost nothing until they get together and share their information. Now, when Charlie comes along, Alice and Bob have already used up all their ability to be entangled, and he's completely left out. Entanglement is monogamous—if Alice and Bob are as entangled as they can be, neither of them can entangle with Charlie at all. So if Alice wants to be entangled with both Bob and Charlie, there's a limit to how entangled she can be with either one. They have to work out some sort of compromise.
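The monogamy Preskill describes has a standard quantitative form for three qubits, the Coffman-Kundu-Wootters inequality (added here for reference, not part of the interview):

    \tau(A|B) + \tau(A|C) \le \tau(A|BC),

where \tau is the "tangle," a measure of pairwise entanglement. If Alice and Bob are maximally entangled, the left-hand side already saturates the bound, leaving no tangle for Alice to share with Charlie.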

 

Q: What gets you excited about this?

A: The technology is emerging to make it possible to do things we've never done before. We were taught in school that classical physics applies to things you can see, and quantum physics applies to the world at the scale of atoms and below. We're rebelling against that by making systems that are big enough to see, yet still exhibit quantum behavior. For example, Professor of Applied Physics Oskar Painter [MS 1995, PhD 2001] has made a tiny silicon bar that's suspended in space, and he's successfully cooled it all the way down to its quantum-mechanical ground state. It vibrates in a mode that corresponds to its lowest quantum state. He hasn't entangled such bars yet, but he knows how to do it.

We're exploring a new frontier of physics. It's not the frontier of short distances, like in particle physics; or of long distances, like in cosmology. It's what you might call the entanglement frontier.

 

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Notes from the Back Row: "Quantum Entanglement and Quantum Computing"

Link: http://www.caltech.edu/news/notes-back-row-quantum-entanglement-and-quantum-computing-39378
News Writer: 
Douglas Smith

John Preskill, the Richard P. Feynman Professor of Theoretical Physics, is hooked on quanta. He was applying quantum theory to black holes back in 1994 when mathematician Peter Shor (BS '81), then at Bell Labs, showed that a quantum computer could factor a very large number in a very short time. Much of the world's confidential information is protected by codes whose security depends on numerical "keys" large enough to not be factorable in the lifetime of your average evildoer, so, Preskill says, "When I heard about this, I was awestruck." The longest number ever factored by a real computer had 193 digits, and it took "several months for a network of hundreds of workstations collaborating over the Internet," Preskill continues. "If we wanted to factor a 500-digit number instead, it would take longer than the age of the universe." And yet, a quantum computer running at the same processor speed could polish off 193 digits in one-tenth of a second, he says. Factoring a 500-digit number would take all of two seconds.
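The gulf between those numbers reflects how the two approaches scale. A back-of-the-envelope sketch (rough figures, not from the article) compares the heuristic running time of the general number field sieve, the best known classical factoring algorithm, with the roughly cubic gate count of Shor's algorithm:

    import math

    def gnfs_ops(digits):
        """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for a number with this many digits."""
        ln_n = digits * math.log(10)
        c = (64 / 9) ** (1 / 3)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def shor_gates(digits):
        """Rough gate-count scaling for Shor's algorithm, about (number of bits)^3."""
        bits = digits * math.log2(10)
        return bits ** 3

    for d in (193, 500):
        print(f"{d} digits: GNFS ~1e{math.log10(gnfs_ops(d)):.0f} ops, "
              f"Shor ~1e{math.log10(shor_gates(d)):.0f} gates")

Going from 193 to 500 digits multiplies the classical estimate by roughly eleven orders of magnitude, while the quantum gate count grows only by a factor of about (500/193)^3, or roughly 17, which is consistent with months stretching to "longer than the age of the universe" classically while a tenth of a second becomes a couple of seconds on a quantum machine.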

While an ordinary computer chews through a calculation one bite at a time, a quantum computer arrives at its answer almost instantaneously because it essentially swallows the problem whole. It can do so because quantum information is "entangled," a state of being that is fundamental to the quantum world and completely foreign to ours. In the world we're used to, the two socks in a pair are always the same color. It doesn't matter who looks at them, where they are, or how they're looked at. There's no such independent reality in the quantum world, where the act of opening one of a matched pair of quantum boxes determines the contents of the other one—even if the two boxes are at opposite ends of the universe—but only if the other box is opened in exactly the same way. "Quantum boxes are not like soxes," Preskill says. (If entanglement sounds like a load of hooey to you, you're not alone. Preskill notes that Albert Einstein famously derided it back in the 1930s. "He called it 'spooky action at a distance,' and that sounds even more derisive when you say it in German—'Spukhafte Fernwirkungen!'")

An ordinary computer processes "bits," which are units of information encoded in batches of electrons, patches of magnetic field, or some other physical form. The "qubits" of a quantum computer are encoded by their entanglement, and these entanglements come with a big Do Not Disturb sign. Because the informational content of a quantum "box" is unknown until you open it and look inside, qubits exist only in secret, making them ideal for spies and high finance. However, this impenetrable security is also the quantum computer's downfall. Such a machine would be morbidly sensitive—the slightest encroachment from the outside world would demolish the entanglement and crash the system.

Ordinary computers cope with errors by storing information in triplicate. If one copy of a bit gets corrupted, it will no longer match the other two; error-detecting software constantly checks the three copies against one another and returns the flipped bit to its original state. Fixing flipped bits when you're not allowed to look at them seems an impossible challenge on the face of it, but after reading Shor's paper Preskill decided to give it a shot. Over the next few years, he and his grad student Daniel Gottesman (PhD '97) worked on quantum error correction, eventually arriving at a mathematical procedure by which indirectly measuring the states of five qubits would allow an error in any one of them to be fixed.
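The classical scheme Preskill starts from is simple enough to sketch (a minimal illustration of triple redundancy and majority voting, not the quantum procedure itself):

    def encode(bit):
        """Store one logical bit as three physical copies."""
        return [bit, bit, bit]

    def correct(copies):
        """Majority vote: a single flipped copy is outvoted and repaired."""
        majority = 1 if sum(copies) >= 2 else 0
        return [majority] * 3

    codeword = encode(1)
    codeword[0] ^= 1            # noise flips one copy
    print(correct(codeword))    # [1, 1, 1]: the error is detected and undone

The quantum version cannot copy qubits or read them directly, so instead it measures carefully chosen joint properties of a small block of qubits; the outcomes identify which error occurred without revealing, or disturbing, the encoded information.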

This changed the barriers facing practical quantum computation from insurmountable to merely incredibly difficult. The first working quantum computers, built in several labs in the early 2000s, were based on lasers interacting with what Preskill describes as "a handful" of trapped ions to perform "a modest number of [logic] operations." An ion trap is about the size of a thermos bottle, but the laser systems and their associated electronics take up several hundred square feet of lab space. With several million logic gates on a typical computer chip, scaling up this technology is a really big problem. Is there a better way? Perhaps. According to Preskill, his colleagues at Caltech's Institute for Quantum Information and Matter are working out the details of a "potentially transformative" approach that would allow quantum computers to be made using the same silicon-based technologies as ordinary ones.

 "Quantum Entanglement and Quantum Computing" is available for download in HD from Caltech on iTunesU. (Episode 19)

Fu, Harrison, and Preskill Elected to the National Academy of Sciences

Link: http://www.caltech.edu/news/fu-harrison-and-preskill-elected-national-academy-sciences-42717
News Writer: 
Cynthia Eller
Seal of the National Academy of Sciences.

Three professors at Caltech have been elected to the prestigious National Academy of Sciences. The announcement was made Tuesday, April 29, in Washington, D.C.

The new Caltech electees are Gregory C. Fu, Altair Professor of Chemistry; Fiona A. Harrison, Benjamin M. Rosen Professor of Physics; and John P. Preskill, Richard P. Feynman Professor of Theoretical Physics.

Fu is a synthetic organic chemist focusing on transition-metal catalysis and nucleophilic catalysis. He is currently developing enantioselective reactions and exploring the use of copper and nickel catalysts. In 2012, Fu won the Award for Creative Work in Synthetic Organic Chemistry from the American Chemical Society. He is a fellow of both the American Academy of Arts and Sciences (2007) and the Royal Society of Chemistry (2005).

Harrison specializes in observational and experimental high-energy astrophysics. She is the principal investigator for NASA's NuSTAR Explorer Mission. Harrison is recognized for her leadership in the design, development, and launch of NuSTAR, as well as for leading the team in the mission's scientific return. As a result of almost two decades of technology development, NuSTAR is revolutionizing our view of the high-energy X-ray sky. Harrison was elected to the American Academy of Arts and Sciences in 2014, was elected as a fellow of the American Physical Society in 2012, and won a NASA Outstanding Public Leadership Medal in 2013.

Preskill is a theoretical physicist who began his career in particle physics (in particular, the interface between particle physics and cosmology) before moving to a specialization in quantum information and quantum computing. In 2000, Preskill founded the Institute for Quantum Information with the aim of harnessing principles of quantum mechanics to aid in particularly challenging information-processing tasks. He is a fellow of the American Physical Society.

The National Academy of Sciences is a private organization of scientists and engineers dedicated to the furtherance of science and its use for the general welfare. It was established in 1863 by a congressional act of incorporation signed by Abraham Lincoln that calls on the academy to act as an official adviser to the federal government, upon request, in any matter of science or technology.

The election of Fu, Harrison, and Preskill brings the total Caltech membership to 75 faculty and three trustees.

Frederick B. Thompson

Link: http://www.caltech.edu/news/frederick-b-thompson-43160
1922–2014
News Writer: 
Douglas Smith
Professor of Applied Science and Philosophy Frederick B. Thompson in a 1972 photograph.
Credit: Don Ivers/Caltech

 

Frederick Burtis Thompson, professor of applied philosophy and computer science, emeritus, passed away on May 27, 2014. The research that Thompson began in the 1960s helped pave the way for today's "expert systems," such as Watson, IBM's Jeopardy!-winning supercomputer, and the interactive databases used in the medical profession. His work provided quick and easy access to the information stored in such systems by teaching the computer to understand human language, rather than forcing the casual user to learn a programming language.

Indeed, Caltech's Engineering & Science magazine reported in 1981 that "Thompson predicts that within a decade a typical professional [by which he meant plumbers as well as doctors] will carry a pocket computer capable of communication in natural language."

"Natural language," otherwise known as everyday English, is rife with ambiguity. As Thompson noted in that same article, "Surgical reports, for instance, usually end with the statement that 'the patient left the operating room in good condition.' While doctors would understand that the phrase refers to the person's condition, some of us might imagine the poor patient wielding a broom to clean up."

Thompson cut through these ambiguities by paring "natural" English down to "formal" sublanguages that applied only to finite bodies of knowledge. While a typical native-born English speaker knows the meanings of 20,000 to 50,000 words, Thompson realized that very few of these words are actually used in any given situation. Instead, we constantly shift between sublanguages—sometimes from minute to minute—as we interact with other people.

Thompson's computer-compatible sublanguages had vocabularies of a few thousand words—some of which might be associated with pictures, audio files, or even video clips—and a simple grammar with a few dozen rules. In the plumber's case, this language might contain the names and functions of pipe fittings, vendors' catalogs, maps of the city's water and sewer systems, sets of architectural drawings, and the building code. So, for example, a plumber at a job site could type "I need a ¾ to ½ brass elbow at 315 South Hill Avenue," and, after some back-and-forth to clarify the details (such as threaded versus soldered, or a 90-degree elbow versus a 45), the computer would place the order and give the plumber directions to the store.
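A toy sketch of the idea (hypothetical, and far simpler than Thompson's actual systems): when the vocabulary and grammar are confined to one small domain, a request can be parsed with a handful of rules.

    import re

    # A hypothetical miniature "plumber's sublanguage": one request pattern
    # over a small vocabulary of sizes, materials, and fittings.
    REQUEST = re.compile(
        r"I need a (?P<size>[\d/]+ to [\d/]+) (?P<material>brass|copper|pvc) "
        r"(?P<fitting>elbow|tee|coupling) at (?P<address>.+)",
        re.IGNORECASE)

    def parse(sentence):
        """Return the structured order implied by a sentence, or None."""
        match = REQUEST.match(sentence)
        return match.groupdict() if match else None

    print(parse("I need a 3/4 to 1/2 brass elbow at 315 South Hill Avenue"))
    # {'size': '3/4 to 1/2', 'material': 'brass', 'fitting': 'elbow',
    #  'address': '315 South Hill Avenue'}

Thompson's sublanguages used real grammars with a few dozen rules rather than a single pattern, but the principle is the same: restricting the vocabulary and grammar to a finite body of knowledge is what makes understanding tractable.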

Born on July 26, 1922, Thompson served in the Army and worked at Douglas Aircraft during World War II before earning bachelor's and master's degrees in mathematics at UCLA in 1946 and 1947, respectively. He then moved to UC Berkeley to work with logician Alfred Tarski, whose mathematical definitions of "truth" in formal languages would set the course of Thompson's later career.

On getting his PhD in 1951, Thompson joined the RAND (Research ANd Development) Corporation, a "think tank" created within Douglas Aircraft during the war and subsequently spun off as an independent organization. It was the dawn of the computer age—UNIVAC, the first commercial general-purpose electronic data-processing system, went on sale that same year. Unlike previous machines built to perform specific calculations, UNIVAC ran programs written by its users. Initially, these programs were limited to simple statistical analyses; for example, the first UNIVAC was bought by the U.S. Census Bureau. Thompson pioneered a process called "discrete event simulation" that modeled complex phenomena by breaking them down into sequences of simple actions that happened in specified order, both within each sequence and in relation to actions in other, parallel sequences.
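A minimal sketch of discrete event simulation in that sense (illustrative only; the single-server queue and its numbers are hypothetical): events sit in a time-ordered queue, and handling one event may schedule others.

    import heapq

    def simulate(arrival_times, service_time):
        """Single-server queue: pop events in time order; arrivals schedule departures."""
        events = [(t, "arrival") for t in arrival_times]
        heapq.heapify(events)
        server_free_at = 0.0
        while events:
            time, kind = heapq.heappop(events)
            if kind == "arrival":
                start = max(time, server_free_at)          # wait if the server is busy
                server_free_at = start + service_time
                heapq.heappush(events, (server_free_at, "departure"))
            else:
                print(f"t={time:.1f}: customer departs")

    simulate(arrival_times=[0.0, 1.0, 1.5], service_time=2.0)
    # t=2.0, t=4.0, t=6.0: each departure was scheduled by an earlier event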

Thompson also helped model a thermonuclear attack on America's major cities in order to help devise an emergency services plan. According to Philip Neches (BS '73, MS '77, PhD '83), a Caltech trustee and one of Thompson's students, "When the team developed their answer, Fred was in tears: the destruction would be so devastating that no services would survive, even if a few people did. . . . This kind of hard-headed analysis eventually led policy makers to a simple conclusion: the only way to win a nuclear war is to never have one." Refined versions of these models were used in 2010 to optimize the deployment of medical teams in the wake of the magnitude-7.0 Haiti earthquake, according to Neches. "The models treated the doctors and supplies as the bombs, and calculated the number of people affected," he explains. "Life has its ironies, and Fred would be the first to appreciate them."

In 1957, Thompson joined General Electric Corporation's computer department. By 1960 he was working at GE's TEMPO (TEchnical Military Planning Operation) in Santa Barbara, where his natural-language research began. "Fred's first effort to teach English to a computer was a system called DEACON [for Direct English Access and CONtrol], developed in the early 1960s," says Neches.

Thompson arrived at Caltech in 1965 with a joint professorship in engineering and the humanities. "He advised the computer club as a canny way to recruit a small but dedicated cadre of students to work with him," Neches recalls. In 1969, Thompson began a lifelong collaboration with Bozena Dostert, a senior research fellow in linguistics who died in 2002. The collaboration was personal as well as professional; their wedding was the second marriage for each.

Although Thompson's and Dostert's work was grounded in linguistic theory, they moved beyond the traditional classification of words into parts of speech to incorporate an operational approach similar to computer languages such as FORTRAN. And thus they created REL, for Rapidly Extensible Language. REL's data structure was based on "objects" that not only described an item or action but allowed the user to specify the interval for which the description applied. For example:

    Object: Mary Ann Summers
    Attribute: driver's license
    Value: yes
    Start time: 1964
    End time: current

"This foreshadowed today's semantic web representations," according to Peter Szolovits (BS '70, PhD '75), another of Thompson's students.

In a uniquely experimental approach, the Thompsons tested REL on complex optimization problems such as figuring out how to load a fleet of freighters—making sure the combined volumes of the assorted cargoes didn't exceed the capacities of the holds, distributing the weights evenly fore and aft, planning the most efficient itineraries, and so forth. Volunteers worked through various strategies by typing questions and commands into the computer. The records of these human-computer interactions were compared to transcripts of control sessions in which pairs of students attacked the same problem over a stack of paperwork face-to-face or by communicating with each other from separate locations via teletype machines. Statistical analysis of hundreds of hours' worth of seemingly unstructured dialogues teased out hidden patterns. These patterns included a five-to-one ratio between complete sentences—which had a remarkably invariant average length of seven words—and three-word sentence fragments. Similar patterns are heard today in the clipped cadences of the countdown to a rocket launch.

The "extensible" in REL referred to the ease with which new knowledge bases—vocabulary lists and the relationships between their entries—could be added. In the 1980s, the Thompsons extended REL to POL, for Problem Oriented Language, which had the ability to work out the meanings of words not in its vocabulary as well as coping with such human frailties as poor spelling, bad grammar, and errant punctuation—all on a high-end desktop computer at a time when other natural-language processors ran on room-sized mainframe machines.

"Fred taught both the most theoretical and the most practical computer science courses at the Institute long before Caltech had a formal computer science department. In his theory class, students proved the equivalence of a computable function to a recursive language to a Turing machine. In his data analysis class, students got their first appreciation of the growing power of the computer to handle volumes of data in novel and interesting ways," Neches says. "Fred and his students pioneered the arena of 'Big Data' more than 50 years ahead of the pack." Thompson co-founded Caltech's official computer science program along with professors Carver Mead (BS '56, MS '57, PhD '60) and Ivan Sutherland (MS '60) in 1976.

Adds Remy Sanouillet (MS '82, PhD '94), Thompson's last graduate student, "In terms of vision, Fred 'invented' the Internet well before Al Gore did. He saw, really saw, that we would be asking computers questions that could only be answered by fetching pieces of information stored on servers all over the world, putting the pieces together, and presenting the result in a universally comprehensible format that we now call HTML."

Thompson was a member of the scientific honorary society Sigma Xi, the Association for Symbolic Logic, and the Association for Computing Machinery. He wrote or coauthored more than 40 unclassified papers—and an unknown number of classified ones.

Thompson is survived by his first wife, Margaret Schnell Thompson, and his third wife, Carmen Edmond-Thompson; two children by his first marriage, Mary Ann Thompson Arildsen and Scott Thompson; and four grandchildren.

Plans for a celebration of Thompson's life are pending.

Big Data Summer School Is in Session—Virtually

Link: http://www.caltech.edu/news/big-data-summer-school-session-virtually-43622
News Writer: 
Kathy Svitil
MOOCs Infographic
Credit: Lance Hayashida/Caltech Office of Strategic Communications; Reference materials provided by AMT and CTLO

Beginning September 2, Caltech and JPL will be offering an unusual take on the massive open online course (MOOC) model: a two-week-long "virtual summer school" class, providing advanced instruction by experts at Caltech and JPL on the computational skills and methods used in the analysis of complex data sets—that is, of "big data."

Why big data? "Science in the 21st century is becoming increasingly data-driven, and we need new tools for extracting knowledge from massive and complex data sets," says Caltech professor of astronomy George Djorgovski, one of the organizers of the summer school. "Our students and postdocs need to master such skills in order to be effective researchers today."

According to Richard Doyle, program manager of JPL's Information and Data Science Program Office and co-organizer of the course along with Djorgovski and JPL's Dan Crichton, "the challenges of distributed data analytics in the big data era are on the critical path to our future success in conceiving, designing, operating and, most importantly, extracting scientific results from NASA science missions. By joining with Caltech, we combine the intellectual strengths of a leading research institution with JPL's established science, engineering, and technology leadership in accomplishing NASA science missions."

"It is imperative that we begin now to educate our workforce on the nature of the challenges, along with the best available ideas for achieving technical solutions," adds Crichton, director of JPL's Center for Data Science and Technology. "We will be impressing on the students the importance of taking a full life-cycle approach to data-intensive science, from the point of data collection—which may be at Mars, Jupiter or beyond—to grappling with the daunting realities of massive, heterogeneous, highly distributed archived data sets to extract reproducible scientific understanding of Earth, astrophysical, and planetary data. These solutions can apply to many other important fields, such as medicine, health care, and bioinformatics."

"Caltech and JPL are starting a joint research venture in the arena of big data science, and this is our first joint educational offering," says Djorgovski. "It is fittingly both timely and innovative in its approach."

The course has a unique two-tiered format for student enrollment. The first tier consists of a group of 36 official students chosen from a pool of hundreds of applicants. The group includes graduate students, postdocs, and staff scientists from Caltech, JPL, and other institutions in the United States and around the world who already have a strong background in data-driven computing and statistics as well as research experience. Each weekday during the two-week term, these students will watch prerecorded video modules prepared by the course's 11 instructors and then perform hands-on computational exercises to practice what they have learned. Instructors will be available for interactive online sessions.

The second tier is for anyone, anywhere, who wants to take the course, free of charge (but for no credit), through the online learning platform Coursera. The Caltech-JPL Summer School on Big Data Analytics—the first professional summer class offered by Coursera—will be posted at the same time as the regular session, although these students will have no promise of instructor interaction. However, in a twist on the traditional MOOC, which is structured to match an actual classroom learning experience, students will be able to proceed entirely at their own pace. "You can sign up whenever you want. You can go through it at your own pace; take only some of it, or all of it," explains Djorgovski.

At the end of the two-week term, all of the developed content will migrate to Coursera's new On-Demand course platform.

"This is the first Caltech Coursera MOOC using this model, and it is new to Coursera, too," says Leslie Maxfield, director of Academic Media Technologies at Caltech, which supports the Institute's Coursera and edX online courses—now totaling eight in all, with more under development—in collaboration with the Center for Teaching, Learning, and Outreach. "Offering courses as on-demand allows students to fit online education into their busy schedules, and will hopefully increase completion rates," she says. (For a list of Caltech's MOOCs and links to registration for upcoming sessions, go to https://online.caltech.edu/courses).

Finally, in addition to being available indefinitely on Coursera as a stand-alone course, the summer school materials will be used in Djorgovski's spring 2015 Caltech course Methods of Computational Science, which will be offered as a MOOC and used for a "flipped" classroom approach. "A flipped classroom reverses, or 'flips,' when students passively and actively learn," Maxfield explains. "Instead of passively listening to a lecture during class time, students watch online pre-recorded videos and take instant-feedback assessments beforehand. This allows for active, in-class collaborative and creative interactions, such as group problem solving and discussions, directed by their professors."

For more information, visit the course website at http://bigdata.astro.caltech.edu/Home.html.

New Center Supports Data-Driven Research

Link: http://www.caltech.edu/news/new-center-supports-data-driven-research-44589
News Writer: 
Jessica Stoller-Conrad
Visualization of multi-dimensional data is a general challenge for all data-intensive fields. Scientists at the CD3 are exploring novel methods for data visualization that involve spatial dimensions, colors, and shapes.
Credit: C. Donalek and S.G. Djorgovski/Caltech

With the advanced capabilities of today's computer technologies, researchers can now collect vast amounts of information with unprecedented speed. However, gathering information is only one half of a scientific discovery, as the data also need to be analyzed and interpreted. A new center on campus aims to hasten such data-driven discoveries by making expertise and advanced computational tools available to Caltech researchers in many disciplines within the sciences and the humanities.

The new Center for Data-Driven Discovery (CD3), which became operational this fall, is a hub for researchers to apply advanced data exploration and analysis tools to their work in fields such as biology, environmental science, physics, astronomy, chemistry, engineering, and the humanities.

The Caltech center will also complement the resources available at JPL's Center for Data Science and Technology, says director of CD3 and professor of astronomy George Djorgovski.

"Bringing together the research, technical expertise, and respective disciplines of the two centers to form this joint initiative creates a wonderful synergy that will allow us opportunities to explore and innovate new capabilities in data-driven science for many of our sponsors," adds Daniel Crichton, director of the Center for Data Science and Technology at JPL.

At the core of the Caltech center are staff members who specialize in both computational methodology and various domains of science, such as biology, chemistry, and physics. Faculty-led research groups from each of Caltech's six divisions and JPL will be able to collaborate with center staff to find new ways to get the most from their research data. Resources at CD3 will range from data storage and cataloguing that meet the highest "housekeeping" standards, to custom data-analysis methods that combine statistics with machine learning—the development of algorithms that can "learn" from data. The staff will also help develop new research projects that could benefit from large amounts of existing data.

"The volume, quality, and complexity of data are growing such that the tools that we used to use—on our desktops or even on serious computing machines—10 years ago are no longer adequate. These are not problems that can be solved by just buying a bigger computer or better software; we need to actually invent new methods that allow us to make discoveries from these data sets," says Djorgovski.

Rather than turning to off-the-shelf data-analysis methods, Caltech researchers can now collaborate with CD3 staff to develop new customized computational methods and tools that are specialized for their unique goals. For example, astronomers like Djorgovski can use data-driven computing in the development of new ways to quickly scan large digital sky surveys for rare or interesting targets, such as distant quasars or new kinds of supernova explosions—targets that can be examined more closely with telescopes, such as those at the W. M. Keck Observatory, he says.

Mary Kennedy, the Allen and Lenabelle Davis Professor of Biology and a coleader of CD3, says that the center will serve as a bridge between the laboratory-science and computer-science communities at Caltech. In addition to matching up Caltech faculty members with the expertise they will need to analyze their data, the center will also minimize the gap between those communities by providing educational opportunities for undergraduate and graduate students.

"Scientific development has moved so quickly that the education of most experimental scientists has not included the techniques one needs to synthesize or mine large data sets efficiently," Kennedy says. "Another way to say this is that 'domain' sciences—biology, engineering, astronomy, geology, chemistry, sociology, etc.—have developed in isolation from theoretical computer science and mathematics aimed at analysis of high-dimensional data. The goal of the new center is to provide a link between the two."

Work in Kennedy's laboratory focuses on understanding what takes place at the molecular level in the brain when neuronal synapses are altered to store information during learning. She says that methods and tools developed at the new center will assist her group in creating computer simulations that can help them understand how synapses are regulated by enzymes during learning.

"The ability to simulate molecular mechanisms in detail and then test predictions of the simulations with experiments will revolutionize our understanding of highly interconnected control mechanisms in cells," she says. "To some, this seems like science fiction, but it won't stay fictional for long. Caltech needs to lead in these endeavors."

Assistant Professor of Biology Mitchell Guttman says that the center will also be an asset to groups like his that are trying to make sense out of big sets of genomic data. "Biology is becoming a big-data science—genome sequences are available at an unprecedented pace. Whereas it took more than $1 billion to sequence the first genome, it now costs less than $1,000," he says. "Making sense of all this data is a challenge, but it is the future of biomedical research."

In his own work, Guttman studies the genetic code of lncRNAs, a new class of genes that he discovered, largely through computational methods like those available at the new center. "I am excited about the new CD3 center because it represents an opportunity to leverage the best ideas and approaches across disciplines to solve a major challenge in our own research," he says.

But the most valuable findings from the center could be those that stem not from a single project, but from the multidisciplinary collaborations that CD3 will enable, Djorgovski says. "To me, the most interesting outcome is to have successful methodology transfers between different fields—for example, to see if a solution developed in astronomy can be used in biology," he says.

In fact, one such crossover method has already been identified, says Matthew Graham, a computational scientist at the center. "One of the challenges in data-rich science is dealing with very heterogeneous data—data of different types from different instruments," says Graham. "Using the experience and the methods we developed in astronomy for the Virtual Observatory, I worked with biologists to develop a smart data-management system for a collection of expression and gene-integration data for genetic lines in zebrafish. We are now starting a project along similar methodology transfer lines with Professor Barbara Wold's group on RNA genomics."

And, through the discovery of more tools and methods like these, "the center could really develop new projects that bridge the boundaries between different traditional fields through new collaborations," Djorgovski says.

Caltech, JPL Team Up to Take On Big-Data Projects

Link: http://www.caltech.edu/news/caltech-jpl-team-take-big-data-projects-47037
News Writer: 
Kimm Fesenmaier
An image processing tool developed at CD3 helped biologists Yuling Jiao and Elliot Meyerowitz discover that a signaling molecule moves from the primordium to the meristem to promote robustness in leaf patterning in the Arabidopsis thaliana plant. The tool accurately separated co-localized signals (shown in red and green in the image), allowing for the clear visualization of polarity in each cell toward the meristem center.
Credit: Alexandre Cunha/Caltech and Jiyan Qi and Yuling Jiao/Chinese Academy of Sciences

Acknowledging not only the growing need among scientists and engineers for resources that can help them handle, explore, and analyze big data, but also the complementary strengths of Caltech's Center for Data-Driven Discovery (CD3) and JPL's Center for Data Science and Technology (CDST), the two centers have formally joined forces, creating the Joint Initiative on Data Science and Technology.

A kickoff event for the collaboration was held at the end of April at Caltech's Cahill Center for Astronomy and Astrophysics.

"This is a wonderful example of a deep cooperation between Caltech and JPL that we think will serve to strengthen connections between the campus and the lab," says George Djorgovski, professor of astronomy and director of CD3. "We believe the joint venture will enable and stimulate new projects and give both campus and JPL researchers a new competitive advantage."

Individually, each center strives to provide the intellectual infrastructure, including expertise and advanced computational tools, to help researchers and companies from around the world analyze and interpret the massive amounts of information they now collect using computer technologies, in order to make data-driven discoveries more efficient and timely.

"We've found a lot of synergy across disciplines and an opportunity to apply emerging capabilities in data science to more effectively capture, process, manage, integrate, and analyze data," says Daniel Crichton, manager of the CDST. " JPL's work in building observational systems can be applied to several disciplines from planetary science and Earth science to biological research."

The Caltech center is also interested in this kind of methodology transfer—the application of data tools and techniques developed for one field to another. The CD3 recently collaborated on one such project with Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology at Caltech. They used tools based on machine learning that were originally developed to analyze data from astronomical sky surveys to process neurobiological data from a study of autism.

"We're getting some promising results," says Djorgovski. "We think this kind of work will help researchers not only publish important papers but also create tools to be used across disciplines. They will be able to say, 'We've got these powerful new tools for knowledge discovery in large and complex data sets. With a combination of big data and novel methodologies, we can do things that we never could before.'"

Both the CD3 and the CDST began operations last fall. The Joint Initiative already has a few projects under way in the areas of Earth science, cancer research, health care informatics, and data visualization.

"Working together, we believe we are strengthening both of our centers," says Djorgovski. "The hope is that we can accumulate experience and solutions and that we will see more and more ways in which we can reuse them to help people make new discoveries. We really do feel like we're one big family, and we are trying to help each other however we can."

Caltech Theoretical Physicist Receives the 2015 Dirac Medal and Prize

Link: http://www.caltech.edu/news/caltech-theoretical-physicist-receives-2015-dirac-medal-and-prize-47534
News Writer: 
Lori Dajose
Alexei Kitaev
Credit: Lance Hayashida/Caltech

Alexei Kitaev, the Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics, has been awarded the 2015 Dirac Medal and Prize from the Abdus Salam International Centre for Theoretical Physics (ICTP). The prize, named after the esteemed theoretical physicist and Nobel laureate Paul Dirac, is one of the most prestigious honors in theoretical physics. This year it was awarded jointly to Kitaev, Gregory W. Moore of Rutgers University, and Nicholas Read of Yale University for their condensed-matter research.

The work of Kitaev, Moore, and Read has "played a fundamental role in recent advances in our understanding of the quantum states of matter and quantum entanglement theory," according to the ICTP's press release.

Kitaev is cited for proposing an innovative method of computation, called topological quantum computation, which builds upon Moore and Read's theory of non-Abelian anyons. Anyons are special quasiparticles that exist under the conditions of the fractional quantum Hall effect (FQHE). The FQHE is observed in semiconductor structures that contain a thin layer of mobile electrons. When such systems are cooled to very low temperatures and immersed in a strong magnetic field, the electrons form a collective state analogous to a liquid.

"Anyons are like bubbles and lumps in that liquid, which can move around, fuse, or annihilate," Kitaev explains. "However, these quasiparticles have very strange properties: they carry a fractional electron charge and defy the textbook classification into bosons and fermions. For bosons, such as photons, switching the places of two identical particles has no effect, while for fermions like electrons or protons, the particle exchange introduces a minus sign into the calculation. Switching the places of two identical anyons results in an extra factor other than 1 or -1."

"Non-Abelian anyons are even weirder because their state is not determined by where they are spatially; there is also some hidden state in the liquid between them. Exchanging two particles or moving one around the other alters that state."

As part of his proposed method of topological quantum computation, Kitaev suggested that using this hidden state as a quantum computer memory could make such computation more stable and error proof.

"Kitaev's work on fault-tolerant quantum computation using topological quantum phases with non-Abelian quasiparticles has had profound implications in quantum information theory," the award citation notes.

Caltech faculty who have previously been awarded the Dirac Medal are John H. Schwarz, the Harold Brown Professor of Theoretical Physics, Emeritus, in 1989; and John Hopfield, the Roscoe G. Dickinson Professor of Chemistry and Biology, Emeritus, in 2001.

Atomic Fractals in Metallic Glasses

Link: http://www.caltech.edu/news/atomic-fractals-metallic-glasses-47917
News Writer: 
Lori Dajose
An illustration of what the atomic fractal percolation structure may look like.
Credit: Courtesy David Chen/Greer Laboratory/Caltech

Metallic glasses are very strong and elastic materials that appear to the naked eye to be identical to stainless steel. But metallic glasses differ from ordinary metals in that they are amorphous, lacking an orderly, crystalline atomic arrangement. This random distribution of atoms, which is the primary characteristic of all glass materials (such as windowpanes and tableware), gives metallic glasses unique mechanical properties but unpredictable internal structure. Researchers in the Caltech lab of Julia Greer, professor of materials science and mechanics in the Division of Engineering and Applied Science, have shown that metallic glasses do have an atomic-level structure—if you zoom in closely enough—although it differs from the periodic lattices that characterize crystalline metals.

If you looked at a metallic glass on a scale larger than a few atomic diameters, you would see tightly packed, jumbled clusters of atoms. A new study from the Greer group—published in the September 18, 2015 issue of the journal Science—shows that inside each of these clusters, on a scale of about two to three atomic diameters, atoms have a predictable arrangement called a fractal.

Fractals are patterns that are self-similar on different scales, and they can occur quite naturally.

"Take for example a piece of paper crumpled into a ball. If you look at the folds of the paper when it is flattened back after crumpling, it will look qualitatively the same if you zoom in on a smaller portion of the same paper. The scale that you use to examine the paper more or less does not change the way it looks," says David Chen, a fourth-year graduate student in the Greer lab and first author on this new paper.

The group did simulations and experiments to probe the atomic structure of metallic glass alloys of copper, zirconium, and aluminum. In crystalline solids like diamond or gold, atoms or molecules are arranged in an orderly lattice pattern. As a result, the local neighborhood around an atom in a crystalline material is identical to everywhere else in the material. In amorphous metals, every location within the material looks different—except, Greer and her colleagues found, when you zoom in to look at the distribution of atoms at the scale of two to three atomic diameters—about one nanometer. At this level, the same fractal pattern is present, regardless of location within the material. "Within the clusters of atoms that make up a metallic glass, atoms are arranged in a particular kind of fractal pattern called percolation," Chen says.

Other scientists had previously hypothesized that the atoms in metallic glasses are distributed fractally. This, however, creates an apparent paradox: when atoms are distributed fractally, there should be empty space between them. Yet metallic glasses—just like regular metals—are fully dense, meaning that they lack significant pockets of empty space.

"Our group has solved this paradox by showing that atoms are only arranged fractally up to a certain scale," Greer says. "Larger than that scale, clusters of atoms are packed randomly and tightly, making a fully dense material, just like a regular metal. So we can have something that is both fractal and fully dense."

The discovery was made with metallic glasses, but the group's conclusions about fractally arranged atomic structures can be applied to essentially any rigid amorphous material, like the glass in a windowpane or a frozen piece of chewing gum. "Amorphous metals can exhibit unique properties, like unusual strength and elasticity," Chen says. "Now that we know the structure of these materials, we can start studying how their atomic-level arrangement affects their large-scale properties."

In addition to applications within materials science, studies of naturally occurring fractal distributions are of high interest within the fields of mathematics, physics, and computer science. Fractals have long been studied by mathematicians and physicists. Showing how they emerge in a metallic alloy provides a physical foundation for something that has only been studied theoretically.

Other Caltech co-authors on the paper, titled "Fractal atomic-level percolation in metallic glasses," include Qi An, a theoretical and computational materials scientist, and Professor William Goddard, the Charles and Mary Ferkel Professor of Chemistry, Materials Science, and Applied Physics.

Toward a Smarter Grid

Link: http://www.caltech.edu/news/toward-smarter-grid-48410
News Writer: 
Kimm Fesenmaier

Steven Low, professor of computer science and electrical engineering at Caltech, says we are on the cusp of a historic transformation—a restructuring of the energy system similar to the reimagining and revamping that the communication and computer networks experienced over the last two decades, making them layered, with distributed and interconnected intelligence everywhere.

The power network of the future—aka the smart grid—will have to be much more dynamic and responsive than the current electric grid, handling tremendous loads while incorporating intermittent energy production from renewable resources such as wind and solar, all while ensuring that when you or I flip a switch at home or work, the power still comes on without fail.

The smart grid will also be much more distributed than the current network, which controls a relatively small number of generators to provide power to millions of passive endpoints—the computers, machines, buildings, and more that simply consume energy. In the future, thanks to inexpensive sensors and computers, many of those endpoints will become active and intelligent loads like smart devices, or distributed generators such as solar panels and wind turbines. These endpoints will be able to generate, sense, communicate, compute, and respond.

Given these trends, Low says, it is only reasonable to conclude that in the coming decades, the electrical system is likely to become "the largest and most complex cyberphysical system ever seen." And that presents both a risk and an opportunity. On the one hand, if the larger, more active system is not controlled correctly, blackouts could be much more frequent. On the other hand, if properly managed, it could greatly improve efficiency, security, robustness, and sustainability.

At Caltech, Low and an interdisciplinary group of engineers, economists, mathematicians, and computer scientists pulled together by the Resnick Sustainability Institute, along with partners like Southern California Edison and the Department of Energy, are working to develop the devices, systems, theories, and algorithms to help guide this historic transformation and make sure that it is properly managed.

In 2012, the Resnick Sustainability Institute issued a report titled Grid 2020: Towards a Policy of Renewable and Distributed Energy Resources, which focused on some of the major engineering, economic, and policy issues of the smart grid. That report led to a discussion series and working sessions that in turn led to the publication in 2014 of another report called More Than Smart: A Framework to Make the Distribution Grid More Open, Efficient and Resilient.

"One thing that makes the smart grid problem particularly appealing for us is that you can't solve it just as an engineer, just as a computer scientist, just as a control theorist, or just as an economist," says Adam Wierman, professor of computer science and Executive Officer for the Computing and Mathematical Sciences Department. "You actually have to bring to bear tools from all of these areas to solve the problem."

For example, he says, consider the problem of determining how much power various parts of the grid should generate at a particular time. This requires generating an amount of power that matches or closely approximates the amount of electricity demanded by customers. Currently this involves predicting electricity demand a day in advance, updating that prediction several hours before it is needed, and then figuring out how much nuclear power, natural gas, or coal will be produced to meet the demand. That determination is made through markets. In California, the California Independent System Operator runs a day-ahead electricity market in which utility companies and power plants buy and sell power generation for the following day. Then any small errors in the prediction are fixed at the last minute by engineers in a control office, with markets completely out of the picture.

"So you have a balance between the robustness and certainty provided by engineered control and the efficiency provided by markets and economic control," says Wierman. "But when renewable energy comes onto the table, all of a sudden the predictions of energy production are much less accurate, so the interaction between the markets and the engineering is up in the air, and no one knows how to handle this well." This, he says, is the type of problem the Caltech team, with its interdisciplinary approach, is uniquely equipped to address.

Indeed, the Caltech smart grid team is working on projects on the engineering side, projects on the markets side, and projects at the interface.

On the engineering side, a major project has revolved around a complex mathematical problem called optimal power flow that underlies many questions dealing with power system operations and planning. "Optimal power flow can tell you when things should be on or conserving energy, how to stabilize the voltage in the network as solar or wind generation fluctuates, or how to set your thermostat so that you maintain comfort in your building while stabilizing the voltage on the grid," explains Mani Chandy, the Simon Ramo Professor of Computer Science, Emeritus. "The problem has been around for 50 years but is extremely difficult to solve."

Chandy worked with Low; John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering; and a number of Caltech students to devise a clever way to solve the problem, allowing them, for the first time, to compute a solution and then check whether that solution is globally optimal.

"We said, let's relax the constraints and optimize the cost over a bigger set that we can design to be solvable," explains Low. For example, if a customer is consuming electricity at a single location, the problem might ask how much electricity that individual is actually consuming; a relaxation would say that that person is consuming no more than a certain amount—it is a way of adding flexibility to a problem with tight constraints. "Almost magically, it turns out that if I design my physical set in a clever way, the solution for this larger simple set turns out to be the same as it would be for the original set."

The new approach produces a feasible solution for almost all distribution systems—the low-voltage networks that take power from larger substations and ultimately deliver it to the houses, buildings, street lights, and so on in a region. "That's important because many of the innovations in the energy sector in the coming decade will happen on distribution systems," says Low.

Another Caltech project attempts to predict how many home and business owners are likely to adopt rooftop solar panels over the next 5, 10, 20, or 30 years. In Southern California, the number of solar installations has increased steadily for several years. For planning purposes, utility companies need to anticipate whether that growth will continue and at what pace. For example, Low says, if the network is eventually going to comprise 15 or 20 percent renewables, then the current grid is robust enough. "But if we are going to have 50 or 80 percent renewables," he says, "then the grid will need huge changes in terms of both engineering and market design."

Working with Chandy, graduate students Desmond Cai and Anish Agarwal (BS '13, MS '15) developed a new model for predicting how many homes and businesses will install rooftop solar panels. The model has proven highly accurate. Researchers believe that whether or not people "go solar" depends largely on two factors: how much money they will save and their confidence in the new technology. The Caltech model, completed in 2012, indicates that the amount of money that people can save by installing rooftop solar has a huge influence on whether they will adopt the technology. Based on their research, the team has also developed a web-based tool that predicts how many people will install solar panels using a utility company's data. Southern California Edison's planning department is actively using the tool.

On the markets side, Caltech researchers are doing theoretical work looking at the smart grid and the network of markets it will produce. Electricity markets can be both complicated and interesting to study because unlike a traditional market—a single place where people go to buy and sell something—the electricity "market" actually consists of many networked marketplaces interacting in complicated ways.

One potential problem with this system and the introduction of more renewables, Wierman says, is that it opens the door for firms to manipulate prices by turning off generators. Whereas the operational status of a normal generator can be monitored, with solar and wind power, it is nearly impossible to verify how much power should have been produced because it is difficult to know whether it was windy or sunny at a certain time. "For example, you can significantly impact prices by pushing—or not pushing—solar energy from your solar farm," Wierman says. "There are huge opportunities for strongly manipulating market structure and prices in these environments. We are beginning to look at how to redesign markets so that this isn't as powerful or as dangerous."

An area of smart grid research where the Caltech team takes full advantage of its multidisciplinary nature is at the interface of engineering and markets. One example is a concept known as demand response, in which a mismatch between energy supply and demand can be addressed from the demand side (that is, by involving consumers), rather than from the power-generation side.

As an example of demand response, some utilities have started programs where participants, who have smart thermostats installed in their homes in exchange for some monetary reward, allow the company to turn off their air conditioners for a short period of time when it is necessary to reduce the demand on the grid. In that way, household air conditioners become "shock absorbers" for the system.

"But the economist says wait a minute, that's really inefficient. You might be turning the AC off for people who desperately want it on and leaving it on for people who couldn't care less," says John Ledyard, the Allen and Lenabelle Davis Professor of Economics and Social Sciences. A counter proposal is called Prices to Devices, where the utility sends price signals to devices, like thermostats, in homes and offices, and customers decide if they want to pay for power at those prices. Ledyard says while that is efficient rationing in equilibrium, it introduces a delay between the consumer and the utility, creating an instability in the dynamics of the system.

The Caltech team has devised an intermediate proposal that removes the delay in the system. Rather than sending a price and having consumers react to it, their program has consumers enter their sensitivity to various prices ahead of time, right on their smart devices. This can be done with a single number. Then those devices deliver that information to the algorithm that operates the network. For example, a consumer might program his or her smart thermostat to say, in effect, "If a kilowatt-hour of power costs $1 and the temperature outside is 90 degrees, I want you to keep the air conditioner on; if the price is $5 and the temperature outside is 80 degrees, go ahead and turn it off."

"The consumer's response is handled by the algorithm, so there's no lag," says Ledyard.

Currently, the Caltech smart grid team is working closely with Southern California Edison to set up a pilot test in Orange County involving several thousand households. The homes will be equipped with various distributed energy resources including rooftop solar panels, electric vehicles, smart thermostats for air conditioners, and pool pumps. The team's new approach to the optimal power flow problem and demand response will be tested to see whether it can keep a miniature version of the future smart grid stable.

Such experiments are crucial for preparing for the major changes to the electrical system that are certainly coming down the road, Low says. "The stakes are high. In the face of this historic transformation, we need to do all that we can to minimize the risk and make sure that we realize the full potential."

Peters Named New Director of Resnick Sustainability Institute

Linq: http://www.caltech.edu/news/peters-named-new-director-resnick-sustainability-institute-48549
News Writer: 
Tom Waldman
Jonas Peters, Bren Professor of Chemistry

Jonas C. Peters, the Bren Professor of Chemistry, has been appointed director of the Resnick Sustainability Institute. Launched in 2009 with an investment from philanthropists Stewart and Lynda Resnick and located in the Jorgensen Laboratory on the Caltech campus, the Resnick Institute concentrates on transformational breakthroughs that will contribute to the planet's sustainability over the long term.

The Resnick Sustainability Institute, which involves both the Chemistry and Chemical Engineering and Engineering and Applied Science divisions, serves as a prime example of the multidisciplinary approach prized by Caltech.

"Some of the most important challenges in sustainability are also among the most complex," says Peters, who has been a member of the Caltech faculty since 1999. "We are committed to working on problems that are uniquely suited to the Caltech environment. This means starting with fundamentals and leveraging the cross-catalysis of ideas and creativity of this campus to come up with ways to have substantial impact."

Because the world's natural resources are dwindling, Peters wants to continue focusing the Resnick Institute's efforts on efficient energy generation, storage, and use. Some current projects include the development of advanced photovoltaics, photoelectrochemical solar fuels, and cellulosic biofuels; energy-conversion work on batteries and fuel cells; efficiency in industrial catalysis; and advanced research on electrical grid control and distribution.

In addition, the Resnick Institute is exploring new opportunities in the area of water sustainability. In September, the institute hosted a workshop entitled "Water Resilience and Sustainability: Can We Make LA Water Self-Sufficient?" The workshop examined the long-term potential for sustainable water use in urban environments, using the Los Angeles area as a case study.

"The Resnick Sustainability Institute is continuing to build one of the great centers for sustainability research," says Peters. "We are doing this by supporting the most talented young scientists and engineers committed to tackling the fascinating, critical, and yet very difficult challenges of this field."

When Harry Met Arnold

Linq: http://www.caltech.edu/news/when-harry-met-arnold-48752
A Milestone in Chemistry
News Writer: 
Douglas Smith
Caltech chemist Harry Gray
Harry B. Gray, Caltech's Arnold O. Beckman Professor of Chemistry and founding director of the Beckman Institute.
Credit: Lance Hayashida/Caltech

On November 12 and 13, the Beckman Institute at Caltech hosted a symposium on "The Shared Legacy of Arnold Beckman and Harry Gray." The two began a close working relationship in the late 1960s, when Gray arrived at Caltech. In this interview, Gray provides some background.

How did you come to Caltech?

I grew up in southern Kentucky. I got my BS in chemistry in 1957, and my professors told me to go to grad school at Northwestern University in Evanston, Illinois, to continue my studies in synthetic organic chemistry. They didn't give me a choice. Western Kentucky College had physical chemistry, analytical chemistry, organic chemistry, and that was it.

When I got to Northwestern I met Fred Basolo, who became my mentor. He did inorganic chemistry, which I was very surprised to discover even existed as a research field. I was so excited by his work, which was studying the mechanisms of inorganic reactions, that I decided to switch fields and do what he did. I got my PhD in 1960 from work on the syntheses and reaction mechanisms of platinum, rhodium, palladium, and nickel complexes. A complex has a metal atom sitting in the middle of as many as six ions or molecules called ligands. The metal has empty orbitals that it wants to fill with paired-up electrons, and the ligands have electron pairs they aren't using, so the metal and its ligands form stable bonds.

I had gotten into chemistry in the first place because I'd always been interested in colors. Even when I was a little kid, colors fascinated me. I really wanted to understand them, and many complexes have brilliant, beautiful colors. At Northwestern I heard about crystal-field theory, which was the first attempt to explain how metal complexes got their colors. All of crystal-field theory's big shots were in Copenhagen, so I decided to go there as a postdoc. Which I did.

I soon found out that crystal-field theory didn't go far enough. It only explained the colors of a limited set of metal ions in solution, and it couldn't explain charge transfers and a lot of other things. All the atoms were treated as point charges, with no provision for the bonds between the metal and the ligands. There weren't any bonds. So I helped develop a new theory, called ligand-field theory, which put the bonds back in the complexes. Carl Ballhausen, a professor at the University of Copenhagen, and I wrote a paper on a "metal-oxo" complex in which an oxygen atom was triple-bonded to a vanadium ion. The triple bond in our theory was required to account for the blue color of the vanadium-oxo complex. We also could explain charge transfers in other oxo complexes. Bonds were back in metal complexes!

Metal-oxo bonds are very important in biology. They are crucial in a lot of reactions, such as the oxygen-producing side of photosynthesis; the metabolism of drugs by cytochrome P-450, which often leads to toxic interactions with other drugs; and respiration. When we breathe in O2, our respiratory system splits the O=O bond, forming a metal-oxo complex as a reactive intermediate on the way to the product, which is water.

My work on bonding in metal oxo complexes got me a job as an assistant professor at Columbia University in 1961. By '65 I was a full professor and getting offers from many places, including Caltech. I loved Columbia, and I would have stayed there, but the chemistry department was very small. I knew it would be hard to build inorganic chemistry in a small department that concentrated on organic and physical chemistry.

There weren't any inorganic chemists at Caltech, either, but division chair Jack Roberts encouraged me to build the field up to five or six faculty members. I came to Caltech in 1966, and we now have a very strong inorganic chemistry group.

When I got here, I started work in two new areas at the interface of inorganic chemistry and biology. I'm best known for my work showing how electrons flow through proteins in respiration and photosynthesis. I won the Wolf Prize and the Welch Prize and the National Medal of Science for this work.

I also got into inorganic photochemistry—solar-energy research. That work started well before the first energy crisis in 1973, and continued until oil became cheap again in the early 1980s and solar-energy research was no longer supported. In the late '90s, I restarted the work. Now I'm leading an NSF Center for Chemical Innovation in Solar Fuels, which has an outreach activity I proudly call the Solar Army.

And how's that going?

The Solar Army keeps growing. We now have at least 60 brigades at high schools across the U.S., and 10 more abroad. I'd say that about 1,000 students have been through the program since 2008. We're getting young scientists involved in research that could have a profound effect on the world they're going to inherit. They're helping us look for light absorbers and catalysts to turn water into hydrogen fuel, using nothing but sunlight. The solar materials need to be sturdy metal oxides that are abundant and dirt cheap. But there are many metals in the periodic table. When you start combining them in twos and threes in varying amounts, there are literally millions of possibilities to be tested. We already have found several very good water oxidation and reduction catalysts, and since the National Science Foundation has just renewed our CCI Solar Fuels grant, we expect to make great progress in the coming years in understanding how they work.
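For a rough sense of the scale Gray describes, here is a back-of-the-envelope count in Python. The inputs (about 60 candidate metals, mixed two or three at a time, with relative amounts sampled in 10 percent steps) are illustrative assumptions, not the Solar Army's actual screening protocol.

    from math import comb

    metals = 60                      # rough count of candidate metals
    pairs = comb(metals, 2)          # ways to choose 2 metals
    triples = comb(metals, 3)        # ways to choose 3 metals

    # Sampling relative amounts in 10% steps gives 9 distinct mixes per pair
    # and 36 per triple (compositions that sum to 100%).
    candidates = pairs * 9 + triples * 36
    print(f"{pairs:,} pairs, {triples:,} triples, ~{candidates:,} compositions")
    # About 1.2 million candidate oxides before even varying synthesis
    # conditions, consistent with the "millions of possibilities" Gray mentions.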

Let's shift gears and talk about the Beckman Institute. How did you first meet Arnold Beckman [PhD '28, inventor of the pH meter, founder of Beckman Instruments, and a Life Trustee of Caltech]?

I gave a talk back in 1967, probably on Alumni Day. Arnold was the chair of Caltech's Board of Trustees at the time, and he and his wife, Mabel, were seated in the second row. When the talk was over, they came down and introduced themselves. Mabel said—and I remember this very well—she said, "Arnold, I didn't understand much of what this young man said, but I really liked the way he said it." Arnold gave me the thumbs up, and that started our relationship.

When I became chairman of the Division of Chemistry and Chemical Engineering in 1978, I asked him to be on my advisory committee. I didn't ask him for money, but I asked him for advice, and we became quite close. He said he wanted to do something for us. That led to his gift for the Arnold and Mabel Beckman Laboratory of Chemical Synthesis, as well as a gift for instrumentation.

He liked it that we raised money to match his instrument gift. He told me that he wanted to do something bigger, so we started thinking about building the Beckman Institute. [Caltech President] Murph Goldberger and I would go down to Orange County about every week with a new plan. He rejected the first four or five until we came up with the idea of developing technology to support chemistry and biology—methods and instruments for fundamental research—and creating resource centers to house them.

Once we agreed on what the building should house, we started planning the building itself. But when we showed Arnold our design, which was four stories plus a basement, he said, "That's not big enough. You need another floor for growth." So we added a subbasement that was quickly occupied by a resource center for magnetic-resonance imaging and optical imaging that has been heavily used by biologists, chemists, and other investigators.

The Beckman Institute has done a lot over the last 25 years. But it develops technology for general research use, so it doesn't often make the headlines itself. Are you OK with that?

Many advances in science and technology have been made in the Beckman Institute over the last 25 years. The methods and instruments that have been developed in BI resource centers have made enormous impacts at the frontiers of chemistry and biology. Solar-fuels science and human biology are just two examples of areas where work in the Beckman Institute has made a big difference. And there are many more. Am I proud? You bet I am!

Toward Liquid Fuels from Carbon Dioxide

Linq: http://www.caltech.edu/news/toward-liquid-fuels-carbon-dioxide-49074
News Writer: 
Ker Than
C1 to C2: Connecting carbons by reductive deoxygenation and coupling of CO
Credit: Kyle Horak and Joshua Buss/Caltech

In the quest for sustainable alternative energy and fuel sources, one viable solution may be the conversion of the greenhouse gas carbon dioxide (CO2) into liquid fuels.

Through photosynthesis, plants convert sunlight, water, and CO2 into sugars, multicarbon molecules that fuel cellular processes. CO2 is thus both the precursor to the fossil fuels that are central to modern life and the by-product of burning those fuels. The ability to generate synthetic liquid fuels from stable, oxygenated carbon precursors such as CO2 and carbon monoxide (CO) is reminiscent of photosynthesis in nature and is a transformation that is desirable in artificial systems. For about a century, a chemical method known as the Fischer-Tropsch process has been used to convert hydrogen gas (H2) and CO to liquid fuels. However, its mechanism is not well understood and, in contrast to photosynthesis, the process requires high pressures (from 1 to 100 times atmospheric pressure) and temperatures (100–300 degrees Celsius).
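For reference, the overall Fischer-Tropsch reaction that builds straight-chain alkanes is commonly written as

    (2n + 1) H2 + n CO → CnH(2n+2) + n H2O

that is, each carbon added to the growing chain consumes one CO and roughly two H2, with water as the by-product.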

More recently, alternative conversion chemistries for the generation of liquid fuels from oxygenated carbon precursors have been reported. Copper electrocatalysts, for example, can convert CO and CO2 to multicarbon products. The process proceeds under mild conditions, but how it takes place remains a mystery.

Now, Caltech chemistry professor Theo Agapie and his graduate student Joshua Buss have developed a model system to demonstrate what the initial steps of a process for the conversion of CO to hydrocarbons might look like.

The findings, published as an advance online publication in the journal Nature on December 21, 2015 (and appearing in print on January 7, 2016), provide a foundation for the development of technologies that may one day help neutralize the negative effects of atmospheric accumulation of the greenhouse gas CO2 by converting it back into fuel. Although methods exist to transform CO2 into CO, a crucial next step, the deoxygenation of CO molecules and their coupling to form C–C bonds, is more difficult.

In their study, Agapie and Buss synthesized a new transition metal complex—a metal atom, in this case molybdenum, bound by one or more supporting molecules known as ligands—that can facilitate the activation and cleavage of a CO molecule. Incremental reduction of the molecule leads to substantial weakening of the C–O bonds of CO. Once weakened, the bond is broken entirely by introducing silyl electrophiles, a class of silicon-containing reagents that can be used as surrogates for protons.

This cleavage results in the formation of a terminal carbide—a single carbon atom bound to a metal center—that subsequently makes a bond with the second CO molecule coordinated to the metal. Although a carbide is commonly proposed as an intermediate in CO reductive coupling, this is the first direct demonstration of its role in this type of chemistry, the researchers say. Upon C–C bond formation, the metal center releases the C2 product. Overall, this process converts the two CO units to an ethynol derivative and proceeds readily even at temperatures below room temperature.

"To our knowledge, this is the first example of a well-defined reaction that can take two carbon monoxide molecules and convert them into a metal-free ethynol derivative, a molecule related to ethanol; the fact that we can release the C2 product from the metal is important," Agapie says.

While the generated ethynol derivative is not useful as a fuel, it represents a step toward being able to generate synthetic multicarbon fuels from carbon dioxide. The researchers are now applying the knowledge gained in this initial study to improve the process. "Ideally, our insight will facilitate the development of practical catalytic systems," Buss says.

The scientists are also working on a way to cleave the C–O bond using protons instead of silyl electrophiles. "Ultimately, we'd like to use protons from water and electron equivalents derived from sunlight," Agapie says. "But protons are very reactive, and right now we can't control that chemistry."

The research in the paper, "Four-electron deoxygenative reductive coupling of carbon monoxide at a single metal site," was funded by Caltech and the National Science Foundation.

Where Is Solar Energy Headed?

Linq: http://www.caltech.edu/news/where-solar-energy-headed-49559
News Writer: 
Ramanuj Basu
Nate Lewis, George L. Argyros Professor of Chemistry, with a sample of a new protective film that he helped develop to aid in the process of harnessing sunlight to generate fuels.
Credit: Lance Hayashida/Caltech

In a new paper in Science, Nate Lewis, the George L. Argyros Professor of Chemistry at Caltech, reviews recent developments in solar-energy utilization and looks at some of the challenges and opportunities that lie ahead in the research and development of solar-electricity, solar-thermal, and solar-fuels technologies. Read the full paper.

Rosens Recharge Support for Bioengineering

Linq: http://www.caltech.edu/news/rosens-recharge-support-bioengineering-49678
News Writer: 
Stacey Hong

Caltech board chair emeritus and longtime Compaq chairman Benjamin M. (Ben) Rosen (BS '54) and his wife, Donna, have made a bequest commitment to advance scientific exploration at the intersection of biology and engineering. It is anticipated that the couple's latest gift may double the endowment for the Donna and Benjamin M. Rosen Bioengineering Center.

Established in 2008 with $18 million from the Benjamin M. Rosen Family Foundation of New York, the Rosen Center has become a hub for research and educational initiatives that bring together applied physics, chemical engineering, synthetic biology, computer science, and more.

"Just as we had the digital revolution in the last century, we are having a biological sciences revolution in this century," Ben Rosen says. "And Caltech is the place to be."

Read more on the Caltech giving site.