July 30, 2014
Effort to model Facebook yields key to famous math problem (and a prize)
Srivastava, Marcus, and Spielman (left to right)
soon after completing the proof of the Kadison-Singer Problem.
July 7, 2014
By Holly Lauridsen
Dan Spielman, a Yale computer scientist, wasn’t looking for a new problem. He was already deeply immersed in a tricky effort to model complex online communities like Facebook, hoping to gain insight into how they form and interact.
But when a colleague in Jerusalem observed that aspects of Spielman’s research brought to mind the famous — and unsolved — Kadison-Singer math problem, Spielman saw irresistibly low-hanging fruit — or so it seemed.
“The Kadison-Singer problem looked so close to something we already knew to be true and was, in fact, identical to something we conjectured to be true in our work,” Spielman said. “We thought we should be able to prove it.”
The side project evolved into a five-year journey, and yielded a solution to the famous problem, which had baffled mathematicians since the Eisenhower administration.
First posed in 1959, the Kadison-Singer problem asks, at its core, if unique information can be extrapolated from a scenario in which not all features can be observed or measured. The idea is particularly relevant to abstract fields, including quantum physics, operator theory, complex analysis, graph theory, signal processing, and finite-dimensional geometry. In these fields, it is often impossible to quantify every characteristic of a system.
For example, in quantum physics, you might want to know three things about a particle — position, spin, and momentum. It is known that by measuring spin and position, you can calculate the particle’s momentum as well, even though its exact value cannot be observed. Proving the Kadison-Singer problem meant confirming that this always happens, for every physical system, making it possible to determine unobservable events from observable events.
For Spielman, a solution to the Kadison-Singer problem would improve his ability to model interactions among groups within complex networks. In his original models, the interactions between groups were all equal. By proving that the Kadison-Singer conjecture was correct, he could strengthen or weaken interactions between different communities to more realistically model virtual networks.
So, in June 2013, when Spielman and co-authors Adam Marcus and Nikhil Srivastava publicly posted a proof of the Kadison-Singer conjecture, it was not only a triumph for them, but also great news for a variety of scholars and technologists.
A new approach to the problem
“This is doubly exciting, because they have proved an important conjecture and they did it by introducing a whole new approach to doing such proofs,” said Holly Rushmeier, chair of Yale’s computer science department. “This won’t be the last big news to come out of this line of research.”
The solution has also put the Yale computer science and math departments in the international spotlight: In the last year, Spielman and collaborators have traveled the world to give more than 100 talks on their work, in cities ranging from Boston to Bordeaux to Bangalore. Accolades have poured in, most recently from the Society for Industrial & Applied Mathematics, which this month (July) will award the three scientists the George Pólya Prize.
The accolades have been a long time coming for the team, which began its work in 2009, when Adam Marcus was a newly appointed Gibbs Assistant Professor in Applied Mathematics and Nikhil Srivastava was a graduate student working in Spielman’s group.
Polynomials and their roots
Together, Marcus, Srivastava, and Spielman broke the problem into three parts, all dealing with the roots of certain polynomials, or the values of x when y is equal to zero in a mathematical relationship such as y = 3x² + 6x + 12.
Part 1 required the team to prove that all of the roots for these polynomials are real numbers, and part 2 required that the three prove the roots of certain polynomials interlace, or alternate in ascending order. For example, if one polynomial has roots 1, 4, and 8 and another has roots 0, 2, and 7, then they interlace.
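The interlacing condition is easy to check mechanically: sort all the roots together and verify that they alternate between the two polynomials. Below is a minimal sketch in Python; the function name and the second, non-interlacing example are illustrative additions, not part of the proof.

```python
def interlace(roots_a, roots_b):
    """Check whether two lists of real roots interlace, i.e. when
    merged in ascending order they alternate between the two lists."""
    merged = sorted([(r, 0) for r in roots_a] + [(r, 1) for r in roots_b])
    labels = [source for _, source in merged]
    # Interlacing means no two consecutive roots come from the same polynomial.
    return all(labels[i] != labels[i + 1] for i in range(len(labels) - 1))

print(interlace([1, 4, 8], [0, 2, 7]))  # the article's example: True
print(interlace([1, 2, 8], [0, 4, 7]))  # 1 and 2 are adjacent: False
```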
Within a year the team members had worked through the first two parts and felt confident they could tackle the third part as quickly: demonstrating that there are upper bounds limiting the magnitude of the alternating polynomial roots.
“Parts one and two were always easier for us because they were fundamentally qualitative,” said Spielman. “There are fairly general techniques for proving qualitative statements like ‘polynomial p(x) is real-rooted.’ On the other hand, proving bounds on how large the roots of certain polynomials can be involved fairly intricate computations. To solve part three we had to come up with a very novel way of reasoning about where the roots of polynomials can lie.”
Knew they were on ‘the right path’
Years passed and life changed. Srivastava took a position with Microsoft Research in India. Marcus joined Crisply, a start-up company in Cambridge, Massachusetts, that allowed him to devote an entire day each week to working on the Kadison-Singer problem. And Spielman came to international attention after he was named a MacArthur Fellow. But the Kadison-Singer problem remained a constant for them all.
“We could never get excited about working on other problems,” said Spielman, who at Yale is professor of computer science, mathematics, and applied mathematics. “The Kadison-Singer problem was just too interesting and compelling: Every approach we pursued revealed beautiful structures. When you are following an approach to a math problem and you discover something beautiful, you take it as an indication that you are on the right path. We kept getting that feeling.”
Even though Spielman, Marcus, and Srivastava did not want to stray from their work on the Kadison-Singer problem, the pressure to publish was increasing. The team decided to take a slight detour to investigate Ramanujan graphs, which describe very sparse networks that are still highly connected. These graphs were often counterintuitive, required deep and difficult mathematics, and were only informative for a small subset of networks. Spielman, Marcus, and Srivastava thought the work they’d already done on the Kadison-Singer problem would yield simpler proofs for Ramanujan graphs that would apply to a larger number of networks.
They were correct.
The solution fit ‘so beautifully’
What the team wasn’t expecting was that Ramanujan graphs were, in fact, the key to the frustrating third part of the Kadison-Singer problem. By expanding the applications of Ramanujan graphs, Spielman, Marcus, and Srivastava gained insight into how to approach the challenging third part of the Kadison-Singer problem.
“I just started laughing,” said Srivastava. “The solution fit together so beautifully and sensibly you knew it was the 'right' proof and not something ad hoc. It combined bits of ideas that we had generated from all over the five years we spent working on this.”
Although the proof of the Kadison-Singer problem has received the majority of the attention, the work done by Spielman and his team on the second part of the problem, interlacing polynomials, is driving the team’s future work.
“Adam and Nikhil and I will be writing papers with two more applications of the technique this summer,” said Spielman.
Quick analysis: Mathematical model captures online social life
Hemali Chhapia, TNN | Jul 15, 2014, 04.16PM IST
MUMBAI: Researchers from the University of Oxford, the University of Limerick, and the Harvard School of Public Health have developed a mathematical model to examine online social networks, in particular the trade-off between copying our friends and relying on 'best-seller' lists.
The researchers examined how users are influenced in the choice of apps that they install on their Facebook pages by creating a mathematical model to capture the dynamics at play. By incorporating data from the installation of Facebook apps into their mathematical model, they found that users selected apps on the basis of recent adoptions by their friends rather than by using Facebook's equivalent of a best-seller list of apps.
The model suggests users tended to be swayed by recent activity from their 'friends' on Facebook that they saw on their Facebook feeds over the previous couple of days. The research, published in the journal Proceedings of the National Academy of Sciences, finds that the "copycat" tendency in human behaviour is strong and that we can be influenced by the activities of others over a relatively short period of time.
The mathematical model examined data from an empirical study published in 2010, which had tracked 100 million installations of apps adopted by Facebook users during two months. In the 2010 study, based on data collected in 2007, all Facebook users were able to see a list of the most popular apps (similar to best-seller lists) on their pages, as well as being notified about their friends' recent app installations.
In the 2010 study (which included two of the authors of the new study), researchers found that in some cases, a user's decision to install some apps seemed virtually unaffected by the activities of others, whereas sometimes they were strongly affected by the behaviour of others - even though the apps in these two categories did not appear to be distinguished by any particular characteristics. Instead, once an app reached some popularity threshold (as measured by the installation rate), its popularity tended to rise to stellar proportions.
In the new study, the researchers developed a mathematical model to distinguish between the consequences of two distinct, competing mechanisms that appeared to drive the dynamics behind the behaviour of the Facebook users. Using their model and extensive computer simulations, they looked behind the empirical data to see whether Facebook users' behaviour could be modelled as being influenced primarily by the notifications of apps recently installed on their friends' Facebook pages or mainly driven by which apps appeared on the best-seller list. Using the supercomputers of the Irish Centre for High-End Computing (ICHEC), the researchers ran thousands of simulations in which they varied the relative dominance of the two influences (recent installations versus cumulative popularity). It took the researchers 15,000 hours of computer processing to best match the results of the simulations with the characteristics of app installation that were observed in the earlier empirical study.
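The competition between the two mechanisms can be sketched as a toy simulation in which each new install either copies a recent install (with probability alpha) or picks from the cumulative best-seller counts. Everything here (parameter names, window size, counts) is an illustrative simplification, far cruder than the model in the paper.

```python
import random

def simulate(alpha, apps=50, steps=5000, window=48, seed=0):
    """Toy adoption model: with probability `alpha` a user copies a
    recent installation; otherwise they choose in proportion to the
    cumulative 'best-seller' counts. Returns final install counts."""
    rng = random.Random(seed)
    total = [1] * apps          # cumulative installs, seeded at 1 each
    recent = []                 # sliding window of recent choices
    for _ in range(steps):
        if recent and rng.random() < alpha:
            choice = rng.choice(recent)  # copycat: imitate a recent install
        else:
            # best-seller: weight by cumulative popularity
            choice = rng.choices(range(apps), weights=total)[0]
        total[choice] += 1
        recent.append(choice)
        if len(recent) > window:
            recent.pop(0)
    return total

# Strong copying (alpha near 1) tends to concentrate installs on a
# few apps once they cross a popularity threshold.
counts = simulate(alpha=0.9)
print(sorted(counts, reverse=True)[:5])
```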
The researchers found that, although users seem to be influenced by both, the stronger effect on popularity dynamics was caused by the recent behaviour of others. The best-seller list did have a 'mild' effect on the behaviour of Facebook users, but an instinct to copy the behaviour of others was by far the more dominant instinct.
Associate Professor Felix Reed-Tsochas, James Martin Lecturer in Complex Systems at Said Business School and Director of Complexity Economics at the Institute for New Economic Thinking at the University of Oxford, said: 'We have used sophisticated modelling techniques to show how it is possible to tease apart different causal mechanisms that underpin behaviour even when the empirical data are purely observational. This is significant because the assumption these days is that only experimental research designs can provide such answers. Here, we found that the "copycat" tendency plays a very important role in online behaviour. This might be because users need to make quick decisions in information-rich environments, but other research has identified similar imitative behaviour in the off-line world.'
The other authors of the new study were Dr Davide Cellai (University of Limerick) and Assistant Professor Jukka-Pekka Onnela (Harvard). Professor James Gleeson, from the Department of Mathematics and Statistics at the University of Limerick, said: 'This study reveals how we can explore different scenarios using mathematical models to disentangle what drives people to behave the way they do using large data sets from the real online world. This opens up lots of new possibilities for studying human behaviour.'
Commenting on the significance of the method behind the study, Associate Professor Mason Porter, from the Mathematical Institute at University of Oxford, said: 'We hope that our paper can help serve as a guide for modelling complex systems and how data can be incorporated directly into such modelling efforts. The importance of mathematical modelling often seems to be lost amidst the overabundance of empirical studies, and I cannot stress enough that mathematics is also crucial to help illustrate how things work.'
Collecting just the right data
Larry Hardesty | MIT News Office
Much artificial-intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon and Netflix. But some types of data are harder to collect than online click histories — information about geological formations thousands of feet underground, for instance. And in other applications — such as trying to predict the path of a storm — there may just not be enough time to crunch all the available data.
Dan Levine, an MIT graduate student in aeronautics and astronautics, and his advisor, Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, have developed a new technique that could help with both problems. For a range of common applications in which data is either difficult to collect or too time-consuming to process, the technique can identify the subset of data items that will yield the most reliable predictions. So geologists trying to assess the extent of underground petroleum deposits, or meteorologists trying to forecast the weather, can make do with just a few, targeted measurements, saving time and money.
Levine and How, who presented their work at the Uncertainty in Artificial Intelligence conference this week, consider the special case in which something about the relationships between data items is known in advance. Weather prediction provides an intuitive example: Measurements of temperature, pressure, and wind velocity at one location tend to be good indicators of measurements at adjacent locations, or of measurements at the same location a short time later, but the correlation grows weaker the farther out you move either geographically or chronologically.
Such correlations can be represented by something called a probabilistic graphical model. In this context, a graph is a mathematical abstraction consisting of nodes — typically depicted as circles — and edges — typically depicted as line segments connecting nodes. A network diagram is one example of a graph; a family tree is another. In a probabilistic graphical model, the nodes represent variables, and the edges represent the strength of the correlations between them.
Levine and How developed an algorithm that can efficiently calculate just how much information any node in the graph gives you about any other — what in information theory is called “mutual information.” As Levine explains, one of the obstacles to performing that calculation efficiently is the presence of “loops” in the graph, or nodes that are connected by more than one path.
Calculating mutual information between nodes, Levine says, is kind of like injecting blue dye into one of them and then measuring the concentration of blue at the other. “It’s typically going to fall off as we go further out in the graph,” Levine says. “If there’s a unique path between them, then we can compute it pretty easily, because we know what path the blue dye will take. But if there are loops in the graph, then it’s harder for us to compute how blue other nodes are because there are many different paths.”
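For two jointly Gaussian variables, the kind this technique assumes, mutual information has a simple closed form in terms of the correlation coefficient, which makes the dye analogy quantitative: weaker correlation, less shared information. A small sketch; the correlation values are made up.

```python
import math

def gaussian_mutual_information(rho):
    """Mutual information (in nats) between two jointly Gaussian
    variables with correlation coefficient rho."""
    return -0.5 * math.log(1.0 - rho ** 2)

# Like the dye, shared information falls off as correlation weakens.
for rho in (0.9, 0.5, 0.1):
    print(round(gaussian_mutual_information(rho), 3))
```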
So the first step in the researchers’ technique is to calculate “spanning trees” for the graph. A tree is just a graph with no loops: In a family tree, for instance, a loop might mean that someone was both parent and sibling to the same person. A spanning tree is a tree that touches all of a graph’s nodes but dispenses with the edges that create loops.
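A spanning tree can be extracted with a standard union-find pass over the edges, discarding any edge that would close a loop. A small self-contained sketch; the example graph is my own.

```python
def spanning_tree(nodes, edges):
    """Return a subset of `edges` forming a spanning tree: it touches
    every connected node but contains no loops."""
    parent = {n: n for n in nodes}

    def find(n):
        # Walk up to the root representative, compressing the path.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    tree = []
    for u, v in edges:
        root_u, root_v = find(u), find(v)
        if root_u != root_v:      # adding (u, v) creates no loop
            parent[root_u] = root_v
            tree.append((u, v))
    return tree

# A square with a diagonal: the graph has two loops, the tree has none.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
print(spanning_tree("abcd", edges))  # [('a', 'b'), ('b', 'c'), ('c', 'd')]
```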
Betting the spread
Most of the nodes that remain in the graph, however, are “nuisances,” meaning that they don’t contain much useful information about the node of interest. The key to Levine and How’s technique is a way to use those nodes to navigate the graph without letting their short-range influence distort the long-range calculation of mutual information.
That’s possible, Levine explains, because the probabilities represented by the graph are Gaussian, meaning that they follow the bell curve familiar as the model of, for instance, the dispersion of characteristics in a population. A Gaussian distribution is exhaustively characterized by just two measurements: the average value — say, the average height in a population — and the variance — the rate at which the bell spreads out.
“The uncertainty in the problem is really a function of the spread of the distribution,” Levine says. “It doesn’t really depend on where the distribution is centered in space.” As a consequence, it’s often possible to calculate variance across a probabilistic graphical model without relying on the specific values of the nodes. “The usefulness of data can be assessed before the data itself becomes available,” Levine says.
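The point that spread does not depend on location is easy to see numerically: shifting every sample moves the mean but leaves the variance untouched. A tiny sketch with made-up sample values.

```python
import statistics

# Shifting a sample moves its mean but leaves its spread (variance)
# unchanged -- which is why the usefulness of a measurement can be
# judged before its actual value is known.
sample = [2.1, 3.4, 2.9, 3.8, 2.6, 3.2]
shifted = [x + 10 for x in sample]

print(statistics.variance(sample))
print(statistics.variance(shifted))  # same spread, different mean
```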
Finding quantum lines of desire
JOE ANGELES/WUSTL PHOTOS
Kater Murch (right), assistant professor of physics, and junior Chris Munley work
with the equipment that can map a quantum device’s trajectory between two points
in quantum state space, a feat until recently considered impossible.
Groundskeepers and landscapers hate them, but there is no fighting them. Called desire paths, social trails or goat tracks, they are the unofficial shortcuts people create between two locations when the purpose-built path doesn’t take them where they want to go.
There’s a similar concept in classical physics called the “path of least action.” If you throw a softball to a friend, the ball traces a parabola through space. It doesn’t follow a serpentine path or loop the loop because those paths have higher "actions" than the true path.
But what paths do quantum particles, such as atoms or photons, follow? For these particles, the laws of classical physics cease to apply, and quantum physics and its counterintuitive effects take over.
Quantum particles can exist in a superposition of states, yet as soon as quantum particles are "touched" by the outside world, they lose this quantum strangeness and collapse to a classically permitted state. Because of this evasiveness, it wasn’t possible until recently to observe them in their quantum state.
But in the past 20 years, physicists have devised devices that isolate quantum systems from the environment and allow them to be probed so gently that they don’t immediately collapse. With these devices, scientists can at long last follow quantum systems into quantum territory, or state space.
Kater Murch, PhD, an assistant professor of physics at Washington University in St. Louis, and collaborators Steven Weber and Irfan Siddiqi of the Quantum Nanoelectronics Laboratory at the University of California, Berkeley, have used a superconducting quantum device to continuously record the tremulous paths a quantum system took from a superposition of states to one of two classically permitted states.
Because even gentle probing makes each quantum trajectory noisy, Murch’s team repeated the experiment a million times and examined which paths were most common. The quantum equivalent of the classical “least action” path — or the quantum device’s path of desire — emerged from the resulting cobweb of many paths, just as pedestrian desire paths gradually emerge after new sod is laid.
The experiments, the first continuous measurements of the trajectories of a quantum system between two points, are described in the cover article of the July 31 issue of Nature.
A path of desire emerging from many trajectories between two points in quantum state space.
“We are working with the simplest possible quantum system,” Murch said. “But the understanding of quantum interactions we are gaining might eventually be useful for the quantum control of biological and chemical systems.
“Chemistry at its most basic level is described by quantum mechanics,” he said. “In the past 20 years, chemists have developed a technique called quantum control, where shaped laser pulses are used to drive chemical reactions — that is, to drive them between two quantum states. The chemists control the quantum field from the laser, and that field controls the dynamics of a reaction,” he said.
“Eventually, we’ll be able to control the dynamics of chemical reactions with lasers instead of just mixing reactant 1 with reactant 2 and letting the reaction evolve on its own,” he said.
An artificial atom
Murch’s model quantum system is a superconducting circuit that behaves like an artificial atom, with a ground state and an excited state. Between these two states, there are an infinite number of quantum states that are superpositions, or combinations, of the ground and excited states. In the past, these states would have been invisible to physicists because attempts to measure them would have caused the system to immediately collapse.
But Murch’s device allows the system’s state to be probed many times before it becomes an effectively classical system. The quantum state of the circuit is detected by putting it inside a microwave box. A very small number of microwave photons are sent into the box where their quantum fields interact with the superconducting circuit.
The microwaves are so far off resonance with the circuit that they cannot drive it between its ground and its excited state. So instead of being absorbed, they leave the box bearing information about the quantum system in the form of a phase shift (the position of the troughs and peaks of the photons’ wavefunctions).
JOE ANGELES/WUSTL PHOTOS
The superconducting circuit used as a model quantum system in Murch’s lab is bolted to a stage in a dilution refrigerator (pictured above), which holds it at a temperature of 7 milliKelvin, or just a hair’s breadth above absolute zero. With thermal noise suppressed to this level, the device enters quantum space.
“Every time we nudge the system, something different happens,” Murch said. “That’s because the photons we use to measure the quantum system are quantum mechanical as well and exhibit quantum fluctuations. So it takes many of these measurements to distinguish the system’s signal from the quantum fluctuations of the photons probing it.” Or, as physicists put it, these are weak measurements.
Murch compares these experiments to soccer matches, which are ultimately experiments to determine which team is better. But because so few goals are scored in soccer, and these are often lucky shots, the less skilled team has a good chance of winning. Or as Murch might put it, one soccer match is such a weak measurement of a team’s skill that it can’t be used to draw a statistically reliable conclusion about which team is more skilled.
Each time a team scores a goal, it becomes somewhat more likely that that team is the better team, but the teams would have to play many games or play for a very long time to know for sure. These fluctuations are what make soccer matches so exciting.
Murch is in essence able to observe millions of these matches, and from all the matches where team B wins, he can determine the most likely way a game that ends with a victory for team B will develop.
A line of desire
“Before we started this experiment,” Murch said, “I asked everybody in the lab what they thought the most likely path between quantum states would be. I drew a couple of options on the board: a straight line, a convex curve, a concave curve, a squiggly line ... I took a poll, and we all guessed different options. Here we were, a bunch of quantum experts, and we had absolutely no intuition about the most likely path.”
Andrew N. Jordan of the University of Rochester and his students Areeya Chantasri and Justin Dressel inspired the study by devising a theory to predict the likely path. Their theory predicted that a convex curve Murch had drawn on the white board would be the correct path.
“When we looked at the data, we saw that the theorists were right. Our very clever collaborators had devised a ‘principle of least action’ that works in the quantum case,” Murch said.
They had found the quantum system’s line of desire mathematically, by calculation, before many microwave photons trampled out the path in Murch’s lab.
But as the famous physicist Richard Feynman once said, “It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.” And he was a theoretician.
Scientists Find Way to Maintain Quantum Entanglement in Amplified Signals
Physicists Sergei Filippov (MIPT and Russian Quantum Center at Skolkovo) and Mario Ziman (Masaryk University in Brno, Czech Republic, and the Institute of Physics in Bratislava, Slovakia) have found a way to preserve quantum entanglement of particles passing through an amplifier and, conversely, when transmitting a signal over long distances. Details are provided in an article published in the journal Physical Review A (see preprint).
Quantum entangled particles are considered to be the basis of several promising technologies, including quantum computers and communication channels secured against tapping. Quantum entangled particles are quantum objects that can be described in terms of a common quantum state. Two quantum entangled particles can be in different places, at any distance from each other, but they must still be considered as a single whole. This effect has no analogue in classical physics, and it has been actively studied for the past few decades.
Physicists have learned to entangle photons and have found applications for them, including optical-fiber communication channels that are impossible to tap. When anyone tries to intercept the transmission of data over such a channel, the quantum entanglement of the photons is inevitably destroyed and the legitimate recipient of the message immediately detects the interference.
In addition, quantum entanglement makes quantum teleportation possible, wherein a quantum object, for example an atom, in a certain state in one laboratory transmits its quantum state to another object in another laboratory. Quantum entangled particles play the key role in this process, and it is not necessarily the atoms exchanging the state that must themselves be entangled. The receiving atom becomes absolutely identical to the original one, which in turn transfers into a different state during the teleportation. If all the atoms of an object were transferred like this, the second laboratory would have its exact copy.
The laws of quantum mechanics do not allow for the teleportation of objects and people, but it is already possible to quantum teleport single photons and atoms, which opens up exciting opportunities for the creation of new computing devices and communication lines. Due to specific quantum effects, a quantum computer will be able to efficiently solve certain problems, for example, hacking codes used in banking, but for now it is still just a theoretical possibility. In practice, quantum computing and teleportation are obstructed by a process called decoherence.
Decoherence is the destruction of the quantum state due to the interaction of a quantum system with the outside world. For experiments in quantum computing, scientists use single atoms caught in magnetic traps and cooled to temperatures close to absolute zero. After going through kilometers of fiber, photons cease to be quantum entangled in most cases and become ordinary, unrelated light quanta.
To create an effective quantum computing system, scientists have to solve a number of problems, including preserving quantum entanglement when the signal abates and when it passes through an amplifier. Fiber-optic cables on the ocean bed contain a great deal of special amplifiers composed of optical glass and rare earth elements. It is these amplifiers that make it possible to watch high-resolution videos stored on a server in California from the MIPT campus or a university in Beijing.
In their article, Filippov and Ziman say that a certain class of signals can be transmitted so that the risk of ruining quantum entanglement becomes much lower. In this case, neither the attenuation nor the amplification of a signal ruins the entanglement. To achieve this effect, it is necessary to have the particles in a special, non-Gaussian state, or, as physicists put it, “the wave function of the particles in the coordinate representation should not be in the form of a Gaussian wave packet.” A wave function is a basic concept of quantum mechanics, and the Gaussian distribution is a major mathematical function used not only by physicists but also by statisticians, sociologists and economists.
Quantum mechanics differs from classical mechanics in that there are neither material points nor clearly specified boundaries for bodies in it. Each object can be described by a wave function: each point in space corresponds to a complex number at each moment. Squaring the absolute value of this number* gives the probability of finding the object at a given point. To get information on the momentum, energy, or another physical characteristic, the same wave function has to be acted on by a so-called operator.
* In fact, since the amplitude is expressed as a complex number, it is necessary to multiply the number by its complex conjugate. This detail is omitted here for readers unfamiliar with complex numbers.
Link for English version of complex number explanation: http://en.wikipedia.org/wiki/Complex_number
The Gaussian distribution is a function that in its simplest form (without additional coefficients) looks like e^(−x²). In diagrams, it appears as a bell curve. Many processes in nature are described via this function when the results of observations are processed using mathematical methods. Ordinary photons, which are used in most quantum entanglement experiments, are also described by a Gaussian function. The probability of finding a photon at a given point (a translation of the expression “in the coordinate representation”) first increases and then decreases according to the rule of the Gaussian bell curve. In this case “it would be impossible to send the entanglement far, even if the signal is very strong,” Sergei Filippov told MIPT’s press service.
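The footnote above can be made concrete: for a Gaussian wave packet, multiplying the complex amplitude by its conjugate yields the real probability density. A toy sketch; the packet shape and momentum value are made up for illustration.

```python
import cmath
import math

def psi(x, k=3.0):
    """A toy Gaussian wave packet with momentum k: a complex amplitude
    whose envelope is the bell curve e^(-x^2)."""
    return math.exp(-x ** 2) * cmath.exp(1j * k * x)

# The probability density is amplitude times complex conjugate,
# |psi|^2 -- naive squaring would leave a complex number.
for x in (0.0, 0.5, 1.0):
    density = (psi(x) * psi(x).conjugate()).real
    print(round(density, 4))  # falls off as e^(-2x^2): a bell curve
```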
Using photons whose wave function has a different shape should increase the number of entangled photon pairs reaching the destination. However, this does not mean that a signal could be transmitted through a very opaque environment and at very long distances. If the signal/noise ratio falls below a certain critical threshold, quantum entanglement vanishes in any case.
MIPT’s press office would like to thank Dr. Sergei Filippov for his invaluable help in writing this article.
Making Quantum Connections
(1) Using a laser, the ion spin chain is optically pumped to a spin state that is uncorrelated with respect to the spin-spin interactions.
(2) The system is suddenly perturbed by lasers, turning on global spin-spin interaction.
(3) After the spin system evolves for various lengths of time (t1…tn), the spin state of each ion is captured with a CCD camera.
The researchers can directly observe the spin-spin correlations propagating across the ion chain.
Image credit: S. Kelley/JQI
July 9, 2014
In quantum mechanics, interactions between particles can give rise to entanglement, which is a strange type of connection that could never be described by a non-quantum, classical theory. These connections, called quantum correlations, are present in entangled systems even if the objects are not physically linked (with wires, for example). Entanglement is at the heart of what distinguishes purely quantum systems from classical ones; it is why they are potentially useful, but it sometimes makes them very difficult to understand.
Physicists are pretty adept at controlling quantum systems and even making certain entangled states. Now JQI researchers, led by theorist Alexey Gorshkov and experimentalist Christopher Monroe, are putting these skills to work to explore the dynamics of correlated quantum systems. What does it mean for objects to interact locally versus globally? How do local and global interactions translate into larger, increasingly connected networks? How fast can certain entanglement patterns form? These are the kinds of questions that the Monroe and Gorshkov teams are asking. Their recent results investigating how information flows through a quantum many-body system are published this week in the journal Nature (10.1038/nature13450), and in a second paper to appear in Physical Review Letters.
Researchers can engineer a rich selection of interactions in ultracold atom experiments, allowing them to explore the behavior of complex and massively intertwined quantum systems. In the experimental work from Monroe’s group, physicists examined how quickly quantum connections formed in a crystal of eleven ytterbium ions confined in an electromagnetic trap. The researchers used laser beams to implement interactions between the ions. Under these conditions, the system is described by certain types of ‘spin’ models, which are a vital mathematical representation of numerous physical phenomena, including magnetism. Here, each atomic ion has isolated internal energy levels that represent the various states of spin. In the presence of carefully chosen laser beams, the ion spins can influence their neighbors, both near and far. In fact, tuning the strength and form of this spin-spin interaction is a key feature of the design. In Monroe's lab, physicists can study different types of correlated states within a single pristine quantum environment: a crystal of atomic ions.
To see the dynamics, the researchers initially prepared the ion spin system in an uncorrelated state. Next, they abruptly turned on a global spin-spin interaction. The system is effectively pushed off-balance by such a fast change and the spins react, evolving under the new conditions. The team took snapshots of the ion spins at different times and observed the speed at which quantum correlations grew.
The spin models themselves do not have an explicitly built-in limit on how fast such information can propagate. The ultimate limit, in both classical and quantum systems, is given by the speed of light. However, decades ago, physicists showed that a slower information speed limit emerges due to some types of spin-spin interactions, similar to sound propagation in mechanical systems. While the limits are better known in the case where spins predominantly influence their closest neighbors, calculating constraints on information propagation in the presence of more extended interactions remains challenging. Intuitively, the more an object interacts with other distant objects, the faster the correlations between distant regions of a network should form. Indeed, the experimental group observes that long-range interactions provide a comparative speed-up for sending information across the ion-spin crystal. In the paper appearing in Physical Review Letters, Gorshkov’s team improves existing theory to much more accurately predict the speed limits for correlation formation, in the presence of interactions ranging from nearest-neighbor to long-range.
Verifying and forming a complete understanding of quantum information propagation is certainly not the end of the story; this also has many profound implications for our understanding of quantum systems more generally. For example, the growth of entanglement, which is a form of information that must obey the bounds described above, is intimately related to the difficulty of modeling quantum systems on a computer. Dr. Michael Foss-Feig explains, “From a theorist’s perspective, the experiments are cool because if you want to do something with a quantum simulator that actually pushes beyond what calculations can tell you, doing dynamics with long-range interacting systems is expected to be a pretty good way to do that. In this case, entanglement can grow to a point that our methods for calculating things about a many-body system break down.”
Theorist Dr. Zhexuan Gong states that in the context of both works, “We are trying to put bounds on how fast correlation and entanglement can form in a generic many-body system. These bounds are very useful because with long-range interactions, our mathematical tools and state-of-the-art computers can hardly succeed at predicting the properties of the system. We would then need to either use these theoretical bounds or a laboratory quantum simulator to tell us what interesting properties a large and complicated network of spins possess. These bounds will also serve as a guideline on what interaction pattern one should achieve experimentally to greatly speed up information propagation and entanglement generation, both key for building a fast quantum computer or a fast quantum network.”
From the experimental side, Dr. Phil Richerme gives his perspective, “We are trying to build the world’s best experimental platform for evolving the Schrodinger equation [math that describes how properties of a quantum system change in time]. We have this ability to set up the system in a known state and turn the crank and let it evolve and then make measurements at the end. For system sizes not much larger than what we have here, doing this becomes impossible for a conventional computer.”
This news item was written by E. Edwards/JQI.
"Non-local propagation of correlations in long-range interacting quantum systems," P. Richerme, Z.X. Gong, A. Lee, C. Senko, J. Smith, M. Foss-Feig, S. Michalakis, A.V. Gorshkov, C. Monroe, Nature (2014)
July 30, 2014
Math Can Make the Internet 5-10 Times Faster
Mathematical equations can make Internet communication via computer, mobile phone or satellite many times faster and more secure than it is today. Results with software developed by researchers from Aalborg University in collaboration with the Massachusetts Institute of Technology (MIT) and the California Institute of Technology (Caltech) in the US are attracting attention in the international technology media.
A new study uses a four-minute-long mobile video as an example. The method used by the Danish and US researchers in the study resulted in the video being downloaded five times faster than with state-of-the-art technology. The video also streamed without interruptions. In comparison, the original video got stuck 13 times along the way.
- This has the potential to change the entire market. In experiments with our network coding of Internet traffic, equipment manufacturers experienced speeds that are five to ten times faster than usual. And this technology can be used in satellite communication, mobile communication and regular Internet communication from computers, says Frank Fitzek, Professor in the Department of Electronic Systems and one of the pioneers in the development of network coding.
Goodbye to the packet principle
Internet communication formats data into packets. Error control ensures that the signal arrives in its original form, but it often means that some of the packets must be sent several times, and this slows down the network. The Danish and US researchers are instead solving the problem with a special kind of network coding that uses clever mathematics to store and send the signal in a different way. The advantage is that errors along the way do not require a packet to be sent again. Instead, the upstream and downstream data are used to reconstruct what is missing using a mathematical equation.
- With the old systems you would send packet 1, packet 2, packet 3 and so on. We replace that with a mathematical equation. We don’t send packets. We send a mathematical equation. You can compare it with cars on the road. Now we can do without red lights. We can send cars into the intersection from all directions without their having to stop for each other. This means that traffic flows much faster, explains Frank Fitzek.
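The idea of sending equations instead of packets can be made concrete. The toy below is an illustrative sketch, not Steinwurf's actual codec (real RLNC implementations typically work over GF(2^8) and are heavily optimized): each transmitted packet is a random linear combination of the originals over a small prime field, and the receiver recovers the data by Gaussian elimination from whichever combinations happen to arrive.

```python
# Toy random linear network coding (RLNC) over the prime field GF(257).
# Illustrative only; real codecs use GF(2^8) and optimized arithmetic.
import random

random.seed(42)
Q = 257  # a prime large enough to hold byte values

def encode(packets, n_coded):
    """Emit coded packets: random linear combinations of the originals (mod Q)."""
    k, length = len(packets), len(packets[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randrange(Q) for _ in range(k)]
        payload = [sum(c * p[j] for c, p in zip(coeffs, packets)) % Q
                   for j in range(length)]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Recover the k originals by Gaussian elimination mod Q.

    Any k linearly independent coded packets suffice; it does not
    matter which particular ones were lost along the way.
    """
    rows = [list(c) + list(p) for c, p in coded]
    for col in range(k):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], Q - 2, Q)          # modular inverse (Fermat)
        rows[col] = [v * inv % Q for v in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % Q for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]

packets = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]   # three "data packets"
coded = encode(packets, 5)              # send 5 random combinations
received = coded[1:]                    # packet loss: the first never arrives
print(decode(received, 3) == packets)   # the originals are reconstructed anyway
```

This is the sense in which "red lights" disappear: the receiver does not need any specific packet, only enough independent combinations, so nothing has to be retransmitted after an ordinary loss.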
Network coding has broad applications in the Internet of Things (IoT), 5G communication systems, software-defined networks (SDN) and content-centric networks (CCN); beyond transport, it also has implications for distributed storage solutions.
Presence in Silicon Valley
In order for this to work, however, the data must be coded and decoded with the patented technology. The professor and two of his former students from Aalborg University, developers Janus Heide and Morten Videbæk Pedersen, along with their US colleagues, founded the software company Steinwurf. The company makes the RLNC technology (Random Linear Network Coding) available to hardware manufacturers and is in secret negotiations that will bring the improvements to consumers. As part of the effort, Steinwurf has established an office in Silicon Valley, but the company is still headquartered in Aalborg.
- I think the technology will be integrated in most products because it has some crucial and necessary functions. The only thing that can stop the development is patents. Previously, individual companies had a solid grip on patents for coding. But our approach is to make it as accessible as possible. Among other things, we are planning training courses in these technologies, says Frank Fitzek.
July 30, 2014
Hopkins grad student makes 'useless' math strategy work 200 times faster
Johns Hopkins graduate student Xiang Yang, at right, teamed up with Rajat Mittal, a professor of mechanical engineering,
to revamp a “useless” 169-year-old math strategy, making it work up to 200 times faster.
IMAGE: WILL KIRK / HOMEWOODPHOTO.JHU.EDU
Phil Sneiderman / June 30, 2014
A relic from long before the age of supercomputers, a 169-year-old math strategy called the Jacobi iterative method is widely dismissed today as too slow to be useful. But thanks to a curious, numbers-savvy Johns Hopkins engineering student and his professor, it may soon get a new lease on life.
With just a few modern-day tweaks, the researchers say they've made the rarely used Jacobi method work up to 200 times faster. The result, they say, could speed up the performance of computer simulations used in aerospace design, shipbuilding, weather and climate modeling, biomechanics, and other engineering tasks.
Their paper describing this updated math tool was published recently in the online edition of the Journal of Computational Physics.
"For people who want to use the Jacobi method in computational mechanics, a problem that used to take 200 days to solve may now take only one day," said Rajat Mittal, a mechanical engineering professor in the university's Whiting School of Engineering and senior author of the journal article. "Our paper provides the recipe for how to speed up this method significantly by just changing four or five lines in the computer code."
This dramatic makeover emerged quietly in the fall of 2012, after Mittal told students in his Numerical Methods class about the Jacobi method. Mittal cited Jacobi's strategy as a mathematically elegant but practically useless method, and then moved on to faster methods and more modern topics. Xiang Yang, then a first-year grad student in the class, was listening intently.
Mittal had told his students that Carl Gustav Jacob Jacobi, a prominent German mathematician, unveiled this method in 1845 as a way to solve systems of linear equations by starting with a guess and then repeating a series of math operations over and over until a useful solution appeared.
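Jacobi's recipe is short enough to state in code. Below is a minimal sketch of the classical method as just described, not Yang and Mittal's scheduled-relaxation variant:

```python
# The classical Jacobi iteration for solving A x = b: each new estimate
# of x[i] is computed from the previous iterate only, so all components
# can be updated independently -- the property that makes the method so
# friendly to parallel computers.
def jacobi(A, b, iterations=100):
    n = len(b)
    x = [0.0] * n                                    # initial guess
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# A small diagonally dominant system, a case where Jacobi converges:
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
print(jacobi(A, b))    # converges toward the exact solution [1.0, 1.0, 1.0]
```

Yang and Mittal's scheduled relaxation Jacobi method, roughly speaking, replaces each plain update with a relaxed step of the form x + ω(x_new − x), cycling ω through a precomputed schedule of over- and under-relaxation factors; a change of only a few lines, as Mittal notes below.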
By the early 20th century, the method was being used by "human computers," groups of men and women who were each assigned to perform small pieces of larger math problems. A noted mathematician during that era managed to make the method proceed five times faster, but that was still considered rather slow. With the advent of speedier strategies and electronic computers, the Jacobi method fell out of favor.
"It just took so much time and so many computations to get to the answer you wanted," Yang said. "And there were better methods. That's why this Jacobi method isn't being used much today."
But after learning about the method in Mittal's class, Yang began tinkering with it. He returned to Mittal and proposed a way to make the process of repeating numerical estimates move more efficiently, speeding up the arrival of a solution. "Instead of saying that this method has been around for 169 years, and that everyone has already tried to improve it without much success, Professor Mittal told me that he felt my idea was very promising," Yang said, "and he encouraged me to work on it."
Yang spent a couple of weeks honing the updated math strategy, which he and his professor called a "scheduled relaxation Jacobi method." Then the grad student and Mittal began working together on a paper about the work that could be submitted to a peer-reviewed journal, with Yang as lead author.
Now that it has been published and is being shared freely, Mittal expects the modified method to be embraced in many industry applications, particularly those involving fluid mechanics. For example, when an aerospace engineer wants to test several different wing designs in a computer simulation program, the revised Jacobi method could speed up the process.
"I expect this to be adopted very quickly," Mittal said. "Everyone is competing for access to powerful computer systems, and the new Jacobi method will save time. In fact, the beauty of this method is that it is particularly well suited for the large-scale parallel computers that are being used in most modern simulations."
Oddly enough, the Jacobi update is not directly related to the doctoral project that grad student Yang is supposed to be focusing on: how barnacles on the sides of a ship affect its movement through water. But Yang said his doctoral adviser, Charles Meneveau, another mechanical engineering professor, encouraged him to devote some time to the Jacobi paper as well.
Yang, 24, grew up in China and earned his undergraduate engineering degree at Peking University. The school's dean of engineering, Shiyi Chen, a former Johns Hopkins faculty member, encouraged Yang to continue his studies at the Baltimore campus. The grad student said he's appreciated the faculty support at Johns Hopkins.
"Professor Mittal taught me to look at a lot of possibilities with an open mind," he said. "Then it's been relatively easy to handle my schoolwork. He's the one who inspired me."
July 30, 2014
IUPUI mathematician receives prestigious NSF early career development award
This is Dr. Roland Roeder.
Photo credit: School of Science at Indiana University-Purdue University Indianapolis
Jul 24 2014
INDIANAPOLIS -- Roland Roeder, Ph.D., a mathematician from the School of Science at Indiana University-Purdue University Indianapolis (IUPUI), will receive $460,000 over the next five years from the National Science Foundation’s Division of Mathematical Sciences to support his research in pure math and the training of students from the graduate to high school levels.
The Faculty Early Career Development award is the NSF's most prestigious award in support of junior faculty. It is given to individuals who “exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research.”
The award will support Roeder’s research on dynamics in several complex variables, an area of pure mathematics focusing on the theoretical underpinnings of systems that change with time. “Systems that change with time appear at the core of nearly all scientific endeavors, including biology, chemistry, physics and the social sciences," he noted. "Given the current state of a system, can one predict its future state? How does this evolution of the state of the system depend on the parameters of the system?
"Many such dynamical systems are far too complicated for a rigorous study, so one often resorts to simpler models, which are hoped to indicate the types of behavior that one should expect experimentally. One venue for such simpler models is the iteration of holomorphic maps, the topic of my NSF-supported research.”
According to Roeder, insights obtained from complex dynamics have already provided a deeper understanding of real-world problems in a variety of fields including the study of magnetic materials and astrophysics.
In addition to supporting Roeder’s research, his CAREER grant will provide research training including tuition and living expenses for one or two doctoral students he will supervise over the next five years. The funding will also enable Roeder and the Department of Mathematical Sciences to hold two workshops for graduate mathematics students from universities throughout the United States. Each workshop will provide opportunities for students to make presentations and will bring top researchers to IUPUI to speak and interact with the students.
Among the novel aspects of Roeder’s NSF-supported work are his mentorship of extremely talented high school students on research projects and his continued co-organization, with Jeffrey Watt, Ph.D., associate dean for student affairs and outreach in the School of Science, of the IUPUI High School Math Contest, a highly respected competition that has been held for 17 years and recently expanded to include schools from throughout Indiana.
“You can’t do anything properly without logical reasoning, and math is the art of logical reasoning,” said Roeder, who credits the awakening of his interest in the field as a young teen in Southern California to a local college faculty mentor who worked with motivated high school students. “A major goal of what we are doing here at IUPUI is providing students I like to call Super Stars -- talented high school students who work extremely hard -- with mentoring and an opportunity to learn advanced math, conduct original research and publish the results. Remarkably, they perform at the level of first-year graduate students.”
Roeder credits Pavel Bleher, Ph.D., Chancellor’s Professor of Mathematical Sciences at IUPUI, for getting him involved in leading high school students on research projects. The two mathematicians have an impressive track record of mentoring exceptional students, including a three-person team that won first place in the prestigious 2010 Siemens Competition in Math, Science and Technology and an individual student who was a 2013 Intel Science Talent Search finalist. Roeder’s previous NSF grant provided funding for him to mentor these students.
Yushi Homma, the Intel talent-search finalist and a recent Carmel (Ind.) High School graduate, will attend Stanford University in fall 2014 where he plans to major in math or possibly computer science. The 18-year-old Homma has been meeting with Roeder weekly for almost two years to work on a problem involving polynomials whose coefficients are random variables (a concept not typically covered until upper level college math). While maintaining a full high school course load, on average Homma had been spending about eight hours a week on this math problem, although he admits to ramping up to 20 hours a week in the month before the Intel competition deadline.
"I like math," said Yushi Homma, the son of a lawyer and a homemaker. "I began participating in national math competitions in sixth grade, but by the time I was 13 or 14 I realized that I preferred research because it was both a cumulative assessment of my math knowledge and a way to expand that knowledge. Working with Drs. Bleher and Roeder has helped launch me into the field." Under Roeder's and Bleher's continued tutelage, Homma is spending his last summer before college preparing a manuscript for submission to a peer-reviewed professional journal, a task that his mentors say very few high school students can accomplish, although this is the third time their mentees have produced a paper of this level. Interestingly, the other two papers were written by the team that was successful in the Siemens competition -- a team that included Yushi Homma's older brother Youkow, currently a math major at Yale University.
"Roland Roeder is the first math department faculty member at IUPUI to receive this award from the National Science Foundation. Seven current School of Science faculty members now hold this prestigious award, an impressive number that underscores the high quality of the School's faculty and its commitment to education, research and community outreach," said Simon Rhodes, Ph.D., dean of the school. Other NSF CAREER award recipients in the School of Science are faculty members Yogesh Joglekar (physics); Gavriil Tsechpenakis, Murat Dundar and Mohammad Al Hasan (computer and information science); Gregory Druschel (earth sciences) and Haibo Ge (chemistry and chemical biology).
The School of Science at IUPUI is committed to excellence in teaching, research and service in the biological, physical, behavioral and mathematical sciences. The school is dedicated to being a leading resource for interdisciplinary research and science education in support of Indiana's effort to expand and diversify its economy.
July 30, 2014
New Math Technique Improves Atomic Property Predictions to Historic Accuracy
From NIST Tech Beat: June 25, 2014
Contact: Chad Boutin
[This article was revised on July 23, 2014, to clarify the relationship between the work reported here and the related problem of calculating relativistic and quantum effects on electron energy levels. The latter becomes a significant factor for the higher atomic number atoms. - Editor]
By combining advanced mathematics with high-performance computing, scientists at the National Institute of Standards and Technology (NIST) and Indiana University (IU) have developed a tool that allowed them to calculate a fundamental property of a number of atoms on the periodic table to historic accuracy—reducing error by a factor of a thousand in many cases. Computational techniques of this type could be used someday to determine a host of other atomic properties important in fields like nuclear medicine and astrophysics.*
NIST's James Sims and IU's Stanley Hagstrom have calculated the nonrelativistic base energy levels for the four electrons in the element beryllium as well as the positive ions of all the other elements having the same number of electrons as beryllium, an accomplishment that has required nearly an entire career's effort on Sims' part. (Electron energy levels also depend on relativity and quantum dynamics effects caused by the atom's nucleus, but they're negligible until you get to much larger atoms.) Precise determination of the base energy—crucial for determining the amount necessary to raise an atom from its base energy level to any level higher—has great intrinsic value for fundamental atomic research, but the team's technique has implications far broader than for a single element.
Sims says the technique allowed the calculation of energy levels with eight-decimal accuracy, resulting in a remarkably smooth curve that they expected theoretically but were not sure they would attain in practice. For the vast majority of the elements in the periodic table, the calculated results are a thousand times more accurate than previous values which have been reported for the nonrelativistic model. The results, according to Sims, suggest their method could enable computation of other atomic properties—electron affinity and ionization potential, for example—that are important for astrophysics and other fields of atomic research.
Their method is the culmination of decades of effort aimed at using quantum mechanics to predict base energy levels from first principles. Sims first proposed in the late 1960s that such a quantum approach could be possible, but the complex calculations involved were beyond the reach of the world's best computers. Only in 2006, after the advent of parallel computing—linking many computers together as a unified cluster—were he and Hagstrom able to create workable algorithms for calculating the energies for a two-electron hydrogen molecule more accurately than could be done experimentally. Then, in 2010, they improved the algorithms to bring lithium's three electrons within reach.**
Beryllium's four electrons proved a new hurdle, but perhaps the last significant one. Much of the difficulty stems from the fact that mutual repulsion among the electrons, combined with their attraction for the nucleus, creates a complex set of interacting forces that are at least time-consuming, if not practically impossible, to calculate. The complexity grows with the addition of each new electron, but the team found a mathematical approach that can reduce an atom's electron cloud to a group of problems, none of which are more complex than solving a four-electron system.
Calling their approach a shortcut would be in some ways a misnomer. Where the calculation for lithium required a cluster of 32 parallel processors, beryllium required 256, and even then, the cluster needed to operate at extremely high efficiency for days. But the payoff was that they could calculate the energies for all four-electron ground states—meaning not only all of the elements in beryllium's column on the periodic table, each of which has four electrons in its outer shell, but also for all other elements in ionized states that have four electrons, such as boron with one electron missing, carbon missing two, and so forth. Relativistic and other effects are not included in the current model and become more significant for larger atomic numbers, but this study does demonstrate the importance of careful analysis and parallel computational approaches to enable "virtual measurement" of atomic properties based on theory, according to NIST researchers.
*J.A. Sims and S.A. Hagstrom. Hylleraas-configuration-interaction nonrelativistic energies for the 1S ground states of the beryllium isoelectronic sequence. Journal of Chemical Physics, DOI 10.1063/1.4881639, June 11, 2014.
**See the 2010 Tech Beat article, "Theorists Close In on Improved Atomic Property Predictions" at www.nist.gov/public_affairs/tech-beat/tb20100112.cfm#atomic.
July 30, 2014
Philosopher uses game theory to understand how words, actions acquire meaning
The latest work from Elliott Wagner, assistant professor of philosophy, appears in the scientific journal
Proceedings of the National Academy of Sciences of the United States of America,
which is a rarity for philosophy research.
Monday, July 21, 2014
MANHATTAN — Why does the word "dog" have meaning? If you say "dog" to a friend, why does your friend understand you?
Kansas State University philosopher Elliott Wagner aims to address these types of questions in his latest research, which focuses on long-standing philosophical questions about semantic meaning. Wagner, assistant professor of philosophy, and two other philosophers and a mathematician are collaborating to use game theory to analyze communication and how it acquires meaning.
"If I order a cappuccino at a coffee shop, I usually don't think about why it is that my language can help me communicate my desire for a cappuccino," Wagner said. "This sort of research allows us to understand a very basic aspect of the world."
The researchers' latest work appears in the scientific journal Proceedings of the National Academy of Sciences of the United States of America, or PNAS, in the article "Some dynamics of signaling games." It is rare for philosophy research to appear in the scientific journal, Wagner said. Collaborators include two other philosophers — Simon Huttegger and Brian Skyrms — from the University of California, Irvine, as well as mathematician Pierre Tarres of the University of Toulouse in France.
The researchers are using evolutionary game theory models to understand how words and actions acquire meaning through natural processes, whether through biological evolution, social learning or other adaptive processes.
Game theory is a branch of mathematics that creates mathematical abstractions of social interactions and communication. Communication involves two agents — a sender and a receiver. The sender shares a message with the receiver through a sign or signal and the receiver uses the signal to act in the world. This interaction is called a signaling game.
The researchers used signaling games to study information flow in the natural world, which happens at all levels of biological organization, Wagner said. For example, bacteria such as those in the genus Pseudomonas communicate through chemical signals to attack the human immune system. Monkeys use vocalization to talk with each other. A peacock uses the size of his tail to signal his attractiveness to a female. People use gestures and language to communicate.
While these types of models have existed since the 1970s, Wagner and his collaborators studied the dynamics of signaling games, incorporating evolution and individual learning in ways that overturn preconceived notions from previous models.
Through these models, the researchers start with a signaling game in which the sender's message does not have any prebuilt meaning. As the signaling system evolves, the sender's message may reflect the state of the world and the receiver may respond in a way that is appropriate for the state of the world.
"Through this process an arbitrary signal with no prebuilt meaning has come to mean something," Wagner said. "It appears that the meaning of a word has almost magically arisen out of this natural process."
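This emergence can be seen in a minimal simulation. The sketch below is a two-state Lewis signaling game trained by simple reinforcement (urn) learning, one classic dynamic from this literature; it is offered as an illustration rather than as the specific models analyzed in the PNAS article.

```python
# A two-state Lewis signaling game trained by Roth-Erev reinforcement:
# sender and receiver start with no conventions, and a shared code
# emerges from payoff feedback alone.
import random

random.seed(1)
STATES = [0, 1]                      # two states of the world, two signals, two acts
sender = [[1.0, 1.0], [1.0, 1.0]]    # urn weights: sender[state][signal]
receiver = [[1.0, 1.0], [1.0, 1.0]]  # urn weights: receiver[signal][act]

def draw(weights):
    """Sample an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(20000):
    state = random.choice(STATES)    # nature picks a state
    signal = draw(sender[state])     # sender picks a signal for it
    act = draw(receiver[signal])     # receiver picks an act given the signal
    if act == state:                 # success reinforces both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

# Measure how often communication now succeeds: an initially arbitrary
# signal has come to "mean" a particular state.
trials = 1000
success = sum(draw(receiver[draw(sender[s])]) == s
              for s in (random.choice(STATES) for _ in range(trials))) / trials
print(success)
```

After training, the success rate is far above the 50% a meaningless code would achieve, which is the phenomenon Wagner describes: meaning arising out of a natural adaptive process.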
If the researchers can show that this process occurs across a wide variety of models, then they may be able to explain how a word or action gains meaning.
"I think it's important for us to think carefully about features of our lives that we take for granted," Wagner said. "This research is one way for us to think carefully about why it is that words have meaning and how it is that words can acquire meaning through a natural process."
The National Science Foundation has supported the work. The researchers plan future studies that will expand their analysis to a wider class of signaling games.
July 30, 2014
“Game Theory” Model Reveals Vulnerable Moments for Metastatic Cancer Cells' Energy Production
Cooperation between oxygen-poor cancer cells (red) and oxygen-rich ones (green).
Laboratory of Kenneth Pienta, Johns Hopkins
Cancer’s no game, but researchers at Johns Hopkins are borrowing ideas from evolutionary game theory to learn how cells cooperate within a tumor to gather energy. Their experiments, they say, could identify the ideal time to disrupt metastatic cancer cell cooperation and make a tumor more vulnerable to anti-cancer drugs.
“The reality is that we still can’t cure metastatic cancer that has spread from its primary organ and game theory adds to our efforts to attack the problem,” says Kenneth J. Pienta, M.D., the Donald S. Coffey Professor of Urology at the Johns Hopkins Brady Urological Institute, and director of the Prostate Cancer Program at the Johns Hopkins Kimmel Cancer Center. A description of the work appears in a June 20 report in the journal Interface Focus.
Game theory is the mathematical study of strategic decision-making. It has been widely used to predict conflict and cooperation between individuals and even nations, and is increasingly applied to forecasting cell-to-cell interactions in biology from an ecological perspective. Tumors contain a variety of cells shifting between cooperative-like and competitive-like states, said Ardeshir Kianercy, Ph.D., a postdoctoral researcher in Pienta’s lab. “To study tumor cells in isolation is not enough,” he noted. “It makes sense to study their behavior and relationship with other cells and how they co-evolve together.”
In their research, the Johns Hopkins scientists used mathematical and computer tools to set up game parameters based on biological interactions between two types of tumor cells, one oxygen-rich and the other oxygen-poor. Cells within a tumor engage in different types of energy metabolism depending on how close they are to an oxygen-rich blood supply. Tumor cells in oxygen-poor areas use the sugar glucose to produce energy and, as part of the process, release a compound called lactate. Oxygen-rich cells use this lactate in a different type of energy metabolism process and, as a result, release glucose that can be used by oxygen-poor cells to burn for their own energy.
Generally, the process is an efficient partnership that can help a tumor thrive, but the partnership is always changing as the tumor cells mutate. The mutation rate influences the strength of the energy partnerships between the oxygen-rich and oxygen-poor cells and levels of glucose and lactate production and uptake, according to the scientists.
Applying game theory calculations that accounted for the tumor cells’ mutation rates and potential glucose and lactate levels, the scientists found that within certain ranges of mutation rates, “there are critical transitions when a tumor suddenly switches between different types of energy metabolic strategies,” Kianercy said. This switch in the playbook of energy production tactics may happen when tumors progress and spread.
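The sudden strategy switch the researchers describe can be illustrated with a toy model. The sketch below is not the Johns Hopkins team's actual model: it uses simple two-strategy replicator-mutator dynamics with entirely hypothetical payoff values, where "cooperating" cells exchange metabolites and mutation flips offspring between strategies. The point it demonstrates is only that the long-run mix of strategies can change abruptly once the mutation rate crosses a critical value.

```python
# Toy replicator-mutator dynamics for two metabolic strategies.
# Payoff values are hypothetical; this only illustrates how a mutation
# rate can trigger a sudden switch between strategy mixes.

def equilibrium_coop_share(mu, steps=5000, x0=0.9):
    """Iterate discrete replicator-mutator dynamics and return the
    long-run fraction of 'cooperating' cells for mutation rate mu."""
    # Payoff matrix: rows = my strategy, columns = opponent's strategy.
    # Mutual cooperation pays best; defectors exploit cooperators.
    a_cc, a_cd = 2.0, 0.0   # cooperator payoffs vs (cooperate, defect)
    a_dc, a_dd = 1.5, 1.0   # defector payoffs vs (cooperate, defect)
    x = x0                  # current fraction of cooperators
    for _ in range(steps):
        f_c = a_cc * x + a_cd * (1 - x)   # cooperator fitness
        f_d = a_dc * x + a_dd * (1 - x)   # defector fitness
        phi = x * f_c + (1 - x) * f_d     # mean fitness
        # Offspring mutate to the other strategy with probability mu.
        x = (x * f_c * (1 - mu) + (1 - x) * f_d * mu) / phi
    return x

# Sweep the mutation rate: the cooperator share collapses abruptly
# once mu passes a critical value, rather than declining smoothly.
for mu in (0.01, 0.05, 0.1, 0.2, 0.4):
    print(f"mu = {mu:.2f}  cooperator share = {equilibrium_coop_share(mu):.3f}")
```

In this toy version, the abrupt drop happens because the cooperative equilibrium and an unstable threshold point collide and disappear as the mutation rate rises, which is one standard way a "critical transition" arises in evolutionary game dynamics.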
The scientists think tumors might be especially vulnerable within this window of strategy-switching, making it a potentially ideal time for clinicians to disrupt the tumor’s environment and wreck the partnership among its cells.
Some tumor cells, for instance, may provoke the normal cells around them to release lactate for fuel. A therapy that disrupts lactate transport to the tumor cells during a critical transition “could push a tumor to a condition where cells are not cooperating with each other,” Kianercy explained. “And if they become non-cooperative, they are most likely to stay in that state and the tumor may become more vulnerable to anti-cancer therapies.”
Pienta said it isn’t clear yet whether this type of metabolic cooperation occurs in all tumors. But the game theory model used in the study gives scientists a new way to understand how cancers may progress. “We ultimately want to test how we can interrupt this process with therapies for cancer patients,” he said.
Robert Veltri, Ph.D., of the Brady Urological Institute was also involved in the study, which was supported by the National Institutes of Health’s National Cancer Institute (U54CA143803).
July 30, 2014
Statistical Analysis Could Improve Understanding and Treatment of Different Brain Tumors
Mon, 07/07/2014 - 10:21am
Discovering a brain tumor is a very serious matter, but it is not the end of the story. There are many different types of brain tumor, with different survival rates and different methods of treatment. Today, however, many brain tumors are difficult to diagnose clearly, leading to poor prognoses for patients.
Diagnosis today is based mainly on morphological appearance. However, morphology does not closely correlate with the mechanisms of pathogenesis, and morphology-based diagnosis may have hampered the discovery of cures for brain tumors and other types of cancer.
This is something that Xiaolong Fan and colleagues at Beijing Normal University in China are looking to improve. "We try to find ways to classify brain tumors according to the known pathogenesis processes," Fan explained. "This could help with diagnoses and maybe could avoid unnecessary treatment."
The Beijing researchers have a long-term collaboration with researchers at Lund University in Sweden. The two groups have together developed a new way to classify gliomas — the most common brain tumors in adults — into distinct molecular subtypes. As the researchers reported in a recent paper in the journal Proceedings of the National Academy of Sciences of the United States of America (www.pnas.org/cgi/doi/10.1073/pnas.1313814111), these molecular subtypes show differences in transcriptomic and genomic characteristics, as well as in patient survival rates.
To come up with the classification, the researchers studied two gene co-expression modules around key signaling pathways that are conserved between neural development and the formation of gliomas. In the search for patterns, they used publicly available datasets from three continents, including gene expression, genomic and clinical data.
To analyze these datasets and look for links, the researchers turned to a Lund-based scientific software company, Qlucore, founded in 2007 as a spin-out of Lund University. The company was formed to develop an interactive software tool to conceptualize the vast amounts of high-dimensional data generated by microarray gene expression analysis, and its software is now used with a range of high-dimensional biological datasets.
"Qlucore has a really interesting approach," said Fan, who has been using the software for four years. Using Qlucore Omics Explorer to carry out Pearson correlation coefficient analysis, the team was able to identify gene co-expression modules around two receptor tyrosine kinases (RTKs) that govern cell fate specification, cell proliferation and migration in the neural stem cell compartment, and glial development in gliomas. The two key RTKs that the team studied are epidermal growth factor receptor (EGFR) and platelet-derived growth factor receptor A (PDGFRA).
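The core idea behind co-expression module analysis can be sketched in a few lines: for a seed gene, compute the Pearson correlation of every other gene's expression profile against the seed across samples, and keep the genes above a cutoff. The gene names, expression values, and threshold below are invented for illustration; the actual study used Qlucore Omics Explorer on real multi-continent datasets.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def coexpression_module(expression, seed_gene, threshold=0.9):
    """Genes whose expression profile correlates with the seed gene
    at or above the threshold (the seed's co-expression module)."""
    seed = expression[seed_gene]
    return sorted(
        gene for gene, profile in expression.items()
        if gene != seed_gene and pearson(seed, profile) >= threshold
    )

# Hypothetical expression values across five tumor samples.
expression = {
    "EGFR":   [1.0, 2.0, 3.0, 4.0, 5.0],
    "GENE_A": [2.1, 4.0, 6.2, 7.9, 10.0],  # tracks EGFR closely (r > 0.99)
    "GENE_B": [5.0, 4.0, 3.0, 2.0, 1.0],   # anti-correlated (r = -1)
    "GENE_C": [1.0, 3.0, 2.0, 5.0, 4.0],   # weakly correlated (r = 0.8)
}
print(coexpression_module(expression, "EGFR"))  # → ['GENE_A']
```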
Based on the expression patterns of these two modules, adult low-grade and high-grade gliomas could be classified into three major subtypes that are distinct in prognosis, genetic abnormalities and correlation to the cell lineages and differentiation stages of glial genesis but independent of glioma morphology.
The three subtypes are EM, PM and EMlowPMlow gliomas. According to the findings presented in the PNAS paper, EM gliomas were associated with higher age at diagnosis, poorer prognosis, and stronger expression of neural stem cell and astrogenesis genes. Both PM and EMlowPMlow gliomas were associated with younger age at diagnosis and better prognosis. In addition, "PM gliomas were enriched in the expression of oligodendrogenesis genes, whereas EMlowPMlow gliomas were enriched in the signatures of mature neurons and oligodendrocytes."
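The three-way split can be caricatured as a simple decision rule on the two module-expression scores. The cutoff and scores below are entirely hypothetical and are not the paper's actual classification procedure; they only illustrate the shape of the scheme, in which low expression of both modules is its own subtype and the dominant module otherwise decides.

```python
def classify_glioma(em_score, pm_score, low=0.5):
    """Toy three-way split on two module-expression scores.
    The 0.5 cutoff is a made-up illustration, not the paper's rule."""
    if em_score < low and pm_score < low:
        return "EMlowPMlow"   # both modules weakly expressed
    return "EM" if em_score >= pm_score else "PM"

print(classify_glioma(0.9, 0.3))  # → EM
print(classify_glioma(0.2, 0.8))  # → PM
print(classify_glioma(0.1, 0.2))  # → EMlowPMlow
```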
"In the process of this study, we have found that Qlucore has been very helpful in supporting a biologist without a sufficient mathematical background to apply bioinformatics approaches in their studies. This has been essential for the implementation of the project," said Fan.
Fan is excited about the potential of this approach to improve classification of gliomas. However, there is plenty of work still to do, he said. The next steps, he said, are to use these classifications to elucidate glioma pathogenesis and to identify new glioma therapeutic targets.
And there are plenty of challenges too. "It is quite easy to bring data together if you have a procedure. We use the software as a mathematical tool," he said. However, he added that the limitation is biology. "There are lots of mathematical choices and statistical possibilities but not all make biological sense."
Reference: Yingyu Sun, Wei Zhang, Dongfeng Chen, Yuhong Lv, Junxiong Zheng, Henrik Lilljebjörn, Liang Ran, Zhaoshi Bao, Charlotte Soneson, Hans Olov Sjögren, Leif G. Salford, Jianguang Ji, Pim J. French, Thoas Fioretos, Tao Jiang, and Xiaolong Fan. (2014) A glioma classification scheme based on coexpression modules of EGFR and PDGFRA. PNAS 111:3538-3543. doi: 10.1073/pnas.1313814111
July 30, 2014
NSA data collection ineffective against terrorism and dangerous for democracy, say mathematicians
By John Leonard
Two American mathematicians have spoken of their concern that the mass data collection undertaken by the NSA with the aim of 'preventing terrorism' is both ineffective in achieving the stated goal and dangerous for democracy.
Writing in the June/July 2014 issue of the Notices of the AMS, Keith Devlin, a mathematician at Stanford University who spent five years researching 'the area of extracting actionable information from vast amounts of data' with funding from the US Department of Defense, claimed that mass data collection is an ineffective way of preventing terrorism, and that resources would be better deployed elsewhere.
"I concentrate on whether indiscriminate 'vacuuming up' of personal information that, according to the documents Edward Snowden has released, the NSA has routinely engaged in for several years, can effectively predict terrorist attacks," Devlin writes.
"I'll say up front that, based on everything I learned in those five years, blanket surveillance is highly unlikely to prevent a terrorist attack and is a dangerous misuse of resources that, if used in other ways, possibly could prevent attacks such as the 2013 Boston Marathon bombing. Anyone with a reasonable sense of large numbers could surmise a similar conclusion. When the goal is to identify a very small number of key signals in a large ocean of noise, indiscriminately increasing the size of the ocean is self-evidently not the way to go."
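Devlin's "ocean of noise" point is the classic base-rate problem, and a back-of-the-envelope calculation makes it concrete. All of the figures below are hypothetical round numbers chosen for illustration, not actual NSA statistics: even an implausibly accurate screening system, applied to an entire population, buries the handful of real signals under millions of false alarms.

```python
# Base-rate arithmetic behind the "ocean of noise" objection.
# All numbers are hypothetical round figures for illustration.

population = 300_000_000    # people under blanket surveillance
true_threats = 1_000        # actual plotters among them
sensitivity = 0.99          # fraction of real threats correctly flagged
false_positive_rate = 0.01  # fraction of innocent people wrongly flagged

flagged_threats = true_threats * sensitivity
flagged_innocents = (population - true_threats) * false_positive_rate
precision = flagged_threats / (flagged_threats + flagged_innocents)

print(f"innocent people flagged: {flagged_innocents:,.0f}")
print(f"chance a flagged person is a real threat: {precision:.4%}")
```

With these numbers, roughly three million innocent people are flagged, and a flagged person has about a 0.03% chance of being a real threat. Enlarging the ocean of data makes this ratio worse, not better, which is the crux of Devlin's argument.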
Writing in the same journal, Andrew Odlyzko, professor of mathematics at the University of Minnesota in Minneapolis, claims that the concentration of power represented by mass collection of data by both the NSA and private organisations is dangerous for the entire political system.
"The antiterrorism mantra is driving public policy, and it is corroding the already weakened trust in democratic governance. When high-level officials feel free to give the 'least untruthful' answers or provide assurances of careful oversight and of intelligence successes that are then shown to be false, much is lost," Odlyzko says.
Odlyzko claims that this is just the beginning, and that the incipient Internet of Things will allow much greater levels of data collection and therefore potential for abuse. The NSA obtains data through private organisations, he points out, the business roadmaps of which are often predicated on collecting and processing ever larger amounts of data.
"Most of the data that the NSA has been using came from private organisations, and those are building their business cases on ever more intrusive data collection and exploitation," he writes.
"One report from the latest Consumer Electronics Show said that the 'unsettling message' of that event was that 'everything will be tracked'. What the NSA has been amassing is tiny compared to what will be available soon."
As well as the threat of officials misusing data amassed by the NSA for political purposes, for which he is careful to say there is no hard evidence, Odlyzko points to risks inherent in the way the data collected by private organisations is stored.
"Most of that [data] will be held in databases much more poorly protected than those of the NSA. Therefore we will have to worry about ... what might be done by even less trustworthy employees of the private organisations controlling that data and by all those who manage to break into those (inevitably insecure) databases," he writes.
In the interest of balance Notices of the AMS also sought mathematicians prepared to write in defence of the NSA, but said that for whatever reason "this proved difficult".
July 30, 2014
5 Things You Need To Know About The Future Of Math
Edtech is a big market; globally, spending on education runs to trillions of dollars. No wonder there are so many entrepreneurs trying to grab a piece.
In an age when start-up entrepreneurs routinely try to come up with the next big gold rush, however, it is easy to forget that the fundamentals of pedagogy have remained stable for thousands of years. Great technology does not disrupt as much as it empowers humans to do a better job at staying the same.
Each week, I get emails from hundreds of startups. They want me to look at their products…or should I say, their “innovations.” Some of them are good, but the majority are confused. They think they’re going to change civilization with some great “disruption.” They believe they will forever change the landscape of education and solve a global education crisis. Never buy into your own hype. There is nothing as trite and status quo as reading an entrepreneur’s press release about how “unique and game-changing” his or her product is. Have you asked the teachers what they want, or what they need? On average, teachers and school reformers tend to dismiss new technologies. “We’ve seen this before. There’s always something poised to change everything,” they say. Many educational technologies have disappeared just as quickly as they arrived. Consider the stereograph, the reading accelerator, B.F. Skinner’s “teaching machines.” Education professionals are hardly inspired by promises of the next new big thing.
Perhaps your new product is disruptive, but disruption is not what we need.
The most effective pieces of edtech start with the assumption that educational tools are at their best when they function like the chalkboard: non-invasively helping teachers to do a well-practiced job with increased precision and ease.
“Teachers Know Best: What They Want From Digital Instructional Tools” is a report commissioned by the Bill & Melinda Gates Foundation. “We embarked on this research with a simple goal: to find out directly from teachers what kinds of tools and instructional technologies they’d like to have to help them tailor instruction to their students’ individual needs and skills.” They continue, “We hope these findings will provide a roadmap to ed tech companies that will help them develop products that better meet teachers’ needs, as well as to districts and others who buy these tools, to encourage them to give teachers a voice in choosing the digital resources they’ll use in their classroom.”
They surveyed 3,000 K-12 teachers and more than 1,000 students. While I think their report suffers from a misguided faith in the power of revolution, rejuvenation, disruption and innovation, there are still quite a few important takeaways. Here are the five findings that are most interesting to me (with commentary).
“Teachers feel there’s a gap between the variety of tools currently available to them – most of which are purchased by districts on their behalf – and what they find to be most effective in the classroom.” Teachers have been saying this for decades. The nature of school infrastructure is such that teachers are rarely the ones who make the purchasing decisions. Certainly I understand the reasons, but in the age of crowdsourced decision making, you’d hope we could figure out better ways to empower (and trust) teachers to make decisions that were in the best interest of the individuals they work with everyday.
“Teachers really do want digital tools” but “those tools need to transform the learning experience in new ways – they can’t simply be ‘converted’ from successful traditional tools.” I know this almost sounds like teachers want disruptive innovations, but that’s not it. Instead, they want edtech that doesn’t try to fix what’s not broken. Stop trying to build better chalkboards; this is not an area where problems exist. Everything doesn’t need to be digitized. But there are some particular areas where teachers do want digital tools…
“Teachers identified six instructional purposes for which digital instructional tools are useful: Delivering instruction directly to students; Diagnosing student learning needs; Varying the delivery method of instruction; Tailoring the learning experience to meet individual student needs; Supporting student collaboration and providing interactive experiences; Fostering independent practice of specific skills.” All of these examples point in one direction: collaborative adaptive learning technologies. Everyone knows it. We need technologies that allow us to personalize instruction, assessment, practice, and collaboration in increasingly sophisticated ways. We don’t need technologies that fix the classroom–it works just fine–we need technologies that fix the big problems: individual work, homework, and standardized testing. It is sad to consider our culture’s priorities: we’ve built amazingly sophisticated adaptive algorithms to make consumers incur more and more credit card debt while shopping online but we still haven’t applied the same level of engineering expertise to education.
“In math, as grade levels increased, teachers were less likely to report having available, sufficient, and digital resources, with high school math teachers reporting the biggest gaps. The opposite trend is seen in English language arts (ELA), with elementary school teachers reporting the biggest gaps.” Take notes; these are the gaps. And it is not unique to edtech. In general, we’re really good at teaching the building blocks of STEM, but as it gets more ambiguous, we struggle. In ELA, it is the opposite. In my opinion, this is because of our misguided belief that science is factual and the humanities are abstract. In fact, both are language systems–the foundation (or should I say, the grammar) is rigid, the eventual implementation is always creative, fuzzy, and ambiguous. Multi-disciplinary edtech will help everyone understand that the strict divisions between subjects and disciplines are problematic. From what I’ve seen, game-based learning is the likely front-runner in this area.
“Teachers don’t get to choose many of the products their students use, but when they are given the opportunity to select them, they are more likely to report that products were effective.” Remember what really matters. At the end of the school day, it is all about whether or not a teacher is able to reach a student–to nurture a critically thinking, productive contributor to a better civilization. Forget about who signs the checks, and who the customer is, and the size of the market. Focus on the teachers’ relationship with their students. Establish ways to empower them to make their own decisions.
Remember some of the best innovations in the history of humanity: the wheel, irrigation, written language, etc. Real entrepreneurship does not function in service of “disruption,” or “revenue models.” Instead it reimagines everyday implementation patterns in ways that increasingly benefit human civilizations.
Read the full report which the Bill and Melinda Gates Foundation released on April 22nd at the ASU/GSV Education Innovation Summit: http://tinyurl.com/TeachersKnowBest
Jordan Shapiro is author of FREEPLAY: A Video Game Guide to Maximum Euphoric Bliss, a book about how playing video games can transform psychological attitudes. For information on Jordan’s upcoming books and events click here.
July 30, 2014
Hoijer '15 Studies Correlations Between Mathematics and Music
Natalie Hoijer '15 leads activities to help students
understand the theories behind their music.
BLOOMINGTON, Ill. — Illinois Wesleyan University student Natalie Hoijer ’15 (Arlington Heights, Ill.) is combining her academic majors to investigate the relationships between mathematical symmetries and classical music.
“The connection between math and music has always fascinated me, and by studying both of them I see how they rely on each other,” said Hoijer, who is double majoring in the two fields. “They parallel each other in many ways, such as in the mathematics of the sound produced by instruments, and in tuning, probability and group theory.”
As an Eckley Summer Scholar and Artist, Hoijer is researching mathematical symmetries such as the Golden Mean, the Fibonacci Series and palindromes, and how they apply to the composition of concert music. Under the mentorship of School of Music Director Mario Pelusi, she is exploring how these techniques affect the styles and structures of a variety of musical compositions and how these works compare to compositions that do not use mathematical symmetries.
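The symmetries Hoijer studies are easy to see numerically. For instance, the ratios of consecutive Fibonacci numbers converge on the Golden Mean, φ = (1 + √5)/2 ≈ 1.618, which is one reason the series turns up in analyses of musical proportion. The short computation below is an illustration of that convergence, not part of her research.

```python
# Ratios of consecutive Fibonacci numbers converge to the Golden Mean,
# phi = (1 + sqrt(5)) / 2, approximately 1.618034.

def fibonacci(n):
    """First n Fibonacci numbers, starting 1, 1, 2, 3, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

phi = (1 + 5 ** 0.5) / 2
fibs = fibonacci(12)
for a, b in zip(fibs, fibs[1:]):
    print(f"{b}/{a} = {b / a:.6f}  (phi = {phi:.6f})")
```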
As part of her project, Hoijer is also teaching a class titled “Unleashing Music’s Hidden Blueprint” at the Illinois Chamber Music Festival held through Illinois Wesleyan’s School of Music. She incorporates her research and findings into her lesson plans, and creates hands-on activities to help students understand the theories and foundations on which compositions are based.
“I love sharing my enthusiasm for math and music with students and seeing them enjoy the content as much as I do,” said Hoijer. “The chance to be in front of a classroom of students, create my own lesson plans and share my discoveries, as well as help students reach discoveries of their own is an exciting experience.”
After receiving feedback from surveys, projects and activities that she has completed with her class, Hoijer said she has been able to measure quantitatively the aesthetic quality of contrastingly structured compositions. She hopes to discover which compositional tool is most effective in enhancing the overall structure of a work of music, information she plans on presenting or publishing through a research paper.
The Eckley Summer Scholars and Artists endowment supports summer research and creative activity for several students each year, enabling them to stay on campus over the summer under the direction of faculty mentors. The program was established as one aspect of a major gift to the University by President Emeritus Robert S. Eckley, his wife Nell and the Eckley Family Foundation, before he passed away in 2012.
“It has been wonderful to have unencumbered time devoted to dissecting my research and also invigorating to make these discoveries,” said Hoijer. “To transform a subjective field, such as music into an objective understanding with colors, shapes or proportions, has been a rich and gratifying experience.”
July 30, 2014
Seeker, Doer, Giver, Ponderer
Beatrice de Gea for The New York Times
JULY 7, 2014
James H. Simons likes to play against type. He is a billionaire star of mathematics and private investment who often wins praise for his financial gifts to scientific research and programs to get children hooked on math.
But in his Manhattan office, high atop a Fifth Avenue building in the Flatiron district, he’s quick to tell of his career failings.
He was forgetful. He was demoted. He found out the hard way that he was terrible at programming computers. “I’d keep forgetting the notation,” Dr. Simons said. “I couldn’t write programs to save my life.”
After that, he was fired.
His message is clearly aimed at young people: If I can do it, so can you.
Down one floor from his office complex is Math for America, a foundation he set up to promote math teaching in public schools. Nearby, on Madison Square Park, is the National Museum of Mathematics, or MoMath, an educational center he helped finance. It opened in 2012 and has had a quarter million visitors.
Dr. Simons, 76, laughs a lot. He talks of “the fun” of his many careers, as well as his failings and setbacks. In a recent interview, he recounted a life full of remarkable twists, including the deaths of two adult children, all of which seem to have left him eager to explore what he calls the mysteries of the universe. “I can’t help it,” he said of the science he finances. “It’s very exciting.”
Jeff Cheeger, a mathematician at New York University who studied with him a half century ago at Princeton, described Dr. Simons’s career as “mind-boggling.”
Dr. Simons received his doctorate at 23; advanced code breaking for the National Security Agency at 26; led a university math department at 30; won geometry’s top prize at 37; founded Renaissance Technologies, one of the world’s most successful hedge funds, at 44; and began setting up charitable foundations at 56.
This year, he was elected to the National Academy of Sciences, an elite body that Congress founded during Lincoln’s presidency to advise the federal government.
With a fortune estimated at $12.5 billion, Dr. Simons now runs a tidy universe of science endeavors, financing not only math teachers but hundreds of the world’s best investigators, even as Washington has reduced its support for scientific research. His favorite topics include gene puzzles, the origins of life, the roots of autism, math and computer frontiers, basic physics and the structure of the early cosmos.
Working closely with his wife, Marilyn, the president of the Simons Foundation and an economist credited with philanthropic savvy, Dr. Simons has pumped more than $1 billion into esoteric projects as well as retail offerings like the World Science Festival and a scientific lecture series at his Fifth Avenue building. Characteristically, it is open to the public.
His casual manner — he’s known as Jim — belies a wide-ranging intellect that seems to resonate with top scientists.
“He’s an individual of enormous talent and accomplishment, yet he’s completely unpretentious,” said Marc Tessier-Lavigne, a neuroscientist who is the president of Rockefeller University. “He manages to blend all these admirable qualities.”
On a wall in Dr. Simons’s office is one of his prides: a framed picture of equations known as Chern-Simons, after a paper he wrote with Shiing-Shen Chern, a prominent geometer. Four decades later, the equations define many esoteric aspects of modern physics, including advanced theories of how invisible fields like those of gravity interact with matter to produce everything from superstrings to black holes.
Math is considered a young person’s game. But Dr. Simons continues to map its frontiers.
“He said, ‘What do you mean, reading?’ ” Dr. Stillman recalled. The journal held one of Dr. Simons’s papers. Given that Dr. Simons still works in business as well as philanthropy, Dr. Stillman added, “that’s pretty impressive.”
A Boyhood Love of Math and Logic
During the interview, Dr. Simons reached into the pocket of his blue shirt and pulled out a pack of cigarettes, at times letting one dangle from his mouth unlit. He was relaxed and chatty, wearing tan pants and loafers, his accent betraying his Boston birth and upbringing.
Dr. Simons said he knew as a boy that he loved math and logic. He would lie in bed thinking about how to give the instruction “pass it on” in a clearly defined way.
“One night, I figured it out,” he recalled. By morning, he added, he could no longer remember the insight.
At 14, during a Christmas break, he was hired by a garden supply store for a stockroom job. But he was quickly demoted to floor sweeper after repeatedly forgetting where things went. His bosses were incredulous when, at vacation’s end, he told them he wanted to study mathematics at the nearby Massachusetts Institute of Technology.
Excellent test scores and the recommendation of a high school adviser got him into the prestigious school. He graduated in three years, and received his doctorate from the University of California, Berkeley, in three more. It was at Berkeley that he met Dr. Chern, a math prodigy from China.
In his doctoral thesis, Dr. Simons advanced the mathematical understanding of curved spaces, a topic Einstein exploited in his general theory of relativity to show how gravity deforms space and time.
Returning east, he taught math at M.I.T., then Harvard. In 1964, he was recruited into the shadowy world of government spying. At Princeton, while ostensibly part of the academic elite, he worked for the Institute for Defense Analyses, its Princeton arm a furtive contractor for the N.S.A.
On his own time, once a week, he tutored Dr. Cheeger, then a graduate student. “He became my teacher, unofficially,” the N.Y.U. professor recalled.
At Princeton, Dr. Simons’s cryptography strides helped the N.S.A. break codes and track potential military threats. But he failed as a programmer.
He also managed to fall into political conflict with his boss, Maxwell D. Taylor, a retired four-star Army general. In 1967, General Taylor defended the Vietnam War in a New York Times Magazine article. Dr. Simons objected. His reply, also published in The Times, said the conflict would “diminish our security” and urged a pullout “with the greatest possible dispatch.”
Soon after, he was dismissed, and Stony Brook University on Long Island courted him to become its math chairman.
“It was a lousy department,” he recalled. “When I was interviewed by the provost, he said, ‘Well, Dr. Simons, I have to say you’re the first person we’ve interviewed for this job who actually wants it.’
“I said: ‘I want it. I want it. It sounds like fun.’ And it was fun. And I went there, and we built up a very good department.”
In 1976, Dr. Simons won the Oswald Veblen Prize of the American Mathematical Society — geometry’s highest honor — raising the department’s stature. The award was for recasting the higher math of area-minimizing surfaces, a simple example being a soap film that forms across a wire frame.
But he became restless, and the business world beckoned. In Boston, his family had run a shoe factory. At Berkeley, he had traded stocks. Once, after driving to Bogotá, Colombia, on a motor scooter with a college friend, he persuaded his father to join him in an investment there.
In 1978, he founded a predecessor to Renaissance Technologies in a strip mall close to the Stony Brook campus. In 1982, he set up Renaissance, which grew to occupy a 50-acre campus, complete with tennis courts.
In time, his novel approach helped change how the investment world looks at financial markets. The man who “couldn’t write programs” hired a lot of programmers, as well as physicists, cryptographers, computational linguists, and, oh yes, mathematicians. Wall Street experience was frowned on. A flair for science was prized. The techies gathered financial data and used complex formulas to make predictions and trade in global markets.
The company thrived, rewarding investors with double-digit annual returns. It marked an early triumph of the “quants” — quantitative analysts who use advanced math to guide investments — and foreshadowed the ascendancy of Big Data.
The secret? “He’s a very good people manager,” said Nick Patterson, a former Renaissance partner. “That’s not,” he added, “the stereotype of a mathematician.”
Dr. Simons credits his employees. “A good atmosphere and smart people can accomplish a lot,” he said.
But he also conceded that his curiosity drove him to examine all kinds of unusual possibilities, such as whether sunspots and lunar phases influenced the financial markets. During the birth of one of his five children, a nurse told Dr. Simons that the obstetrics ward was always crowded during a full moon.
“I tested that one, too,” he said. “Not true.”
Success and Tragedy Were Companions
His philanthropic work began in 1994 when he and his wife founded the Simons Foundation, followed by other charities.
Tragedy hit as his successes grew. On Long Island in 1996, his son Paul, 34, was killed by a car while riding a bicycle.
In 2003, a younger son, Nicholas, 24, drowned while globetrotting. He had worked in Katmandu, and Dr. Simons and his wife went to Nepal repeatedly to set up a memorial institute.
Dr. Simons said he began thinking a lot about old math riddles. “It was a refuge,” he said, “a quiet place in my head.”
One morning in Katmandu, as he relaxed on a hotel porch, the structure of a proof suddenly came to him. It was a solid advance — one he didn’t forget. He discussed it with Dennis P. Sullivan, a mathematician at Stony Brook who had recently won the National Medal of Science, and the two collaborated.
In 2007, the resulting paper ran under the title “Axiomatic Characterization of Ordinary Differential Cohomology.”
“It’s very hard to explain,” Dr. Simons said after a few tries. “But we solved it.”
Dr. Sullivan said that Dr. Simons, in his career, had made a series of seminal contributions and that an early one “revolutionized the consciousness of later generations.” He added that in May 2013, to celebrate Dr. Simons’s 75th birthday, four American math and science luminaries gave lectures about fields he had advanced.
Forbes magazine ranks him as the world’s 93rd richest person — ahead of Eric Schmidt of Google and Elon Musk of Tesla Motors, among others — and in 2010, he and his wife were among the first billionaires to sign the Giving Pledge, promising to devote “the great majority” of their wealth to philanthropy.
Of late, Dr. Simons said, his giving had accelerated, adding that he was particularly proud of Math for America. It awards stipends and scholarships of up to $100,000 to train high school math and science teachers and to supplement their regular salaries. The corps is expanding to 1,100 teachers, mainly in New York City, but also in Boston, Los Angeles and elsewhere.
His passion, however, is basic research — the risky, freewheeling type. He recently financed new telescopes in the Chilean Andes that will look for faint ripples of light from the Big Bang, the theorized birth of the universe.
The afternoon of the interview, he planned to speak to Stanford physicists eager to detect the axion, a ghostly particle thought to permeate the cosmos but long stuck in theoretical limbo. Their endeavor “could be very exciting,” he said, his mood palpable, like that of a kid in a candy store.
For all his self-deprecations, Dr. Simons does credit himself with a contemplative quality that seems to lie behind many of his accomplishments.
“I wasn’t the fastest guy in the world,” Dr. Simons said of his youthful math enthusiasms. “I wouldn’t have done well in an Olympiad or a math contest. But I like to ponder. And pondering things, just sort of thinking about it and thinking about it, turns out to be a pretty good approach.”