Student Feature - Sean P. Cornelius

On the afternoon of August 14, 2003, an overloaded transmission line sagged into a tree near Walton Hills, Ohio. Incidents like this happen all the time, and an array of built-in safeguards usually lets the power grid hum along without skipping a beat. But the events of this particular afternoon culminated in a loss of power for 55 million people in the United States and Canada, at the time the second-largest blackout in history. Fast forward to May 6, 2010. At 2:32 PM, the Dow Jones Industrial Average plummeted nearly 1000 points, only to see the market mysteriously regain most of that value in the next hour. Authorities would later attribute this "flash crash" in part to a large number of fake sell orders from a single trader. What do these two events have in common? If you’re like most people, your answer is probably "not much." But if you — like me — are part of the emerging field of network science, the answer is “quite a bit.”

Network science explores the structure and dynamics of systems like power grids, food webs, and the genetic architecture of living cells. Though the particulars may vary, complex interconnected systems like these display a number of common phenomena regardless of whether their constituents are power generators, animal species, or genes. One fundamental property of networks is that small perturbations can have a disproportionately large effect, propagating outward and causing the system as a whole to change behavior or fail as in the two examples above. Figuring out how to prevent these cascading failures, and more generally how to keep networks under control, is a key question of my research. 


My interest in these topics originated in two projects pursued during my time as an undergraduate at the University of Wisconsin (UW). As a freshman, I was lucky to attend an undergraduate-level colloquium that amounted to a whirlwind introduction to the topic of chaos — colorful images of fractals, depictions of turbulence with ink and water, an experimental realization of a Lorenz water wheel, and more. Afterward I approached that professor and unceremoniously begged to work with him, going on to study how chaos can emerge in neural networks despite surprisingly simple, regular network topology. Later, I joined three other students in an interdisciplinary summer research program under the auspices of the Applied Math, Engineering, and Physics (AMEP) department and the Department of Zoology. I devised a model for the spontaneous self-organization of animal populations into traveling swarms — think of the chevron formation of migrating birds or the stunning vortices formed by shoaling fish (Fig. 1). These two undergraduate projects kindled an interest in interdisciplinary applications of mathematics, with a particular eye toward biology. They also taught a common lesson: simple building blocks, if connected in the right way, can produce complex large-scale behavior.

With that in mind, let’s return to the examples of the 2003 power blackout and the 2010 "flash crash" of the New York Stock Exchange. One might ask: are large-scale failures a necessary side-effect of how networks respond to perturbations, or can they be averted? Answering this question was the focus of my graduate work at Northwestern University. My colleagues and I studied this issue in the context of the metabolic networks of single-cell organisms like the bacterium E. coli. Laboratory and in silico experiments had previously shown that these organisms respond to the knockout of key enzyme-coding genes by spontaneously activating a large number of otherwise-dormant metabolic reactions. The natural hypothesis is that this response evolved to help the strain cope with perturbations — i.e., without these so-called "latent pathways," it might not survive. Yet we found that suppressing this decentralized network response results in strains with higher fitness than their wild-type counterparts subjected to the same gene knockouts [1].

This parable illustrates that network failures in the wake of perturbations are not a fait accompli. Adverse perturbations typically do not destroy the "good" dynamical states of a system. Instead, a network, by virtue of operating in a decentralized way, simply doesn’t find those states in its natural response. Here my colleagues and I had an idea. If an adverse perturbation can drive a network to a "bad" state even when good ones are available, then it might be possible to exploit the same principle in reverse: in other words, maybe one could deliberately apply a second perturbation that causes the network to settle into a "good" state. We would call this a compensatory perturbation, as it compensates for the effect of the original perturbation by mitigating or reversing the accompanying damage.

We formalized this idea by taking a nonlinear-dynamics-based view of cascading failures (Fig. 2). Here, the dynamics of a network (Fig. 2A) are represented by a system of ordinary differential equations, and the form of the nonlinearity characterizes the particular network under consideration (food web dynamics, power flow equations, etc.). In this view, a network undergoing a cascade corresponds to an initial condition that will ultimately evolve to an undesirable stable state (fixed point, limit cycle, chaotic attractor, etc.), as shown in Fig. 2B. We would instead like to drive the system to a more desirable, target stable state by making a perturbation to the states of one or more of the network nodes. The catch is that, in general, there will be constraints that limit the interventions allowed in real systems. It is easier to knock down genes than to up-regulate entire biochemical pathways, animal populations are more easily culled than augmented, changes in power flow are limited by line capacity, and so on. In sum, such constraints generally preclude bringing a system directly to a target state. How, then, are we to have any hope of mitigating cascading failures in the way I propose?
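This nonlinear-dynamics picture can be made concrete with a toy model. The sketch below is my own illustration, not a system from our work: the one-dimensional dynamics dx/dt = x - x^3 have a desirable stable state at x = +1 and an undesirable one at x = -1, separated by an unstable fixed point (the separatrix) at x = 0. A perturbation that carries the state across the separatrix commits the system to the undesirable attractor, just as a cascade commits a network to failure:

```python
def f(x):
    # Toy "network" dynamics with a desirable stable state at x = +1
    # and an undesirable one at x = -1, separated by an unstable
    # fixed point (separatrix) at x = 0. Purely illustrative.
    return x - x**3

def evolve(x0, dt=0.01, steps=2000):
    # Forward-Euler integration of dx/dt = f(x).
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

x_nominal = evolve(1.0)    # unperturbed system stays in the good state
x_faulted = evolve(-0.3)   # a fault pushed the state past the separatrix

print(x_nominal, x_faulted)
```

In a real application, x would be a high-dimensional vector of node states and f the food-web, metabolic, or power-flow equations; the principle is the same.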


Our key insight was that associated with every stable state of a dynamical system is a set of other states that converge to it under time evolution — its basin of attraction. Even if constraints forbid reaching the target directly, it may nonetheless be possible to reach one of these states using eligible perturbations to the system state (Fig. 2C). Once there, our work is done; the network will spontaneously evolve toward the target without any further intervention. This leads to a clear conclusion: we can control a complex network in this way if and only if the desired basin of attraction intersects the region spanned by eligible perturbations (Fig. 2C).
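The insight can be shown in miniature by continuing the toy one-dimensional system dx/dt = x - x^3 from before (illustrative only; the failing state, the perturbation bound, and the thresholds below are all assumptions of mine). A cap on perturbation size rules out jumping straight to the target state, yet perturbations that merely cross into the target's basin succeed, because the system then flows to the target on its own:

```python
import numpy as np

def f(x):
    # Toy dynamics with stable states at x = +1 (target) and x = -1 (failed),
    # separated by an unstable fixed point at x = 0.
    return x - x**3

def evolve(x0, dt=0.01, steps=3000):
    # Forward-Euler time evolution of dx/dt = f(x).
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

x_failing = -0.45   # current state: drifting toward the failed state at -1
max_push = 0.7      # constraint: no perturbation larger than 0.7 (assumed)

# A direct jump to the target x = +1 needs a push of 1.45 > max_push, so it
# is forbidden. But any eligible state past the separatrix at x = 0 already
# lies in the target's basin of attraction, and that is enough.
candidates = np.linspace(-max_push, max_push, 15)
rescues = [dx for dx in candidates if evolve(x_failing + dx) > 0.9]

print("eligible compensatory perturbations:", rescues)
```

Every perturbation in `rescues` is a compensatory perturbation in the sense described above: it respects the constraint, yet rescues the system by exploiting its own natural stability.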

Unfortunately, there was no general method to identify basins of attraction (let alone this potential overlap) in the high-dimensional state spaces characteristic of large complex networks. Numerical techniques such as forward integration applied to a brute-force sampling of initial conditions suffer from Bellman’s "curse of dimensionality," whereas analytical techniques (including those based on Lyapunov functions) require system-specific heuristics and hence are not sufficiently general. Consequently, the key contribution made by our team was an efficient algorithm that systematically identifies reachable points in a desired basin of attraction of a general nonlinear system of ODEs. We proceed iteratively, building up an eligible compensatory perturbation from a series of small perturbations, whose effects on the system dynamics can be forecast accurately using a variational equation. (For a schematic illustration of how this process works, see Figs. 2D and 2E.)
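The role of the variational equation can be sketched in the same toy setting (again my own illustration, not the algorithm from the paper). Integrating d(delta)/dt = Df(x) * delta along a trajectory forecasts the end effect of a small initial perturbation without re-simulating the full nonlinear system, and the forecast can be checked against a direct simulation:

```python
def f(x):
    return x - x**3           # toy dynamics with attractors at x = +1 and -1

def df(x):
    return 1.0 - 3.0 * x**2   # derivative of f (the 1-D "Jacobian")

def flow_with_tangent(x0, delta0, dt=0.001, steps=2000):
    # Integrate the state x together with the variational equation
    # d(delta)/dt = Df(x) * delta, which forecasts how a small initial
    # perturbation delta0 is stretched or shrunk along the trajectory.
    x, delta = x0, delta0
    for _ in range(steps):
        x, delta = x + dt * f(x), delta + dt * df(x) * delta
    return x, delta

x0, eps = 0.3, 1e-6
xT, forecast = flow_with_tangent(x0, eps)           # variational forecast
xT_perturbed, _ = flow_with_tangent(x0 + eps, 0.0)  # direct simulation

# The cheap linear forecast should match the directly simulated difference.
print(forecast, xT_perturbed - xT)
```

In a high-dimensional network, this forecast is what makes it affordable to evaluate many candidate small perturbations at each iteration instead of re-integrating the full nonlinear system for each one.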

We demonstrated through applications that the compensatory perturbations identified by our algorithm can be used both to rescue networks from failure (such as in a power grid on the verge of a large blackout) and to reprogram a network even in the absence of any failure (Fig. 3). The moral of these applications is that nonlinearity, while usually regarded as an obstacle to studying the dynamics of complex systems, actually proves advantageous when seeking to control them. By allowing multiple stable states, nonlinearity allows us to exploit the natural stability of the system, driving it to a desired state with minimal effort. This work was published in Nature Communications [2], for which I was honored to be one of the recipients of SIAM’s Best Student Paper prize in the summer of 2014, coinciding with the completion of my Ph.D.

As a postdoc at Northeastern University’s Center for Complex Network Research (CCNR), I’ve continued this line of research, looking for ways to take advantage of properties ubiquitous in real systems that are nonetheless often regarded as "nasty" from an analytical standpoint. For instance, consider the fact that the connections in real-world networks are usually not constant but rather vanish and re-emerge sporadically, like short-duration binding processes between proteins in a cell. My colleagues and I have shown that despite being "less connected" in this way, such temporal networks offer surprising control advantages by providing a degree of flexibility unattainable in a fixed network environment [3]. Likewise, real systems typically do not exist in isolation, but are coupled dynamically to other systems in a "network of networks." For example, the transportation system relies on power for lighting and electrified rail, and the power grid in turn relies on the transportation network for delivery of fuel and mobilization of workers and repair crews. With colleagues at UC Davis, we have demonstrated that one can exploit differences in timescales and dynamics between network "layers" to render a system as a whole easier to manipulate [4].

It is my hope that these lines of research will encourage others to embrace all aspects of model systems — the good, the bad, and the ugly. There is an ever-present temptation to throw away inconvenient features of real systems in the name of analytical expediency or simplicity of modeling. But as I have shown here, the features that put the "complex" in complex systems are often a blessing in disguise.

Sean P. Cornelius



[1] S. P. Cornelius, J. S. Lee, and A. E. Motter. Dispensability of Escherichia coli's latent pathways. Proc. Natl. Acad. Sci. USA 108, 3124 (2011).

[2] S. P. Cornelius, W. L. Kath, and A. E. Motter. Realistic control of network dynamics. Nature Communications 4, 1942 (2013).

[3] A. Li, S. P. Cornelius, Y.-Y. Liu, L. Wang, and A.-L. Barabási. The fundamental advantages of temporal networks. In review (2016).

[4] M. Pósfai, J. Gao, S. P. Cornelius, A.-L. Barabási, and R. M. D’Souza. Controllability of multi-layer, multi-timescale systems. In review (2016).








