|
Chair morning session: Reimer Kühn
|
09:00 - 10:00
|
Sebastian Reich
(University of Potsdam)
Data assimilation as a control problem
Continuous-time data assimilation can be rephrased as an interacting particle system or, equivalently, as a mean-field equation. The feedback particle filter (FPF) formulation is particularly attractive, as it can be viewed as a controlled version of the free dynamics in the spirit of Kalman’s original work. The FPF also naturally leads to the popular ensemble Kalman-Bucy filter. The assimilation of discrete-in-time data is more delicate. However, applying a coupling approach, originally considered by Schroedinger, to the discrete-time data assimilation problem again leads to a controlled version of the underlying stochastic process. Alternatively, optimal transportation and ensemble transform methods can also be used to enhance available particle filters. I will survey these recent developments.
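As a purely illustrative sketch of the ensemble Kalman-Bucy filter mentioned above, the snippet below runs a deterministic-transform EnKBF on a scalar Ornstein-Uhlenbeck signal; the model parameters a, s, h, r, the ensemble size and the discretisation are all invented for this example and are not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar model (all parameters hypothetical):
# signal       dX = -a X dt + s dW
# observation  dY =  h X dt + r dV
a, s, h, r = 0.5, 0.5, 1.0, 0.2
dt, T, M = 0.01, 10.0, 100            # time step, horizon, ensemble size

x_true = 1.0
ens = rng.normal(0.0, 1.0, M)         # initial ensemble

for _ in range(int(T / dt)):
    # evolve the true signal and generate the observation increment
    x_true += -a * x_true * dt + s * np.sqrt(dt) * rng.normal()
    dy = h * x_true * dt + r * np.sqrt(dt) * rng.normal()

    # free dynamics for each ensemble member
    ens += -a * ens * dt + s * np.sqrt(dt) * rng.normal(size=M)

    # Kalman-Bucy gain estimated from the ensemble variance,
    # followed by the deterministic (transform) innovation term
    K = np.var(ens) * h / r**2
    ens += K * (dy - h * 0.5 * (ens + ens.mean()) * dt)

print("truth: %.3f, ensemble mean: %.3f" % (x_true, ens.mean()))
```

The innovation term averages each member with the ensemble mean, which gives the deterministic variant of the filter; replacing it with perturbed observations yields the stochastic variant.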
|
10:00 - 10:30
|
Coffee break
|
10:30 - 11:30
|
David Saad
(Aston University)
Optimising Spreading Processes
The modern world is best described as a collection of interlinked networks of individuals, computing devices and social platforms, where information and opinions propagate along the edges, probabilistically or deterministically, via interactions between individual constituents. These interactions can take the form of political discussions between friends, gossip about movies, or the transmission of computer viruses. Winners are those who maximise the impact of scarce resources, such as political activists or advertisements, by applying them to the most influential available nodes at the right time. We have developed an analytical framework, motivated by and based on statistical-physics tools, for impact maximisation in probabilistic information propagation on networks: to better understand the optimisation process macroscopically, its limitations and potential, and to devise computationally efficient methods for maximising impact (an objective function) in specific instances.
The research questions we have addressed concern how to maximise the impact of information propagation by providing inputs at the right time to the most effective nodes in the particular network examined, where the impact is observed at some later time. Our approach is based on a statistical-physics-inspired analysis, dynamic message passing, which calculates the probability of propagation to a node at a given time, combined with a variational optimisation process. We address the following questions: 1) Given a graph, a budget and a propagation/infection process, which nodes are best to infect to maximise the spreading? 2) How can one maximise the impact on a subset of particular nodes at given times by accessing a limited number of given nodes? 3) Which nodes are the most appropriate vaccination targets for isolating a spreading disease through containment of the epidemic? We also point to potential applications.
Lokhov A.Y. and Saad D., Optimal Deployment of Resources for Maximizing Impact in Spreading Processes, PNAS 114 (39), E8138 (2017)
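The first question, choosing seed nodes under a budget, can be illustrated with a naive greedy baseline on a toy graph using Monte Carlo simulation of an independent-cascade spread. This is not the dynamic-message-passing optimisation of the talk; the graph and the transmission probability below are invented for the example.

```python
import random

# Toy independent-cascade spreading on a small directed graph; brute-force
# Monte Carlo stands in here for the talk's message-passing and
# variational machinery (illustration only).
edges = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
p = 0.4                      # uniform transmission probability (assumption)

def cascade(seeds, rng):
    """One independent-cascade realisation; returns the infected set."""
    infected, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in edges[u]:
            if v not in infected and rng.random() < p:
                infected.add(v)
                frontier.append(v)
    return infected

def expected_spread(seeds, runs=2000):
    rng = random.Random(42)
    return sum(len(cascade(seeds, rng)) for _ in range(runs)) / runs

# Greedy selection of a budget of two seed nodes
chosen = []
for _ in range(2):
    best = max((n for n in edges if n not in chosen),
               key=lambda n: expected_spread(chosen + [n]))
    chosen.append(best)
print("greedy seeds:", chosen)
```

Greedy maximisation of a Monte Carlo spread estimate is a standard baseline; message-passing approaches replace the costly inner simulation with an analytical estimate of the infection probabilities.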
|
11:30 - 12:30
|
Discussions
|
12:30 - 13:30
|
Lunch break
|
13:30 - 14:20
|
Discussions
|
|
Chair afternoon session: Peter Sollich
|
14:20 - 14:40
|
Malbor Asllani
(University of Namur)
Hopping in the Crowd to Unveil Network Topology
The standard model of diffusion assumes Fick's law, namely that the rate at which one substance diffuses through another is directly proportional to the concentration gradient of the diffusing substance. However, one might expect nonlinear corrections in various situations, e.g. if obstacles are present or if more than one species is present at high concentration. In many cases of interest, the diffusion takes place on a heterogeneous support of the network type. The brain, the Internet and the cyberworld, food webs, social contacts and commuting all fall within the vast realm of network science. Irrespective of the specific domain, individual entities (e.g. material units, bits of information) belonging to a certain population jump from one site (node) to its adjacent neighbors, following the intricate web of distinct links which defines the architecture of the complex network. In real-world applications, the carrying capacity of each node is finite, an ineluctable constraint that should be accommodated under crowded conditions. With this motivation in mind, we here introduce a nonlinear operator to describe diffusion in a crowded network. The derivation originates from a microscopic stochastic framework and is solidly grounded on first principles.
More precisely, individual units sitting on a node behave as standard random walkers: jumps towards adjacent nodes are equiprobable. Each node can only host a finite number of walkers; in other words, it bears a finite carrying capacity. The transition probability from one node to another is weighted by the free space available at the destination site. Starting from this microscopic formulation, we derive a diffusion equation characterized by a transport operator that differs from the standard random-walk Laplacian matrix through the presence of nonlinear terms involving products of the density of walkers. This difference is directly reflected in the stationary solution: the asymptotic concentration of walkers per node is a nonlinear function of the node’s connectivity, and not simply proportional to it, as is the case under standard diffusion. The latter is recovered in the dilute limit.
Based on this generalization, we shall also outline an innovative scheme to reconstruct the unknown topology of a network, which exploits the dynamical entanglement among walkers instigated by the competition for the available space. The method allows one to accurately determine the distribution of connectivities and to quantify, with unprecedented efficiency, the size of the examined network. Notably, all necessary information is gathered from just one random node of the collection. We successfully tested our method on both synthetic and real networks (see Figure 1 for the C. elegans neural network).
References
[1] M. Asllani, T. Carletti, F. Di Patti, D. Fanelli and F. Piazza, Hopping in the Crowd to Unveil Network Topology, Phys. Rev. Lett. 120, 158301 (2018)
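A minimal simulation of the microscopic rule described above (equiprobable jump attempts, acceptance weighted by the free space at the destination) might look as follows; the graph, the carrying capacity and the step count are invented for illustration and do not reproduce the model of Ref. [1] exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy undirected graph as an adjacency list (illustrative only)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
cap = 10                        # finite carrying capacity of every node
occ = np.array([10, 0, 0, 0])   # all walkers start on node 0

for _ in range(5000):
    # pick a walker uniformly at random (i.e. a node weighted by occupancy)
    src = int(rng.choice(len(occ), p=occ / occ.sum()))
    # equiprobable jump attempt towards an adjacent node
    dst = adj[src][rng.integers(len(adj[src]))]
    # the move succeeds with probability given by the free space at dst
    if rng.random() < (cap - occ[dst]) / cap:
        occ[src] -= 1
        occ[dst] += 1

print("occupation per node:", occ)
```

Note that the total number of walkers is conserved and no node ever exceeds its capacity; under standard (dilute) diffusion the acceptance factor would simply be 1.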
|
14:40 - 15:00
|
Federico Tomasi
(Università degli Studi di Genova)
Latent Variable Time-varying Network Inference
In many applications of finance, biology and sociology, complex systems involve entities interacting with each other. These processes have the peculiarity of evolving over time and of comprising latent factors, which influence the system without being explicitly measured. In this work we present latent variable time-varying graphical lasso (LTGL), a method for multivariate time-series graphical modelling that considers the influence of hidden or unmeasurable factors. The estimation of the contribution of the latent factors is embedded in the model, which produces both sparse and low-rank components for each time point. In particular, the first component represents the connectivity structure of the observable variables of the system, while the second represents the influence of hidden factors, assumed to be few with respect to the observed variables. Our model includes temporal consistency on both components, providing an accurate evolutionary pattern of the system. We derive a tractable optimisation algorithm based on the alternating direction method of multipliers, and develop a scalable and efficient implementation which exploits proximity operators in closed form. LTGL is extensively validated on synthetic data, achieving optimal performance in terms of accuracy, structure learning and scalability with respect to ground truth and state-of-the-art methods for graphical inference. We conclude with the application of LTGL to real case studies, from biology and finance, to illustrate how our method can be successfully employed to gain insights on multivariate time-series data.
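The closed-form proximity operators the abstract alludes to, for an l1 penalty on the sparse component and a nuclear-norm penalty on the low-rank latent component, can be sketched generically as follows; this is textbook material, not the LTGL code itself.

```python
import numpy as np

def prox_l1(X, lam):
    """Soft-thresholding: the proximity operator of lam * ||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def prox_nuclear(X, lam):
    """Singular-value thresholding: the proximity operator of lam * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5))
S = prox_l1(X, 0.5)        # entrywise shrinkage: sparser than X
L = prox_nuclear(X, 1.0)   # spectral shrinkage: lower effective rank than X
print((S != 0).sum(), np.linalg.matrix_rank(L, tol=1e-8))
```

Inside an ADMM loop each iteration applies such operators to the relevant auxiliary variables; having them in closed form is what makes the implementation scalable.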
|
15:00 - 16:00
|
José Bento
(Boston College)
Distributed Averaging via ADMM, Random Walks on Graphs and Markov Chain Lifting
A connection between the distributed alternating direction method of multipliers (ADMM) and lifted Markov chains was recently proposed by Franca et al. 2016 for a distributed averaging problem on a graph G. This was followed by a conjecture that ADMM is faster than gradient descent by a square root factor in its convergence time, in close analogy to the mixing speedup achieved by lifting several Markov chains. Nevertheless, a proof was lacking.
In this talk, I will start by revisiting Franca’s conjecture. Afterwards, I will fully characterize the distributed over-relaxed ADMM for this consensus problem in terms of the topology of the graph G. To be specific, I will relate its convergence rate with the mixing time of random walks on the graph G. A consequence of this result is a proof of the aforementioned conjecture, which, interestingly, is valid for any graph, even those whose random walks cannot be accelerated via Markov chain lifting.
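For contrast with the ADMM dynamics discussed in the talk, here is the plain diffusion baseline: distributed averaging by repeated multiplication with a doubly stochastic Metropolis-weight matrix, whose convergence rate is governed by the mixing time of the associated random walk on G. The path graph and the iteration count are chosen arbitrarily for illustration.

```python
import numpy as np

# Metropolis-weight gossip averaging on a 5-node path graph (illustrative).
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path adjacency
deg = A.sum(axis=1)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if A[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))  # Metropolis weight
    W[i, i] = 1.0 - W[i].sum()        # diagonal makes each row sum to one

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # initial node values
for _ in range(500):
    x = W @ x                              # one gossip round per iteration

print(x)   # every entry approaches the average of the initial values, 0.2
```

Because W is symmetric and doubly stochastic, x converges to the all-average vector at a rate set by the second-largest eigenvalue of W; the square-root speedup conjectured for ADMM is measured against exactly this kind of diffusive baseline.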
|
16:00 - 16:30
|
Coffee break
|
16:30 - 17:30
|
Chris Sherlock
(Lancaster University)
Fast exact inference for Markov jump processes using the rate matrix
Building on the work of Al-Mohy and Higham (2011), we describe a new algorithm for evaluating the left action of the exponential of a rate matrix on a vector of probabilities. The specifics of our application allow an algorithm that is both considerably simpler than the general algorithm of Al-Mohy and Higham, and over three times faster. We apply our algorithm to exact inference for partially and discretely observed Markov jump processes (MJPs) with finite state spaces, finding additional, substantial computational savings in the cases of well known epidemic models. We then consider MJPs with infinite state spaces and build on the work of Georgoulas, Hillston and Sanguinetti (2017) to provide a new, fast, and more easily tuned algorithm for Bayesian inference.
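A generic way to evaluate this left action is uniformization: write p(t) = p(0) exp(Qt) as a Poisson-weighted sum of powers of the stochastic matrix P = I + Q/rho. The sketch below is the textbook scheme with a fixed truncation point, not the algorithm of the talk; the rate matrix Q, the initial distribution and the time t are invented for illustration.

```python
import numpy as np

# Q is a valid rate matrix: nonnegative off-diagonals, rows sum to zero.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
p0 = np.array([1.0, 0.0, 0.0])
t = 0.8

rho = max(-Q.diagonal())            # uniformization rate
P = np.eye(len(Q)) + Q / rho        # stochastic matrix of the jump chain
term = p0.copy()                    # current term p(0) P^k
w = np.exp(-rho * t)                # Poisson(rho*t) weight for k = 0
p = np.zeros_like(p0)
for k in range(60):                 # fixed truncation (tolerance not tuned)
    p += w * term
    term = term @ P
    w *= rho * t / (k + 1)

print(p, p.sum())                   # a probability vector summing to ~1
```

Every intermediate quantity is nonnegative, so the scheme avoids the cancellation that plagues a naive Taylor expansion of exp(Qt); adaptive truncation rules replace the fixed 60-term cutoff in practice.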
|
17:30 - 18:30
|
Discussions
|
18:30 - 19:30
|
Dinner
|
19:30 - 21:00
|
Discussions & Poster Session II (focus on even numbers)
|