Hawkes process - animation
Hawkes processes are a class of temporal point processes that model the times at which events occur. They extend the well-known class of Poisson processes in a way that allows a simple yet powerful description of self-excitation or inhibition. They are widely used in finance, neuroscience, social networks, and other fields where the occurrence of one event influences the probability of future events. This post collects well-known facts about Hawkes processes, together with an interactive JavaScript simulation of a bivariate Hawkes process which is, to my knowledge, quite original.
All temporal point processes below are denoted by $N$, i.e. $N$ is a random set of times.
Introduction
Let us start with the simplest case: a univariate (linear) Hawkes process $N$. A univariate Hawkes process models a single type of event. Like any temporal point process model, it is characterized by its conditional intensity $\lambda_t$, which describes the instantaneous rate at which events occur given the past history $\mathcal{F}_t$. If the history is empty at the initial time $t=0$, the intensity reads
\[\lambda_t = \mu + \sum_{T < t} h(t - T) = \mu + \int_0^{t-} h(t-s) N(\mathrm{d}s),\]where:
- $\mu \geq 0$ is the baseline intensity (spontaneous event rate),
- $h: \mathbb{R} \to \mathbb{R}$ is the delay function, which models the influence of past events on future intensity,
- the sum runs across all times $T$ of the Hawkes process $N$ that occurred before time $t$.
The main restriction of this model is that $h$ must take non-negative values. As a result, such a process cannot model inhibitory phenomena.
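The formula above can be evaluated directly. Here is a minimal sketch (not the animation's source code; all names are illustrative) that computes the conditional intensity at a given time, assuming the exponential kernel $h(u) = \beta e^{-\alpha u}$ introduced in the next section:

```javascript
// Minimal sketch: evaluate the conditional intensity of a univariate
// Hawkes process at time t, assuming the exponential kernel
// h(u) = beta * exp(-alpha * u). Names and parameters are illustrative.
function intensity(t, eventTimes, mu, alpha, beta) {
  let lambda = mu; // baseline intensity
  for (const T of eventTimes) {
    // each past event T < t contributes h(t - T) to the intensity
    if (T < t) lambda += beta * Math.exp(-alpha * (t - T));
  }
  return lambda;
}

// With no past events the intensity equals the baseline mu.
console.log(intensity(1.0, [], 0.5, 2.0, 1.0)); // 0.5
```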
A multivariate Hawkes process generalizes this to multiple interacting event types. For a process with $n$ dimensions, the (random) sets are denoted by $N^1,\dots, N^n$ and the intensities read as
\[\lambda^i_t = \mu_i + \sum_{j=1}^{n} \int_0^{t-} h_{i,j}(t-s) N^j(\mathrm{d}s).\]Here, $h_{i,j}$ encodes how events of type $j$ excite events of type $i$. This framework is particularly useful for modeling networks of interacting neurons, trades in multiple assets, or social interactions.
Delay Functions
When the delay functions are exponential, Hawkes processes enjoy a Markovian structure, which makes their simulation and theoretical study much more efficient. For instance, if $n = 1$ and the delay function is $h(t) = \beta e^{-\alpha t}$, then
\[\Xi_t = \int_0^t \beta e^{-\alpha (t-s)} N(\mathrm{d}s),\]defines a Markov process which furthermore satisfies a simple stochastic differential equation with jumps. In this example, $\Xi$ decays exponentially to 0 at rate $\alpha$ and jumps by $\beta$ at each event time. The parameter $\beta$ describes the interaction strength.
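The Markovian structure means $\Xi$ can be tracked exactly without re-summing over the whole history: between events it decays exponentially, and at each event it jumps by $\beta$. A sketch of this update (illustrative names, assuming sorted event times):

```javascript
// Between events, Xi decays exponentially at rate alpha over a gap dt.
function decayXi(xi, dt, alpha) {
  return xi * Math.exp(-alpha * dt);
}

// Walk through sorted event times and return Xi just after the last event:
// decay over each inter-event gap, then add the jump of height beta.
function xiAfterEvents(eventTimes, alpha, beta) {
  let xi = 0;
  let prev = 0;
  for (const T of eventTimes) {
    xi = decayXi(xi, T - prev, alpha) + beta;
    prev = T;
  }
  return xi;
}
```

This recursive update is the key computational advantage of exponential kernels: the cost per event is constant instead of growing with the history.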
Nonlinear case
The Hawkes processes defined above are usually called linear, in contrast with the more general case where the intensity (in the univariate setting) reads
\[\lambda_t = f\left( \int_0^{t-} h(t-s) N(\mathrm{d}s) \right),\]where the activation function $f: \mathbb{R} \to \mathbb{R}_+$ ensures positivity and/or models saturation effects. Nonlinear Hawkes processes allow more flexible modeling, especially in networks with inhibition: delay functions $h$ taking negative values are allowed here.
Remark: the linear case corresponds to $f(\xi) = \mu + \xi$ under the restriction that $h$ takes non-negative values. It is commonly extended to the rectified-linear case, i.e. $f(\xi) = \max(0, \mu + \xi)$.
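With the rectified-linear activation, the kernel may take negative values and the clipping at zero keeps the intensity non-negative, which allows inhibition. A minimal sketch under the same illustrative exponential kernel (names are assumptions, not the animation's code):

```javascript
// Rectified-linear Hawkes intensity: f(xi) = max(0, mu + xi).
// Here beta may be negative, modeling inhibition by past events.
function nonlinearIntensity(t, eventTimes, mu, alpha, beta) {
  let xi = 0;
  for (const T of eventTimes) {
    if (T < t) xi += beta * Math.exp(-alpha * (t - T));
  }
  return Math.max(0, mu + xi); // clipping ensures a non-negative rate
}
```

With a strongly negative $\beta$, a recent event can drive the intensity all the way to zero, temporarily preventing new events.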
JavaScript animation
Here is the link to an interactive simulation of a bivariate Hawkes process, written in JavaScript with Plotly.js. The intensities are given by (the colors below match the colors in the animation):
\[\begin{align*} {\color{darkblue} \lambda^1_t} &= \mu_1 + \int_0^{t-} \beta_{11} e^{-\alpha(t-s)} {\color{darkblue} N^1}(\mathrm{d}s) + \int_0^{t-} \beta_{12} e^{-\alpha(t-s)} {\color{orange} N^2}(\mathrm{d}s),\\ {\color{orange} \lambda^2_t} &= \mu_2 + \int_0^{t-} \beta_{21} e^{-\alpha(t-s)} {\color{darkblue} N^1}(\mathrm{d}s) + \int_0^{t-} \beta_{22} e^{-\alpha(t-s)} {\color{orange} N^2}(\mathrm{d}s). \end{align*}\]The simulation algorithm thins two underlying Poisson processes ${\color{darkblue}\Pi^1}$ and ${\color{orange}\Pi^2}$, which is why most of the points remain unchanged when you modify the parameters. A good exercise is to wonder what would happen if the simulation algorithm used the time-change method instead of thinning.
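To illustrate the thinning idea, here is a sketch of an Ogata-style thinning simulation of the bivariate exponential model above. It is not the animation's source code (the animation keeps the underlying Poisson points fixed across parameter changes); names are illustrative, and the upper bound used below is only valid in the linear case with non-negative $\beta_{ij}$, since the intensities then only decay between events:

```javascript
// Ogata-style thinning for the bivariate exponential Hawkes model.
// mu = [mu1, mu2]; beta = [[b11, b12], [b21, b22]]; requires beta[i][j] >= 0.
function simulateBivariateHawkes(T, mu, alpha, beta) {
  // xi[i][j]: current contribution of past N^j events to lambda^i
  const xi = [[0, 0], [0, 0]];
  const events = [[], []];
  const lambda = (i) => mu[i] + xi[i][0] + xi[i][1];
  let t = 0;
  while (true) {
    // The current total intensity bounds the future one until the next
    // event, because each xi[i][j] only decays between events.
    const lambdaBar = lambda(0) + lambda(1);
    t += -Math.log(Math.random()) / lambdaBar; // candidate inter-arrival time
    if (t > T) break;
    for (let i = 0; i < 2; i++)
      for (let j = 0; j < 2; j++) xi[i][j] *= Math.exp(-alpha * (t - 0) && 0 || 1) // placeholder, replaced below
  }
  return events;
}
```

Oops — the decay step needs the actual gap; the corrected loop body is:

```javascript
// Ogata-style thinning for the bivariate exponential Hawkes model.
// mu = [mu1, mu2]; beta = [[b11, b12], [b21, b22]]; requires beta[i][j] >= 0.
function simulateBivariateHawkes(T, mu, alpha, beta) {
  const xi = [[0, 0], [0, 0]]; // xi[i][j]: contribution of N^j to lambda^i
  const events = [[], []];
  const lambda = (i) => mu[i] + xi[i][0] + xi[i][1];
  let t = 0;
  while (true) {
    const lambdaBar = lambda(0) + lambda(1); // valid upper bound (decay only)
    const dt = -Math.log(Math.random()) / lambdaBar; // candidate gap
    t += dt;
    if (t > T) break;
    // decay all states over the gap, then evaluate the true intensities
    for (let i = 0; i < 2; i++)
      for (let j = 0; j < 2; j++) xi[i][j] *= Math.exp(-alpha * dt);
    const u = Math.random() * lambdaBar;
    if (u < lambda(0)) {
      events[0].push(t); // accept as an N^1 event
      xi[0][0] += beta[0][0]; // N^1 excites lambda^1
      xi[1][0] += beta[1][0]; // N^1 excites lambda^2
    } else if (u < lambda(0) + lambda(1)) {
      events[1].push(t); // accept as an N^2 event
      xi[0][1] += beta[0][1];
      xi[1][1] += beta[1][1];
    } // otherwise the candidate point is thinned (rejected)
  }
  return events;
}
```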
Here are the animation features:
- the parameters $\mu_1, \mu_2, \alpha, \beta_{11}, \beta_{12}, \beta_{21}, \beta_{22}$ can be modified via sliders;
- the dots on the x-axis represent the times of $N^1$ and $N^2$;
- the solid lines represent both conditional intensities $\lambda^1$ and $\lambda^2$;
- the dashed lines represent both baseline intensities $\mu_1$ and $\mu_2$;
- a button to re-simulate the underlying Poisson processes;
- a checkbox to toggle visibility of the underlying Poisson processes with cross markers.
The interaction strengths $\beta_{11}, \beta_{12}, \beta_{21}, \beta_{22}$ are set to 0 by default, which corresponds to the case where $N^1$ and $N^2$ are independent Poisson processes.
Try It Yourself
And feel free to reach out to me if you want the source code or have some ideas for improvement.