Stochastic rewriting

I. Stochastic rewriting

  1. Petri Nets (PN): multi-set rewriting
  2. (Site) graph rewriting

II. ABC model contest

III. Causality analysis/Incremental update

I.

Petri net rewriting

Petri nets (abbreviated PN, or P/T for place/transition nets) / multi-set rewriting

  • Places: $𝒫$
  • Marking: $M: 𝒫 ⟶ ℕ$

    • $M = \lbrace s: n_s, t: n_t, …\rbrace$
  • Rewrite rule: of the form

    \[R = s: i + t:j ⟶ v:k\]

cf. picture

Places (symbols) play the role of variables.

States:

variables + markings

Reachables:

states that can be reached given a PN and an initial marking
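To make these definitions concrete, here is a minimal Python sketch (my own illustration, not from the lecture; the names are arbitrary) of a marking and of a rewrite rule of the above form, with a test for whether the rule is enabled and a function that applies it. The reachable states are exactly what you obtain by iterating apply_rule from the initial marking.

from collections import Counter

# A marking maps each place (symbol) to its number of tokens.
marking = Counter({"s": 2, "t": 1, "v": 0})

# A rule consumes one multi-set of tokens and produces another,
# e.g.  R = s:2 + t:1 ⟶ v:1
rule = {"consumed": Counter({"s": 2, "t": 1}),
        "produced": Counter({"v": 1})}

def enabled(rule, marking):
    """A rule can be triggered iff the marking provides enough tokens."""
    return all(marking[p] >= n for p, n in rule["consumed"].items())

def apply_rule(rule, marking):
    """Erase the consumed tokens and create the produced ones."""
    assert enabled(rule, marking)
    new = marking.copy()
    new.subtract(rule["consumed"])
    new.update(rule["produced"])
    return new

print(apply_rule(rule, marking))  # one v token, no s or t left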

Activity of a reaction $R$:

number of ways it can be triggered (denoted by $λ_R$)

Activity of the whole system:
\[λ \; ≝ \; \sum\limits_{ i ∈ \lbrace 1, …, n\rbrace } λ_i\]

Probability of triggering the rule $i$:

\[P_i = \frac{λ_i}{λ}\]

Biological/physical clock (how long you have to wait before triggering a rule): the waiting time is exponentially distributed, with density

\[λ \exp(-λ t)\]

and therefore mean $\frac{1}{λ}$.

NB: this probabilistic reduction strategy could be adapted to any rewriting system (e.g. the $λ$-calculus with call-by-value)

The biological clock is not proportional to the CPU clock: a biological system can create new tokens (cf. picture)

To compute the new activity, you only need $λ$ and the rule that has been applied (local update).
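Putting the pieces together, the whole stochastic semantics can be sketched as a Doob–Gillespie simulation loop. The Python sketch below is again my own illustration, self-contained; as in the Petri-net part of the lecture, the activity of a rule is just the number of ways of choosing its tokens (tokens taken up to permutation, no extra rate constants). It computes the activities, draws the next rule with probability $λ_i/λ$, advances the clock by an exponential waiting time of mean $1/λ$, and applies the rule; a real implementation would only recompute the activities of the rules touching the modified places after each step (local update).

import math, random
from collections import Counter

rules = [
    {"consumed": Counter({"s": 1, "t": 1}), "produced": Counter({"v": 1})},
    {"consumed": Counter({"v": 1}), "produced": Counter({"s": 1, "t": 1})},
]

def activity(rule, marking):
    """Number of ways the rule can be triggered (tokens up to permutation)."""
    return math.prod(math.comb(marking[p], n) for p, n in rule["consumed"].items())

def gillespie_step(rules, marking, t):
    lambdas = [activity(r, marking) for r in rules]
    lam = sum(lambdas)                 # activity of the whole system
    if lam == 0:
        return marking, t, None        # deadlock: no rule can fire
    t += random.expovariate(lam)       # waiting time ~ Exp(lam), mean 1/lam
    i = random.choices(range(len(rules)), weights=lambdas)[0]  # P(i) = lambda_i / lam
    new = marking.copy()
    new.subtract(rules[i]["consumed"])
    new.update(rules[i]["produced"])
    return new, t, i

marking, t = Counter({"s": 100, "t": 100, "v": 0}), 0.0
for _ in range(10):
    marking, t, fired = gillespie_step(rules, marking, t)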

Graph rewriting

In PNs, for the operational semantics, you don’t care about token preservation (you simply erase and create tokens); in graph rewriting, however, knowing which node is preserved or erased is very important (it determines whether new links are created or old ones erased).

Rule:

A partial map called action map:

\[L ⟶ R\]

Which is tantamount to

\[\begin{xy} \xymatrix{ & D \ar@{^{(}->}[dl]_{ f } \ar@{^{(}->}[dr]^{ g } & \\ L \ar[rr] && R } \end{xy}\]

cf. picture

Double pushout rewriting: we build a pushout complement (POC), and then a pushout (PO) (cf. picture)

But problems with double pushouts:

  • you can’t prohibit already existing links between two nodes (problem 1 in the pictures)
  • you can’t erase a node that is still linked to other nodes (the dangling-edge condition) ⟶ workaround: single pushouts (cf. picture; warning: triangle-head arrows = partial maps)

Site graphs

A site graph $G = (A_G, L_G)$ is given by

  • $A_G ⊆ 𝒜$: a set of agents
  • $L_G ⊆ ℒ$: a set of links

Name:

\[Name: 𝒜 ⟶ \underbrace{𝒩}_{\text{set of names}}\]

Signature (how many sites an agent with name $a$ has):

\[Σ: 𝒩 ⟶ ℕ\]

Sites:

\[𝒮 ⊆ 𝒜 × ℕ\\ \text{ where } \quad (a,i) \text{ is the } i\text{-th site of agent } a\\ (a, i) ∈ 𝒮 \quad ⟺ Σ(Name(a)) ≥ i\]

Links:

\[ℒ ⊆ 𝒫_2(𝒜) \\ \lbrace (a, i), (b, j)\rbrace ∈ ℒ \qquad\text{ (implicitely : } (a,i) ≠ (b,j) \text{ )}\]

Conflict freeness (not required of every site graph):

\[∀ e, e' ∈ L_G, \\ e = \lbrace (a,i), x\rbrace ∧ e' = \lbrace (a,i), y\rbrace ⟹ x=y\]

Morphism $G \overset{f}{\hookrightarrow} H$: a map $f: A_G ⟶ A_H$ preserving names and links:

\[\lbrace (a, i), (b,j) \rbrace ∈ L_G \;⟹\; \lbrace (f(a), i), (f(b), j)\rbrace ∈ L_H\]

(cf picture)
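As an illustration of these definitions (my own sketch; the representation and names are arbitrary), here is a small self-contained Python encoding of a site graph, with a conflict-freeness check and a check that a map between agents is a morphism.

# Agents are identifiers with a name; the signature gives the number of sites
# of each name; links are unordered pairs of sites (agent, site index).
signature = {"A": 2, "B": 2}              # Sigma: names -> number of sites
name      = {0: "A", 1: "B", 2: "B"}      # Name: agents -> names
links     = {frozenset({(0, 1), (1, 1)}), # site 1 of agent 0 -- site 1 of agent 1
             frozenset({(0, 2), (2, 1)})} # site 2 of agent 0 -- site 1 of agent 2

def sites(name, signature):
    """All legal sites: (a, i) with 1 <= i <= Sigma(Name(a))."""
    return {(a, i) for a in name for i in range(1, signature[name[a]] + 1)}

def conflict_free(links):
    """No site occurs in two distinct links."""
    seen = set()
    for link in links:
        for site in link:
            if site in seen:
                return False
            seen.add(site)
    return True

def is_morphism(f, name_G, links_G, name_H, links_H):
    """f maps agents of G to agents of H, preserving names and links."""
    names_ok = all(name_H[f[a]] == name_G[a] for a in name_G)
    links_ok = all(frozenset({(f[a], i), (f[b], j)}) in links_H
                   for link in links_G for ((a, i), (b, j)) in [tuple(link)])
    return names_ok and links_ok

print(conflict_free(links))               # True for this example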

Stochastic graph rewriting

\[[L]_G \; ≝ \; \vert \lbrace h \; \mid \; L \overset{h}{\hookrightarrow} G \rbrace \vert\]

Transitions: $(G, \rightarrowtriangle, G')$

Now:

  • consider a set of rules

    \[ℛ \; ≝ \; \lbrace L_i \hookleftarrow (D_i, k_i) \hookrightarrow R_i \rbrace_{1 ≤ i ≤ n}\]
  • consider a state $G$

  • Define the activity of the rule $i ∈ \lbrace 1, …, n\rbrace$:

    \[λ_i^G \; ≝ \; [L_i]_G × k_i\]

    NB: in Petri nets, tokens are considered up to permutation: that is, if there are two tokens and a rule needs two tokens to be triggered, then the activity of the rule is only one.

    So, by convention, we may do the same here and consider “tokens” (i.e. embeddings) up to automorphism (a worked example follows after this list):

    \[λ_i^G \; ≝ \; \frac{[L_i]_G × k_i}{[L_i]_{L_i}}\]
  • The activity of $ℛ$ in $G$ is

    \[λ_{ℛ, G} = \sum\limits_{ i } λ_i^G\]
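As a worked example of this symmetry correction (my own, not from the lecture): take a dimerisation rule whose left-hand side is $L = A(x[.]),\, A(x[.])$, with rate constant $k$. In a mixture $G$ containing $n$ copies of $A$ with a free site $x$, there are $[L]_G = n(n-1)$ embeddings (ordered pairs of distinct free $A$’s), and $L$ has $[L]_L = 2$ automorphisms (the identity and the swap), so

\[λ^G = \frac{[L]_G × k}{[L]_{L}} = \frac{k \, n(n-1)}{2},\]

i.e. one instance per unordered pair of free $A$’s, consistent with the Petri-net convention above.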

Computing subgraph isomorphisms is linear for site graphs (whereas subgraph isomorphism is NP-complete for general graphs).

%agent: A(b[a.B], c[a.C])
%agent: B(a[b.A], c[b.C])
%agent: C(a[c.A], b[c.B])
%var: 'kon' 1
%var: 'koff' 1

'A.B' A(b[.]), B(a[.]) <-> A(b[1]), B(a[1]) @ 'kon', 'koff'
'A.C' A(c[.]), C(a[.]) <-> A(c[2]), C(a[2]) @ 'kon', 'koff'
'B.C' B(c[.]), C(b[.]) <-> B(c[3]), C(b[3]) @ 'kon', 'koff'



%init: 10000 A(),B(), C()
%obs: 'tri' |A(b[1], c[2]), B(a[1], c[3]), C(a[2], b[3])|/10000
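In this model, the observable 'tri' counts the fully closed $A - B - C$ triangles (each agent bound to the two others), normalised by the initial number of copies of each agent.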

Prozone effect ⟶ if you want to form more:

  • $A - B - C$ complexes: at the beginning, increasing the amount of $B$ increases the number of complexes, but if you have too many $B$’s, the $A$’s and $C$’s are more likely to bind to distinct $B$’s, which decreases the number of complete complexes.
  • triangles: you can decrease the rate of one of the bindings so as to force the order in which the bonds form.

Rule refinement:

(cf picture)

Consider a rule of the form

\(r: L ⟶ R\) ⋯ cf picture

In the refined model below, the generic $A$–$C$ binding rule 'A.C' is slowed down by a factor of $10^4$, while its refinement 'A.Cref' (which applies only when $A$ and $C$ are already both bound to the same $B$) fires at an infinite rate, so the triangle closes as soon as the two other bonds are in place.

%agent: A(b[a.B], c[a.C])
%agent: B(a[b.A], c[b.C])
%agent: C(a[c.A], b[c.B])
%var: 'kon' 1
%var: 'koff' 1

'A.B' A(b[.]), B(a[.]) <-> A(b[1]), B(a[1]) @ 'kon', 'koff'
'B.C' B(c[.]), C(b[.]) <-> B(c[3]), C(b[3]) @ 'kon', 'koff'
'A.C' A(c[.]), C(a[.]) <-> A(c[2]), C(a[2]) @ 'kon'/10000, 'koff'
'A.Cref' B(a[4], c[5]), A(c[.], b[4]), C(a[.], b[5])  -> B(a[4], c[5]), A(c[6], b[4]), C(a[6], b[5]) @ inf


%init: 1000 A(),B(), C()
%obs: 'tri' |A(b[1], c[2]), B(a[1], c[3]), C(a[2], b[3])|/1000

%mod: [true] do $TRACK A(b[1], c[2]), B(a[1], c[3]), C(a[2], b[3]) [true] ;

Causality analysis:

What’s the point of using a Kappa model? Suppose that you have an extremely sophisticated model (running for hours) that just spits out curves: you might as well go to a lab and do the same experiment.

In biology/real-world experiments, you can only reason by means of counterfactuals.

“$A$ activates the phosphorylation of $B$”:

  • in biology, you test with and without $A$ ⟹ this only gives correlation, not causality
  • but in Kappa, you can formally reason about causality ⟹ causality analysis (akin to counter-example minimisation in verification)

    • you need a notion of commutation of events
    • compressed trace = story: knock-out property (removing any event of the story prevents us from getting the observable); see the sketch below, cf. picture

cf picture
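Here is a minimal sketch of the knock-out idea (illustrative only; this is not KaSim’s actual story-compression algorithm). Events are represented abstractly by the facts they consume and produce; we replay candidate sub-traces and greedily drop events whose removal still lets the trace produce the observable, so that in the resulting compressed trace removing any single event breaks the observable.

def replays(trace, init, goal):
    """Replay a sub-trace: an event (consumed, produced) fires only if its
    preconditions hold (otherwise it is skipped); return whether the goal is reached."""
    state = set(init)
    for consumed, produced in trace:
        if consumed <= state:
            state = (state - consumed) | produced
    return goal in state

def knock_out_compress(trace, init, goal):
    """Greedily remove events that are not needed to reach the observable."""
    story = list(trace)
    changed = True
    while changed:
        changed = False
        for i in range(len(story)):
            candidate = story[:i] + story[i + 1:]
            if replays(candidate, init, goal):
                story = candidate      # event i was not necessary: knock it out
                changed = True
                break
    return story

# Toy trace: bind A-B, an irrelevant event, then bind the A.B complex to C.
trace = [
    (frozenset({"A free", "B free"}), frozenset({"A.B"})),
    (frozenset(), frozenset({"noise"})),                    # irrelevant event
    (frozenset({"A.B", "C free"}), frozenset({"A.B.C"})),
]
story = knock_out_compress(trace, {"A free", "B free", "C free"}, goal="A.B.C")
# story now contains only the two binding events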



I. Causality analysis

II. Incremental update for graph rewriting

III. Kappa projects

Example:

/*** TEMPLATE MODEL AS DESCRIBED IN THE MANUAL ***/

/* Signatures */
%agent: A(x,c) // Declaration of agent A
%agent: B(x) // Declaration of B
%agent: C(x1{u p},x2{u p}) // Declaration of C with 2 modifiable sites

/* Rules */
'a.b' A(x[.]),B(x[.]) -> A(x[1]),B(x[1]) @ 'on_rate' //A binds B
'a..b' A(x[1/.]),B(x[1/.]) @ 'off_rate' //AB dissociation

'ab.c' A(x[_],c[.]),C(x1{u}[.]) ->A(x[_],c[2]),C(x1{u}[2])
       @ 'on_rate' //AB binds C
'mod x1' C(x1{u}[1]),A(c[1]) ->C(x1[.]{p}),A(c[.]) @ 'mod_rate' //AB modifies x1
'a.c' A(x[.],c[.]),C(x1{p}[.],x2[.]{u}) ->
      A(x[.],c[1]),C(x1{p}[.],x2[1]{u}) @ 'on_rate' //A binds C on x2
'mod x2' A(x[.],c[1]),C(x1{p},x2{u}[1]) ->
         A(x[.],c[.]),C(x1{p},x2{p}[.]) @ 'mod_rate' //A modifies x2

/* Variables */
%var: 'on_rate' 1.0E-3 // per molecule per second
%var: 'off_rate' 0.1 // per second
%var: 'mod_rate' 1 // per second
%obs: 'AB' |A(x[x.B])|
%obs: 'Cuu' |C(x1{u},x2{u})|
%obs: 'Cpu' |C(x1{p},x2{u})|
%obs: 'Cpp' |C(x1{p},x2{p})|

%var: 'n_ab' 1000
%obs: 'n_c' 0
%var: 'C' |C()|

/* Initial conditions */
%init: 'n_ab' A(),B()
%init: 'n_c' C()

%mod: alarm 10.0 do $ADD 10000 C();
%mod: |C()| > 0 do $TRACK 'Cpp' [true];
