Lecture 2: Büchi condition

Teacher: Wieslaw Zielonka

Games and Graphs

  • $G ≝ (V, E)$
  • $V ≝ V_A \sqcup V_B$
  • $A$: Alice, $B$: Bill
$Paths$:

the set of finite paths in $G$

$Paths^∞$:

the set of infinite paths

p = v_1 v_2 ⋯ \qquad ∀i, (v_i, v_{i+1}) ∈ E
$W_A ⊆ Paths^∞$:

the set of infinite paths winning for Alice

$W_B = Paths^∞ \backslash W_A ⊆ Paths^∞$:

the set of infinite paths winning for Bill

Strategy $σ$ for Alice:

is a mapping $σ: V^\ast V_A ⟶ V$ such that ∀ x \, ≝ \, v_1 ⋯ v_n ∈ V^\ast V_A \qquad (v_n, σ(x)) ∈ E

$(σ, τ)$-strategy profile:

if $σ$ (resp. $τ$) is a strategy for Alice (resp. Bill).

A path $p ≝ v_1 ⋯ v_n$ is consistent with the strategy $σ$ of Alice:

if for each $i$ such that $v_i ∈ V_A$, $v_{i+1} = σ(v_1 ⋯ v_i)$

If we fix an initial vertex $v ∈ V$, then there exists a unique infinite path $p(v, σ, τ)$ which starts at $v$ and is consistent with $σ$ and $τ$.

$σ$ is a winning strategy for Alice for the initial vertex $v$:

if each infinite path $p$ starting at $v$ and consistent with $σ$ belongs to $W_A$.

For a given initial vertex $v$, either $σ$, or $τ$, or neither of them may be a winning strategy.

The game is determined:

if for each initial vertex, one of the players has a winning strategy.

Memoryless/positional strategies

The strategy $σ$ is memoryless (or positional):

iff for all $x ≝ yv ∈ V^\ast V_A$: σ(x) = σ(v)

σ: V_A ⟶ V \qquad (v, σ(v)) ∈ E \quad ∀v ∈ V_A

Ex: If $σ$ and $τ$ are memoryless and the graph is finite: all the plays are “lasso” loops in the graph.
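The “lasso” shape can be checked directly. A minimal sketch (the dictionaries `sigma`, `tau`, `succ`-style representation and the names below are illustrative assumptions, not from the lecture): follow the unique play from $v$ until a vertex repeats.

```python
# Sketch: under positional strategies on a finite graph, the unique play
# from v is ultimately periodic -- a prefix followed by a loop ("lasso").
# `sigma` (Alice) and `tau` (Bill) map each vertex to its chosen successor;
# `VA` is the set of Alice's vertices. All names are illustrative.
def lasso(v, sigma, tau, VA):
    seen = {}                 # vertex -> position of its first visit
    path = []
    while v not in seen:
        seen[v] = len(path)
        path.append(v)
        v = sigma[v] if v in VA else tau[v]
    i = seen[v]               # first repeated vertex: the loop starts here
    return path[:i], path[i:]
```

For instance, with $V_A = \lbrace 1 \rbrace$, $σ(1) = 2$ and $τ(2) = 1$, the play from $1$ is the pure loop $1\,2\,1\,2\,⋯$.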

Normally, when considering games on graphs, one looks for a partition:

V = \underbrace{Win_A}_{\text{vertices winning for Alice}} \sqcup Win_B \sqcup (V \backslash (Win_A ∪ Win_B))

Moreover, people usually look for memoryless strategies that are independent of the initial vertex.

$OneStep_A(X)$:

set of vertices such that, starting from one of these vertices, Alice can force the play to reach $X$ in exactly one move.

OneStep_A(X) \, ≝ \, \lbrace v ∈ V_A \; \mid \; ∃ w ∈ X; (v, w)∈E \rbrace\\ ∪ \lbrace v ∈ V_B \; \mid \; ∀w ∈ V, (v,w)∈E ⟹ w ∈ X \rbrace
\overline{OneStep}_A(X) = OneStep_A(X) ∪ X
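Concretely, $OneStep_A$ is one pass over the vertices. A minimal Python sketch, assuming the graph is given as a successor dictionary `succ` and Alice's vertices as a set `VA` (both names are illustrative):

```python
def one_step_A(X, succ, VA):
    """Vertices from which Alice can force the play into X in one move."""
    out = set()
    for v, ws in succ.items():
        if v in VA:
            ok = any(w in X for w in ws)    # Alice needs one edge into X
        else:
            ok = all(w in X for w in ws)    # every move of Bill lands in X
        if ok:
            out.add(v)
    return out

def one_step_A_bar(X, succ, VA):            # \overline{OneStep}_A(X)
    return one_step_A(X, succ, VA) | set(X)
```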

and:

\begin{cases} X_0 ≝ X \\ X_{i+1} ≝ \overline{OneStep}_A(X_i) \end{cases}

Then

X ⊆ \overline{OneStep}_A(X)

and if $G$ is finite, there exists $k$ such that $X_{k+1} = X_k$.

Reach_A(X) \; ≝ \; \bigcup_i X_i

For all $v ∈ V$,

rank(v) \; ≝ \; \min \lbrace i \; \mid \;v ∈ X_i \rbrace \qquad \min ∅ ≝ ∞

Let $v ∈ Reach_A(X)$ such that $rank(v) > 0$.

If

  • $v ∈ V_A$, then there exists $w$ s.t. $(v, w) ∈ E$ and $rank(w) < rank(v)$
  • $v ∈ V_B$, then for all $w$ s.t. $(v, w) ∈ E$, $rank(w) < rank(v)$
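The iteration $X_0, X_1, …$ and the ranks can be computed together. A sketch under the same assumed representation (successor dictionary `succ`, Alice's vertex set `VA`); vertices not in the returned set have rank $∞$:

```python
def reach_A(X, succ, VA):
    """Return (Reach_A(X), rank), where rank[v] = min { i | v in X_i }."""
    rank = {v: 0 for v in X}                  # X_0 = X
    i = 0
    while True:
        i += 1
        new = set()
        for v, ws in succ.items():
            if v in rank:
                continue                      # already in some X_j, j < i
            if v in VA:
                ok = any(w in rank for w in ws)  # Alice can decrease the rank
            else:
                ok = all(w in rank for w in ws)  # Bill cannot avoid X_{i-1}
            if ok:
                new.add(v)
        if not new:                           # X_{k+1} = X_k: fixed point
            return set(rank), rank
        for v in new:
            rank[v] = i
```

The two properties above are visible in the loop: a newly added Alice vertex has some successor of strictly smaller rank, and a newly added Bill vertex only has successors of strictly smaller rank.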

Büchi condition

Büchi: Alice wins iff the play visits a fixed set $R$ of vertices infinitely often.


If $X ⊆ Y$:

  • $OneStep_A(X) ⊆ OneStep_A(Y)$
  • $Reach_A(X) ⊆ Reach_A(Y)$

So the following mapping is clearly monotone: $Y ⟼ OneStep_A(Reach_A(Y))$

And on top of that:

Reach_A(Y) \backslash Y ⊆ OneStep_A(Reach_A(Y)) ⊆ Reach_A(Y)

NB: $Reach_A(Y) \backslash Y$ because there may exist some $v ∈ Y$ s.t. $v ∉ OneStep_A(Reach_A(Y))$, that is: for which Bill has a strategy to leave $Reach_A(Y)$ in one step.

ϕ(Y) \; ≝ \; OneStep_A\Big(Reach_A( \overbrace{Y ∩ R}^{\rlap{\text{fixed set of vertices}}})\Big)

Let $W$ s.t. $ϕ(W) = W$. Then Alice has a strategy to visit $R$ infinitely often.

(Indeed: applying the inclusions above to $Y = W ∩ R$,

Reach_A(W ∩ R) \backslash (W ∩ R) ⊆ OneStep_A(Reach_A(W ∩ R)) = W ⊆ Reach_A(W ∩ R)

so $W ⊆ Reach_A(W ∩ R)$: from any vertex of $W$, Alice can force the play into $W ∩ R ⊆ R$. And since $W ∩ R ⊆ W = OneStep_A(Reach_A(W ∩ R))$, once there she can move back into $Reach_A(W ∩ R)$ in one step and start over, visiting $R$ infinitely often.)

Let

Y_0 ≝ V, \; Y_1 ≝ ϕ(Y_0), \; … , \, Y_{i+1} = ϕ(Y_i)

Then by induction: this sequence is decreasing: $Y_{i+1} ⊆ Y_i \; ∀i$.

So, if $G$ is finite, this decreasing sequence stabilizes and $\bigcap\limits_{i} Y_i$ is a fixed point of $ϕ$. It is the greatest fixed point: if $W$ is another fixed point, then $W ⊆ V = Y_0$, and by applying the monotone $ϕ$ repeatedly, $W = ϕ^i(W) ⊆ Y_i$ for all $i$, hence $W ⊆ \bigcap\limits_i Y_i$.

Y_{i+1} ∪ (Y_i ∩ R) = Reach_A(Y_i ∩ R)
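In the finite case, the greatest fixed point can thus be computed by iterating $ϕ$ from $Y_0 = V$. A self-contained sketch (same assumed `succ`/`VA` representation as before, `R` the Büchi set):

```python
def one_step_A(X, succ, VA):
    # Alice needs one edge into X; Bill must have all edges into X.
    return {v for v, ws in succ.items()
            if (any(w in X for w in ws) if v in VA
                else all(w in X for w in ws))}

def reach_A(X, succ, VA):
    reach = set(X)
    while True:
        new = one_step_A(reach, succ, VA) - reach
        if not new:
            return reach
        reach |= new

def buchi_win_A(R, succ, VA):
    Y = set(succ)                                    # Y_0 = V
    while True:
        Z = one_step_A(reach_A(Y & set(R), succ, VA), succ, VA)  # phi(Y)
        if Z == Y:
            return Y             # greatest fixed point of phi
        Y = Z                    # Y_{i+1} = phi(Y_i), decreasing
```

For instance, on the two-vertex cycle $1 → 2 → 1$ with $R = \lbrace 1 \rbrace$, every vertex is winning for Alice; if instead vertex $2$ only loops on itself, no vertex is.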

A path $p = v_1 ⋯$ that starts in $Y_i \backslash Y_{i+1}$ and is consistent with Bill's strategy satisfies:
  • either $∀ i ≥ 2$, $v_i ∉ R$
  • or there exists $j≥2$ s.t. $v_j ∈ Y_i^c$ and all vertices $v_2, …, v_{j-1}$ do not belong to $R$.

Ignoring the first move, the index $i$ corresponds to the maximal number of times that Alice can force a visit to $R$.

Example

Imagine a pie that two players have to share.

  1. Player 1 proposes a proportion $x_1$ of the pie to player 2
  2. Player 2 can either accept, in which case the shares obtained are $(1-x_1, x_1)$, or reject.
  3. etc… (as described below)
  digraph {
    rankdir=TB;
    1 -> 2[label="P1 proposes x_1"];
    2 -> "(1-x_1, x_1)"[label="P2 accepts"];
    2 -> 0[label="P2 rejects"];
    0 -> "(0, 0)"[label="1 - δ"];
    0 -> "2'"[label="δ"];
    "2'" -> "1'"[label="P2 proposes x_2"];
    "1'" -> "(x_2, 1-x_2)"[label="P1 accepts"];
    "1'" -> "0'"[label="P1 rejects"];
    "0'" -> 1[label="δ"];
    "0'" -> "(0, 0)'"[label="1-δ"];
  }

With probability 1, the game stops.

Nash equilibria: for each $0 ≤ x ≤ 1$, the proportion $(x, 1-x)$ is a Nash equilibrium. Indeed: P1 never accepts less than $x$, and never proposes more than $1-x$. Symmetrically for P2.

Can we find a subgame-perfect equilibrium? (one where, each time P1 (resp. P2) plays, he/she proposes the same proportion).

Backward induction: begin at node $1$ and go backward (along the $δ$ arrow):

  1. Expectation for each player: $(z, 1-z)$
  2. At $0’$, expectation: $(1-δ)×(0, 0) + δ(z, 1-z) = (δz, δ(1-z))$
  3. At $2’$: P2 proposes $(zδ, 1-zδ)$, so that P1 accepts and P2’s payoff is $1-zδ > (1-z)δ$
  4. etc…
  digraph {
    rankdir=TB;
    "1 | (1-(δ-zδ²), δ-zδ²)" -> 2[label="P1 proposes x_1"];
    2 -> "(1-x_1, x_1) | (1-(δ-zδ²), δ-zδ²)"[label="P2 accepts"];
    2 -> "0 | (zδ², δ-zδ²)"[label="P2 rejects"];
    "0 | (zδ², δ-zδ²)" -> "(0, 0)"[label="1 - δ"];
    "0 | (zδ², δ-zδ²)" -> "2' | (zδ, 1-zδ)"[label="δ"];
    "2' | (zδ, 1-zδ)" -> "1'"[label="P2 proposes x_2"];
    "1'" -> "(x_2, 1-x_2) | (zδ, 1-zδ)"[label="P1 accepts"];
    "1'" -> "0' | (zδ, (1-z)δ)"[label="P1 rejects"];
    "0' | (zδ, (1-z)δ)" -> 1[label="δ"];
    "0' | (zδ, (1-z)δ)" -> "(0, 0)'"[label="1-δ"];
  }

So now:

(z, 1-z) = (1-δ+zδ^2, δ-zδ^2)

Therefore:

z = \frac{1-δ}{1-δ^2} = \frac{1}{1+δ}
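This fixed point can be sanity-checked numerically: the map $z ↦ 1-δ+zδ^2$ is a contraction (factor $δ^2 < 1$), so iterating it converges to $\frac{1}{1+δ}$. A quick sketch with an arbitrary sample value of $δ$:

```python
delta = 0.9            # arbitrary discount factor, 0 < delta < 1
z = 1.0                # any starting guess
for _ in range(200):   # iterate z -> 1 - delta + z * delta**2
    z = 1 - delta + z * delta ** 2

# z now agrees with the closed form 1 / (1 + delta)
print(round(z, 6), round(1 / (1 + delta), 6))
```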
