Lecture 5: P=AL

Last week:

  • Alternating TM
  • Circuit value (monotonic: no “not” in the circuit)
  • HornSAT
  • MCV ≼ CV ≼ HornSAT

Th: $MCV$ is

  • $ASPACE(\log(n))$-hard
  • i.e. $AL$-hard (as $AL ≝ ASPACE(\log n)$)

For all $L$, if $L$ is in AL, then

\[L ≼_{\bf L} MCV\]

Reduction:

\[x ⟼ 𝒞_x, t_x\]

s.t. $x∈L$ iff $t_x$ evaluates to true in $𝒞_x$

Configurations:

  • $C: q, i, j, w$

    • where $i$: position on input tape
    • $j$: position on working tape
    • $w$: word on working tape

Initial configuration:

\[C_0: q_0, 1, 1, ε\]

⟶ put an $∧$ (resp. $∨$) gate for “$C: q, i, j, w$” if $q ∈ Q_∀$ (resp. $q ∈ Q_∃$)

One runs a loop enumerating all pairs of configurations that form an edge (so as to use only logspace on the working tape), and writes the circuit on the output tape.

But how to avoid cycles in the graph of configurations?

Trick: we label the configuration nodes with the current time $t$ (increased at each iteration of the loop), so that we end up “unfolding the potential loops”.
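
As a minimal sketch of this construction (in Python; every name below — `configs`, `successors`, `is_accepting`, `is_universal` — is a hypothetical stand-in for the machine’s data, and a real logspace reduction would write the gates out one by one rather than store them):

```python
def write_circuit(configs, successors, is_accepting, is_universal, c0, bound):
    """Sketch of x ⟼ 𝒞_x, t_x: one gate per (configuration, time) pair,
    so the time label unfolds any cycle in the configuration graph.
    `bound` is the polynomial bound p(n) on the number of configurations."""
    gates = {}
    for t in range(bound):
        for C in configs:
            if is_accepting(C):
                gates[(C, t)] = ("TRUE", [])          # accepting: constant ⊤
            else:
                kind = "AND" if is_universal(C) else "OR"   # q ∈ Q_∀ / q ∈ Q_∃
                gates[(C, t)] = (kind, [(D, t + 1) for D in successors(C)])
    for C in configs:                                 # time exhausted: reject
        gates[(C, bound)] = ("TRUE", []) if is_accepting(C) else ("FALSE", [])
    return gates, (c0, 0)          # output gate t_x: the initial configuration
```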

How many configurations?

\[\vert Q \vert × (n+2) × (\log n + 2) × \underbrace{\vert Γ \vert^{\log(n)}}_{≝ n^{\log \vert Γ \vert} \text{ : a polynomial in } n} ≤ p(n)\]

and with $t ≤ p(n)$:

\[\vert Q \vert × (n+2) × (\log n + 2) × n^{\log \vert Γ \vert} × p(n) ≤ p(n)^2\]

Th: \(AL ⊆ P ≝ TIME(n^{O(1)})\)

Proof: because $MCV ∈ P$

One topologically sorts the nodes (in polynomial time) and evaluates them one by one, storing the partial results in memory (sketched below).

NB: the same argument shows $CV ∈ P$
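
For concreteness, a minimal Python sketch of this evaluation (the gate encoding — a map from gate name to a `(kind, inputs)` pair — is an assumption of mine, matching the reduction sketch above):

```python
from graphlib import TopologicalSorter

def evaluate_circuit(gates, output):
    """Evaluate a circuit in polynomial time: topologically sort the
    gates, then compute each value once, storing partial results."""
    order = TopologicalSorter({g: inputs for g, (_, inputs) in gates.items()})
    value = {}
    for g in order.static_order():        # inputs always come before g
        kind, inputs = gates[g]
        if kind in ("TRUE", "FALSE"):
            value[g] = (kind == "TRUE")
        elif kind == "AND":
            value[g] = all(value[i] for i in inputs)
        elif kind == "OR":
            value[g] = any(value[i] for i in inputs)
        else:                             # "NOT": absent in the monotone case
            value[g] = not value[inputs[0]]
    return value[output]
```

Both the sort and the evaluation pass are linear in the size of the circuit, so the whole procedure is polynomial (in fact linear) time.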

Th: $HornSAT ∈ P$

HornSAT:

SAT, but with at most one positive literal per clause — so each clause can be written $X ⇐ Y_1 ∧ ⋯ ∧ Y_r$ (positive literal $X$) or $⊥ ⇐ Y_1 ∧ ⋯ ∧ Y_r$ (no positive literal)

In HornSAT, a priori, there’s a valuation to guess (contrary to CV, where one has to compute the value of each node).
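
To make the sketches below concrete, here is one possible (hypothetical) Python encoding of a Horn formula: a clause is a pair `(head, body)`, with head `None` for the $⊥ ⇐ ⋯$ clauses:

```python
# X ⇐ Y₁ ∧ ⋯ ∧ Yᵣ  is  ("X", ["Y1", ..., "Yr"]);  a fact X is ("X", []).
H = [("Y", []),               # Y ⇐            (a fact)
     ("X", ["Y"]),            # X ⇐ Y
     (None, ["X", "Z"])]      # ⊥ ⇐ X ∧ Z
```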

A valuation

\[v : Var ⟶ \lbrace ⊥, ⊤ \rbrace\]

With the order $⊥ ≤ ⊤$ on truth values, we define a partial order on valuations

\[v ≤ v'\]

pointwise.

$v \overset{H}{⟶} v’$:
\[v'(X) = \begin{cases} ⊤ &\text{ if } ∃(X ⇐ Y_1 ∧ ⋯ ∧ Y_n) ∈ H \text{ s.t. } v(Y_1) = ⋯ = v(Y_n) = ⊤ \\ v(X) &\text{ otherwise} \end{cases}\]
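
A one-line sketch of this step operator, representing a valuation as the set of variables mapped to $⊤$ (so that the pointwise order $≤$ becomes set inclusion $⊆$):

```python
def step(H, v):
    """One application of v ⟶_H v': a head X becomes ⊤ as soon as
    every variable in the body of one of its clauses is ⊤ in v."""
    return v | {X for (X, body) in H
                if X is not None and all(Y in v for Y in body)}
```

Propositions 1 and 2 below are immediate on this representation: `step` only ever adds variables, and it is monotone in `v`.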

Propositions:

  1. $v \overset{H}{⟶} v’ ⟹ v’ ≥ v$
  2. $v ≤ w$ and $\begin{cases} v \overset{H}{⟶} v’ \\ w \overset{H}{⟶} w’ \end{cases} ⟹ v’ ≤ w’$
  3. $v_⊥ \overset{H}{⟶} v_1 \overset{H}{⟶} v_2 \overset{H}{⟶} ⋯ \overset{H}{⟶} v_n = v_{n+1}$: the sequence is stationary, since it can strictly increase at most $\vert Var \vert$ times (and the step is deterministic, so if $v_i = v_{i+1}$, then $v_{i+1} = v_{i+2} = ⋯$)

$v_{\vert Var \vert}$ satisfies $H$ iff $∃ v;$ $v$ satisfies $H$ (i.e. iff $H$ is satisfiable)

$⟹$ trivial.

$⟸$: if $v$ satisfies $H$, then $v$ is stable: any $v’$ s.t. $v \overset{H}{⟶} v’$ equals $v$ (whenever the body of a clause is true under $v$, its head already is), so $v’$ also satisfies $H$

$v_n$ is stable ($v_n = v_{n+1} = ⋯$), and $v_n ≤ v$ by proposition 2 (applied inductively, starting from $v_⊥ ≤ v$), where $v$ is stable and satisfies $H$; so $v_n$ satisfies $H$:

  • for the $X ⇐ Y_1 ∧ ⋯ ∧ Y_r$ clauses: being stable means exactly satisfying all such clauses, and $v_n$ is stable
  • for the $⊥ ⇐ Y_1 ∧ ⋯ ∧ Y_r$ clauses: since $v_n ≤ v$ and $v$ satisfies the clause, $v_n$ satisfies it too, as $v_n$ is “more false” than $v$
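
Putting the pieces together, a sketch of the resulting polynomial-time decision procedure (reusing the hypothetical `step` and clause encoding above):

```python
def horn_sat(H, variables):
    """HornSAT ∈ P: iterate from v_⊥ (the empty set); the sequence is
    stationary after at most |Var| steps, and v_|Var| already satisfies
    every clause X ⇐ Y₁ ∧ ⋯ ∧ Yᵣ, so only the ⊥-headed clauses remain
    to be checked."""
    v = set()                                  # v_⊥: everything false
    for _ in range(len(variables)):
        v = step(H, v)                         # v_⊥ ⟶ v_1 ⟶ ⋯ ⟶ v_|Var|
    return all(any(Y not in v for Y in body)   # some body variable is ⊥
               for (X, body) in H if X is None)
```

On the example `H` above: the fact fires, making `Y` then `X` true, while `Z` stays false, so the clause `⊥ ⇐ X ∧ Z` is satisfied and `horn_sat(H, {"X", "Y", "Z"})` returns `True`.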

Th:

\[HornSAT ∈ AL\]

$Q∀$: “Check that $v_n$ (where $n ≝ \vert Var \vert$) validates all clauses $⊥ ⇐ Y_1 ∧ ⋯ ∧ Y_r$”

for each clause $⊥ ⇐ Y_1 ∧ ⋯ ∧ Y_r$, do:

$Q∃$: “guess $Y_i$ and check that $v_n(Y_i) = ⊥$”

to check $v_n(Y_i) = ⊥$, do:

check, for all ($Q∀$) clauses $Y_i ⇐ Z_1 ∧ ⋯ ∧ Z_k$, that there exists ($Q∃$) some $Z_j$ with $v_{n-1}(Z_j) = ⊥$ (recursing, down to $v_0 = v_⊥$, where every variable is $⊥$), and if there is no such clause, check that $v_{n-1}(Y_i) = ⊥$

It’s in AL: at each step, one only has to remember a clause (as a pointer into the input) or a variable, plus the counter $n ≤ \vert Var \vert$, which all fit in logarithmic space.
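
A deterministic Python sketch of this alternating procedure — `all` plays the $Q∀$ states and `any` the $Q∃$ states, but where the machine branches in parallel, this simulation recurses sequentially (and may take exponential time, while the machine itself needs only logspace). The $v_{n-1}(Y_i)$ side-check is absorbed into the recursion here, since a variable can only become $⊤$ through a clause with that head:

```python
def is_false_at(H, Y, n):
    """Check v_n(Y) = ⊥ by unfolding the fixpoint sequence:
    Q_∀ over the clauses with head Y, Q_∃ over their body variables."""
    if n == 0:
        return True                      # v_0 = v_⊥: every variable is ⊥
    return all(any(is_false_at(H, Z, n - 1) for Z in body)
               for (X, body) in H if X == Y)

def horn_sat_alternating(H, variables):
    """HornSAT ∈ AL: Q_∀-check every clause ⊥ ⇐ Y₁ ∧ ⋯ ∧ Yᵣ,
    Q_∃-guessing a body variable still false in v_|Var|."""
    return all(any(is_false_at(H, Y, len(variables)) for Y in body)
               for (X, body) in H if X is None)
```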

Th: \(P = AL\)

We will show that CV is $P$-hard.

So

\[MCV ≼ CV ≼ HornSAT\]

are all PTIME-complete, i.e. $AL$-complete ($ASPACE(\log n)$-complete)

\[TIME(2^{O(f(n))}) = ASPACE(f(n)) \quad \text{for } f(n) ≥ \log n\]

and we saw last time that

\[SPACE(f^{O(1)}) = ATIME(f^{O(1)})\]

What’s the point of showing that a problem $L$ is $PTIME$-complete?

⟶ because it means that we’ll (presumably) need a polynomial amount of memory, with random access to it

Ex: dynamic programming typically gives PTIME-hard problems: one looks up every previously computed value

⟶ VS: if a problem is not PTIME-hard, it might be that we have more efficient ways to solve it

The following is believed:

not PTIME-hard ⟺

  1. Logspace
  2. solvable by a pushdown automaton in PTIME (we only look at the top of the stack: very quick)
  3. Parallel logtime (the problem is split into lots of computations: every processor reads a part of the input ⟶ then they communicate)
    • ex: finding the majority letter in a word (see the sketch after this list)
    • circuit whose depth is logarithmic
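
As a sketch of the majority-letter example (sequential Python, but the combination tree it mirrors has logarithmic depth, which is exactly what the log-depth circuit / parallel machine exploits):

```python
from collections import Counter

def majority_letter(word):
    """Divide and conquer: count each half independently (conceptually
    in parallel), then merge the counts; the tree has depth O(log n)."""
    def counts(lo, hi):
        if hi - lo == 1:
            return Counter(word[lo])
        mid = (lo + hi) // 2
        return counts(lo, mid) + counts(mid, hi)   # merge two sub-counts
    return counts(0, len(word)).most_common(1)[0][0]

# e.g. majority_letter("abacada") == "a"
```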
