DM: Advanced Complexity
Homework Assignment: Advanced Complexity
Younesse Kaddar
\[\newcommand{\nabNP}{\mathop{\rm ∇NP}\nolimits} \newcommand{\NP}{\mathop{\rm NP}\nolimits} \newcommand{\coNP}{\mathop{\rm coNP}\nolimits} \newcommand{\ll}{\left[\!\!\left[} \newcommand{\rr}{\right]\!\!\right]}\]1. $\nabNP$, a new complexity class.
\[\nabNP ≝ \lbrace L_1 \backslash L_2 \mid L_1, L_2 ∈ \NP \rbrace\]1.
\[\begin{align*} \textsf{YesNoSAT} & ≝ \lbrace (F, G) \mid F ∈ \textsf{SAT}, G ∉ \textsf{SAT}\rbrace \\ & = \underbrace{\left\lbrace (F, G) \;\middle|\; \begin{cases} F ∈ \textsf{SAT} \\ G \text{ any boolean expression} \end{cases} \right\rbrace}_{∈ \; \NP} \backslash \underbrace{\left\lbrace (F, G) \;\middle|\; \begin{cases} F \text{ any boolean expression} \\ G ∈ \textsf{SAT} \end{cases} \right\rbrace}_{∈ \; \NP} \\ &∈ \nabNP \end{align*}\]The underbraced sets are in $\NP$ since:
 $\textsf{SAT} ∈ \NP$
 and the unconstrained boolean expression only requires the Turing machine to check that it is a syntactically well-formed boolean formula.
2.
\[\textsf{Prime} ∈ \coNP = \lbrace \underbrace{\textsf{All}}_{∈ \NP: \; \rlap{\text{language associated with the problem that accepts everything}}} \backslash L_2 \mid L_2 ∈ \NP \rbrace ⊆ \nabNP\]Indeed: the nondeterministic Turing machine which, on input $n ∈ ℕ$:
 guesses an integer $d ∈ \ll 2, n-1 \rr$
 returns true if $d$ is a divisor of $n$, false otherwise
recognizes in nondeterministic polynomial time $\underbrace{\overline{\textsf{Prime}}}_{\rlap{\text{complement of } \textsf{Prime}}} \overset{\text{hence}}{∈} \NP$, so that $\textsf{Prime} ∈ \coNP$.
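This nondeterministic machine can be simulated deterministically by trying every possible certificate; a minimal sketch (function names are mine):

```python
def is_composite_certificate(n: int, d: int) -> bool:
    # A certificate for non-primality is a guessed divisor d ∈ [2, n-1];
    # checking a given certificate only takes time polynomial in the size of n.
    return 2 <= d <= n - 1 and n % d == 0

def is_prime(n: int) -> bool:
    # Deterministic simulation of the nondeterminism: try every certificate.
    return n >= 2 and not any(is_composite_certificate(n, d) for d in range(2, n))
```

Of course, the deterministic simulation takes exponential time in the size of $n$; only the check of a single certificate is polynomial, which is what membership in $\NP$ requires.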
3.
As $2^k$ is written with $k+1$ digits in binary:
\[\begin{align*} \vert e_n \vert & = \sum\limits_{ k=0 }^{n-1} \underbrace{(k+1) + (k+2) + 1}_{\text{length of } 2^k ∪ 2^{k+1}} + \underbrace{n-1}_{\text{number of } "+"} \\ & = 2 \sum\limits_{ k=0 }^{n-1} k + 4n + n-1 \\ & = 2 \frac{n(n-1)}{2} + 5n - 1 \\ & = n(n + 4) - 1 \end{align*}\] \[\begin{align*} V(e_n) & = \left\lbrace \sum\limits_{ k=0 }^{n-1} ε_k 2^k \;\middle|\; (ε_k) ∈ \lbrace 1, 2 \rbrace^n \right\rbrace \\ & = \left\lbrace \sum\limits_{ k=0 }^{n-1} 2^k + \sum\limits_{ k=0 }^{n-1} (ε_k-1) 2^k \;\middle|\; (ε_k)_{0 ≤ k ≤ n-1} ∈ \lbrace 1, 2 \rbrace^{n} \right\rbrace \\ & = \left\lbrace 2^n - 1 + \sum\limits_{ k=0 }^{n-1} ε'_k 2^k \;\middle|\; (ε'_k)_{0 ≤ k ≤ n-1} ∈ \lbrace 0, 1 \rbrace^n \right\rbrace \\ \end{align*}\]But any integer in $\ll 0, 2^n-1 \rr$ can be decomposed into a sum of the form $\sum\limits_{ k=0 }^{n-1} ε'_k 2^k$, where $(ε'_k)_{0 ≤ k ≤ n-1} ∈ \lbrace 0, 1 \rbrace^n$ (decomposition in base $2$).
So \(V(e_n) = \ll 2^n - 1, \, 2^{n+1} - 2 \rr\)
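This closed form can be sanity-checked by brute force for small $n$; the representation of $e_n$ as one choice $ε_k ∈ \lbrace 1, 2 \rbrace$ per summand follows the computation above (a sketch, encoding mine):

```python
from itertools import product

def V_en(n: int) -> set:
    # e_n = (2^0 ∪ 2^1) + (2^1 ∪ 2^2) + ⋯ + (2^{n-1} ∪ 2^n):
    # each summand 2^k ∪ 2^{k+1} contributes ε_k·2^k with ε_k ∈ {1, 2}.
    return {sum(e * 2 ** k for k, e in enumerate(eps))
            for eps in product((1, 2), repeat=n)}

# Check the closed form V(e_n) = [[2^n − 1, 2^{n+1} − 2]]:
assert all(V_en(n) == set(range(2 ** n - 1, 2 ** (n + 1) - 1))
           for n in range(1, 9))
```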
4.
\[\begin{align*} \textsf{IsolVal} & = \underbrace{\left\lbrace (e, n) \;\middle|\; n ∈ V(e)\right\rbrace}_{∈ \; \NP} \backslash \underbrace{\Big( \left\lbrace (e, n) \;\middle|\; n-1 ∈ V(e)\right\rbrace ∪ \left\lbrace (e, n) \;\middle|\; n+1 ∈ V(e)\right\rbrace \Big)}_{∈ \; \NP} \\ &∈ \nabNP \end{align*}\]Indeed:
Lemma: $\NP$ is closed under union (resp. intersection).
Proof: Let $L_1, L_2 ∈ \NP$ respectively recognized by $ℳ_1, ℳ_2$.
Let $ℳ$ be the Turing machine which, on input $w$:

runs $ℳ_1$ on $w$, and accepts (resp. rejects) if $ℳ_1$ accepts (resp. rejects) $w$

runs $ℳ_2$ on $w$, and accepts (resp. rejects) if $ℳ_2$ accepts (resp. rejects) $w$

otherwise rejects (resp. accepts).
$ℳ$ clearly recognizes $L_1 ∪ L_2$ (resp. $L_1 ∩ L_2$), and runs in polynomial time, since $ℳ_1$ and $ℳ_2$ do. So the result follows.

$\lbrace (e, n) \mid \; n ∈ V(e)\rbrace ∈ \NP$ :
 The nondeterministic Turing machine guesses an element of $V(e)$ and checks if it is equal to $n$.

The “guessing” process can be recursively specified as follows:
 for an NE of the form $m$: it picks $m$
 for an NE of the form $e_1 + e_2$: it guesses an element of $V(e_1)$, another of $V(e_2)$, and sums them
 for an NE of the form $e_1 ∪ e_2$: it nondeterministically chooses between $e_1$ and $e_2$ and guesses an element of it
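In brute-force form (computing the whole of $V(e)$ rather than guessing a single branch), this recursion reads as follows; the tuple encoding of NEs is a hypothetical choice of mine:

```python
def V(e) -> set:
    # An NE is encoded as an int m, ('+', e1, e2) or ('u', e1, e2) (union).
    if isinstance(e, int):
        return {e}          # a literal m: its only value is m
    op, e1, e2 = e
    if op == '+':
        # one value is guessed in each operand, then both are summed
        return {a + b for a in V(e1) for b in V(e2)}
    return V(e1) | V(e2)    # union: nondeterministic choice of a side

# (1 ∪ 2) + (1 ∪ 2) takes every value in [[2, 4]]:
assert V(('+', ('u', 1, 2), ('u', 1, 2))) == {2, 3, 4}
```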
Likewise, $\lbrace (e, n) \mid \; n-1 ∈ V(e)\rbrace, \lbrace (e, n) \mid \; n+1 ∈ V(e)\rbrace ∈ \NP$

As $\NP$ is closed under union (lemma), $\lbrace (e, n) \mid \; n-1 ∈ V(e)\rbrace ∪ \lbrace (e, n) \mid \; n+1 ∈ V(e)\rbrace ∈ \NP$
5.
\[\begin{align*} \textsf{AlmostSAT} & = \underbrace{\left\lbrace S ≝ C_1 ∧ ⋯ ∧ C_n \;\middle|\; ∀i, \; S\backslash C_i ∈ \textsf{SAT} \right\rbrace}_{∈ \; \NP} \backslash \underbrace{\left\lbrace S ≝ C_1 ∧ ⋯ ∧ C_n \;\middle|\; S ∈ \textsf{SAT} \right\rbrace}_{∈ \; \NP} \\ &∈ \nabNP \end{align*}\]Indeed:
The first set is in $\NP$: one runs the Turing machine $ℳ$ recognizing $\textsf{SAT}$ on each $S\backslash C_i$ and accepts if and only if all the runs accept: this takes polynomial time, since a linear number of executions of $ℳ$ (each running in polynomial time) are performed.
6.
Let $\textsf{All} ∈ \NP$ be the language associated with the problem that accepts everything, $∅ ∈ \NP$ the empty language (they are in $\NP$: the corresponding machines immediately accept or reject, respectively). Then
\[\begin{cases} \NP = \lbrace L_1 \backslash ∅ \mid L_1 ∈ \NP \rbrace ⊆ \nabNP \\ \coNP = \lbrace \textsf{All} \backslash L_2 \mid L_2 ∈ \NP \rbrace ⊆ \nabNP \end{cases}\]so \(\NP ∪ \coNP ⊆ \nabNP\)
7.
For all $L ≝ L_1\backslash L_2, \; L' ≝ L_1'\backslash L_2' ∈ \nabNP$ (where $L_1, L_2, L_1', L_2' ∈ \NP$):
\[L ∩ L' = (L_1 ∩ L_1')\backslash (L_2 ∪ L_2') ∈ \nabNP\]Indeed:
 $L_1 ∩ L_1' ∈ \NP$, since $\NP$ is closed under intersection (cf. lemma, question 4)
 $L_2 ∪ L_2' ∈ \NP$, since $\NP$ is closed under union (cf. same lemma)
2. A few simple $\nabNP$-complete problems
8.
As $\textsf{YesNoSAT} ∈ \nabNP$ (question 1), it suffices to show that $\textsf{YesNoSAT}$ is $\nabNP$-hard.
For all $L ≝ L_1\backslash L_2 ∈ \nabNP$ (where $L_1, L_2 ∈ \NP$): as $\textsf{SAT}$ is $\NP$-complete (Cook-Levin theorem), there exist two logspace reductions $r_1, r_2$ such that:
\[\begin{cases} ∀w, \; w ∈ L_1 ⟺ r_1(w) ∈ \textsf{SAT} \\ ∀w, \; w ∈ L_2 ⟺ r_2(w) ∈ \textsf{SAT} \\ \end{cases}\]Let \(r ≝ w ⟼ (r_1(w), r_2(w))\)
Then for all $w$:
\[\begin{align*} w ∈ L_1\backslash L_2 &⟺ w ∈ L_1 ∧ w ∉ L_2 \\ &⟺ r_1(w) ∈ \textsf{SAT} ∧ r_2(w) ∉ \textsf{SAT} \\ &⟺ (r_1(w), r_2(w)) ∈ \textsf{YesNoSAT} \end{align*}\]Moreover, $r$ clearly runs in logspace, as $r_1$ and $r_2$ do.
So \(∀L ∈ \nabNP, \; L ≼_{\rm L} \textsf{YesNoSAT}\)
and
$\textsf{YesNoSAT}$ is $\nabNP$-complete.
9.
$\textsf{BestClique} ∈ \nabNP$:
\[\begin{align*} \textsf{BestClique} & = \textsf{Clique} \backslash \underbrace{\left\lbrace (G, k) \;\middle|\; (G, k+1) ∈ \textsf{Clique} \right\rbrace}_{∈ \; \NP} \\ &∈ \nabNP \end{align*}\]Indeed:
$\textsf{Clique} ∈ \NP$: guess a set $S$ of vertices, check that $\vert S \vert ≥ k$, then check whether all vertices in $S$ are pairwise connected by an edge (this takes quadratic, hence polynomial, nondeterministic time). Analogously, $\lbrace (G, k) \mid \; (G, k+1) ∈ \textsf{Clique} \rbrace ∈ \NP$.
$\textsf{BestClique}$ is $\nabNP$-hard:
Let us first show that $\textsf{BestClique}$ is $\NP$-hard (it will come in handy in questions 16 and 17).
Lemma: There exists a logspace reduction $r$ from $\textsf{3SAT}$ to $\textsf{BestClique}$ such that for every 3CNF boolean formula $φ$ with $m$ clauses:
\[\begin{cases} φ ∈ \textsf{3SAT} ⟹ r(φ) ≝ (G, m) ∈ \textsf{BestClique} &&⊛ \\ φ ∉ \textsf{3SAT} ⟹ (G, m-1) ∈ \textsf{BestClique} &&⊛⊛ \end{cases}\]In particular, $φ ∈ \textsf{3SAT} ⟺ r(φ)∈ \textsf{BestClique}$, so $\textsf{BestClique}$ is $\NP$-hard.
We will use the same reduction as the one seen last year (http://younesse.net/Calculabilite/TD7/ Réductions> EX1 and EX3):
For each clause $C$ of $r$ literals in $φ$, we add $r$ nodes to $G$, each one labeled with a literal from $C$. We don’t connect any nodes stemming from the same clause.
Then, we put edges between each pair of nodes coming from distinct clauses, except for the pairs of the form $(x, ¬x)$, so that any clique in $G$ is of size at most $m$.
One easily sees that:
 if $φ$ is satisfied by a valuation $v$: the clique comprised, for each clause $C$, of one vertex corresponding to a satisfied literal, is of size $m$ (and is thereby maximum)
 if $G$ has a largest clique $c$ of size $m$: then $c$ has exactly one node from each clause (at most one, since nodes from the same clause are not connected, and at least one, because $c$ is of size $m$). Then by setting the corresponding literals to true (and everything else to false), the resulting valuation satisfies $φ$, since each clause has a satisfied literal.
Then, to ensure that the largest clique is of size $m-1$ whenever $φ$ is unsatisfiable, we modify $G$ by artificially taking the disjoint union with a new clique of size $m-1$.
NB: the reduction is clearly logspace, since:
 $m$ can be computed by reading the input, with one counter
 one can build $G$ by scanning through each clause, with a constant number of pointers
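The construction of the lemma (literal nodes, edges across distinct clauses except contradictory pairs, plus the artificial disjoint $(m-1)$-clique) can be tested by brute force on small formulas; clauses are encoded as tuples of signed integers, a choice of mine:

```python
from itertools import combinations

def reduction_graph(phi):
    # One node (i, lit) per occurrence of a literal lit in clause number i;
    # literals are nonzero ints, with -x standing for ¬x.
    m = len(phi)
    nodes = [(i, lit) for i, clause in enumerate(phi) for lit in clause]
    # Edges across distinct clauses, except between pairs (x, ¬x):
    edges = {frozenset((u, v)) for u, v in combinations(nodes, 2)
             if u[0] != v[0] and u[1] != -v[1]}
    # Artificial disjoint clique of size m − 1:
    extra = [('extra', k) for k in range(m - 1)]
    edges |= {frozenset((u, v)) for u, v in combinations(extra, 2)}
    return nodes + extra, edges

def largest_clique(nodes, edges):
    # Exponential brute force, for testing only.
    for size in range(len(nodes), 0, -1):
        for cand in combinations(nodes, size):
            if all(frozenset((u, v)) in edges for u, v in combinations(cand, 2)):
                return size
    return 0

# Satisfiable (x ∨ y) ∧ (¬x ∨ y): largest clique of size m = 2;
# unsatisfiable (x) ∧ (¬x): largest clique of size m − 1 = 1.
assert largest_clique(*reduction_graph([(1, 2), (-1, 2)])) == 2
assert largest_clique(*reduction_graph([(1,), (-1,)])) == 1
```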
$\nabNP$-hardness
We reduce $\textsf{YesNoSAT}$ to $\textsf{BestClique}$.
Let $φ_1, φ_2$ be two CNF formulas having respectively $m_1$ and $m_2$ clauses. We can ensure that $m_1 ≠ m_2$ by possibly adding new tautological clauses (i.e. of the form $x ∨ ¬x$, where $x$ is a fresh variable), which doesn’t change the satisfiability of the formulas.
By denoting by $r$ the reduction introduced in the previous lemma:

$r(φ_1) ≝ (G_1, m_1)$ has a largest clique of size:
 $m_1$ if $φ_1$ is satisfiable
 $m_1  1$ otherwise
And similarly for $r(φ_2) ≝ (G_2, m_2)$.
Then, we define:
\[G ≝ G_1 × G_2\]That is:
 the vertex set of $G$ is the cartesian product of the vertex sets of $G_1$ and $G_2$

in $G$, two distinct vertices $(u_1,u_2)$ and $(v_1, v_2)$ are adjacent if and only if
 $u_1$ is adjacent or equal to $v_1$
 $u_2$ is adjacent or equal to $v_2$
(this is the strong product: its cliques are exactly the products of a clique of $G_1$ with a clique of $G_2$, so the size of a largest clique of $G$ is the product of the sizes of largest cliques of $G_1$ and $G_2$)
It follows that:
\[(φ_1, φ_2) ∈ \textsf{YesNoSAT} ⟺ (G, m_1(m_2-1)) ∈ \textsf{BestClique}\]Indeed, one cannot have $m_1(m_2-1) = (m_1 - 1)m_2$, since it would imply that $\frac{m_1}{m_1 - 1} = \frac{m_2}{m_2 - 1}$, but the function $n ⟼ \frac{n}{n - 1}$ is injective and $m_1 ≠ m_2$.
The reduction is logspace since:
 one computes $m_1$ and $m_2$ with two counters
 one possibly adds new tautological clauses
 to build $G ≝ G_1 × G_2$, one only keeps a constant amount of pointers on the working tape (e.g. the current nodes of $G_1$ and $G_2$ considered)
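The key property used here, namely that largest-clique sizes are multiplicative when adjacency is taken in the strong sense (coordinatewise adjacent or equal), can be checked by brute force on small graphs; the encoding below is mine:

```python
from itertools import combinations

def strong_product(V1, E1, V2, E2):
    # (u1, u2) ~ (v1, v2) iff each coordinate is adjacent or equal
    # (and the two pairs are distinct): the strong graph product.
    V = [(u1, u2) for u1 in V1 for u2 in V2]
    adj = lambda E, u, v: u == v or frozenset((u, v)) in E
    E = {frozenset((p, q)) for p, q in combinations(V, 2)
         if adj(E1, p[0], q[0]) and adj(E2, p[1], q[1])}
    return V, E

def omega(V, E):
    # Size of a largest clique (exponential brute force, for testing only).
    for size in range(len(V), 0, -1):
        if any(all(frozenset(pq) in E for pq in combinations(c, 2))
               for c in combinations(V, size)):
            return size
    return 0

# Path 0-1-2 (ω = 2) and a triangle (ω = 3): the product has ω = 2·3 = 6.
P = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})
K3 = (['a', 'b', 'c'], {frozenset(p) for p in combinations('abc', 2)})
assert omega(*strong_product(*P, *K3)) == omega(*P) * omega(*K3)
```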
On the whole, as $\textsf{YesNoSAT}$ has been proven to be $\nabNP$-hard in question 8, so is $\textsf{BestClique}$.
As $\textsf{BestClique} ∈ \nabNP$ and $\textsf{BestClique}$ is $\nabNP$-hard, $\textsf{BestClique}$ is $\nabNP$-complete.
10.
As $\textsf{IsolVal}$ has been shown to be in $\nabNP$ in question 4, it remains to show that it is $\nabNP$-hard.
For all $L ≝ L_1\backslash L_2 ∈ \nabNP$ (where $L_1, L_2 ∈ \NP$): as $\textsf{SubsetSum}$ is $\NP$-complete, there exist two logspace reductions $r_1, r_2$ such that:
\[\begin{cases} ∀w, \; w ∈ L_1 ⟺ r_1(w) ≝ (\lbrace a_1, ⋯, a_k \rbrace, t) ∈ \textsf{SubsetSum} \\ ∀w, \; w ∈ L_2 ⟺ r_2(w) ≝ (\lbrace a'_1, ⋯, a'_l \rbrace, t') ∈ \textsf{SubsetSum} \\ \end{cases}\]If $t ≥ t'$
For all $w$, let $r(w) ≝ (e, n)$ be defined by:
 \[e ≝ \overbrace{3a_1 ∪ 0 + ⋯ + 3a_k ∪ 0}^{≝ \, e_1}\\ ∪ \Big(\underbrace{3a'_1 ∪ 0 + ⋯ + 3a'_l ∪ 0 + 3(t-t')+1}_{≝ \, e_2} \Big)\]
 \[n ≝ 3t\]
It follows that:
If $(\lbrace a_1, ⋯, a_k \rbrace, t) ∈ \textsf{SubsetSum}$ and $(\lbrace a'_1, ⋯, a'_l \rbrace, t') ∉ \textsf{SubsetSum}$:
then $n ≝ 3t ∈ V(e_1) ⊆ V(e)$, and

$n-1 = 3t-1 ∉ V(e_1) ∪ V(e_2) = V(e)$, since $n-1 ≡ 2 \mod 3$ and
 $∀m ∈ V(e_1), \; m ≡ 0 \mod 3$
 $∀m ∈ V(e_2), \; m ≡ 1 \mod 3$

$n+1 = 3t+1 ∉ V(e_1) ∪ V(e_2) = V(e)$, since $n+1 ≡ 1 \mod 3$ and
 $∀m ∈ V(e_1), \; m ≡ 0 \mod 3$
 if $n+1 ∈ V(e_2)$, then there exists $J ⊆ \ll 1, l\rr$ such that \(3 \sum\limits_{ j ∈ J } a'_j + 3(t - t') + 1 = 3t+1\) i.e. \(\sum\limits_{ j ∈ J } a'_j = t'\) which contradicts $(\lbrace a'_1, ⋯, a'_l \rbrace, t') ∉ \textsf{SubsetSum}$
so $r(w) ∈ \textsf{IsolVal}$
If $(\lbrace a_1, ⋯, a_k \rbrace, t) ∉ \textsf{SubsetSum}$ or $(\lbrace a'_1, ⋯, a'_l \rbrace, t') ∈ \textsf{SubsetSum}$:

If $(\lbrace a_1, ⋯, a_k \rbrace, t) ∉ \textsf{SubsetSum}$, then
 $n ∉ V(e_1)$ (subset sum condition)
 $n = 3t ∉ V(e_2)$ (as argued before, due to $n ≡ 0\mod 3$)
so $n ∉ V(e)$

If $(\lbrace a'_1, ⋯, a'_l \rbrace, t') ∈ \textsf{SubsetSum}$, then
 $n+1 = 3t+1 ∈ V(e_2)$, since a subset of the $a'_j$ sums to $t'$
In either case, $r(w) ∉ \textsf{IsolVal}$
If $t' > t$
We proceed similarly, with $r(w) ≝ (e, n)$ where:
 \[e ≝ \overbrace{3a_1 ∪ 0 + ⋯ + 3a_k ∪ 0 + 3(t'-t)}^{≝ \, e_1}\\ ∪ \Big(\underbrace{3a'_1 ∪ 0 + ⋯ + 3a'_l ∪ 0 +1}_{≝ \, e_2} \Big)\]
 \[n ≝ 3t'\]
The proof is perfectly analogous, since the elements of the sets $V(e_1)$ and $V(e_2)$ still have the same values modulo $3$.
On the whole, we have shown that
\[\begin{align*} w ∈ L_1\backslash L_2 &⟺ (\lbrace a_1, ⋯, a_k \rbrace, t) ∈ \textsf{SubsetSum} \text{ and } (\lbrace a'_1, ⋯, a'_l \rbrace, t') ∉ \textsf{SubsetSum} \\ &⟺ r(w) ∈ \textsf{IsolVal} \end{align*}\]Moreover, $r$ clearly runs in logspace, as all we do is compare $t$ and $t'$, before using pointers to write the NE directly on the output tape.
$\textsf{IsolVal}$ is $\nabNP$-complete.
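The case $t ≥ t'$ of this reduction can be tested by brute force on small $\textsf{SubsetSum}$ instances; NEs are encoded as nested tuples, a choice of mine:

```python
def V(e) -> set:
    # An NE is an int, ('+', e1, e2) or ('u', e1, e2) (union).
    if isinstance(e, int):
        return {e}
    op, e1, e2 = e
    return ({a + b for a in V(e1) for b in V(e2)} if op == '+'
            else V(e1) | V(e2))

def plus(*es):
    # Left-nested sum of several NEs.
    e, *rest = es
    for f in rest:
        e = ('+', e, f)
    return e

def reduction(A, t, B, tp):
    # Case t ≥ t' of the text: e = e1 ∪ e2 with
    # e1 = 3a_1∪0 + ⋯ + 3a_k∪0 and e2 = 3a'_1∪0 + ⋯ + 3a'_l∪0 + 3(t−t')+1.
    assert t >= tp
    e1 = plus(*[('u', 3 * a, 0) for a in A])
    e2 = plus(*[('u', 3 * b, 0) for b in B], 3 * (t - tp) + 1)
    return ('u', e1, e2), 3 * t

def isol_val(e, n):
    vals = V(e)
    return n in vals and n - 1 not in vals and n + 1 not in vals

# ({1,2,4}, 3) ∈ SubsetSum and ({2,4}, 3) ∉ SubsetSum → (e, n) ∈ IsolVal:
assert isol_val(*reduction([1, 2, 4], 3, [2, 4], 3))
# ({1,2}, 5) ∉ SubsetSum → n ∉ V(e), so (e, n) ∉ IsolVal:
assert not isol_val(*reduction([1, 2], 5, [2, 4], 3))
# both instances positive → n+1 ∈ V(e2), so (e, n) ∉ IsolVal:
assert not isol_val(*reduction([1, 2], 3, [1, 2], 3))
```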
3. A more complex reduction
11.
\[S^Z ≝ C^Z ∧ \bigwedge_i C_i^Z ∧ \bigwedge_{i≠j} D_{i,j}^Z\]where
 $C^Z ≝ z_1 ∨ ⋯ ∨ z_n$
 $C_i^Z ≝ z_1 ∨ ⋯ ∨ z_{i-1} ∨ ¬ z_i ∨ z_{i+1} ∨ ⋯ ∨ z_n$
 $D_{i, j}^Z ≝ ¬ z_i ∨ ¬ z_j$
If $n = 0$
Then
 $C^Z ≝ \, ⊥$
 $\ll 1, n \rr = ∅$
so that:
\[S^Z ≝ \; ⊥ ∧ \underbrace{\bigwedge_{i ∈ ∅} C_i^Z}_{ = ⊤} \; ∧ \; \underbrace{\bigwedge_{i≠j ∈ ∅} D_{i,j}^Z}_{= ⊤}\]and the CNF form of $S^Z$ is:
\[S^Z ≝ \; ⊥\]Therefore $S^Z ∈ \textsf{AlmostSAT}$, since $S^Z$ is unsatisfiable, and $S^Z\backslash ⊥$, which is the empty conjunction (that is, $⊤$), is satisfiable.
If $n > 0$
$S^Z$ is unsatisfiable
By contradiction, if there existed a valuation $v$ satisfying $S^Z$:
 as $v$ would satisfy $C^Z ≝ z_1 ∨ ⋯ ∨ z_n$, there would exist $i ∈ \ll 1, n \rr$ such that $v(z_i) = ⊤$
 but then, $v$ satisfying $C_i^Z ≝ z_1 ∨ ⋯ ∨ z_{i-1} ∨ ¬ z_i ∨ z_{i+1} ∨ ⋯ ∨ z_n$ would yield another $j ≠ i$ such that $v(z_j) = ⊤$
 which would contradict $v$ satisfying $D_{i, j}^Z ≝ ¬ z_i ∨ ¬ z_j$
$S^Z \backslash C^Z$ is satisfiable
Setting all variables to $⊥$ then satisfies $\bigwedge_i C_i^Z ∧ \bigwedge_{i≠j} D_{i,j}^Z = S^Z\backslash C^Z$.
$S^Z \backslash C_k^Z$ is satisfiable
Setting all variables to $⊥$ except $z_k$ (which is set to $⊤$) then satisfies $C^Z ∧ \bigwedge_{i≠k} C_i^Z ∧ \bigwedge_{i≠j} D_{i,j}^Z = S^Z\backslash C_k^Z$.
$S^Z \backslash D_{k, l}^Z$ is satisfiable
Setting all variables to $⊥$ except $z_k$ and $z_l$ (which are set to $⊤$) then satisfies $C^Z ∧ \bigwedge_{i} C_i^Z ∧ \bigwedge_{\substack{i≠j \\ \lbrace i, j \rbrace ≠ \lbrace k, l \rbrace}} D_{i,j}^Z = S^Z\backslash D_{k, l}^Z$.
It has been shown that
\[S^Z ∈ \textsf{AlmostSAT}\]
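For small $n$, one can verify by exhaustive search over valuations that $S^Z ∈ \textsf{AlmostSAT}$; clauses are encoded as tuples of signed variable indices, a choice of mine:

```python
from itertools import product

def SZ(n):
    # Clauses of S^Z over z_1 … z_n (i encodes the literal z_i, -i encodes ¬z_i).
    C = [tuple(range(1, n + 1))]                                   # C^Z
    Ci = [tuple(-j if j == i else j for j in range(1, n + 1))
          for i in range(1, n + 1)]                                # C_i^Z
    D = [(-i, -j) for i in range(1, n + 1)
         for j in range(i + 1, n + 1)]                             # D_{i,j}^Z (i < j)
    return C + Ci + D

def satisfiable(clauses, nvars):
    return any(all(any((l > 0) == v[abs(l) - 1] for l in c) for c in clauses)
               for v in product((False, True), repeat=nvars))

def almost_sat(clauses, nvars):
    # Unsatisfiable, but satisfiable after removing any single clause.
    return (not satisfiable(clauses, nvars)
            and all(satisfiable(clauses[:k] + clauses[k + 1:], nvars)
                    for k in range(len(clauses))))

# The n = 0 case is covered too: C^Z is then the empty (unsatisfiable) clause.
assert all(almost_sat(SZ(n), n) for n in range(0, 5))
```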
12.
Let us reduce $\textsf{coSAT}$ to $\textsf{AlmostSAT}$.
First, we reduce $\textsf{coSAT}$ to $\textsf{co3SAT}$: as $\textsf{3SAT}$ is $\NP$-hard, $\textsf{co3SAT}$ is $\coNP$-hard.
Indeed: for all $L ∈ \coNP$, there exists a logspace reduction $r$ of $\overline{L} ∈ \NP$ to $\textsf{3SAT}$, so that \(∀w, \; w∈\overline{L} ⟺ r(w) ∈ \textsf{3SAT}\) that is \(∀w, \; w∈L ⟺ w ∉ \overline{L} ⟺ r(w) ∉ \textsf{3SAT} ⟺ r(w) ∈ \textsf{co3SAT}\) so that $r$ is a reduction of $L$ to $\textsf{co3SAT}$.
So we want to show \(\textsf{coSAT} ≼_{\rm L} \textsf{co3SAT} ≼_{\rm L} \textsf{AlmostSAT}\)
by reducing $\textsf{co3SAT}$ to $\textsf{AlmostSAT}$.
Let $φ ≝ \bigwedge_{i} \underbrace{𝒞_i}_{≝ \; l_i^1 ∨ l_i^2 ∨ l_i^3}$ be a 3CNF formula.
Firstly, one defines a formula $\widetilde{φ}$ out of $φ$ such that $\widetilde{φ}$ has no tautological clause and no repeated literal within a clause, i.e. no clause contains both $x$ and $¬x$, or the same literal twice, for a variable $x$: $φ$ and the resulting $\widetilde{φ}$ are then equisatisfiable.
This can be done in logspace, as all the clauses are of size at most $3$: one scans through the clauses, writing the literals $l_i^k$ of the currently examined clause $𝒞_i$ on the working tape, and one checks whether $𝒞_i$ is tautological or includes a repeated literal; as the case may be, one ignores in the following reduction:
 either $𝒞_i$ as a whole in case it is tautological
 or the repeated literal if $𝒞_i$ includes such a literal
So from now on, we will work on $\widetilde{φ}$, and we can assume, without loss of generality, that $\widetilde{φ} = φ$, so that $φ$ has no tautological clauses and no repeated (in a single clause) literal.
Let $r$ be defined as:
\[\begin{align*} r(φ) &≝ \bigwedge_{i} \overbrace{\bigvee_{j≠i} z_j ∨ 𝒞_i}^{≝ \; E^Z_{i}} \\ &∧ \bigwedge_{j≠i} \overbrace{¬ z_i ∨ ¬z_j}^{≝ \; D^Z_{i,j}}\\ &∧ \bigwedge_{i, r} \overbrace{\bigvee_{j≠i} z_j ∨ \overline{l_i^r} ∨ ¬ z_i}^{≝ \; C^Z_{i, r}} \end{align*}\]where
 the $z_i$ are fresh variables
 $\overline{l_i^r}$ is the negation of the literal $l_i^r$
$φ$ and $r(φ)$ are equisatisfiable
if $φ$ is satisfied by a valuation $v$
then the valuation obtained out of $v$ by setting all the $z_i$ to false satisfies $r(φ)$:
 each $E^Z_{i}$ is satisfied, due to $𝒞_i$ being satisfied
 each $D^Z_{i,j}$ and $C^Z_{i, r}$ is satisfied as well, since all the $z_i$ are set to false
if $φ$ is unsatisfiable
then, by contradiction, let us assume that there exists a valuation $v$ satisfying $r(φ)$.
NB: By abuse of notation, we extend $v$ to any clause, so that: if $C = \bigvee_i l_i, \; v(C) ≝ \bigvee_i v(l_i)$

As $φ$ is unsatisfiable, there exists $i$ such that $v(𝒞_i) = ⊥$; then, as $v(E^Z_{i}) = ⊤$, there exists $i_0 ≠ i$ such that $v(z_{i_0}) = ⊤$

Then as, for all $j ≠ i_0$, $D^Z_{i_0,j}$ is satisfied: for all $j ≠ i_0, \; v(z_j) = ⊥$

So $v(𝒞_{i_0}) = ⊤$, since $v(E^Z_{i_0}) = ⊤$ and for all $j ≠ i_0$, $v(z_j) = ⊥$

But for all $r$, as $v(C^Z_{i_0, r}) = ⊤$ and:
 for all $j ≠ i_0$, $v(z_j) = ⊥$
 $v(¬z_{i_0}) = ⊥$
it follows that $v(\overline{l_{i_0}^r}) = ⊤$ for all $r$, so that $v(𝒞_{i_0}) = ⊥$, which contradicts the third point.
For all $i_0$, $r(φ)\backslash E^Z_{i_0}$ is satisfiable
Let $v$ be a valuation such that
 $v(z_{i_0}) = ⊤$
 $∀j ≠ i_0, \; v(z_j) = ⊥$
 $∀r, \; v(\overline{l_{i_0}^r}) = ⊤$
 assigns anything ($⊤$ or $⊥$) to all the other variables
NB: $v$ is well-defined over the $\overline{l_{i_0}^r}$ because no clause in $φ$ is tautological, so that no two literals $\overline{l_{i_0}^r}, \overline{l_{i_0}^{r'}}$ are such that one is a variable and the other the negation thereof.
$v$ satisfies $r(φ)\backslash E^Z_{i_0}$:
 for all $i ≠ i_0$, for all $r$, $E^Z_{i}$ and $C^Z_{i, r}$ are satisfied because of $z_{i_0}$
 each $D^Z_{i,j}$ is satisfied because $z_{i_0}$ is the only $z_k$ that is set to true by $v$: all the others are set to false
 for all $r$, $C^Z_{i_0, r}$ is satisfied because of $\overline{l_{i_0}^r}$
For all $i_0, j_0$, $r(φ)\backslash D^Z_{i_0, j_0}$ is satisfiable
By setting both $z_{i_0}$ and $z_{j_0}$ to $⊤$, and all the other $z_k$ to $⊥$, one easily checks that $r(φ)\backslash D^Z_{i_0, j_0}$ is satisfied.
For all $i_0, r_0$, $r(φ)\backslash C^Z_{i_0, r_0}$ is satisfiable
By setting:

$z_{i_0}$, as well as the $\overline{l_{i_0}^{r}}$ for $r ≠ r_0$, to $⊤$

all the other $z_k$, and $\overline{l_{i_0}^{r_0}}$, to $⊥$ (so that the literal $l_{i_0}^{r_0}$ is set to $⊤$)
one checks that $r(φ)\backslash C^Z_{i_0, r_0}$ is satisfied:
 for all $i ≠ i_0$, for all $r$, $E^Z_{i}$ and $C^Z_{i, r}$ are satisfied because of $z_{i_0}$
 each $D^Z_{i,j}$ is satisfied because $z_{i_0}$ is the only $z_k$ that is set to true: all the others are set to false
 for all $r ≠ r_0$, $C^Z_{i_0, r}$ is satisfied because of $\overline{l_{i_0}^{r}}$
 $E^Z_{i_0}$ is satisfied because $l_{i_0}^{r_0}$ satisfies $𝒞_{i_0}$
On the whole, we have shown that:
\[φ ∈ \textsf{co3SAT} ⟺ r(φ)∈ \textsf{AlmostSAT}\]The reduction is logspace since:
 one computes the number of clauses of $φ$ with a counter
 with a finite number of pointers: for each clause $𝒞_i$, one writes the $E^Z_{i}$ and the $C^Z_{i, r}$ on the output tape, remembering which is the current $z_i$ with a counter
 then, one writes the $D^Z_{i,j}$ with possibly another counter
Finally, as $\textsf{coSAT} ≼_{\rm L} \textsf{co3SAT}$ and $\textsf{co3SAT} ≼_{\rm L} \textsf{AlmostSAT}$:
\[\textsf{coSAT} ≼_{\rm L} \textsf{AlmostSAT}\]
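The equivalence $φ ∈ \textsf{co3SAT} ⟺ r(φ) ∈ \textsf{AlmostSAT}$ can be tested exhaustively on small (cleaned-up) formulas; clauses are tuples of signed variable indices over $x_1, …, x_n$, and $z_i$ is encoded as variable $n + i$, an encoding of mine:

```python
from itertools import product

def satisfiable(clauses, nvars):
    return any(all(any((l > 0) == v[abs(l) - 1] for l in c) for c in clauses)
               for v in product((False, True), repeat=nvars))

def almost_sat(clauses, nvars):
    # Unsatisfiable, but satisfiable after removing any single clause.
    return (not satisfiable(clauses, nvars)
            and all(satisfiable(clauses[:k] + clauses[k + 1:], nvars)
                    for k in range(len(clauses))))

def r(phi, nvars):
    # Variables of φ are 1 … nvars; z_i is encoded as nvars + i (i = 1 … m).
    m = len(phi)
    z = lambda i: nvars + i
    zs = lambda i: tuple(z(j) for j in range(1, m + 1) if j != i)   # ∨_{j≠i} z_j
    E = [zs(i) + phi[i - 1] for i in range(1, m + 1)]               # E_i^Z
    D = [(-z(i), -z(j)) for i in range(1, m + 1)
         for j in range(i + 1, m + 1)]                              # D_{i,j}^Z
    C = [zs(i) + (-l, -z(i)) for i in range(1, m + 1)
         for l in phi[i - 1]]                                       # C_{i,r}^Z
    return E + D + C, nvars + m

# x ∧ ¬x is unsatisfiable → r(φ) ∈ AlmostSAT;
# the satisfiable formulas x and (x ∨ y) ∧ (¬x ∨ y) → r(φ) ∉ AlmostSAT.
assert almost_sat(*r([(1,), (-1,)], 1))
assert not almost_sat(*r([(1,)], 1))
assert not almost_sat(*r([(1, 2), (-1, 2)], 2))
```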
13.
We proceed in the same way as the previous question, the only difference being that we add the clause $C^Z ≝ \bigvee_i z_i$ to $r(φ)$ (with the same notations as before), so that the new $r(φ)$ is defined as:
\[\begin{align*} r(φ) &≝ \bigwedge_{i} \overbrace{\bigvee_{j≠i} z_j ∨ 𝒞_i}^{≝ \; E^Z_{i}} \\ &∧ \bigwedge_{j≠i} \overbrace{¬ z_i ∨ ¬z_j}^{≝ \; D^Z_{i,j}}\\ &∧ \bigwedge_{i, r} \overbrace{\bigvee_{j≠i} z_j ∨ \overline{l_i^r} ∨ ¬ z_i}^{≝ \; C^Z_{i, r}} \\ &∧ \underbrace{\bigvee_{i} z_i}_{≝ \; C^Z} \\ \end{align*}\]Let us check that \(φ ∈ \textsf{SAT} ⟺ r(φ) ∈ \textsf{AlmostSAT}\) in a similar fashion:
$r(φ)$ is unsatisfiable
By contradiction: if a valuation $v$ satisfies $r(φ)$
 because of $C^Z$, there exists $i_0$ such that $v(z_{i_0}) = ⊤$
 but then, because of the $D^Z_{i, j}$, all the $z_j$ for $j≠i_0$ are set to $⊥$…
 … which means, because of $E^Z_{i_0}$, that $v(𝒞_{i_0}) = ⊤$
 owing to $C_{i, r}^Z$ for all $r$: $v(\overline{l_{i_0}^r}) = ⊤$ (since $v(¬z_{i_0})$ and the $v(z_j)$ for $j≠i_0$ are equal to $⊥$)
 which yields a contradiction, as before: as $v(\overline{l_{i_0}^r}) = ⊤$ for all $r$, $v(𝒞_{i_0}) = ⊥$, which contradicts the third point.
if $φ$ is satisfiable, $r(φ)\backslash E^Z_{i_0}, \; r(φ)\backslash D^Z_{i_0, j_0}, \; r(φ)\backslash C^Z_{i_0, r_0}\; \text{ and } r(φ)\backslash C^Z$ are satisfiable
For
 $r(φ)\backslash E^Z_{i_0}$
 $r(φ)\backslash D^Z_{i_0, j_0}$
 $r(φ)\backslash C^Z_{i_0, r_0}$
the proofs go exactly as in previous question, since:
 we didn’t assume any satisfiability property of $φ$ back then, to show these results
 and in these demonstrations, the clause $C^Z$ (which was not there before) would systematically have been satisfied, since at least one of the $z_i$ was always set to true.
For $r(φ)\backslash C^Z$: the proof has already been done in the previous question, just after the definition of $r(φ)$, when we showed “$φ$ satisfiable implies $r(φ)$ satisfiable”.
if $φ$ is unsatisfiable, $r(φ)\backslash C^Z$ is unsatisfiable
Again, the proof has already been done in the previous question, just after the definition of $r(φ)$, when we showed “$φ$ unsatisfiable implies $r(φ)$ unsatisfiable”.
On the whole, we have shown that
\[φ ∈ \textsf{SAT} ⟺ r(φ) ∈ \textsf{AlmostSAT}\]and this reduction, as in the previous question (the arguments remain the same), is still logspace.
Thus
\[\textsf{SAT} ≼_{\rm L} \textsf{AlmostSAT}\]
14.
If $S ≝ \bigwedge_i C_i, S' ≝ \bigwedge_i C'_i$ are CNF formulas, one defines
\[r((S, S')) ≝ \left(\bigwedge_i (C_i ∨ x) \right) ∧ \left(\bigwedge_i (C_i' ∨ ¬x) \right)\]where $x$ is a fresh variable.
If $(S, S') ∈ \textsf{doubleAlmostSAT}$, one easily checks that:
 $r((S, S'))$ is unsatisfiable, as $S$ and $S'$ are

$r((S, S'))$ minus any clause is satisfiable:
 $r((S, S'))\backslash (C_i ∨ x)$ is satisfied by the valuation obtained out of the valuation satisfying $S\backslash C_i$ and which sets $x$ to false (so that $\bigwedge_i (C_i' ∨ ¬x)$ is satisfied)
 the other case is symmetric
If $S$ or $S'$ is not in $\textsf{AlmostSAT}$:
 If $S$ or $S'$ is satisfiable: so is $r((S, S'))$ (by setting $x$ appropriately, as before).
 Otherwise, if some $S\backslash C_i$ is unsatisfiable: so is $r((S, S'))\backslash (C_i ∨ x)$, since any valuation satisfying it either sets $x$ to $⊥$, and then satisfies $\bigwedge_{j ≠ i} C_j = S\backslash C_i$, or sets $x$ to $⊤$, and then satisfies $\bigwedge_j C'_j = S'$: both are excluded ($S'$ being unsatisfiable in this case)
 the other case is symmetric
so $r((S, S'))$ is not in $\textsf{AlmostSAT}$ either.
We have shown that
\[(S, S') ∈ \textsf{doubleAlmostSAT} ⟺ r((S, S')) ∈ \textsf{AlmostSAT}\]and the reduction is trivially logspace: one only scans through the clauses and adds one extra fresh variable (counting the total number of variables with a counter to do so).
Thus
\[\textsf{doubleAlmostSAT} ≼_{\rm L} \textsf{AlmostSAT}\]
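This last reduction is also easy to test exhaustively; clauses are encoded as tuples of signed variable indices (a choice of mine), with the fresh variable $x$ taken to be variable number $n+1$:

```python
from itertools import product

def satisfiable(clauses, nvars):
    return any(all(any((l > 0) == v[abs(l) - 1] for l in c) for c in clauses)
               for v in product((False, True), repeat=nvars))

def almost_sat(clauses, nvars):
    # Unsatisfiable, but satisfiable after removing any single clause.
    return (not satisfiable(clauses, nvars)
            and all(satisfiable(clauses[:k] + clauses[k + 1:], nvars)
                    for k in range(len(clauses))))

def r(S, Sp, nvars):
    # Add the fresh variable x ≝ nvars + 1 to the clauses of S,
    # and ¬x to the clauses of S'.
    x = nvars + 1
    return [c + (x,) for c in S] + [c + (-x,) for c in Sp], nvars + 1

# S = S' = x1 ∧ ¬x1 are both in AlmostSAT → r((S, S')) ∈ AlmostSAT:
assert almost_sat(*r([(1,), (-1,)], [(1,), (-1,)], 1))
# S = x1 is satisfiable, hence not in AlmostSAT → r((S, S')) ∉ AlmostSAT:
assert not almost_sat(*r([(1,)], [(1,), (-1,)], 1))
```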
15.
As $\textsf{YesNoSAT}$ is $\nabNP$-complete (question 8), any language in $\nabNP$:
 can be turned (in logspace) into an instance thereof
 which can itself be turned (in logspace) into an instance of $\textsf{doubleAlmostSAT}$ (questions 12 and 13)
 which can itself be turned (in logspace) into an instance of $\textsf{AlmostSAT}$ (question 14).
As all these reductions are logspace, we use the property that $≼_{\rm L}$ is transitive (as seen in class) to conclude that $\textsf{AlmostSAT}$ is $\nabNP$-hard.
As $\textsf{AlmostSAT} ∈ \nabNP$ (question 5),
$\textsf{AlmostSAT}$ is $\nabNP$-complete.
16.
We will show the contrapositive.
We have shown in question 9 that $\textsf{BestClique}$ is $\nabNP$-complete. So as $\coNP ⊆ \nabNP$, $\textsf{BestClique}$ is $\coNP$-hard.
But:
Lemma:
if a $\coNP$-hard language $L$ is in $\NP$, then $\coNP = \NP$
if an $\NP$-hard language $L$ is in $\coNP$, then $\coNP = \NP$
Proof of 1.:
$\coNP ⊆ \NP$
For any language $L' ∈ \coNP$: $L'$ can be reduced to $L$. But as $L ∈ \NP$, it follows that $L' ∈ \NP$.
$\NP ⊆ \coNP$
For any language $L' ∈ \NP$, as $\overline{L'} ∈ \coNP$, $\overline{L'}$ can be reduced to $L$ by a logspace reduction $r$.
Thus,
\(∀w, \; w∈\overline{L'} ⟺ r(w) ∈ L\) which implies that \(∀w, \; w∈L' ⟺ w ∉ \overline{L'} ⟺ r(w) ∉ L ⟺ r(w) ∈ \overline{L}\)
That is, $L'$ can be logspace reduced to $\overline{L}$. But as $L ∈ \NP$, $\overline{L} ∈ \coNP$, and the result follows.
The proof of 2. is symmetric.
By applying the first point of this lemma to $L = \textsf{BestClique}$, it follows that $\coNP = \NP$.
17.
Again, we prove the contrapositive.
As $\textsf{BestClique}$ has been shown (question 9) to be $\NP$-hard, by applying the second point of the previous lemma to $L = \textsf{BestClique}$, it follows that $\coNP = \NP$.
18.
If $\coNP ≠ \NP$, by questions 16 and 17: $\textsf{BestClique} ∉ \NP ∪ \coNP$. But by question 9, $\textsf{BestClique} ∈ \nabNP$. Thus, $\textsf{BestClique} ∈ \nabNP \backslash (\NP ∪ \coNP)$, and since $\NP ∪ \coNP ⊆ \nabNP$ (question 6):
\[\NP ∪ \coNP ⊊ \nabNP\]