| by: burt rosenberg |
| at: university of miami |
| original-date: 19 Sept 2003 |
Example: whether or not a graph has a perfect matching. The language starts with a universe: all syntactically correct descriptions of graphs, in some chosen encoding. Given an encoding of a graph, the algorithm solving the problem interprets the string as a graph and checks whether it is bipartite and has a perfect matching. If it does, the algorithm accepts the string; else it rejects.
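As a concrete illustration (not part of the original notes), here is a minimal Python sketch of such a decider under an invented encoding, "n:u-v,u-v,...", for a graph on vertices 0..n-1. It checks well-formedness and then the perfect-matching condition by brute force, so it is not a polynomial-time decider and it omits the bipartiteness test; it only illustrates the accept/reject framing.

    def decide(s):
        # interpret the string as a graph; reject anything malformed
        try:
            head, _, body = s.partition(':')
            n = int(head)
            edges = set()
            for item in body.split(',') if body else []:
                u, v = map(int, item.split('-'))
                if not (0 <= u < n and 0 <= v < n) or u == v:
                    return False            # reject: not a well-formed simple graph
                edges.add(frozenset((u, v)))
        except ValueError:
            return False                    # reject: string is not a graph in this encoding
        if n % 2 != 0:
            return False                    # an odd number of vertices rules out a perfect matching
        def match(unmatched):
            # brute force: pair the first unmatched vertex with every possible partner
            if not unmatched:
                return True
            u = unmatched[0]
            return any(frozenset((u, v)) in edges and
                       match([w for w in unmatched[1:] if w != v])
                       for v in unmatched[1:])
        return match(list(range(n)))

    # decide("4:0-1,2-3") accepts; decide("4:0-1,1-2") rejects.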
The definition of algorithm is based on the program of a Turing Machine. As we are concerned with the efficiency of algorithms only up to polynomial factors, this is sufficient: a Turing Machine and most other models of computation, whether more practical or less, have running times related to within a polynomial factor.
The typical classification of languages (problems) separates those accepted in time polynomial in the length of the input from those that cannot be accepted so speedily. The class of all languages acceptable in polynomial time by a (deterministic) Turing Machine is called P. The next stage outside of P are those languages which cannot be accepted in polynomial time, but where each string in the language can be verified to be in the language in polynomial time if the Turing Machine is given an additional hint string. Formally,
    L = { x : ∃ y such that f(x,y) accepts },
where f is computable in polynomial time and the hint y has length polynomial in the length of x. These are the NP languages. In general it is hard to find y, the hint string; to search for it without method is time consuming. Any P language is an NP language: one ignores the hint and computes f(x) without aid. It is not known if all NP languages are P languages.
Consider a vast tree of all possible computation paths, that is, the computation f(x,y) for all possible y (there are a finite number if f is to run in time polynomial in the length of x). NP means that if x is in the language, at least one path accepts; if x is not in the language, no path accepts.
However, the other paths do not reject; they are inconclusive. If every path either accepts or rejects, the language is in P, because you can fix a y once and for all, build it into the function f if you wish, and run the function: it will either accept or reject conclusively in polynomial time.
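A sketch of this computation tree in Python (the interface is my own): f(x, y) stands for the polynomial-time verifier and the hints are bit strings of a fixed polynomial length. Visiting every path is, of course, exponential; fixing a single hint gives an ordinary deterministic computation.

    from itertools import product

    def np_accepts(f, x, hint_length):
        # walk the whole tree of computation paths, one path per hint string
        return any(f(x, ''.join(bits)) for bits in product('01', repeat=hint_length))

    def fixed_hint_decider(f, y0):
        # fixing y once and for all gives a deterministic polynomial-time decision
        return lambda x: f(x, y0)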
The complement of NP is co-NP: a language is in co-NP exactly when its complement is in NP. Because of the asymmetry, I will define it with the same f(x,y) that accepts elements of the complementary NP language: x is in the co-NP language exactly when no hint y makes f(x,y) accept.
A co-NP language can also be in NP; for instance, every P language is in both. Describing the set NP ∩ co-NP has the difficulty of excluding the case that both kinds of witness exist for the same x, a witness for inclusion in the set and a witness for exclusion from it, while requiring that at least one of the two exists. One needs something like a second function g(x,z) such that both g(x,z) and f(x,y) are polynomial time decidable relations, with f providing witnesses of membership and g providing witnesses of non-membership.
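A sketch (names and interface my own) of this two-verifier view: f(x,y) is the polynomial-time witness check for membership, g(x,z) the check for non-membership, and hints(x) enumerates the finitely many candidate hint strings. For a language in NP ∩ co-NP exactly one kind of witness exists for each x, so the exhaustive search below is always conclusive, though it is exponential.

    def decide_np_cap_conp(x, f, g, hints):
        for w in hints(x):
            if f(x, w):
                return True     # a witness of inclusion: x is in the set
            if g(x, w):
                return False    # a witness of exclusion: x is not in the set
        raise AssertionError("one of the two kinds of witness must exist")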
+----------+
x ∈ L | ε | T
-----------+------------
| \ 1-ε |
x ∉ L | \ | ¬ T
--------------+---------
| |
+----------+
The Class NP, ε > 1/2^n
The Class RP, ε≥1/4
+----------+
x ∈ L | | ¬ F
--------------+---------
| / |
x ∉ L | / ε | F
-----------+------------
| 1-ε |
+----------+
The Class co-RP, ε ≥ 1/4
We introduce probability into our complexity classes. Suppose our approach to solving an NP problem is to take a witness y at random. Then we have a probabilistic algorithm for membership that has a perhaps exponentially small chance of accepting an element in the language, and zero chance of accepting an element not in the language. Because it is impossible in polynomial time to distinguish between these two probability distributions, this is not a useful characterization in practice. However, it leads to definitions that are very practical.
Let ε be some fixed value between zero and one, like 1/3.
The class RP is the class of languages for which, if x is in the language, at least a fraction ε of the strings y accept; if x is not in the language, no y accepts.
The class co-RP is the dual: if x is not in the language, at least a fraction ε of the strings y reject; if x is in the language, no y rejects.
The class ZPP is the intersection of the classes RP and co-RP: at least a fraction ε of the strings y correctly decide membership of x, and no y leads to an error in the classification of x.
The class BPP labels every computation path as either accepting or rejecting, and f(x,y) might err in deciding membership of x in the language. However, if x is in the language at least 2/3 of the y accept; if x is not in the language at least 2/3 of the y reject.
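To make these promises concrete, here is a Python sketch (names and interface mine) that checks them by exhaustively tallying the finite hint space; the tally is exponential and only spells out what each definition requires. The co-RP and ZPP promises are the evident duals.

    from fractions import Fraction

    EPSILON = Fraction(1, 3)        # the fixed epsilon of these notes

    def outcome_fractions(x, f, hints):
        # fraction of hints on which f(x, y) returns 'accept' and 'reject';
        # anything else counts as inconclusive
        ys = list(hints(x))
        accept = Fraction(sum(1 for y in ys if f(x, y) == 'accept'), len(ys))
        reject = Fraction(sum(1 for y in ys if f(x, y) == 'reject'), len(ys))
        return accept, reject

    def meets_rp_promise(x, x_in_L, f, hints):
        accept, _ = outcome_fractions(x, f, hints)
        return accept >= EPSILON if x_in_L else accept == 0

    def meets_bpp_promise(x, x_in_L, f, hints):
        accept, reject = outcome_fractions(x, f, hints)
        return accept >= Fraction(2, 3) if x_in_L else reject >= Fraction(2, 3)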
The classes RP, co-RP, ZPP and BPP are the basis for randomized complexity. An RP language can use randomness to confirm membership of strings in the language by picking random y until an accepting computation path is found. A ZPP language can decide membership, in or out, by picking random y. This is a feasible strategy for the randomized complexity classes because there are many conclusive paths. It cannot be done for NP: there may be only one path among the exponential number of possibilities which is conclusive, and it is exponentially unlikely that the correct y will be chosen.
RP, co-RP and ZPP decide membership probabilistically without error. If they determine a string to be in or out of the language, it is indeed in or out of the language. With ever decreasing likelihood as more computation paths are attempted, they might, however, remain inconclusive. In fact, RP (co-RP) is as useless for rejecting (accepting) a string as is NP: whereas about 1/ε paths need be tried before an accept (reject) on average, fully a fraction (1-ε) of the paths must be tried before one can infer rejection (acceptance).
BPP decides membership with an arbitrarily small error. However, it always has an opinion. By repeating a BPP computation for various random y, and taking the majority result, one can reduce the probability of misclassification.
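A sketch of that majority vote in Python, assuming a routine bpp_decide(x) (my name) that flips its own coins and answers correctly with probability at least 2/3 on each independent run; by a Chernoff-style bound the majority opinion is wrong with probability that shrinks exponentially in the number of repetitions.

    def amplified(bpp_decide, x, repetitions=101):
        # run the BPP computation independently and take the majority opinion
        votes = sum(1 for _ in range(repetitions) if bpp_decide(x))
        return votes > repetitions // 2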
            BQP
             |
  NP        BPP        co-NP
    \      /   \      /
       RP        co-RP
         \      /
            ZPP
             |
             P
Probabilistic complexity hierarchy
BQP: Quantum complexity class
Warning! Non-stated relations might not be known.
We establish the relationships,
    P ⊆ ZPP ⊆ RP, co-RP;   RP ⊆ NP;   co-RP ⊆ co-NP;   RP, co-RP ⊆ BPP.
Since P always halts correctly in polynomial time, P ⊆ ZPP. Since ZPP is the intersection of RP and co-RP, ZPP ⊆ RP, co-RP. Since NP only requires one accepting path, and RP requires many, RP ⊆ NP. Likewise co-RP ⊆ co-NP.
Finally, let L be an RP language. We show L is in BPP. Let M be the machine accepting L and construct M' from M. M' will run M multiple times, accepting the input if M ever accepts. The number of repetitions is chosen to give an accepting probability for M' of at least 2/3. If M never accepts, M' rejects.
M' is a BPP machine. If l ∈ L then by the repeated calls to M, l is accepted with good probability, that is, M' accepts with probability greater than 2/3. If l ∉ L then M never accepts, so M' rejects, that is, M' rejects l with probability greater than 2/3 (in fact, with probability 1).
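A sketch of the machine M' just described (interface mine), with eps taken to be the fixed fraction from the RP definition; M(x) is assumed to run the RP machine once with fresh coin flips and to return True only when it lands on an accepting path.

    import math

    def M_prime(M, x, eps=1/3):
        # choose t so that (1 - eps)**t <= 1/3, making the acceptance
        # probability at least 2/3 whenever x is in L
        t = math.ceil(math.log(1 / 3) / math.log(1 - eps))
        for _ in range(t):
            if M(x):
                return True     # an accepting run of M is conclusive: x is in L
        return False            # reject; wrong with probability at most 1/3 when x is in L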
A similar construction gives co-RP ⊆ BPP.
The relationship between NP and BPP is not known. Remember, it could be that everything here is equal to P.
There is another manner in which an algorithm can be randomized polynomial time. The above notions gave a guaranteed runtime polynomial in the input size, but the result may be incomplete or incorrect. There is a randomized polynomial time class which requires a correct answer and an expected runtime polynomial in the input size. It is possible that for some coin flips (or input choices) the algorithm runs for, say, an exponential number of steps. But these must be "exponentially rare" so that they do not adversely affect the average runtime.
An algorithm which runs in expected polynomial time can be modified so that it stops itself after a polynomial number of steps. If the algorithm is so terminated, it returns an incomplete result. Using Markov's inequality, one can bound the probability that the algorithm has been prematurely stopped, and thus yield an RP, co-RP or ZPP algorithm.
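A sketch of this truncation in Python, under assumptions of my own: las_vegas(x, max_steps=...) always answers correctly when it finishes within its step budget and returns None otherwise, and expected_steps(n) is the polynomial bound on its expected running time. With the budget set to c times the expectation, Markov's inequality bounds the chance of a premature stop by 1/c.

    def truncated(las_vegas, x, expected_steps, c=3):
        budget = c * expected_steps(len(x))
        answer = las_vegas(x, max_steps=budget)     # None means the budget ran out
        return 'inconclusive' if answer is None else answer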
In an equally simple manner, a ZPP algorithm can be turned into an expected-runtime algorithm by repeating it with different coin flips until it definitively accepts or rejects. A calculation of probability shows the expected number of repetitions is constant, about 1/ε, so the expected runtime remains polynomial.
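A sketch of that repetition loop (interface mine): zpp_trial(x) flips fresh coins on each call, returns True or False only when it is certain, and returns None when inconclusive. Since each call is conclusive with probability at least ε, the number of iterations is geometric with mean at most 1/ε.

    def zpp_decide(zpp_trial, x):
        while True:
            verdict = zpp_trial(x)
            if verdict is not None:
                return verdict      # never wrong; expected number of loops is at most 1/ε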
The situation for BPP, RP and co-RP algorithms is left as a matter for discussion.
These notes were added 2 September 2025.
If a language L is in NP then the language Ł, the complement of L, is in co-NP, where
    Ł = { x : x ∉ L }.
The effect of this on proof cannot be overlooked. The language L is defined by a predicate V(x,y): it is the set of all x for which there is a y such that V(x,y) = T. The complement of this is the set of all x for which, for all y, V(x,y) ≠ T, as follows by the rules of logic for negating an existential predicate. Co-NP is not the class in which there is necessarily an existence-type proof of the complement. That would be the class NP ∩ co-NP, where both the set and the complement have a P-time verifier, say V(x,y) and Vc(x,y), and V and Vc cannot contradict each other.
The existence of witnesses y is non-constructive. If the witness is constructible in P-time, this is the class P.
A good example is the problem of graph isomorphism. Two graphs are either isomorphic or not. If they are, one can produce an isomorphism and quickly check it. Non-isomorphism, as far as we know, is proven only by the lack of any such isomorphism.
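A Python sketch of the contrast (graphs given as edge lists over vertices 0..n-1; names mine): a claimed isomorphism is verified in polynomial time, while finding one, or certifying that none exists, is done here only by brute force over all n! bijections.

    from itertools import permutations

    def verify_isomorphism(edges1, edges2, pi):
        # polynomial-time check that the bijection pi carries the first edge set onto the second
        mapped = {frozenset((pi[u], pi[v])) for (u, v) in edges1}
        return mapped == {frozenset(e) for e in edges2}

    def find_isomorphism(n, edges1, edges2):
        # the witness is quick to verify but, as far as we know, slow to find;
        # when none exists, the only "proof" here is the exhaustive failure
        for pi in permutations(range(n)):
            if verify_isomorphism(edges1, edges2, pi):
                return pi
        return None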
+----------+
x ∈ L | ε | T
------------+-----------
| \ 1-ε|
| \ | ⊥ (¬T∧¬F)
| +--------
| / |
x ∉ L | / 1-ε| F
------------+-----------
| ε |
+----------+
The Class NP ∩ coNP if ε > 1/2^n
The Class ZPP if ε>0 (fixed ε)
Note that V is polynomial time, hence so is any single proof. Therefore the class co-NP does have a demonstration: the concatenation of the vast yet finite collection of possible proofs, all of which are unsuccessful. However this proof is inaccessible, since it is exponentially long.
It is possible to prove non-isomorphism with an interactive proof (IP) system. That system will necessarily have imperfect soundness.
Referring to the diagrams for RP and co-RP, the asymmetry is expressed in writing ¬T rather than F, for instance. The class ZPP = RP ∩ co-RP shows this distinction, as in the diagram above where both T and F appear along with the inconclusive outcome ⊥.

Author: Burton Rosenberg
Created: 2 September 2025
Last Update: 18 November 2025