The standard notion of a randomness extractor is a procedure that
converts any weak source of randomness into an almost uniform
distribution. The conversion necessarily uses a small amount of
additional pure randomness (a seed), which can be eliminated by
enumerating over all seeds in some, but not all, applications.
Here, we consider the problem of *deterministically* converting a weak
source of randomness into an almost uniform distribution. Previously,
deterministic extraction procedures were known only for sources
satisfying strong independence requirements. In this paper, we look
at sources that are "samplable", i.e., that can be generated by an
efficient sampling algorithm. We seek an efficient deterministic
procedure that, given a sample from any samplable distribution of
sufficiently large min-entropy, produces an almost uniformly distributed
output. We explore the conditions under which such "deterministic
extractors" exist.
We observe that no deterministic extractor exists if the sampler is
allowed to use more computational resources than the extractor. On
the other hand, if the extractor is allowed (polynomially) more
resources than the sampler, we show that deterministic extraction
becomes possible. This holds unconditionally in the nonuniform
setting (i.e., when the extractor may be computed by a small circuit),
whereas in the uniform setting it necessarily relies on complexity
assumptions.
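The impossibility observation follows from a simple counting argument;
here is a sketch of the standard counterexample (our reconstruction):
given any candidate extractor Ext: {0,1}^n -> {0,1}^m computable in
time t, let b be the majority value of the first output bit over all
2^n inputs, and let X be uniform on {x : Ext(x)_1 = b}. This set has
size at least 2^{n-1}, so H_infty(X) >= n-1, yet the first bit of
Ext(X) is constant, putting Ext(X) at statistical distance at least
1/2 from uniform. Moreover, X can be generated by rejection sampling
in expected time O(t) given the single advice bit b, i.e., by a
sampler with resources comparable to those of the extractor.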
One of our uniform constructions is as follows: assuming that there
are problems in E=DTIME(2^{O(n)}) that are not solvable by
subexponential-size circuits with Sigma_6 gates, there is an efficient
extractor that transforms any samplable distribution of length n and
min-entropy (1-gamma)n into an output distribution of length
(1-O(gamma))n, where gamma is any sufficiently small constant. The
running time of the extractor is polynomial in n and the circuit
complexity of the sampler. These extractors are based on a connection
between deterministic extraction from samplable distributions and
hardness against nondeterministic circuits, and on the use of
nondeterminism to substantially speed up "list decoding" algorithms
for error-correcting codes such as multivariate polynomial codes and
Hadamard-like codes.
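For background on the coding ingredients (standard facts, not claims
about the specific construction): the Hadamard code encodes x in
{0,1}^k as the truth table of y -> <x,y> mod 2, and it is
list-decodable up to relative distance 1/2 - epsilon: at most
O(1/epsilon^2) codewords can agree with a received word on a
1/2 + epsilon fraction of positions, and the Goldreich-Levin
algorithm recovers them efficiently given oracle access to the word.
Multivariate polynomial (Reed-Muller) codes admit analogous
list-decoding algorithms; the speedup referred to above comes from
using nondeterminism in such decoders.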
Joint work with Luca Trevisan.