# Small-bias sample space


In theoretical computer science, a small-bias sample space (also known as ${\displaystyle \epsilon }$-biased sample space, ${\displaystyle \epsilon }$-biased generator, or small-bias probability space) is a probability distribution that fools parity functions. In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with high probability, and hence, small-bias sample spaces naturally give rise to pseudorandom generators for parity functions.

The main useful property of small-bias sample spaces is that they need far fewer truly random bits than the uniform distribution to fool parities. Efficient constructions of small-bias sample spaces have found many applications in computer science, some of which are derandomization, error-correcting codes, and probabilistically checkable proofs. The connection with error-correcting codes is in fact very strong since ${\displaystyle \epsilon }$-biased sample spaces are equivalent to ${\displaystyle \epsilon }$-balanced error-correcting codes.

## Definition

### Bias

Let ${\displaystyle X}$ be a probability distribution over ${\displaystyle \{0,1\}^{n}}$. The bias of ${\displaystyle X}$ with respect to a set of indices ${\displaystyle I\subseteq \{1,\dots ,n\}}$ is defined as[1]

${\displaystyle {\text{bias}}_{I}(X)=\left|\Pr _{x\sim X}\left(\sum _{i\in I}x_{i}=0\right)-\Pr _{x\sim X}\left(\sum _{i\in I}x_{i}=1\right)\right|=\left|2\cdot \Pr _{x\sim X}\left(\sum _{i\in I}x_{i}=0\right)-1\right|\,,}$

where the sum is taken over ${\displaystyle \mathbb {F} _{2}}$, the finite field with two elements. In other words, the sum ${\displaystyle \sum _{i\in I}x_{i}}$ equals ${\displaystyle 0}$ if the number of ones in the sample ${\displaystyle x\in \{0,1\}^{n}}$ at the positions defined by ${\displaystyle I}$ is even, and otherwise, the sum equals ${\displaystyle 1}$. For ${\displaystyle I=\emptyset }$, the empty sum is defined to be zero, and hence ${\displaystyle {\text{bias}}_{\emptyset }(X)=1}$.
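For illustration, the bias of the uniform distribution over a multiset of samples can be computed directly from this definition; the following is a minimal sketch, assuming samples are given as tuples of bits:

```python
from itertools import product

def bias(samples, I):
    """Bias of the uniform distribution over a multiset of n-bit samples
    with respect to the index set I: |2*Pr[sum_{i in I} x_i = 0] - 1|,
    where the sum of the selected bits is taken over GF(2) (a parity)."""
    parities = [sum(x[i] for i in I) % 2 for x in samples]
    return abs(2 * parities.count(0) / len(samples) - 1)

# The uniform distribution over {0,1}^3 has bias 0 for every non-empty I,
# and bias 1 for I = {} (the empty sum is always 0).
uniform = list(product([0, 1], repeat=3))
empty_bias = bias(uniform, [])
pair_bias = bias(uniform, [0, 2])
```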

### ϵ-biased sample space

A probability distribution ${\displaystyle X}$ over ${\displaystyle \{0,1\}^{n}}$ is called an ${\displaystyle \epsilon }$-biased sample space if ${\displaystyle {\text{bias}}_{I}(X)\leq \epsilon }$ holds for all non-empty subsets ${\displaystyle I\subseteq \{1,2,\ldots ,n\}}$.

### ϵ-biased set

An ${\displaystyle \epsilon }$-biased sample space ${\displaystyle X}$ that is generated by picking a uniform element from a multiset ${\displaystyle X\subseteq \{0,1\}^{n}}$ is called an ${\displaystyle \epsilon }$-biased set. The size ${\displaystyle s}$ of an ${\displaystyle \epsilon }$-biased set ${\displaystyle X}$ is the size of the multiset that generates the sample space.
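Since the definition quantifies over all non-empty index sets, whether a given multiset is ${\displaystyle \epsilon }$-biased can be checked by brute force when ${\displaystyle n}$ is small; a minimal sketch:

```python
from itertools import product, combinations

def max_bias(samples):
    """Maximum bias over all non-empty index sets I.  The loop over
    subsets is exponential in n, so this is only practical for small n."""
    n = len(samples[0])
    worst = 0.0
    for r in range(1, n + 1):
        for I in combinations(range(n), r):
            p0 = sum(1 for x in samples if sum(x[i] for i in I) % 2 == 0)
            worst = max(worst, abs(2 * p0 / len(samples) - 1))
    return worst

# The full cube is 0-biased; a single point has bias 1 on some parity.
cube_bias = max_bias(list(product([0, 1], repeat=3)))
point_bias = max_bias([(1, 0, 1)])
```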

### ϵ-biased generator

An ${\displaystyle \epsilon }$-biased generator ${\displaystyle G:\{0,1\}^{\ell }\to \{0,1\}^{n}}$ is a function that maps strings of length ${\displaystyle \ell }$ to strings of length ${\displaystyle n}$ such that the multiset ${\displaystyle X_{G}=\{G(y)\;\vert \;y\in \{0,1\}^{\ell }\}}$ is an ${\displaystyle \epsilon }$-biased set. The seed length of the generator is the number ${\displaystyle \ell }$ and is related to the size of the ${\displaystyle \epsilon }$-biased set ${\displaystyle X_{G}}$ via the equation ${\displaystyle s=2^{\ell }}$.

## Connection with epsilon-balanced error-correcting codes

There is a close connection between ${\displaystyle \epsilon }$-biased sets and ${\displaystyle \epsilon }$-balanced linear error-correcting codes. A linear code ${\displaystyle C:\{0,1\}^{n}\to \{0,1\}^{s}}$ of message length ${\displaystyle n}$ and block length ${\displaystyle s}$ is ${\displaystyle \epsilon }$-balanced if the Hamming weight of every nonzero codeword ${\displaystyle C(x)}$ is between ${\displaystyle ({\frac {1}{2}}-\epsilon )s}$ and ${\displaystyle ({\frac {1}{2}}+\epsilon )s}$. Since ${\displaystyle C}$ is a linear code, its generator matrix is an ${\displaystyle (n\times s)}$-matrix ${\displaystyle A}$ over ${\displaystyle \mathbb {F} _{2}}$ with ${\displaystyle C(x)=x\cdot A}$.

Then it holds that a multiset ${\displaystyle X\subset \{0,1\}^{n}}$ is ${\displaystyle \epsilon }$-biased if and only if the linear code ${\displaystyle C_{X}}$ whose generator matrix has the elements of ${\displaystyle X}$ as its columns is ${\displaystyle \epsilon }$-balanced.[2]
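The correspondence can be seen coordinate-wise: for a message ${\displaystyle x}$ whose support is the index set ${\displaystyle I}$, the codeword ${\displaystyle C(x)}$ has one coordinate ${\displaystyle \langle x,v\rangle }$ per element ${\displaystyle v\in X}$, so the probability that the parity on ${\displaystyle I}$ is zero equals ${\displaystyle 1-{\text{wt}}(C(x))/s}$, and the bias on ${\displaystyle I}$ equals ${\displaystyle |1-2\,{\text{wt}}(C(x))/s|}$, which is small exactly when the codeword weight is close to ${\displaystyle s/2}$. A brute-force sketch of this identity (the four-element multiset is an arbitrary example, not from the literature):

```python
from itertools import combinations

def bias_via_parity(X, I):
    """bias_I of the multiset X, computed from the parity definition."""
    p0 = sum(1 for v in X if sum(v[i] for i in I) % 2 == 0)
    return abs(2 * p0 / len(X) - 1)

def bias_via_codeword(X, I):
    """The same quantity via the code view: |1 - 2*wt/s|, where wt is the
    Hamming weight of the codeword whose j-th coordinate is <x, X_j> over
    GF(2) and x is the indicator vector of I."""
    wt = sum(sum(v[i] for i in I) % 2 for v in X)
    return abs(1 - 2 * wt / len(X))

X = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
agree = all(bias_via_parity(X, I) == bias_via_codeword(X, I)
            for r in (1, 2, 3) for I in combinations(range(3), r))
```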

## Constructions of small epsilon-biased sets

Usually the goal is to find ${\displaystyle \epsilon }$-biased sets that have a small size ${\displaystyle s}$ relative to the parameters ${\displaystyle n}$ and ${\displaystyle \epsilon }$. This is because a smaller size ${\displaystyle s}$ means that the amount of randomness needed to pick a random element from the set is smaller, and so the set can be used to fool parities using few random bits.

### Theoretical bounds

The probabilistic method gives a non-explicit construction that achieves size ${\displaystyle s=O(n/\epsilon ^{2})}$.[2] The construction is non-explicit in the sense that finding the ${\displaystyle \epsilon }$-biased set requires a lot of true randomness, which does not help towards the goal of reducing the overall randomness. However, this non-explicit construction is useful because it shows that such efficient codes exist. On the other hand, the best known lower bound for the size of ${\displaystyle \epsilon }$-biased sets is ${\displaystyle s=\Omega (n/(\epsilon ^{2}\log(1/\epsilon )))}$; that is, any ${\displaystyle \epsilon }$-biased set must be at least that large.[2]
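The probabilistic argument can be illustrated empirically (this is an illustration, not the formal proof): a random multiset of size well above ${\displaystyle n/\epsilon ^{2}}$ has all parities close to balanced with high probability. The sketch below fixes a seed for reproducibility and checks only a loose bias bound that a random set of this size violates with vanishingly small probability:

```python
import random
from itertools import combinations

random.seed(0)  # fixed seed so the run is reproducible
n, s = 4, 200   # s is well above n/eps^2 = 16 for eps = 0.5
X = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(s)]

worst = 0.0
for r in range(1, n + 1):
    for I in combinations(range(n), r):
        p0 = sum(1 for x in X if sum(x[i] for i in I) % 2 == 0)
        worst = max(worst, abs(2 * p0 / s - 1))
# By a Chernoff bound, each of the 2^n - 1 parities has bias >= 0.5 with
# probability about 2*exp(-25), so worst < 0.5 except with tiny probability.
```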

### Explicit constructions

There are many explicit, i.e., deterministic constructions of ${\displaystyle \epsilon }$-biased sets with various parameter settings:

• Naor & Naor (1990) achieve ${\displaystyle \displaystyle s={\frac {n}{{\text{poly}}(\epsilon )}}}$. The construction makes use of Justesen codes (a concatenation of Reed–Solomon codes with the Wozencraft ensemble) as well as expander walk sampling.
• Alon et al. (1992) achieve ${\displaystyle \displaystyle s=O\left({\frac {n}{\epsilon \log(n/\epsilon )}}\right)^{2}}$. One of their constructions is the concatenation of Reed–Solomon codes with the Hadamard code; this concatenation turns out to be an ${\displaystyle \epsilon }$-balanced code, which gives rise to an ${\displaystyle \epsilon }$-biased sample space via the connection mentioned above.
• Concatenating Algebraic geometric codes with the Hadamard code gives an ${\displaystyle \epsilon }$-balanced code with ${\displaystyle \displaystyle s=O\left({\frac {n}{\epsilon ^{3}\log(1/\epsilon )}}\right)}$.[2]
• Ben-Aroya & Ta-Shma (2009) achieve ${\displaystyle \displaystyle s=O\left({\frac {n}{\epsilon ^{2}\log(1/\epsilon )}}\right)^{5/4}}$.
• Ta-Shma (2017) achieves ${\displaystyle \displaystyle s=O\left({\frac {n}{\epsilon ^{2+o(1)}}}\right)}$ which is almost optimal because of the lower bound.

These bounds are mutually incomparable. In particular, none of these constructions yields the smallest ${\displaystyle \epsilon }$-biased sets for all settings of ${\displaystyle \epsilon }$ and ${\displaystyle n}$.

## Application: almost k-wise independence

An important application of small-bias sets lies in the construction of almost k-wise independent sample spaces.

### k-wise independent spaces

A random variable ${\displaystyle Y}$ over ${\displaystyle \{0,1\}^{n}}$ is a k-wise independent space if, for all index sets ${\displaystyle I\subseteq \{1,\dots ,n\}}$ of size ${\displaystyle k}$, the marginal distribution ${\displaystyle Y|_{I}}$ is exactly equal to the uniform distribution over ${\displaystyle \{0,1\}^{k}}$. That is, for all such ${\displaystyle I}$ and all strings ${\displaystyle z\in \{0,1\}^{k}}$, the distribution ${\displaystyle Y}$ satisfies ${\displaystyle \Pr _{Y}(Y|_{I}=z)=2^{-k}}$.
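For small parameters, exact k-wise independence of a sample space given as a multiset can be verified by brute force over all size-k marginals; a minimal sketch over bit strings:

```python
from itertools import product, combinations

def is_k_wise_independent(samples, k):
    """Check exact k-wise independence of the uniform distribution over a
    multiset of n-bit samples: every size-k marginal must be uniform, i.e.
    every z in {0,1}^k must occur exactly a 2^-k fraction of the time."""
    n = len(samples[0])
    s = len(samples)
    for I in combinations(range(n), k):
        for z in product([0, 1], repeat=k):
            hits = sum(1 for x in samples if tuple(x[i] for i in I) == z)
            if hits * 2 ** k != s:
                return False
    return True

# The even-weight vectors in {0,1}^3 are 2-wise but not 3-wise independent.
even = [x for x in product([0, 1], repeat=3) if sum(x) % 2 == 0]
two_wise = is_k_wise_independent(even, 2)
three_wise = is_k_wise_independent(even, 3)
```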

#### Constructions and bounds

k-wise independent spaces are fairly well understood.

• A simple construction by Joffe (1974) achieves size ${\displaystyle n^{k}}$.
• Alon, Babai & Itai (1986) construct a k-wise independent space whose size is ${\displaystyle n^{k/2}}$.
• Chor et al. (1985) prove that no k-wise independent space can be significantly smaller than ${\displaystyle n^{k/2}}$.

#### Joffe's construction

Joffe (1974) constructs a ${\displaystyle k}$-wise independent space ${\displaystyle Y}$ over the finite field with some prime number ${\displaystyle n>k}$ of elements, i.e., ${\displaystyle Y}$ is a distribution over ${\displaystyle \mathbb {F} _{n}^{n}}$. The initial ${\displaystyle k}$ marginals of the distribution are drawn independently and uniformly at random:

${\displaystyle (Y_{0},\dots ,Y_{k-1})\sim \mathbb {F} _{n}^{k}}$.

For each ${\displaystyle i}$ with ${\displaystyle k\leq i<n}$, the marginal distribution of ${\displaystyle Y_{i}}$ is then defined as

${\displaystyle Y_{i}=Y_{0}+Y_{1}\cdot i+Y_{2}\cdot i^{2}+\dots +Y_{k-1}\cdot i^{k-1}\,,}$

where the calculation is done in ${\displaystyle \mathbb {F} _{n}}$. Joffe (1974) proves that the distribution ${\displaystyle Y}$ constructed in this way is ${\displaystyle k}$-wise independent as a distribution over ${\displaystyle \mathbb {F} _{n}^{n}}$. The distribution ${\displaystyle Y}$ is uniform on its support, and hence, the support of ${\displaystyle Y}$ forms a ${\displaystyle k}$-wise independent set. The support consists of the ${\displaystyle n^{k}}$ strings obtained by extending each prefix in ${\displaystyle \mathbb {F} _{n}^{k}}$ to a string of length ${\displaystyle n}$ using the deterministic rule above.
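Joffe's rule is straightforward to instantiate for small parameters; the sketch below builds the support for a prime ${\displaystyle n}$ and checks, for ${\displaystyle n=5}$ and ${\displaystyle k=2}$, that every pair of coordinates takes each value in ${\displaystyle \mathbb {F} _{5}\times \mathbb {F} _{5}}$ exactly once:

```python
from itertools import product
from collections import Counter

def joffe_support(n, k):
    """Support of Joffe's k-wise independent space over F_n (n prime):
    the first k coordinates (Y_0, ..., Y_{k-1}) range freely over F_n^k,
    and for k <= i < n the rule Y_i = sum_j Y_j * i^j (mod n) applies."""
    points = []
    for prefix in product(range(n), repeat=k):
        rest = [sum(prefix[j] * pow(i, j, n) for j in range(k)) % n
                for i in range(k, n)]
        points.append(tuple(prefix) + tuple(rest))
    return points

S = joffe_support(5, 2)  # support size n^k = 25
# Every pair of coordinates should take each of the 25 values exactly once.
pair_counts = [Counter((y[i], y[j]) for y in S)
               for i in range(5) for j in range(i + 1, 5)]
```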

### Almost k-wise independent spaces

A random variable ${\displaystyle Y}$ over ${\displaystyle \{0,1\}^{n}}$ is a ${\displaystyle \delta }$-almost k-wise independent space if, for all index sets ${\displaystyle I\subseteq \{1,\dots ,n\}}$ of size ${\displaystyle k}$, the restricted distribution ${\displaystyle Y|_{I}}$ and the uniform distribution ${\displaystyle U_{k}}$ on ${\displaystyle \{0,1\}^{k}}$ are ${\displaystyle \delta }$-close in 1-norm, i.e., ${\displaystyle {\Big \|}Y|_{I}-U_{k}{\Big \|}_{1}\leq \delta }$.
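The 1-norm condition can be checked directly for a sample space given as a multiset: restrict each sample to ${\displaystyle I}$, count the outcomes, and compare each frequency to ${\displaystyle 2^{-k}}$. A minimal sketch:

```python
from itertools import product

def l1_to_uniform(samples, I):
    """L1 distance between the marginal of the sample space on index set I
    and the uniform distribution on {0,1}^|I|."""
    k = len(I)
    counts = {z: 0 for z in product([0, 1], repeat=k)}
    for x in samples:
        counts[tuple(x[i] for i in I)] += 1
    return sum(abs(c / len(samples) - 2 ** -k) for c in counts.values())

# The full cube is at distance 0 from uniform on any I; a single fixed
# point is at L1 distance 1 from uniform on one coordinate.
cube = list(product([0, 1], repeat=3))
d_cube = l1_to_uniform(cube, [0, 1])
d_point = l1_to_uniform([(1, 0, 1)], [0])
```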

#### Constructions

Naor & Naor (1990) give a general framework for combining small k-wise independent spaces with small ${\displaystyle \epsilon }$-biased spaces to obtain ${\displaystyle \delta }$-almost k-wise independent spaces of even smaller size. In particular, let ${\displaystyle G_{1}:\{0,1\}^{h}\to \{0,1\}^{n}}$ be a linear mapping that generates a k-wise independent space and let ${\displaystyle G_{2}:\{0,1\}^{\ell }\to \{0,1\}^{h}}$ be a generator of an ${\displaystyle \epsilon }$-biased set over ${\displaystyle \{0,1\}^{h}}$. That is, when given a uniformly random input, the output of ${\displaystyle G_{1}}$ is a k-wise independent space, and the output of ${\displaystyle G_{2}}$ is ${\displaystyle \epsilon }$-biased. Then ${\displaystyle G:\{0,1\}^{\ell }\to \{0,1\}^{n}}$ with ${\displaystyle G(x)=G_{1}(G_{2}(x))}$ is a generator of a ${\displaystyle \delta }$-almost ${\displaystyle k}$-wise independent space, where ${\displaystyle \delta =2^{k/2}\epsilon }$.[3]

As mentioned above, Alon, Babai & Itai (1986) construct a generator ${\displaystyle G_{1}}$ with ${\displaystyle h={\tfrac {k}{2}}\log n}$, and Naor & Naor (1990) construct a generator ${\displaystyle G_{2}}$ with ${\displaystyle \ell =\log s=\log h+O(\log(\epsilon ^{-1}))}$. Hence, the concatenation ${\displaystyle G}$ of ${\displaystyle G_{1}}$ and ${\displaystyle G_{2}}$ has seed length ${\displaystyle \ell =\log k+\log \log n+O(\log(\epsilon ^{-1}))}$. In order for ${\displaystyle G}$ to yield a ${\displaystyle \delta }$-almost k-wise independent space, we need to set ${\displaystyle \epsilon =\delta 2^{-k/2}}$, which leads to a seed length of ${\displaystyle \ell =\log \log n+O(k+\log(\delta ^{-1}))}$ and a sample space of total size ${\displaystyle 2^{\ell }\leq \log n\cdot {\text{poly}}(2^{k}\cdot \delta ^{-1})}$.
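A toy instance of this composition (with hypothetical small parameters chosen for illustration, not the asymptotically efficient components): take for ${\displaystyle G_{1}}$ the linear 2-wise independent generator mapping ${\displaystyle y\in \{0,1\}^{3}}$ to the seven inner products ${\displaystyle \langle y,v\rangle }$ over all nonzero ${\displaystyle v\in \{0,1\}^{3}}$, and take for the ${\displaystyle \epsilon }$-biased set over ${\displaystyle \{0,1\}^{3}}$ the four strings {000, 001, 010, 100}, which have maximum bias ${\displaystyle \epsilon =1/2}$. The composed space must then be ${\displaystyle \delta }$-almost 2-wise independent with ${\displaystyle \delta \leq 2^{k/2}\epsilon =1}$:

```python
from itertools import product, combinations

# G1: linear 2-wise independent generator {0,1}^3 -> {0,1}^7.  Its output
# bits are <y, v> for the 7 distinct nonzero v; distinct nonzero vectors
# are pairwise linearly independent, which gives pairwise independence.
V = [v for v in product([0, 1], repeat=3) if any(v)]
def g1(y):
    return tuple(sum(a * b for a, b in zip(y, v)) % 2 for v in V)

# A small (hypothetical) eps-biased set over {0,1}^3 with eps = 1/2.
X2 = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
composed = [g1(x) for x in X2]

def l1_to_uniform(samples, I):
    """L1 distance between the marginal on I and uniform on {0,1}^|I|."""
    k = len(I)
    counts = {z: 0 for z in product([0, 1], repeat=k)}
    for x in samples:
        counts[tuple(x[i] for i in I)] += 1
    return sum(abs(c / len(samples) - 2 ** -k) for c in counts.values())

# Worst L1 distance to uniform over all pairs of output coordinates.
delta = max(l1_to_uniform(composed, I) for I in combinations(range(7), 2))
```

Here the composed space has only four points yet fools all pairs of coordinates up to the guaranteed bound, illustrating how the seed length of the composition depends only logarithmically on ${\displaystyle n}$.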

## Notes

1. ^ cf., e.g., Goldreich (2001)
2. ^ a b c d cf., e.g., p. 2 of Ben-Aroya & Ta-Shma (2009)
3. ^ Section 4 in Naor & Naor (1990)

## References

• Alon, Noga; Babai, László; Itai, Alon (1986), "A fast and simple randomized parallel algorithm for the maximal independent set problem" (PDF), Journal of Algorithms, 7 (4): 567–583, doi:10.1016/0196-6774(86)90019-2
• Alon, Noga; Goldreich, Oded; Håstad, Johan; Peralta, René (1992), "Simple Constructions of Almost k-wise Independent Random Variables" (PDF), Random Structures & Algorithms, 3 (3): 289–304, doi:10.1002/rsa.3240030308
• Ben-Aroya, Avraham; Ta-Shma, Amnon (2009), "Constructing Small-Bias Sets from Algebraic-Geometric Codes" (PDF), Proceedings of the 50th Annual Symposium on Foundations of Computer Science, FOCS 2009: 191–197, doi:10.1109/FOCS.2009.44, ISBN 978-1-4244-5116-6
• Chor, Benny; Goldreich, Oded; Håstad, Johan; Friedman, Joel; Rudich, Steven; Smolensky, Roman (1985), "The bit extraction problem or t-resilient functions", Proceedings of the 26th Annual Symposium on Foundations of Computer Science, FOCS 1985: 396–407, doi:10.1109/SFCS.1985.55, ISBN 0-8186-0644-4
• Goldreich, Oded (2001), Lecture 7: Small bias sample spaces
• Joffe, Anatole (1974), "On a Set of Almost Deterministic k-Independent Random Variables", Annals of Probability, 2 (1): 161–162, doi:10.1214/aop/1176996762
• Naor, Joseph; Naor, Moni (1990), "Small-bias Probability Spaces: efficient constructions and Applications", Proceedings of the 22nd annual ACM symposium on Theory of computing, STOC 1990: 213–223, doi:10.1145/100216.100244, ISBN 0897913612
• Ta-Shma, Amnon (2017), "Explicit, Almost Optimal, Epsilon-balanced Codes", Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017: 238–251, doi:10.1145/3055399.3055408, ISBN 9781450345286