Generalized Group Testing

Abstract

In the problem of classical group testing one aims to identify a small subset (of size $d$) of diseased individuals/defective items in a large population (of size $n$). This process is based on a minimal number of suitably-designed group tests on subsets of items, where the test outcome is positive iff the given test contains at least one defective item. Motivated by physical considerations, such as scenarios with imperfect test apparatus, we consider a generalized setting that includes as special cases multiple other group-testing-like models in the literature. In our setting the test outcome is governed by an arbitrary monotonically increasing (stochastic) test function $f(\cdot)$, with the test outcome being positive with probability $f(x)$, where $x$ is the number of defectives tested in that pool. This formulation subsumes as special cases a variety of noiseless and noisy group-testing models in the literature. Our main contributions are as follows. Firstly, for any monotone test function $f(\cdot)$ we present a non-adaptive scheme that with probability $1-\varepsilon$ identifies all defective items. Our scheme requires at most $\mathcal{O}\left(\Psi(f)\, d\log\left(\frac{n}{\varepsilon}\right)\right)$ tests, where $\Psi(f)$ is a suitably defined "sensitivity parameter" of $f(\cdot)$, and is never larger than $\mathcal{O}(d^{1+o(1)})$, but indeed can be substantially smaller for a variety of $f(\cdot)$. Secondly, we argue that any non-adaptive group testing scheme needs at least $\Omega\left((1-\varepsilon)\,\psi(f)\, d\log\left(\frac{n}{d}\right)\right)$ tests to ensure high-reliability recovery. Here $\psi(f)$ is a suitably defined "concentration parameter" of $f(\cdot)$, and $\psi(f)\in\Omega(1)$. Thirdly, we prove that our sample-complexity bounds for generalized group testing are information-theoretically near-optimal for a variety of sparse-recovery group-testing models in the literature.
That is, for any "noisy" test function $f(\cdot)$ (i.e., $0 < f(0) \leq f(d) < 1$), and for a variety of "(one-sided) noiseless" test functions $f(\cdot)$ (i.e., either $f(0)=0$, or $f(d)=1$, or both) studied in the literature, we show that $\frac{\Psi(f)}{\psi(f)} \in \Theta(1)$. As a by-product we tightly characterize the heretofore open information-theoretic order-wise sample complexity of the well-studied model of threshold group testing. For general (near-)noiseless test functions $f(\cdot)$ we show that $\frac{\Psi(f)}{\psi(f)} \in \mathcal{O}(d^{1+o(1)})$. We also demonstrate a "natural" test function $f(\cdot)$ whose sample complexity scales "extremally" as $\Theta(d^{2}\log n)$, rather than $\Theta(d\log n)$ as in the case of classical group testing. Some of our techniques may be of independent interest. In particular, our achievability proof requires a delicate saddle-point approximation; our impossibility proof relies on a novel bound relating the mutual information of a pair of random variables to the mean and variance of a specific function; and as a by-product of the proof showing that our sample-complexity upper and lower bounds are close, we derive novel structural results about monotone functions.
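The test model described in the abstract can be sketched as a short simulation: a single pool test returns a positive outcome with probability $f(x)$, where $x$ is the number of defectives in the pool. This is only an illustration of the test model, not the paper's non-adaptive achievability scheme; the example test functions `classical` and `noisy` below are hypothetical instances chosen for concreteness.

```python
import random

def generalized_group_test(pool, defectives, f):
    """Simulate one generalized group test on a pool of items.

    The outcome is positive with probability f(x), where x is the
    number of defectives present in the pool. (Illustrative model
    only; not the paper's test-design scheme.)
    """
    x = len(set(pool) & set(defectives))
    return random.random() < f(x)

# Classical noiseless group testing: positive iff the pool contains
# at least one defective (f(0) = 0, f(x) = 1 for x >= 1).
classical = lambda x: 1.0 if x >= 1 else 0.0

# A hypothetical "noisy" test function with 0 < f(0) <= f(d) < 1:
# false-positive probability 0.05, detection probability 0.9.
noisy = lambda x: 0.05 if x == 0 else 0.9
```

With the `classical` test function the simulation is deterministic: a pool containing a defective always tests positive, and a defective-free pool always tests negative; with `noisy` the same pool can produce either outcome across repeated tests.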
