
The reliability of Big Data


Abstract

The era of Big Data is underway. Compared with Small Data, machine learning, computerized databases, and other modern technologies make it possible for Big Data to handle massive datasets and reveal information in a way that individual bits of data cannot. The accuracy of random sampling depends on ensuring randomness when the samples are collected; analyzing only a limited number of data points means that errors may be amplified, which can reduce the accuracy of the overall results. Big Data, by contrast, gathers and analyzes massive datasets to produce results that could never be obtained from smaller quantities, such as addressing various societal ills and offering the potential for new insights into diverse fields. But because the data come from an environment of uncertainty and rapid change, bigger data may not be better data. Increasing the volume of data can introduce inaccuracy, but relaxing the standard of allowable error in return can yield more valuable information and better results. This article elaborates on the reliability of Big Data, and based on our analysis we construct a model for analyzing that reliability.
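The abstract's two quantitative claims, that a small or non-random sample amplifies error while a much larger but noisier dataset can still average error away, can be illustrated with a toy simulation. The sketch below is not from the paper; the population, sample sizes, and noise levels are all assumptions made for illustration.

```python
# A minimal sketch (not from the paper) of the abstract's two claims:
# (1) a small or non-random sample amplifies error, and
# (2) a far larger but noisier dataset can still estimate better.
import random

random.seed(0)

# Hypothetical population: one million values with true mean near 50.
population = [random.gauss(50, 10) for _ in range(1_000_000)]
true_mean = sum(population) / len(population)

# Small random sample: unbiased, but high-variance.
small_sample = random.sample(population, 100)
small_est = sum(small_sample) / len(small_sample)

# Small NON-random sample (only the largest values): bias dominates.
biased_sample = sorted(population)[-100:]
biased_est = sum(biased_sample) / len(biased_sample)

# "Big Data" regime: every point kept, but each carries extra
# measurement noise (a relaxed error standard); sheer volume
# averages the noise away.
noisy_all = [x + random.gauss(0, 20) for x in population]
big_est = sum(noisy_all) / len(noisy_all)

print(f"true mean           : {true_mean:.3f}")
print(f"small random sample : {small_est:.3f}")
print(f"small biased sample : {biased_est:.3f}")
print(f"noisy full data     : {big_est:.3f}")
```

Running this, the biased sample lands far from the true mean, the small random sample is close but variable from run to run, and the noisy full dataset comes closest of all, matching the abstract's trade-off between data volume and allowable error.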
