International Journal of Computer Science Engineering and Information Technology Research

AN EFFICIENT AND SECURE OTP ENABLED FILE SHARING SERVICE OVER BIG DATA ENVIRONMENT


Abstract

The high volume, velocity, and variety of data produced by diverse scientific and business domains challenge standard data-management solutions, requiring them to scale while ensuring security and dependability. A fundamental problem is where and how to store the vast amount of data that is continuously generated. Private infrastructures are the first option for many organizations; however, creating and maintaining data centers is expensive, requires a specialized workforce, and can create hurdles to sharing. Conversely, attributes such as cost-effectiveness, ease of use, and (almost) infinite scalability make public cloud services natural candidates for addressing data-storage problems. File sharing has been an essential part of this century: using various applications, files can be shared with large numbers of users. For storage, the Hadoop Distributed File System (HDFS) can be used. HDFS is mainly used for unstructured-data analysis and handles very large files, with metadata managed by a single server (the NameNode). Common sharing methods such as removable media, servers on a computer network, and hyperlinked documents on the World Wide Web are widely used. In the proposed work, files are merged using the MapReduce programming model on Hadoop. This process improves Hadoop's performance by rejecting files larger than the HDFS block size and reduces the memory required by the NameNode.
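The abstract's core idea, merging many small files into one container so the NameNode tracks a single entry instead of thousands, can be illustrated with a minimal local-filesystem sketch. The function names and the offset-based index are hypothetical illustrations, not the paper's implementation; a real Hadoop deployment would use SequenceFiles, HAR archives, or a MapReduce merge job:

```python
import os

def merge_small_files(paths, merged_path):
    """Concatenate many small files into one container file and return
    an index mapping filename -> (offset, length). One large file means
    one piece of NameNode metadata instead of one entry per small file."""
    index = {}
    offset = 0
    with open(merged_path, "wb") as out:
        for path in paths:
            with open(path, "rb") as f:
                data = f.read()
            index[os.path.basename(path)] = (offset, len(data))
            out.write(data)
            offset += len(data)
    return index

def read_merged(merged_path, index, name):
    """Random access to one original file inside the merged container."""
    offset, length = index[name]
    with open(merged_path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```

The index plays the role that a SequenceFile's key/value framing or a HAR archive's `_index` file plays in Hadoop: it restores per-file random access after the merge.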
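The title's OTP component is not detailed in the abstract. A standard building block for such a service is the HMAC-based one-time password of RFC 4226 (HOTP), sketched below; its use in the paper's file-sharing service is an assumption on my part:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password.

    The server and client share `secret` and a synchronized `counter`;
    each generated code is valid once, after which the counter advances.
    """
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # 20-byte HMAC-SHA-1
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226 §5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, counters 0, 1, 2 yield the specification's reference values 755224, 287082, and 359152, which makes the function easy to validate against the standard.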
