Abstract: Smart contracts have greatly improved the services and capabilities of blockchain, but their code-based nature has made them the weakest link in blockchain security. Efficient vulnerability detection for smart contracts is therefore key to ensuring the security of blockchain systems. Oriented to Ethereum smart contracts, this study addresses the problems of redundant inputs and low coverage in smart contract fuzzing. In this paper, a taint analysis method based on the EVM is proposed to reduce invalid inputs, a dangerous-operation database is designed to identify dangerous inputs, and a genetic algorithm is used to optimize the code coverage of the inputs; together these constitute a fuzzing framework for smart contracts. Finally, comparisons with Oyente and ContractFuzzer demonstrate the performance and efficiency of the framework.
Abstract: On-site programming big data refers to the massive data generated in the process of software development, characterized by real-time arrival, complexity, and high processing difficulty. Data cleaning is therefore essential for on-site programming big data. Duplicate data detection is an important step in data cleaning, as it saves storage resources and enhances data consistency. To address the insufficiency of the traditional Sorted Neighborhood Method (SNM) and the difficulty of detecting duplicates in high-dimensional data, an optimized algorithm based on random forests with a dynamic, adaptive window size is proposed. The efficiency of the algorithm is improved by refining the key-selection method, reducing the dimensionality of the data set, and using an adaptive variable-size sliding window. Experimental results show that the improved SNM algorithm exhibits better performance and achieves higher accuracy.
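For readers unfamiliar with the baseline being optimized, the following is a minimal sketch of the basic Sorted Neighborhood Method with a fixed window. The paper's actual contributions (random-forest key selection, dimensionality reduction, adaptive window size) are not reproduced; the key function, window size, and similarity test below are illustrative assumptions.

```python
# Basic SNM: sort records by a key, then compare only records that
# fall inside a small sliding window over the sorted order.

def snm_duplicates(records, key=lambda r: r, window=3, similar=None):
    """Return pairs of records judged duplicates within a sliding window."""
    if similar is None:
        # Naive similarity: exact match. A real system would use edit
        # distance or field-wise comparison instead (an assumption here).
        similar = lambda a, b: a == b
    ordered = sorted(records, key=key)      # 1. sort by a discriminating key
    pairs = []
    for i, rec in enumerate(ordered):       # 2. slide a fixed-size window
        for j in range(i + 1, min(i + window, len(ordered))):
            if similar(rec, ordered[j]):    # 3. compare only within the window
                pairs.append((rec, ordered[j]))
    return pairs

dupes = snm_duplicates(["alice", "bob", "alice", "carol"], window=2)
```

The adaptive variant described in the abstract would grow or shrink `window` based on observed match density instead of keeping it fixed.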
Abstract: Chemical spectral analysis is currently undergoing a revolution and drawing much attention from scientists owing to machine learning algorithms, in particular convolutional networks. This paper outlines the major machine learning and especially deep learning methods that contribute to interpreting chemical images, and surveys current applications, developments, and breakthroughs across different spectral characterizations. The reviewed literature is briefly categorized by application apparatus: X-ray spectra, UV-Vis-IR spectra, microscopy, Raman spectra, and photoluminescence spectra. Ending with an overview of the current state of this research area, we provide unique insights and promising directions for the chemical imaging field to fully embrace machine learning.
Abstract: In large-scale image retrieval, deep features extracted by a Convolutional Neural Network (CNN) can express more image information than those extracted by traditional manual methods. However, the deep feature dimensions obtained by a Deep Convolutional Neural Network (DCNN) are too high and redundant, which leads to low retrieval efficiency. We propose a novel image retrieval method that combines deep feature selection with an improved DCNN and a hash transform based on high-dimensional feature reduction to obtain low-dimensional deep features and realize efficient image retrieval. Firstly, the improved network builds a deeper and broader network on top of an existing deep model by adding multiple groups of different branches; it is therefore named DFS-Net (Deep Feature Selection Network). The adaptively learned deep features of the network effectively alleviate over-fitting and improve the feature expression of image content. Secondly, the information gain rate method is used to filter the extracted deep features, reducing the feature dimension while keeping information loss small. In the last step, a hash transform sparsifies and binarizes this representation to reduce computation and storage pressure while maintaining retrieval accuracy. Finally, the scheme is built on the well-known ResNet50, InceptionV3, and MobileNetV2 models and evaluated in depth on the CIFAR10 and Caltech256 datasets. The experimental results show that the novel method can train deep features with stronger recognition ability on limited training samples and effectively improve the accuracy and efficiency of image retrieval.
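The final hash-transform step can be illustrated with a small sketch: real-valued deep features are binarized so retrieval compares compact codes by Hamming distance. Thresholding at the per-dimension mean is an assumption for illustration; the paper's exact transform may differ.

```python
import numpy as np

def binarize(features):
    """features: (n_samples, d) real-valued deep features -> binary codes."""
    thresholds = features.mean(axis=0)          # per-dimension threshold
    return (features > thresholds).astype(np.uint8)

def hamming(a, b):
    """Cheap comparison of two binary codes."""
    return int(np.count_nonzero(a != b))

codes = binarize(np.array([[0.2, 0.9], [0.8, 0.1], [0.3, 0.7]]))
```

At query time, a query feature vector would be binarized the same way and ranked against all stored codes by `hamming`, which is far cheaper than Euclidean distance over the original high-dimensional features.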
Abstract: Heart rate is important data reflecting human vital signs and an important reference index describing a person's physical and mental state. Currently, widely used heart rate measurement devices require direct contact with a person's skin, which is not suitable for people with burns or delicate skin, newborns, or the elderly. Research on non-contact heart rate measurement is therefore of great significance. Based on the basic principle of photoplethysmography (PPG), we use the camera of computer equipment to capture face images, detect the face region accurately, and detect multiple faces in the image with a multi-target tracking algorithm. The face image is then segmented into regions to acquire the signal from the region of interest. Finally, peak detection, Fourier analysis, and wavelet analysis are used to detect the frequency of the PPG and ECG signals. The experimental results show that heart rate information can be detected quickly and accurately even when monitoring multiple face targets.
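The Fourier-analysis step can be sketched as follows: take the mean intensity of the region of interest over time and find the dominant frequency in the plausible heart rate band. The 30 fps sampling rate and the 0.7-4 Hz band (42-240 bpm) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def heart_rate_bpm(signal, fs):
    """signal: 1-D mean ROI intensity over time; fs: frames per second."""
    signal = signal - np.mean(signal)             # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)        # physiological band
    peak = freqs[band][np.argmax(spectrum[band])] # dominant frequency
    return 60.0 * peak                            # Hz -> beats per minute

t = np.arange(300) / 30.0                         # 10 s of samples at 30 fps
bpm = heart_rate_bpm(np.sin(2 * np.pi * 1.2 * t), fs=30.0)
```

A 1.2 Hz synthetic pulse maps to about 72 bpm; a real pipeline would apply this per tracked face after the region segmentation described above.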
Abstract: Scene text detection is an important step in a scene text reading system. Two problems remain in existing text detection methods: (1) the small receptive field of shallow convolutional layers is not sufficiently sensitive to the target area in the image; (2) deep convolutional layers with large receptive fields lose a lot of spatial feature information. Detecting scene text therefore remains a challenging issue. In this work, we design an effective text detector named Adaptive Multi-Scale HyperNet (AMSHN) to improve text detection performance. Specifically, AMSHN enhances the sensitivity to target semantics in shallow features with a new attention mechanism that strengthens regions of interest in the image and weakens regions of no interest. In addition, it reduces the loss of spatial features by fusing features along multiple paths, which significantly improves text detection performance. Experimental results on the Robust Reading Challenge on Reading Chinese Text on Signboard (ReCTS) dataset show that the proposed method achieves state-of-the-art results, which demonstrates the ability of our detector in both specialized and general applications.
Abstract: With the continuous development of face recognition networks, the choice of loss function plays an increasingly important role in improving accuracy. A loss function for face recognition needs to minimize the intra-class distance while expanding the inter-class distance. One mainstream optimization approach is to add penalty terms, such as an orthogonality loss, to further constrain the original loss function. Another is to optimize with losses based on an angular/cosine margin. A third is triplet loss, together with a new type of joint optimization based on HST Loss and ACT Loss. In this paper, based on these three methods with good practical performance and the joint optimization method, various loss functions are thoroughly reviewed.
Abstract: Privacy protection is a hot research topic in the information security field. An improved XGBoost algorithm is proposed to protect privacy in classification tasks. By incorporating differential privacy, the XGBoost algorithm can improve classification accuracy while protecting private information. When a CART regression tree is used to build a single decision tree, noise is added according to the Laplace mechanism. Compared with the random forest algorithm, this algorithm reduces computation cost and prevents overfitting to a certain extent. The experimental results show that the proposed algorithm is more effective than other traditional algorithms while protecting the private information in the training data.
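The Laplace mechanism mentioned above can be sketched in a few lines: noise drawn from a Laplace distribution, scaled to sensitivity divided by the privacy budget ε, is added to a tree statistic such as a leaf value. The sensitivity, ε, and the way the budget is split across trees are the paper's design choices and are only placeholders here.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Return value + Laplace noise with scale b = sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

rng = np.random.default_rng(0)
# Perturb a hypothetical leaf prediction before it is published in the tree.
noisy_leaf = laplace_mechanism(3.5, sensitivity=1.0, epsilon=0.5, rng=rng)
```

A smaller ε yields larger noise and stronger privacy; the noisy statistic remains unbiased, which is why accuracy can stay acceptable.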
Abstract: With the development of computation technology, augmented reality (AR) is widely applied in many fields, as is image recognition. However, AR applications on mobile platforms were underdeveloped in past decades owing to the limited capability of mobile processors. In recent years, the performance of mobile processors has advanced rapidly, making them comparable to desktop processors. This paper proposes and realizes an AR system for the Android mobile platform based on image recognition, built with the EasyAR engine and the Unity 3D development tools. In this system, image recognition can be done locally and/or in the cloud. Test results show that cloud-based recognition is more efficient and accurate than local recognition for mobile AR when many images must be recognized at the same time.
Abstract: As machine learning moves into high-risk and sensitive applications such as medical care, autonomous driving, and financial planning, interpreting the predictions of black-box models becomes key to whether people can trust machine learning decisions. Interpretability relies on providing users with additional information or explanations to improve model transparency and help them understand model decisions. However, this information inevitably exposes the dataset or model to the risk of privacy leaks. We propose a strategy to reduce model privacy leakage for instance-based interpretability techniques. The operation proceeds as follows. First, the user inputs data into the model, and the model computes the prediction confidence of the user's data and returns the prediction results; meanwhile, the model obtains the prediction confidences of the interpretation data set. Finally, the instance whose confidence has the smallest Euclidean distance to the confidence of the user's data is selected as the explainable data. Experimental results show that the Euclidean distance between the confidence of the interpretation data and that of the prediction data is very small, indicating that the model's prediction of the interpreted data is very similar to its prediction of the user's data. Finally, we demonstrate the accuracy of the explanatory data by measuring the agreement between the real and predicted labels of the interpreted data and the applicability to different network models. The results show that the interpretation method has high accuracy and wide applicability.
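The selection step described above reduces to a nearest-neighbor search in confidence space, which can be sketched directly. The toy confidence vectors are illustrative assumptions; in the paper they come from the model's predictions over a pre-scored interpretation set.

```python
import numpy as np

def pick_explanation(user_conf, interp_confs):
    """Return the index of the interpretation instance whose confidence
    vector is closest (Euclidean distance) to the user's confidence."""
    dists = np.linalg.norm(interp_confs - user_conf, axis=1)
    return int(np.argmin(dists))

# Confidence vectors for three pre-scored interpretation instances.
interp = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
idx = pick_explanation(np.array([0.25, 0.75]), interp)
```

Because only the confidence vectors of a fixed interpretation set are revealed, the user's raw data never has to be exposed as part of the explanation.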
Abstract: At present, research on blockchain is very popular, but practical applications of blockchain are few. The main reason is that the concurrency of blockchain is not sufficient to support application scenarios. Applications such as Intervalue have since increased the concurrency of blockchain transactions. However, owing to network bandwidth and algorithm performance problems, broadcast storms still occur and affect the normal operation of nodes across the whole network. Detecting a broadcast storm relies on the node itself, which may be very slow, and even if developers debug the corresponding code, they cannot conduct an effective test across the whole network. The broadcast storm problem mainly occurs in scenarios with large transaction volumes, such as the financial industry, where transaction concurrency surges at certain times. Without an effective algorithm to handle it, a broadcast storm will be triggered and the whole network will be paralyzed. To solve this problem, this paper combines blockchain, peer-to-peer networking, artificial intelligence, and other technologies, and proposes a broadcast storm detection and processing method based on situation awareness. The purpose is to cut off the further spread of broadcast storms at the node itself and maintain the normal operation of nodes across the whole network.
Abstract: In order to quickly and accurately find the perpetrator of a network crime, a rapid detection method for users with abnormal behaviors is proposed based on user portrait technology. The method constructs in advance an abnormal behavior rule base covering various kinds of abnormal behaviors, and builds, for network users exhibiting abnormal behaviors, user portraits that include basic attribute tags, behavior attribute tags, and abnormal-behavior similarity tags. When a network crime occurs, the corresponding tag values are first retrieved from all user portraits according to the category of the crime. Then the Naive Bayes method is used to match each user portrait and quickly locate the most likely criminal suspects. If no suspect is found, all users are audited comprehensively by matching against the abnormal behavior rule base. The experimental results show that the accuracy rate of this method for fast detection of network crimes is 95.9%, and the audit time is shortened to 1/35 of that of the conventional behavior audit method.
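The Naive Bayes matching step can be sketched as ranking users by a log-probability score over their portrait tags, assumed conditionally independent. The tag names, likelihoods, and prior below are invented for illustration; they stand in for the paper's rule base and are not its actual model.

```python
import math

def nb_score(portrait, likelihoods, prior):
    """log P(suspect) + sum over tags of log P(tag value | suspect)."""
    score = math.log(prior)
    for tag, value in portrait.items():
        # Small floor acts as smoothing for unseen tag values.
        score += math.log(likelihoods.get((tag, value), 1e-6))
    return score

# Hypothetical per-tag likelihoods for the crime category being matched.
likelihoods = {("night_login", True): 0.7, ("night_login", False): 0.3,
               ("bulk_download", True): 0.6, ("bulk_download", False): 0.4}
users = {"u1": {"night_login": True, "bulk_download": True},
         "u2": {"night_login": False, "bulk_download": False}}
top = max(users, key=lambda u: nb_score(users[u], likelihoods, prior=0.01))
```

Ranking all portraits by this score and taking the highest-scoring users corresponds to "quickly locating the most likely suspects" in the abstract.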
Abstract: Object detection has been studied for many years, and convolutional neural networks have made great progress in the accuracy and speed of object detection. However, owing to the low resolution of small objects and their fuzzy feature representations, effectively detecting small objects in images remains a challenge. Existing detectors address small objects in one of two ways: using high-resolution images as input, or increasing the depth of the CNN; both undoubtedly increase computation and time cost. In this paper, based on the RefineDet framework, we propose the network structure RF2Det, which introduces the Receptive Field Block to address small object detection and achieve a balance between speed and accuracy. At the same time, we propose a Medium-level Feature Pyramid Network, which combines appropriate high-level context features with low-level features so that the network can use both low-level and high-level features for multi-scale target detection, improving the accuracy of small target detection based on low-level features. Extensive experiments on the MS COCO dataset demonstrate that, compared to other state-of-the-art methods, our proposed method shows a significant performance improvement in detecting small objects.
Abstract: In traditional secret sharing schemes, all shared images containing secret segments are needed to recover the secret information. In this paper, a reversible data hiding scheme based on Shamir secret sharing is used, so that the secret information can be recovered even if only part of the encrypted shares is held. This reduces the vulnerability of traditional encrypted sharing schemes to attack. Before the secret information is uploaded to the cloud server, its n encrypted segments are embedded into n different pictures. The receiver downloads t of the images from the cloud server (t ≤ n) and recovers the secret information.
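A minimal sketch of the underlying primitive, Shamir's (t, n) threshold scheme over a prime field: the secret is the constant term of a random degree t-1 polynomial, shares are polynomial evaluations, and any t shares recover the secret by Lagrange interpolation at x = 0. The small prime is for illustration; the paper's additional step of embedding each share into an image is not shown.

```python
import random

PRIME = 2_147_483_647  # Mersenne prime 2^31 - 1, field for the arithmetic

def split(secret, t, n, rng=random):
    """Split secret into n shares; any t of them suffice to recover it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME)
                         for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456, t=3, n=5)
restored = recover(shares[:3])   # any 3 of the 5 shares suffice
```

Any subset of t shares reconstructs the same secret, while t-1 shares reveal nothing about it, which is what lets the receiver ignore missing or corrupted images.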
Abstract: The term "steganography" is derived from the Greek words steganos, meaning "covered or concealed", and graphein, meaning "writing". The primary motivation for steganography is to prevent unauthorized individuals from obtaining the hidden data. To fulfill the fundamental aim of steganographic procedures, there should be no significant change in the cover file. The Least Significant Bit (LSB) technique, one of the methods for concealing data in digital images, is examined in this work. We propose a data hiding procedure whose goal is to limit the changes made to the cover file while hiding the data with the LSB technique, producing the best possible cover and making it difficult to recover the concealed data. An RGB (Red, Green, and Blue) pixel-value-based steganography technique is proposed in this paper. The specialty of this algorithm is that, unlike other steganography algorithms, we do not change the pixels unless absolutely necessary.
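Plain LSB embedding, the baseline the abstract builds on, can be sketched in a few lines. The "change only when necessary" idea appears naturally: a channel value is modified only if its lowest bit already differs from the message bit. A list of bytes stands in for real image channel data.

```python
def lsb_embed(pixels, message_bits):
    """Write message bits into the least significant bit of each value."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit       # touch only the lowest bit
    return out

def lsb_extract(pixels, n_bits):
    """Read back the lowest bit of the first n_bits values."""
    return [p & 1 for p in pixels[:n_bits]]

stego = lsb_embed([200, 201, 202, 203], [1, 0, 1, 1])
bits = lsb_extract(stego, 4)
```

Each embedded bit changes a channel value by at most 1, which is why the cover image remains visually indistinguishable from the original.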
Abstract: This paper mainly introduces an intelligent classification trash can dedicated to solving indoor household garbage classification. The trash can uses an AT89S52 single-chip microcomputer as the main control chip. The microcomputer realizes intelligent garbage classification by controlling the voice module, mechanical drive module, and infrared detection module. The use of voice control and infrared detection technology gives the trash can voice control and overflow alarm functions. The design has the advantages of simple and intelligent operation, simple structure, stable performance, and low cost; it can further effectively isolate people from garbage, reduce human exposure to bacteria, and is a feasible solution for classification at the source of garbage.
Abstract: With the advent of the era of big data, the development of science and technology has produced a large number of electronic archives. How to guarantee the evidential characteristics of electronic archives in the big data environment has attracted wide attention in the academic community. Provenance is an important technical means of guaranteeing the certification of electronic archives. In this paper, knowledge graph technology is used to provide concept-level provenance of electronic archives in the big data environment. It not only enriches provenance methods, but also guarantees the certification of electronic archives in the big data environment.
Abstract: Behind the popularity of multimedia technology, disputes over image copyright are getting worse. Among digital watermark technologies for preventing copyright infringement, watermarking is considered an important technology for overcoming data protection problems and verifying data ownership. Among the many digital watermarking technologies, zero-watermarking has been favored in recent years. However, existing zero-watermark schemes often need a trusted third party to store the watermarks, which can make the data overly centralized, lower data storage security, and raise copyright registration costs. The decentralization and tamper-proof nature of blockchain technology offer new methods for image copyright protection. This paper studies the role of the zero-watermark algorithm in image copyright and its complete storage and certification scheme, proposes a zero-watermark image protection framework based on blockchain, and builds a system according to the framework. Combining blockchain and zero-watermarking technology, the framework uses IPFS (InterPlanetary File System) to solve the problem of efficiently storing and sharing large files on the blockchain. In addition, the registration of user copyright information, image query, and image trading in the system are realized with smart contracts, which removes the need for a trusted third party. Experiments show that the scheme is feasible and robust to various attacks.
Abstract: This paper focuses on forward error correction (FEC) in the power line communication (PLC) standard: the determination of the basic parameters of the RS-convolutional code, Turbo code, and LDPC code, and the corresponding encoding and decoding algorithms. A simulation experiment is designed for a narrow-band power line communication system based on OFDM. Coding with the RS-convolutional code, Turbo code, and LDPC code is compared, and it is further determined which encoding method is more suitable for power line communication in China.
Abstract: Nowadays, machine learning (ML) algorithms cannot succeed without the availability of an enormous amount of training data. The data may contain sensitive information, which needs to be protected. Membership inference attacks attempt to find out whether a target data point was used to train a certain ML model, with resulting security and privacy implications. The leakage of membership information can vary from one machine learning algorithm to another. In this paper, we conduct an empirical study exploring the performance of membership inference attacks against four different machine learning algorithms, namely K-nearest neighbors, random forest, support vector machine, and logistic regression, using three datasets. Our experiments revealed which machine learning model is most immune to privacy attacks. Additionally, we examined the effects of such attacks when varying the dataset size. Based on our observations of the experimental results, we propose a defense mechanism that is less prone to privacy attacks and demonstrate its effectiveness through an empirical evaluation.
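The core intuition behind membership inference can be shown with a minimal confidence-thresholding sketch: models tend to be over-confident on their training data, so points predicted with unusually high confidence are guessed to be members. The threshold and toy confidence values are illustrative assumptions; the paper evaluates full attacks across four algorithms and three datasets.

```python
import numpy as np

def infer_membership(confidences, threshold=0.9):
    """confidences: max predicted probability per queried point.
    Returns a boolean guess per point: True = guessed training member."""
    return confidences >= threshold

# Hypothetical query results: two over-confident points, two uncertain ones.
conf = np.array([0.99, 0.55, 0.97, 0.61])
guesses = infer_membership(conf)
```

Defenses of the kind the abstract proposes typically aim to shrink the confidence gap between members and non-members, which makes this test uninformative.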
Abstract: With the increase of software complexity, the security threats faced by software are also increasing day by day, so more and more attention is being paid to the mining of software vulnerabilities. Because source code has rich semantics and strong comprehensibility, source code vulnerability mining has been widely applied and has achieved significant development. However, owing to the protection of commercial interests and intellectual property rights, source code is often difficult to obtain; research on vulnerability mining technology for binary code therefore has strong practical value. Based on an investigation of related technologies, this article first introduces current typical binary vulnerability analysis frameworks and then briefly introduces the research background and significance of intermediate languages. With the rise of artificial intelligence, a large number of machine learning methods have been tried on the problem of binary vulnerability mining. This article divides current binary vulnerability mining technology into traditional mining technology and machine learning mining technology, introduces their basic principles, research status, and existing problems, and briefly summarizes them. Finally, based on existing research work, this article puts forward prospects for future research on binary program vulnerability mining technology.
Abstract: Cloud computing extends its usability to various fields that utilize data and store it in a common space required for computing and analysis, as with IoT devices. These devices utilize the cloud for storing and retrieving data, since they are not capable of storing and processing data on their own. Cloud computing provides various services to users, such as IaaS, PaaS, and SaaS. A major drawback faced by cloud computing is that data stored with cloud services could be accessed by all users related to the cloud. Public Key Encryption with Keyword Search (PEKS) provides security against untrustworthy third-party search over publicly encrypted keys without revealing the data's contents. However, security concerns for PEKS arise when inside keyword guessing attacks (IKGA) are identified in the system, in which the untrusted server guesses the keyword in the trapdoor. This issue can be addressed with algorithms such as Certificateless Hashed Public Key Authenticated Encryption with Keyword Search (CL-HPAEKS), which utilizes Modified Elliptic Curve Cryptography (MECC) along with the Mutation Centred Flower Pollination Algorithm (CM-FPA), used to enhance performance through key optimization. The additional use of the Message Digest 5 (MD5) hash function further enhances the security level of the system. The proposed system achieves a security level performance of 96 percent, and the effort consumed by the algorithm is less than that of other encryption techniques.
Abstract: With the development of the Global Energy Internet, quantum steganography has been used for information hiding to improve copyright protection. Based on a secure quantum communication protocol and flexible steganography, secret information is embedded in quantum images for covert communication. Under the premise of guaranteeing the quality of the quantum image, the secret information is transmitted safely with good imperceptibility. A novel quantum watermark algorithm is proposed in this paper, based on the shared group key value of the communicating parties and the transmission of selected carrier image pixels whose gray values exceed 8 bits. According to the shared group key value of the communicating parties, the two effective Bell-state qubits of the carrier quantum image are replaced with the secret information. Compared with existing algorithms, the new algorithm improves the robustness of the secret information itself and the execution efficiency of its embedding and extraction. Experimental simulation and performance analysis also show that the novel algorithm performs excellently in transparency, robustness, and embedding capacity.
Abstract: The widespread acceptance of machine learning, particularly of neural networks, has led to great success in many areas, such as recommender systems, medical predictions, and recognition. It is becoming possible for any individual with a personal electronic device and Internet access to complete complex machine learning tasks using cloud servers. However, it must be taken into consideration that clients' data may be exposed to cloud servers. Recent work has preserved data confidentiality by outsourcing services using homomorphic encryption schemes. But these architectures assume honest-but-curious cloud servers and are unable to tell whether a cloud server has completed the computation delegated to it. This paper proposes a verifiable neural network framework that addresses both data confidentiality and training integrity in machine learning. Specifically, we first leverage homomorphic encryption and an extended diagonal packing method to realize a privacy-preserving neural network model efficiently; it enables the user to train over encrypted data, thereby protecting the user's private data. Then, considering that malicious cloud servers are likely to return a wrong result to save cost, we integrate a training validation module, Proof-of-Learning, a strategy for verifying the correctness of computations performed during training. Moreover, we introduce practical Byzantine fault tolerance to complete the verification process without a verification center. Finally, we conduct a series of experiments to evaluate the performance of the proposed framework; the results show that our construction supports verifiable training of the privacy-preserving neural network based on homomorphic encryption without introducing much computational cost.
Abstract: In recent years, machine learning has become more and more popular, and the continuous development of deep learning technology in particular has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model from a large amount of data to complete various tasks, the model is vulnerable to artificially modified examples. This technique is called an adversarial attack, and the examples are called adversarial examples. The existence of adversarial attacks poses a great threat to the security of neural networks. Following a brief introduction to the concept and causes of adversarial examples, this paper analyzes the main ideas behind adversarial attacks and studies representative classical adversarial attack methods as well as detection and defense methods.
Abstract: Firstly, this paper expounds the conceptual connotation of in-service assessment in the new system; it then applies modeling and simulation to the field of in-service assessment, establishing a conceptual model of in-service assessment and its process; finally, it analyzes the application of modeling and simulation in the specific links of in-service assessment.
Abstract: Deep learning technologies, especially generative adversarial networks, are widely used in the fields of face image tampering and forgery. Forensics researchers have proposed a variety of passive forensic and related anti-forensic methods for image tampering and forgery, especially for face images, but an overview of anti-forensic methods is still lacking at this stage. Therefore, this paper systematically discusses anti-forensic methods for face image tampering and forgery. Firstly, it expounds the relevant background, including the tampering and forgery methods and forensic schemes for face images. The former mainly includes four aspects: conventional processing, fake face generation, face editing, and face swapping; the latter mainly comprises forensic means based on the spatial and frequency domains using deep learning technology. Then, this paper divides the existing anti-forensic works into three categories according to their characteristics, namely hiding operation traces, forgery reconstruction, and adversarial attack. Finally, it summarizes the limitations and prospects of existing anti-forensic technologies.
Abstract: With the rapid development of the Internet of Things (IoT), all kinds of data are increasing exponentially, and data storage and computing on cloud servers are increasingly restricted by hardware. This has prompted the development of fog computing, which places the calculation and storage of data at the edge of the network so that the entire IoT system can run more efficiently. The main function of fog computing is to reduce the burden on cloud servers: by placing fog nodes in the IoT network, the data in IoT devices can be transferred to the fog nodes for storage and calculation. Much of the traffic collected by IoT devices is malicious and contains a large number of malicious attacks. Because IoT devices have neither strong computing power nor the ability to detect malicious traffic, a system that detects malicious attacks needs to be deployed on the fog node. In response to this situation, we propose an intrusion detection system based on a distributed ensemble design. The system mainly uses a Convolutional Neural Network (CNN) as the first-level learner; at the second level, a random forest performs the final classification of the prediction results obtained at the first level. This paper uses the UNSW-NB15 dataset to evaluate the performance of the model. Experimental results show that the model has good detection performance for most attacks.
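The two-level data flow can be sketched abstractly: first-level learners (CNNs in the paper) each emit a class-probability vector per sample, and a second-level learner classifies from those outputs. Here the second level is a simple averaging rule standing in for the random forest, purely to show the wiring; the real CNN and random forest models are assumed, not reproduced.

```python
import numpy as np

def second_level(first_level_probs):
    """first_level_probs: (n_learners, n_classes) outputs for one sample.
    Stand-in for the random forest: pick the class with the highest
    average first-level probability."""
    return int(np.argmax(first_level_probs.mean(axis=0)))

# Hypothetical outputs of three first-level learners for one flow record
# over two classes (0 = attack, 1 = benign).
probs = np.array([[0.8, 0.2],
                  [0.6, 0.4],
                  [0.3, 0.7]])
label = second_level(probs)
```

In the paper the second level is trained on these stacked outputs rather than averaging them, which lets it learn which first-level learner to trust for which attack type.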
Abstract: With the rapid development of various Information Technology applications, big data are increasingly generated by social network services (SNS). The designers and providers of SNS distribute different client applications for PC, mobile phone, IPTV, etc., so that users can obtain related services via mobile or traditional Internet. Good scalability and considerably short time delay are important indices for evaluating social network systems. Investigating and mining the principles of users' behaviors is therefore an important issue that can guide service providers in establishing optimal SNS systems. On the basis of analyzing the characteristics of social network systems, this paper constructs a Stochastic Petri Net (SPN) model describing the behaviors of three SNS users. Moreover, the scalability of users' behaviors in SNS is studied by extending the three-user SPN model to a four-user one. Furthermore, average time delay is chosen as the performance index to evaluate these two SPN models with the Stochastic Petri Net Package (SPNP) 6.0. For different values of the number of connections, traffic load, and buffer size, various trends and numerical results are derived. The modeling and simulation methodology in this paper can be further used to study the performance of SNS.
Abstract: JPEG (Joint Photographic Experts Group) is currently the most widely used image format on the Internet, and existing cases show that many tampering operations occur on JPEG images. The basic process of such an operation is that the JPEG file is first decompressed and modified in the spatial domain, and the tampered image is then compressed and saved in JPEG format, so the tampered image may be compressed several times. Double-compression detection of JPEG images can therefore be an important part of determining whether an image has been tampered with, and the study of double JPEG compression anti-detection can further advance detection work. In this paper, we review the recent literature in the field of double JPEG compression detection from two aspects, namely whether the quantization table remains unchanged or is inconsistent during double JPEG compression; we also introduce some representative methods of double JPEG anti-detection from recent years. Finally, we analyze the problems existing in the field of double JPEG compression and give an outlook on future development directions.