Because of manufacturing cost and portability limitations, the computing power, storage capacity, and energy budget of Internet of Things (IoT) hardware remain constrained. Any proposed encryption-based security system must therefore account for the resources, processing time, memory usage, and lifespan of the sensors involved. In addition, some applications need simple encryption, especially with the emergence of the IoT and the Web of Things (WoT). Lightweight cryptography provides solutions suitable for such resource-constrained devices. This paper proposes a framework of techniques for producing lightweight versions of the main cryptographic primitives. For the block cipher, the proposals are applied to the Advanced Encryption Standard 128 (AES-128) to produce a lightweight AES-128. For the stream cipher, the proposals are applied to the Rivest Cipher 4 (RC4) algorithm. The Rivest–Shamir–Adleman (RSA) algorithm is turned into a lightweight asymmetric cipher by partitioning the key and using the Chinese Remainder Theorem (CRT) in the decryption process. For hash functions, several proposals are applied, most importantly reducing the number of rounds and simplifying the internal functions of SHA-256. All the lightweight algorithms produced with the framework passed the National Institute of Standards and Technology (NIST) statistical tests for randomness. Compared with the standard algorithms, the lightweight versions achieved shorter processing time, lower memory usage, and higher throughput.
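As a rough illustration of the CRT step mentioned in this abstract, the Python sketch below shows CRT-based RSA decryption with toy parameters; the key-partition step and the paper's actual lightweight parameters are not reproduced here.

# Minimal sketch of RSA decryption sped up with the Chinese Remainder Theorem (CRT).
# Toy, insecure parameters for illustration only; not the paper's lightweight scheme.

def rsa_crt_decrypt(c, p, q, d):
    """Decrypt ciphertext c with the CRT instead of one large modular exponentiation."""
    dp = d % (p - 1)               # exponent reduced modulo p-1
    dq = d % (q - 1)               # exponent reduced modulo q-1
    q_inv = pow(q, -1, p)          # modular inverse of q mod p (Python 3.8+)
    m_p = pow(c, dp, p)            # partial plaintext modulo p
    m_q = pow(c, dq, q)            # partial plaintext modulo q
    h = (q_inv * (m_p - m_q)) % p  # recombination (Garner's formula)
    return m_q + h * q

p, q, e, d = 61, 53, 17, 2753      # textbook example key
n = p * q
message = 65
cipher = pow(message, e, n)
assert rsa_crt_decrypt(cipher, p, q, d) == message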
Text classification has become a significant domain of study because of the growing volume of text datasets and documents available in digital format. It is one of the main approaches for organizing digital information by automatically assigning dataset records or documents to predetermined classes based on their contents. This paper proposes a technique that applies supervised machine learning algorithms, namely KNN, Decision Tree, Random Forest, Bernoulli Naive Bayes, and Multinomial Naive Bayes classifiers, to classify a dataset into distinct classes. The technique combines these classifiers with the TF-IDF feature extraction method as a vector space model to achieve more precise classification results, and it yields high accuracy, precision, recall, and F1-measure values for all the implemented classifiers. Comparing the results of the different classifiers shows that the Random Forest classifier performs best on the textual dataset records, with the highest accuracy value of 0.9995930.
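As a hedged illustration of the TF-IDF vector space model combined with one of the classifiers named above, the following scikit-learn sketch runs on a few toy documents; the documents, labels, and parameter choices are placeholders, not the paper's dataset or settings.

# Sketch: TF-IDF features feeding a Random Forest classifier, one of the classifiers
# named in the abstract. The documents and labels below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

docs = [
    "stock prices rose sharply today",
    "the team won the final match",
    "new vaccine trial shows promise",
    "central bank cuts interest rates",
    "striker scores twice in derby",
    "hospital expands cancer treatment unit",
]
labels = ["business", "sport", "health", "business", "sport", "health"]

# Vector space model: each document becomes a TF-IDF weighted term vector.
X = TfidfVectorizer(lowercase=True, stop_words="english").fit_transform(docs)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))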
Regardless of the data source and type (text, numeric, images, etc.), raw data are usually unclean. The term "unclean" means that the data contain errors and inconsistencies that can strongly affect machine learning processes. The nature of the input data is one of the most important factors in the success of a learning algorithm: more than one factor influences machine learning results on a given task, but the characteristics and nature of the data are the main ones. This paper examines the processing of data before it is fed to machine learning algorithms and explains the operations of each pre-processing stage needed to get the best out of a dataset. Four machine learning models, including SVM, Multinomial Naive Bayes (NB), and Bernoulli NB, are used; the best accuracy on the original (dirty) dataset is 89%, achieved by the Bernoulli NB model. A pre-processing algorithm applied to the dirty dataset is then developed and its results are compared with those obtained before development. After pre-processing, the Bernoulli NB model reaches 91% accuracy, and the results of the other models used in this process improve as well.
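The following Python sketch illustrates, under assumed cleaning rules, the kind of pre-processing step described above followed by a Bernoulli Naive Bayes classifier; the sample records and cleaning operations are illustrative, not the paper's.

# Hedged sketch: clean "dirty" text records, then fit Bernoulli Naive Bayes.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

def clean(text):
    text = text.lower()                       # normalise case
    text = re.sub(r"<[^>]+>", " ", text)      # drop stray HTML tags
    text = re.sub(r"[^a-z\s]", " ", text)     # remove digits and punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

raw = ["Great product!!! 10/10 <br>", "terrible, broke after 2 days...",
       "Works fine, would buy again :)", "DO NOT BUY - waste of money"]
y = [1, 0, 1, 0]                              # illustrative labels

X = CountVectorizer(binary=True).fit_transform(clean(t) for t in raw)
model = BernoulliNB().fit(X, y)
print(model.predict(X))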
The human activity recognition (HAR) field has recently become one of the most active research topics because smartphones and smartwatches now ship with embedded sensors such as accelerometers and gyroscopes, reducing cost and power consumption. Human activity recognition is accordingly treated as a time series classification problem. Deep learning approaches such as the Convolutional Neural Network (CNN) have been successful when applied to HAR because they learn higher-order features automatically and act as a classifier at the same time. Recently, the one-dimensional Convolutional Neural Network (1D CNN) has delivered top performance in numerous applications, such as the classification of personalized biomedical data and time series classification. This paper studies how to leverage a single 1D CNN model to achieve excellent performance on raw human activity data. This is done by empirically tuning hyperparameters such as kernel size, number of filter maps, number of epochs, and batch size, and by promoting an advanced multi-headed 1D CNN in which each convolutional head uses a different kernel size to obtain ensemble-like results. The impact of the selected hyperparameters is evaluated on the publicly available UCI HAR dataset, collected from smartphone sensors while subjects performed six activities. The results demonstrate that better recognition depends significantly on the chosen hyperparameters and that tuning the hyperparameters of the 1D CNN increases activity recognition accuracy.
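A minimal Keras sketch of the multi-headed 1D CNN idea described above follows; the input shape matches the usual UCI HAR windows (128 timesteps, 9 channels), while the kernel sizes, filter counts, dropout rate, and dense layer size are illustrative assumptions rather than the paper's tuned values.

# Sketch: three 1D convolutional heads with different kernel sizes over the same
# input window, merged before the classifier, in the spirit of the abstract.
from tensorflow.keras import layers, models

def build_multihead_cnn(timesteps=128, channels=9, n_classes=6):
    inp = layers.Input(shape=(timesteps, channels))
    heads = []
    for k in (3, 5, 11):                          # one head per kernel size (assumed values)
        x = layers.Conv1D(64, kernel_size=k, activation="relu")(inp)
        x = layers.Dropout(0.5)(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
        heads.append(layers.Flatten()(x))
    merged = layers.concatenate(heads)
    x = layers.Dense(100, activation="relu")(merged)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs=inp, outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_multihead_cnn()
model.summary()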
In the present study, layers of porous Silicon (PS) have been produced from p-type Silicon with a (100) orientation using the electrochemical etching approach. The samples were anodized in a solution of 18% HF and 99% C2H5OH. The characteristics of the PS samples were studied at a constant etching time (15 min), while the current density was varied over (5, 10, 15, 20, and 25) mA/cm2. X-Ray Diffraction (XRD) showed the samples to be nanocrystalline porous Silicon, and Atomic Force Microscope (AFM) analysis of the PS revealed a sponge-like structure with an average diameter of 39.76 nm. The effect of temperature variation of the rod-like structures fabricated from the prepared samples on the sensor's sensitivity, recovery time, and response time was also studied. The maximum sensitivity was approximately 20.11% for porous Silicon exposed to NO2 gas.
In this paper, we report that annealing at temperatures of 400 and 500 °C in air for 2 hours led to the formation of rod-like structures in cupric oxide (CuO) thin films prepared by the chemical bath deposition technique. The structural and optical properties of the prepared thin films were studied to investigate the role of annealing. The morphology of the as-deposited CuO films is almost structureless; after annealing, however, the films are converted to rod-like nanostructures, as confirmed by scanning electron microscopy. X-ray analysis showed that the copper oxide nanostructured thin films have monoclinic crystallinity with preferred (110), (002), and (111) orientations, and that the crystallinity increases after annealing. Furthermore, optical analysis using UV-VIS spectroscopy showed that the band gap is reduced after annealing from 2.1 eV to 1.61 and 1.63 eV.
Zn thin films have been successfully deposited on two different substrates, FTO and p-type Si (111), with thicknesses of 112 and 186 nm at deposition times of 1 and 8 min, respectively, using the DC sputtering technique. The structural properties of the prepared thin films were studied using X-ray diffraction (XRD) and field-emission scanning electron microscopy (FESEM). The XRD results showed that the samples have a hexagonal wurtzite structure, and the FESEM images showed a uniform distribution of granular surface morphology for all samples. The grain sizes of the Zn thin films were estimated from the measured X-ray diffraction patterns, and the film thickness increased with sputtering time on both substrates. The best result was the deposition of zinc nanoparticles on p-type Si at 1 min, where the particle size was about 7 nm.
Scene classification is an essential perception task used in robotics for understanding the environment. Outdoor scenes, such as street scenes, consist of images with depth and show far greater variety than iconic object images. Image semantic segmentation is an important task for autonomous driving and mobile robotics applications because it provides the rich information needed for safe navigation and complex reasoning. This paper presents a model for semantic segmentation of outdoor scenes that classifies every object in the scene. The proposed network is a hybrid model that combines U-Net with the Xception network and works on the 2.5D Cityscapes dataset, which is used for 3D applications. The process consists of two stages: the first is pre-processing of the RGB-D dataset (data augmentation and k-means clustering), and the second is the design of the hybrid model, which achieves a pixel accuracy of 0.7874. The model was produced on a computer with an NVIDIA GeForce RTX 2060 GPU with 6 GB of memory, programmed in Python 3.7.
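As a hedged sketch of the hybrid idea described above, the following Keras code pairs an Xception encoder with a simple U-Net-style upsampling decoder; the skip connections, the 2.5D (RGB-D) input handling, and the training setup from the paper are omitted, and the input size and class count are assumptions.

# Simplified encoder-decoder sketch: Xception as encoder, five 2x upsampling steps
# as decoder, per-pixel softmax output. Not the paper's full hybrid model.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

def build_xception_unet(input_shape=(256, 512, 3), n_classes=20):
    encoder = Xception(include_top=False, weights=None, input_shape=input_shape)
    x = encoder.output                                   # feature map downsampled 32x
    for filters in (512, 256, 128, 64, 32):              # five 2x upsampling blocks
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(x)   # per-pixel class scores
    return models.Model(encoder.input, out)

model = build_xception_unet()
print(model.output_shape)   # expected: (None, 256, 512, 20)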
Computer worms perform harmful tasks in network systems because of their rapid spread, which leads to serious consequences for system security. However, existing worm detection algorithms still struggle to achieve good performance, for several reasons. First, large amounts of irrelevant data harm classification accuracy: irrelevant features give the estimator new ways to go wrong without any expected benefit and can cause overfitting, which generally decreases accuracy. Second, the individual classifiers used extensively in such systems do not effectively detect all types of worms. Third, many systems are built on old datasets, making them less suitable for new types of worms. This research aims to detect computer worms in the network using data mining algorithms, owing to their high ability to detect new types of computer worms automatically and accurately. The proposal uses misuse and anomaly detection techniques based on the UNSW_NB15 dataset to train and test an AdaBoost ensemble with SVM and DT classifiers. To select the most important features, we take the features selected in common by Correlation and Chi-Square feature selection, since correlation measures the relations between features and classes, whereas Chi-Square tests whether features and classes are independent. The contribution is to use SVM as the base estimator in the boosting ensemble instead of DT in order to detect various types of worms efficiently. The system achieved an accuracy of 100% with the combined CFS+Chi2 feature selection, and 99.38% and 99.89% with correlation and chi-square separately.
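The following scikit-learn sketch illustrates the ensemble idea described above, chi-square feature selection followed by AdaBoost with SVM base estimators; the breast-cancer toy dataset stands in for UNSW_NB15, and the parameter values are illustrative only.

# Sketch: chi-square feature selection, then AdaBoost (SAMME) boosting SVM weak learners.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)            # stand-in for the worm dataset
X = SelectKBest(chi2, k=10).fit_transform(X, y)        # keep the 10 most relevant features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SAMME boosting lets SVC act as the weak learner without probability estimates.
# Note: older scikit-learn versions use base_estimator= instead of estimator=.
clf = AdaBoostClassifier(estimator=SVC(kernel="linear"),
                         n_estimators=10, algorithm="SAMME", random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))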
The close connection between mathematics, especially linear algebra, and computer science has greatly influenced the development of several fields, the most important of which is image processing. Algebraic methods have attracted interest for building digital image watermarking techniques and are used to extract the features of the image in which the watermark is hidden. This paper aims to use the algebraic Hessenberg decomposition method (HDM) for the first time as a transformation that extracts image features for building a zero watermarking scheme, without using any of the popular transforms. To achieve this, two techniques are used, HDM without and with the discrete cosine transform (DCT); both exploit the ability of the algebraic HDM to convert the image to another domain in the YCbCr space. After applying eleven common attacks to the images in both techniques, the results showed that the NC values under most attacks were higher in the second technique than in the first, whereas the NC values under the salt-and-pepper attack were higher in the first technique than in the second.
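As a hedged illustration of using the Hessenberg decomposition to derive a binary feature matrix for zero watermarking, the following Python sketch works on 8x8 blocks of a random stand-in image; the block size, the scalar feature taken from H, and the XOR construction are assumptions, not the paper's exact scheme.

# Sketch: per-block Hessenberg decomposition -> scalar feature -> binary feature map,
# combined with the watermark by XOR so that nothing is embedded in the image itself.
import numpy as np
from scipy.linalg import hessenberg

def block_features(gray, block=8):
    h, w = gray.shape
    feats = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            H = hessenberg(gray[i:i + block, j:j + block].astype(float))
            feats.append(abs(H).max())                 # one scalar feature per block
    feats = np.array(feats)
    return (feats > feats.mean()).astype(np.uint8)     # binarise against the mean

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64))                 # stand-in for the host image
watermark = rng.integers(0, 2, size=64).astype(np.uint8)    # stand-in binary watermark

features = block_features(image)                            # 64 blocks -> 64 feature bits
ownership_share = np.bitwise_xor(features, watermark)       # registered zero-watermark share
recovered = np.bitwise_xor(ownership_share, features)       # verification recovers the watermark
assert np.array_equal(recovered, watermark)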
Background subtraction is the most prominent technique applied in the domain of detecting moving objects. There is, however, a wide range of background subtraction models, and choosing the best model to address a number of challenges is still a vital research area.
Therefore, in this article we present a comparative analysis of three promising algorithms used in this domain: GMM, KNN, and ViBe. CDnet 2014 is the benchmark dataset used in this analysis, with several quantitative evaluation metrics, including precision, recall, F-measure, false positive rate, false negative rate, and PWC. In addition, qualitative evaluations are illustrated with snapshots that depict the visual scenes. The ViBe algorithm outperforms the other algorithms in the overall evaluation.
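As a hedged illustration of running two of the compared background subtractors with OpenCV, the sketch below applies the GMM-based MOG2 and the KNN subtractors to a video; ViBe has no stock OpenCV implementation and is omitted, and "video.mp4" is a placeholder path, not a file from the paper.

# Sketch: apply two OpenCV background subtractors frame by frame and show the masks.
import cv2

subtractors = {
    "GMM (MOG2)": cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True),
    "KNN": cv2.createBackgroundSubtractorKNN(history=500, detectShadows=True),
}

cap = cv2.VideoCapture("video.mp4")        # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for name, sub in subtractors.items():
        mask = sub.apply(frame)            # foreground mask for this frame
        cv2.imshow(name, mask)
    if cv2.waitKey(30) & 0xFF == 27:       # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()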