Print ISSN: 1681-6900

Online ISSN: 2412-0758

Main Subjects: Computer

Proposal Framework to Light Weight Cryptography Primitives

Mustafa M. Abd Zaid; Soukaena Hassan

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 516-526
DOI: 10.30684/etj.v40i4.1679

Due to manufacturing cost and portability limitations, the computing power, storage capacity, and energy of Internet of Things (IoT) hardware are still developing slowly. Accordingly, any proposed encryption-based security system must consider the resources, time, and memory used, as well as the lifespan of the related sensors. In addition, some applications need simple encryption, especially after the emergence of the IoT and the Web of Things (WoT). Lightweight cryptography provides solutions suitable for such resource-constrained devices. This paper builds a framework comprising proposals for producing lightweight security algorithms for the main cryptographic primitives. For the block cipher, the suggestions were applied to the Advanced Encryption Standard (AES-128) to produce a lightweight AES-128. For a lightweight stream cipher, the proposals were applied to the Rivest Cipher 4 (RC4) algorithm. A lightweight asymmetric cipher was produced from the Rivest-Shamir-Adleman (RSA) algorithm by partitioning the key and using the Chinese Remainder Theorem (CRT) in the decryption process. Several proposals were applied to hash functions, the most important being reducing the number of rounds and simplifying the functions in SHA-256. All the lightweight algorithms produced by the proposed framework passed the National Institute of Standards and Technology (NIST) statistical tests for randomness. The produced algorithms showed better processing time, lower memory usage, and higher throughput than their standard counterparts.
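The CRT speed-up for RSA decryption mentioned above can be sketched in a few lines. This is a toy illustration with tiny primes, not the authors' exact lightweight scheme; all key values are assumptions:

```python
# Toy CRT-based RSA decryption: the single exponentiation modulo n is
# replaced by two half-size exponentiations modulo the primes p and q.
p, q = 61, 53                        # toy primes (real keys use large primes)
n = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def decrypt_crt(c):
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)            # CRT parameters, precomputable
    m1 = pow(c, dp, p)               # two cheaper exponentiations
    m2 = pow(c, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q                # recombine via the CRT

m = 65
c = pow(m, e, n)                     # encrypt
assert decrypt_crt(c) == m           # matches plain pow(c, d, n)
```

Because `dp` and `dq` are roughly half the bit length of `d`, each exponentiation is much cheaper, which is why the CRT helps on constrained hardware.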

Textual Dataset Classification Using Supervised Machine Learning Techniques

Hanan Q. Jaleel; Jane J. Stephan; Sinan A. Naji

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 527-538
DOI: 10.30684/etj.v40i4.1970

Text classification has been a significant domain of study and research because of the increased volume of text datasets and documents available in digital format. It is one of the major approaches used to arrange digital information by automatically allocating text dataset records or documents into predetermined classes depending on their contents. This paper proposes a technique that implements supervised machine learning algorithms, namely the k-nearest neighbors (KNN), Decision Tree, Random Forest, Bernoulli Naive Bayes, and Multinomial Naive Bayes classifiers, to classify a dataset into distinct classes. The proposed technique combines these classifiers with the TF-IDF feature extraction method as a vector space model to achieve more precise classification results, and it yields high accuracy, precision, recall, and F1-measure values for all the implemented classifiers. A comparison of the results shows that the Random Forest classifier is the best of the evaluated algorithms for classifying the textual dataset records, with the highest accuracy value of 0.9995930.
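As a minimal, hedged illustration of the TF-IDF vector space model the technique relies on (the weighting variant and toy corpus here are assumptions, not the paper's exact implementation):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenized documents.

    tf  = raw term count in the document
    idf = log(N / df), where df counts the documents containing the term
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # each document counts a term once
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["spam", "offer", "offer"], ["meeting", "notes"], ["spam", "meeting"]]
w = tfidf(docs)
# "offer" occurs only in doc 0, so it gets the largest weight there,
# while terms spread across documents are down-weighted.
```

Each document becomes a sparse weight vector, which is what the classifiers above consume.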

Improving Machine Learning Performance by Eliminating the Influence of Unclean Data

Murtadha B. Ressan; Rehab F. Hassan

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 539-546
DOI: 10.30684/etj.v40i4.2010

Regardless of the data source and type (text, numeric, image collections, etc.), data are usually unclean. The term "unclean" means that the data contain bugs and inconsistencies that can strongly impact machine learning processes. The nature of a dataset's input data is the most important factor in the success of a learning algorithm: more than one factor influences machine learning results on a specific task, but the characteristics and nature of the data are the main reasons for an algorithm's success. This paper examines the processing of data fed into a machine learning algorithm, explaining the operations of each pre-processing stage needed to get the best out of a dataset. Four machine learning models, including SVM, Multinomial Naive Bayes (NB), and Bernoulli NB, are used; the best baseline accuracy, 89%, is achieved by the Bernoulli NB model. A pre-processing algorithm is then developed and applied to the (dirty) dataset, and the results are compared with those obtained before this development: the Bernoulli NB model reaches 91% accuracy, and the accuracy of the remaining models improves as well.
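As a hedged illustration of the kind of pre-processing the abstract describes (the exact steps are assumptions, not the authors' pipeline):

```python
import re

def clean_texts(texts):
    """Minimal text-cleaning pipeline: lowercase, strip punctuation,
    collapse whitespace, drop empty records, and remove exact duplicates."""
    seen, cleaned = set(), []
    for t in texts:
        t = re.sub(r"[^\w\s]", " ", t.lower())   # punctuation -> spaces
        t = re.sub(r"\s+", " ", t).strip()       # collapse whitespace
        if t and t not in seen:                  # drop empties / duplicates
            seen.add(t)
            cleaned.append(t)
    return cleaned

raw = ["Buy NOW!!!", "buy now", "  ", "New offer."]
print(clean_texts(raw))  # → ['buy now', 'new offer']
```

Removing duplicates and noise before training is exactly the kind of step that lifts a classifier's accuracy, as the paper reports for Bernoulli NB.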

Tuning the Hyperparameters of the 1D CNN Model to Improve the Performance of Human Activity Recognition

Rana A. Lateef; Ayad R. Abbas

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 547-554
DOI: 10.30684/etj.v40i4.2054

The human activity recognition (HAR) field has recently become one of the trendiest research topics because ready-made sensors such as accelerometers and gyroscopes come embedded in smartphones and smartwatches, decreasing cost and power consumption. Human activity recognition can therefore be treated as a time series classification problem. Nowadays, deep learning approaches such as the Convolutional Neural Network (CNN) have been successful when applied to HAR, automatically learning higher-order features while simultaneously acting as a classifier. Recently, the one-dimensional Convolutional Neural Network (1D CNN) has been suggested and has achieved top performance in numerous applications, such as the classification of personalized biomedical data and time series classification. This paper studies how to leverage a single 1D CNN model to produce excellent performance on raw human activity data. This is done by empirically tuning hyperparameter values such as kernel size, number of filter maps, number of epochs, and batch size, and by promoting an advanced multi-headed 1D CNN that gives each convolutional layer a different kernel size to obtain ensemble-like results. The impact of the selected hyperparameters is evaluated on a publicly available dataset named UCI HAR, collected from smartphone sensors while performing six activities. Better results depend significantly on the hyperparameters chosen, and the results demonstrate that tuning the hyperparameters of the 1D CNN increases activity recognition accuracy.
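To see why kernel size matters as a hyperparameter, a minimal valid-padding 1D convolution (single channel, no learned weights; a sketch, not the paper's network) can be written as:

```python
def conv1d(signal, kernel):
    """Valid 1D convolution (cross-correlation, as in CNN layers):
    each output is the dot product of the kernel with a signal window."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A larger kernel widens the receptive field and shortens the output,
# which is why kernel size changes what temporal patterns the model sees:
print(conv1d([1, 2, 3, 4, 5], [1, 1]))      # → [3, 5, 7, 9]
print(conv1d([1, 2, 3, 4, 5], [1, 1, 1]))   # → [6, 9, 12]
```

A multi-headed 1D CNN, as described above, runs several such convolutions with different kernel sizes in parallel and concatenates the results, giving the ensemble-like effect.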

A Proposed WoT System for Diagnosing the Infection of Coronavirus (Covid-19)

Dalal M. Thair; Akbas E. Ali

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 563-572
DOI: 10.30684/etj.v40i4.2087

Coronavirus is one of the viruses that has broadly affected humans and health systems in general. There is not yet a treatment for the virus, and it spreads very quickly through coughing or touch, so infected patients must be isolated at home or in designated care facilities. This research therefore aims to find appropriate methods for diagnosing people with the virus remotely, avoiding mixing and helping to determine the foci of the virus's spread, by presenting a new e-health framework for identifying Coronavirus patients. Since the Web of Things (WoT) is helpful in many areas of medical applications, it is used as the technique for building a complete system for diagnosing those infected with the virus; the approach also provides advice on prevention and isolation. It is very important to check whether you have the virus or only a fever before going to the hospital, so as to keep your distance from others affected by Covid-19; with this system, you can check your health status remotely without going to the hospital. The paper presents a comprehensive WoT system for COVID-19 Virus Detection (CVD) that covers the most important needs of infected people: an easy way to detect infection by the virus, contacting specialized doctors for consultations, contacting pharmacies to deliver treatment to the home, contacting laboratories, mapping the spread of the virus over the world, and educating citizens at home. In addition, it collects articles related to the virus, helping researchers and patients reach the newest details about the pandemic. The system was designed with a group of web languages under the Web of Things principle (HTML, HTML5, CSS, CSS3, JavaScript, Bootstrap), in addition to interactive graphical interfaces.

Building an Efficient System to Detect Computer Worms in Websites Based on Ensemble Ada Boosting and SVM Classifiers Algorithms

Ali K. Hilool; Soukaena H. Hashem; Shatha H. Jafer

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 595-604
DOI: 10.30684/etj.v40i4.2148

Computer worms perform harmful tasks in network systems, and their rapid spread leads to harmful consequences for system security. However, existing worm detection algorithms still struggle to achieve good performance, for several reasons. First, a large amount of irrelevant data impacts classification accuracy (an irrelevant feature gives the estimator new ways to go wrong without any expected benefit and can cause overfitting, which generally decreases accuracy). Second, the individual classifiers used extensively in such systems do not effectively detect all types of worms. Third, many systems are built on old datasets, making them less suitable for new types of worms. This research aims to detect computer worms in the network using data mining algorithms, given their high ability to detect new types of computer worms automatically and accurately. The proposal uses misuse and anomaly detection techniques based on the UNSW_NB15 dataset to train and test an AdaBoost ensemble with SVM and decision tree (DT) classifiers. To select the most important features, it takes the features selected by both Correlation and Chi-square feature selection (correlation finds the relations between features and classes, whereas Chi-square tests whether features and classes are independent). The contribution is to use SVM as the base estimator in the AdaBoost ensemble instead of DT, to detect various types of worms efficiently. The system achieved an accuracy of 100% with the combined CFS+Chi2 feature selection, and 99.38% and 99.89% with correlation and chi-square separately.
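The chi-square feature selection mentioned above tests whether a feature and the class label are statistically independent. A self-contained sketch for the binary case (an illustration, not the system's implementation):

```python
def chi_square(feature, labels):
    """Chi-square statistic testing independence between a binary
    feature and a binary class label over a 2x2 contingency table."""
    n = len(feature)
    obs = {(f, c): 0 for f in (0, 1) for c in (0, 1)}
    for f, c in zip(feature, labels):
        obs[(f, c)] += 1                       # observed cell counts
    f_tot = {f: obs[(f, 0)] + obs[(f, 1)] for f in (0, 1)}
    c_tot = {c: obs[(0, c)] + obs[(1, c)] for c in (0, 1)}
    stat = 0.0
    for f in (0, 1):
        for c in (0, 1):
            expected = f_tot[f] * c_tot[c] / n  # counts if independent
            if expected:
                stat += (obs[(f, c)] - expected) ** 2 / expected
    return stat

# A feature that perfectly predicts the class scores high (dependent);
# a feature unrelated to the class scores 0 (independent, so droppable).
print(chi_square([1, 1, 0, 0], [1, 1, 0, 0]))  # → 4.0
print(chi_square([1, 0, 1, 0], [1, 1, 0, 0]))  # → 0.0
```

Ranking features by this statistic and keeping the top scorers is the chi-square half of the paper's combined selection step.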

Algebraic Decomposition Method for Zero Watermarking Technique in YCbCr Space

Nada S. Mohammed; Areej M. Abduldaim

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 605-616
DOI: 10.30684/etj.v40i4.2028

The close connection between mathematics, especially linear algebra, and computer science has greatly impacted the development of several fields, the most important being image processing. Algebraic methods have aroused interest for building digital image watermarking techniques and are used to find the image features in which to hide the watermark. This paper aims to use the algebraic Hessenberg decomposition method (HDM) for the first time as a transformation to extract image features, without using any popular transform, for building zero watermarking. To achieve this aim, two techniques are used, HDM with and without the discrete cosine transform (DCT); both rely on the algebraic HDM's ability to convert the image to another domain in the YCbCr space. After applying eleven common attacks to images under both techniques, the results showed that the NC values under many attacks were higher in the second technique than in the first. In contrast, the NC values under the salt-and-pepper attack were higher in the first technique than in the second.
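For reference, the Hessenberg decomposition factors a square matrix A as A = Q H Qᵀ, where Q is orthogonal and H has zeros below the first subdiagonal. A compact Householder-based sketch (a generic textbook reduction, not the paper's watermarking pipeline):

```python
import numpy as np

def hessenberg(a):
    """Householder reduction of a square matrix to upper Hessenberg
    form H (zeros below the first subdiagonal), with A = Q @ H @ Q.T."""
    h = np.array(a, dtype=float)
    n = h.shape[0]
    q = np.eye(n)
    for k in range(n - 2):
        x = h[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue                 # column already reduced
        v /= nv
        p = np.eye(n)
        p[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)  # Householder reflector
        h = p @ h @ p                # similarity transform
        q = q @ p
    return h, q
```

SciPy offers this directly as `scipy.linalg.hessenberg(a, calc_q=True)`; the point here is only that the factorization exposes stable numerical features of the image block, which the zero-watermarking scheme exploits instead of a popular transform.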

Comparative Analysis of GMM, KNN, and ViBe Background Subtraction Algorithms Applied in Dynamic Background Scenes of Video Surveillance System

Maryam A. Yasir; Yossra H. Ali

Engineering and Technology Journal, 2022, Volume 40, Issue 4, Pages 617-626
DOI: 10.30684/etj.v40i4.2154

Background subtraction is the most prominent technique applied in the domain of detecting moving objects. However, there is a wide range of background subtraction models, and choosing the best model for a given set of challenges is still a vital research area.
Therefore, this article presents a comparative analysis of three promising algorithms used in this domain: GMM, KNN, and ViBe. CDnet 2014 is the benchmark dataset used in this analysis, with several quantitative evaluation metrics such as precision, recall, F-measure, false positive rate, false negative rate, and PWC. In addition, qualitative evaluations are illustrated with snapshots to depict the visual scene evaluation. The ViBe algorithm outperforms the other algorithms in the overall evaluation.
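None of GMM, KNN, or ViBe is reproduced here, but the core idea they all refine, maintaining a background model per pixel and flagging large deviations as foreground, can be sketched minimally (the parameters and data are illustrative assumptions):

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1-alpha)*bg + alpha*frame.
    GMM, KNN, and ViBe replace this single average with richer
    per-pixel models (Gaussian mixtures or sample sets)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """A pixel is foreground when it deviates strongly from the model."""
    return [1 if abs(f - b) > thresh else 0 for b, f in zip(bg, frame)]

bg = [0.0, 0.0, 0.0, 0.0]          # learned background (one image row)
frame = [0, 0, 255, 255]           # a bright object enters on the right
print(foreground_mask(bg, frame))  # → [0, 0, 1, 1]
```

OpenCV ships ready-made versions of the first two compared models as `cv2.createBackgroundSubtractorMOG2()` (GMM) and `cv2.createBackgroundSubtractorKNN()`; ViBe implementations are distributed separately.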