Academia of Fundamental Computing Research
(online ISSN 2773-4927)
http://journal.excelligentacademia.com/index.php/AFCR


KNN Classification in Diabetes Document
http://journal.excelligentacademia.com/index.php/AFCR/article/view/72

Information extraction is a text mining technique that extracts information or knowledge from unstructured documents. This paper compares the performance of two information extraction approaches, statistical and linguistic, in classifying diabetes documents. Both approaches extract information related to diabetes risk factors from the biomedical literature. Much research has focused on extracting information from natural language text, but comparatively little has targeted biomedical literature, so comparing the two approaches helps improve the performance of information extraction from biomedical documents. Two different tools are used to extract terms from the titles and abstracts of related journal articles: FiveFilters for the statistical approach and FlexiTerm for the linguistic approach. The dataset contains only the title and abstract of diabetes-related articles retrieved from PubMed, 104 documents in total. To measure the performance of the terms extracted by both approaches, text classification is carried out with a K-Nearest Neighbors (KNN) classifier. The dataset is split 70/30, giving 73 documents for training and 31 documents for testing. Classification with the output of both approaches yields an average accuracy of 80.65% for the statistical approach and 85.71% for the linguistic approach, showing that the linguistic approach is the better one for extracting information from biomedical documents.

Nor Shafiqah Mislani, Rohayanti Hassan, Rd Rohmat Saedudin
Copyright (c) 2021 Academia of Fundamental Computing Research
2021-02-15, Vol. 1 No. 2
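
The abstract itself contains no code; the following is a minimal sketch of the classification experiment it describes, assuming scikit-learn (the paper does not name its toolkit) and using a handful of hypothetical stand-in documents and labels in place of the 104 PubMed abstracts.

```python
# Minimal sketch of the KNN text-classification step described above.
# The documents and labels are hypothetical stand-ins for the terms
# produced by the extraction tools on the 104 PubMed abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

docs = [
    "insulin resistance obesity risk",  "blood glucose hypertension",
    "smoking sedentary lifestyle",      "family history type 2 diabetes",
    "gestational diabetes pregnancy",   "renal function kidney disease",
    "cardiac arrhythmia treatment",     "bone fracture healing",
    "asthma airway inflammation",       "migraine headache trigger",
]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 1 = diabetes risk factor discussed

# Represent each abstract by TF-IDF weights over its extracted terms.
X = TfidfVectorizer().fit_transform(docs)

# 70/30 split, mirroring the paper's 73 training / 31 test documents.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

knn = KNeighborsClassifier(n_neighbors=3)  # k = 3 is an illustrative choice
knn.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```

With the terms extracted by FiveFilters or FlexiTerm substituted for the dummy strings, the same pipeline reproduces the paper's setup: vectorize, split 70/30, fit KNN, and score on the held-out documents.
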
Comparative Study in Classifying Fake Fingerprint Images
http://journal.excelligentacademia.com/index.php/AFCR/article/view/57

Biometrics is the measurement of physical characteristics and their statistical analysis, commonly used as a form of security technology. Fingerprint recognition is one of the most widely used biometrics because each individual has a unique fingerprint pattern. Unfortunately, it is also exposed to many types of attack, including the presentation of fake fingerprints to biometric devices. Biometric devices therefore need a software algorithm that can distinguish real fingerprints from fake ones to combat security breaches, and liveness detection has emerged to differentiate live and fake fingerprints. The method proposed in this study uses Support Vector Machine and Naïve Bayes classifiers and compares which of them achieves the best accuracy in detecting fake fingerprints. The classifiers undergo several rounds of cross-validation, presented in this study, to establish which has the better detection accuracy.

Nur Natasha Izzati Sulaiman, Rohayanti Hassan, Ashraf Osman Ibrahim
Copyright (c) 2020 Academia of Fundamental Computing Research
2020-12-21, Vol. 1 No. 2


Elastic SCAD SVM Cluster for The Selection of Significant Functional Connectivity in Autism Spectrum Disorder Classification
http://journal.excelligentacademia.com/index.php/AFCR/article/view/49

In the study of functional connectivity for autism spectrum disorder (ASD), correlations are calculated from magnetic resonance imaging data for many pairs of brain regions. Because of the huge number of brain regions, the correlation matrix that serves as the classifier's input in machine learning is of high dimensionality. Moreover, since the correlations are computed over all brain regions, the matrix may contain functional connectivity that is irrelevant to the study of ASD. To solve these problems, a framework based on a penalized support vector machine (SVM) cluster is proposed. It selects significant functional connectivity from the original set as the input for several penalized SVMs in the cluster, each of which generates a set of significant feature IDs. A significant functional connectivity matrix is then assembled and used as the input features for a final SVM. Compared with existing methods that use a single SVM, the results show that the proposed method greatly improves classification performance in terms of accuracy, specificity, and sensitivity. Additionally, the selected features are proposed as regions of interest in the brain for the study of ASD. Biological validation of these regions suggests that there may be a link between motor abilities and social and communicative abilities in ASD, a suggestion also supported by other studies of ASD.

Sin Yee Yap, Weng Howe Chan
Copyright (c) 2020 Academia of Fundamental Computing Research
2020-12-21, Vol. 1 No. 2


Material Requirement Planning using LFL, EOQ and PPB Lot Sizing Technique
http://journal.excelligentacademia.com/index.php/AFCR/article/view/58

Inventory control is very important for maintaining the right stock balance in a company warehouse, no matter how big or small the organization or which sector of the economy it operates in. Controlling the flow of materials in inventory can be challenging, especially for a small-scale company. With limited storage and never-ending incoming demand, the inventory flow must be managed efficiently and systematically to keep the cost of ordering the materials needed to produce high-quality products as low as possible. The Material Requirement Planning (MRP) approach allows an organization to plan manufacturing activities, delivery schedules, and purchasing activities for planned production and customer deliveries. This paper demonstrates the efficiency of the MRP model using the Lot for Lot (LFL), Economic Order Quantity (EOQ), and Part Period Balancing (PPB) lot sizing techniques. While LFL orders exactly as much as is needed in each period, EOQ and PPB control the order quantity so as to minimize the combined inventory holding and setup cost. The main objectives of this paper are to illustrate the basic calculation of MRP, to show how it helps organizations plan their inventory flow using the LFL, EOQ, and PPB techniques, and to show how each technique is suited to a specific demand pattern.

Nur Izah Atirah Ghulam Zaedi, Hairudin Majid, Alif Mokhtar, Azurah A Samah
Copyright (c) 2020 Academia of Fundamental Computing Research
2020-12-21, Vol. 1 No. 2
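
The three lot-sizing rules lend themselves to a short worked example. The sketch below applies LFL, EOQ, and PPB to a hypothetical eight-period net-requirements schedule; the demand figures, setup cost S, and holding cost H are illustrative assumptions, not data from the paper.

```python
import math

def lot_for_lot(demand):
    """LFL: order exactly the net requirement of each period."""
    return list(demand)

def eoq(demand, setup_cost, holding_cost):
    """EOQ: order in fixed lots of Q* = sqrt(2DS/H), where D is average
    demand per period, S the setup cost, and H the holding cost."""
    d_avg = sum(demand) / len(demand)
    q = math.ceil(math.sqrt(2 * d_avg * setup_cost / holding_cost))
    orders, on_hand = [], 0
    for d in demand:
        if on_hand < d:                  # stock runs short: place an order
            lot = q
            while on_hand + lot < d:     # cover an unusually large demand
                lot += q
            orders.append(lot)
            on_hand += lot
        else:
            orders.append(0)
        on_hand -= d
    return orders

def part_period_balancing(demand, setup_cost, holding_cost):
    """PPB: extend each lot over future periods until the accumulated
    part-period holding cost first exceeds the economic part period
    EPP = S / H, balancing holding cost against ordering cost."""
    epp = setup_cost / holding_cost
    orders = [0] * len(demand)
    i = 0
    while i < len(demand):
        lot, part_periods, j = 0, 0.0, i
        while j < len(demand):
            carried = demand[j] * (j - i)   # units carried * periods carried
            if part_periods + carried > epp and j > i:
                break
            part_periods += carried
            lot += demand[j]
            j += 1
        orders[i] = lot
        i = j
    return orders

# Hypothetical 8-period net requirements and costs (not from the paper).
demand = [35, 10, 0, 40, 20, 5, 10, 30]
S, H = 100, 1        # setup cost per order, holding cost per unit per period
print("LFL:", lot_for_lot(demand))
print("EOQ:", eoq(demand, S, H))
print("PPB:", part_period_balancing(demand, S, H))
```

On this schedule, LFL orders every period's requirement as it arises, EOQ reorders in fixed lots of roughly sqrt(2DS/H) units whenever stock runs out, and PPB keeps extending a lot over future periods until the accumulated part-period holding cost first exceeds S/H, the balance point between holding and ordering cost described above.
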
Attribute Selection and Classification on Mental Illness DataSet
http://journal.excelligentacademia.com/index.php/AFCR/article/view/53

Mental illness is increasing rapidly throughout the world, and people are now susceptible to mental disorders at some point in their lives. Mental disorders may also become a major contributor to the global disease burden in the coming years, since awareness among the world community of the importance of preventing them is still low. Initial awareness of mental disorders should therefore be raised by identifying the early symptoms of the disease; with early identification, people become more alert to preventing mental disorders. In this research, the mental illness dataset used is downloaded from the Kaggle website and measures attitudes towards mental health and the frequency of mental health disorders in the technology workplace. The dataset is tested with the attribute selection techniques GainRatioAttributeEval, CorrelationAttributeEval, CfsSubsetEval, InfoGainAttributeEval, and wrapper selection in the WEKA tool to identify the important attributes by removing redundant and irrelevant ones. The accuracy of the attribute selection is validated using several classification techniques: four classifiers, namely Naïve Bayes, K-Nearest Neighbours, Decision Tree, and Logistic Regression, are used to evaluate performance.

Nurfarzana Ahmad Tajuddin, Zuraini Ali Shah, Mir Jamaluddin
Copyright (c) 2020 Academia of Fundamental Computing Research
2020-12-21, Vol. 1 No. 2
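
The paper performs the selection and validation in WEKA; purely as an illustration, the sketch below reproduces the same two-step procedure in scikit-learn (an analogue, not the paper's WEKA workflow), using mutual information as a stand-in for the InfoGainAttributeEval ranking and synthetic data in place of the Kaggle survey.

```python
# Scikit-learn analogue of information-gain attribute selection followed by
# classifier validation. The synthetic data is a hypothetical stand-in for
# the Kaggle mental health survey used in the paper.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 25 attributes, several redundant or irrelevant.
X, y = make_classification(n_samples=300, n_features=25, n_informative=8,
                           n_redundant=6, random_state=0)

# Keep the 8 attributes with the highest mutual information with the class
# (the scikit-learn counterpart of an information-gain ranking in WEKA).
X_sel = SelectKBest(mutual_info_classif, k=8).fit_transform(X, y)

# Validate the selection with the four classifiers named in the paper.
classifiers = {
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbours": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_sel, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Comparing each classifier's cross-validated accuracy before and after the selection step shows whether removing redundant and irrelevant attributes actually helps, which is the validation the paper carries out across its five WEKA evaluators.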