An Optimization-based Learning Black Widow Optimization Algorithm for Text Psychology
Authors: Ali Hosseinalipour 1, Farhad Soleimanian Gharehchopogh 2, Mohammad Masdari 3, Ali Khademi 4
1 - Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, IRAN
2 - Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, IRAN
3 - Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran.
4 - Department of Psychology Science, Urmia Branch, Islamic Azad University, Urmia, IRAN.
Keywords: black widow optimization algorithm, meta-heuristic algorithm, text psychology, feature selection
References
1. Gharehchopogh, F.S. and H. Gholizadeh, A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm and Evolutionary Computation, 2019. 48: p. 1-24.
2. Gharehchopogh, F.S., I. Maleki, and Z.A. Dizaji, Chaotic vortex search algorithm: metaheuristic algorithm for feature selection. Evolutionary Intelligence, 2021: p. 1-32.
3. Abdollahzadeh, B. and F.S. Gharehchopogh, A multi-objective optimization algorithm for feature selection problems. Engineering with Computers, 2021: p. 1-19.
4. Jafari, N. and F. Soleimanian Gharehchopogh, An Improved Bat Algorithm with Grey Wolf Optimizer for Solving Continuous Optimization Problems. Journal of Advances in Computer Engineering and Technology, 2020. 6(3): p. 119-130.
5. Mohmmadzadeh, H. and F.S. Gharehchopogh, An efficient binary chaotic symbiotic organisms search algorithm approaches for feature selection problems. The Journal of Supercomputing, 2021: p. 1-43.
6. Rahnema, N. and F.S. Gharehchopogh, An improved artificial bee colony algorithm based on whale optimization algorithm for data clustering. Multimedia Tools and Applications, 2020. 79(43): p. 32169-32194.
7. Hosseinalipour, A., et al., A novel binary farmland fertility algorithm for feature selection in analysis of the text psychology. Applied Intelligence: p. 1-36.
8. Sayed, S.A.-F., E. Nabil, and A. Badr, A binary clonal flower pollination algorithm for feature selection. Pattern Recognition Letters, 2016. 77: p. 21-27.
9. Gharehchopogh, F.S., H. Shayanfar, and H. Gholizadeh, A comprehensive survey on symbiotic organisms search algorithms. Artificial Intelligence Review, 2019: p. 1-48.
10. Zorarpacı, E. and S.A. Özel, A hybrid approach of differential evolution and artificial bee colony for feature selection. Expert Systems with Applications, 2016. 62: p. 91-103.
11. Hosseinalipour, A., et al., Toward text psychology analysis using social spider optimization algorithm. Concurrency and Computation: Practice and Experience. n/a(n/a): p. e6325.
12. Dong, H., et al., A novel hybrid genetic algorithm with granular information for feature selection and optimization. Applied Soft Computing, 2018. 65: p. 33-46.
13. Liu, B. and L. Zhang, A survey of opinion mining and sentiment analysis, in Mining text data. 2012, Springer. p. 415-463.
14. Nasukawa, T. and J. Yi. Sentiment analysis: Capturing favorability using natural language processing. in Proceedings of the 2nd international conference on Knowledge capture. 2003.
15. Asghar, M.Z., et al., A review of feature extraction in sentiment analysis. Journal of Basic and Applied Scientific Research, 2014. 4(3): p. 181-186.
16. Saeys, Y., I. Inza, and P. Larrañaga, A review of feature selection techniques in bioinformatics. bioinformatics, 2007. 23(19): p. 2507-2517.
17. Sharma, M. and P. Kaur, A Comprehensive Analysis of Nature-Inspired Meta-Heuristic Techniques for Feature Selection Problem. Archives of Computational Methods in Engineering, 2020: p. 1-25.
18. Emine, B. and E. Ülker, An efficient binary social spider algorithm for feature selection problem. Expert Systems with Applications, 2020. 146: p. 113185.
19. Hayyolalam, V. and A.A.P. Kazem, BWO algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 2020. 87: p. 103249.
20. Pang, B., L. Lee, and S. Vaithyanathan, Thumbs up? Sentiment classification using machine learning techniques. arXiv preprint cs/0205070, 2002.
21. Arora, S. and P. Anand, Binary butterfly optimization approaches for feature selection. Expert Systems with Applications, 2019. 116: p. 147-160.
22. Hussien, A.G., et al., S-shaped binary whale optimization algorithm for feature selection, in Recent trends in signal and image processing. 2019, Springer. p. 79-87.
23. Bennasar, M., Y. Hicks, and R. Setchi, Feature selection using joint mutual information maximisation. Expert Systems with Applications, 2015. 42(22): p. 8520-8532.
24. Mirjalili, S., Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Computing and Applications, 2016. 27(4): p. 1053-1073.
25. Yang, X.S. and A.H. Gandomi, Bat algorithm: a novel approach for global engineering optimization. Engineering computations, 2012.
26. Leskovec, J., A. Rajaraman, and J.D. Ullman, Mining of massive data sets. 2020: Cambridge university press.
27. Mafarja, M.M. and S. Mirjalili, Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing, 2017. 260: p. 302-312.
28. Liao, T.W. and R. Kuo, Five discrete symbiotic organisms search algorithms for simultaneous optimization of feature subset and neighborhood size of knn classification models. Applied Soft Computing, 2018. 64: p. 581-595.
29. Mafarja, M., et al., Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems. Knowledge-Based Systems, 2018. 145: p. 25-45.
30. Rajamohana, S. and K. Umamaheswari, Hybrid approach of improved binary particle swarm optimization and shuffled frog leaping for feature selection. Computers & Electrical Engineering, 2018. 67: p. 497-508.
31. Azar, A.T., et al., A random forest classifier for lymph diseases. Computer methods and programs in biomedicine, 2014. 113(2): p. 465-473.
Journal of Advances in Computer Engineering and Technology
Received (Day Month Year)
Revised (Day Month Year)
Accepted (Day Month Year)
Abstract— In recent years, the growth of social networks has led to an increase in the content of these networks, so text mining methods have become important. As part of text mining, sentiment analysis means finding the author's perspective on a particular topic. Social networks allow users to express their opinions and to draw on the opinions of others when making decisions. Since these comments are in the form of text and reading them is time-consuming, it is essential to provide methods that can extract this knowledge in a useful form.
Black Widow Optimization (BWO) is inspired by the unique mating behavior of black widow spiders. The method involves an exclusive stage, namely cannibalism, in which individuals with a poor fitness value are removed from the population, which can lead to premature convergence.
In this paper, we first convert the BWO algorithm into a binary algorithm for solving discrete problems. Then, to reach the optimal answer more quickly, we make its inputs opposition-based. Finally, to apply the algorithm to the feature selection problem, which is a multi-objective problem, we convert it into a multi-objective algorithm. To evaluate the performance of the proposed method, 23 well-known benchmark functions were used, and good results were obtained. In a practical evaluation, the proposed method was also applied to several emotion datasets, and the results indicate that it performs very well in the psychology of texts.
I. INTRODUCTION
Nowadays, a large amount of textual data is available for processing, and the amount of this data increases every year. An essential part of how human beings gather information is understanding what other human beings think; much of the information people use during decision-making is based on the answer to the question, "What do others think?" With the accessibility and popularity of rich sources of opinions such as online review sites, personal blogs, and social networks, new opportunities and challenges have been created in this area, and people can now use information technology to analyze the feelings of others. Hence there is growing interest in systems that can understand the thoughts and opinions of others. It should be noted that marketing of business products and services is not the only motivation for exploring opinions online; access to political information, for example, is another critical application in this area.
The optimization process finds the best arrangement of organisms that seek limited resources under pre-defined constraints. This process can be used in several research fields, such as health, engineering, mathematics, economics, linguistics, and science, to optimize (minimize or maximize) an objective. Optimization methods are of two types: deterministic and approximate [1-3]. Traditionally, deterministic methods have been used for optimization problems of smaller size and lower complexity. Although they can find an exact solution to the optimization problem, they suffer from various drawbacks, including that they cannot solve NP problems and can quickly get stuck in a local optimum [4-6]. They are therefore inefficient for real-world problems, and as a result, optimization research communities have turned their attention to approximation methods for solving their optimization problems.
Due to the increase in the volume and dimensionality of information, feature selection is essential when using machine learning and data mining methods. Feature selection is a practical and fundamental step that is considered a prerequisite for classification methods [7-9]. Large feature sets for training a classifier may lead to a problem called over-fitting in learning methods. Over-fitting reduces the model's generalizability and reduces the accuracy of classification methods on new test samples. Moreover, a large data set requires more processing time to build the model from the training and testing data [10]. Feature selection therefore aims to simplify and improve the quality of the data set by selecting the essential and critical features [11]. Feature selection can also give a better understanding of the domain and, according to some criteria, retain only the better and more appropriate features to describe the inherent patterns in the data, helping to reduce the effects of dimensionality [12].
This paper's motivation is to propose a suitable algorithm for the psychology of texts, for which we have improved the BWO algorithm. To do this, we first convert the BWO algorithm to a binary algorithm, because we want to use it on the discrete feature selection problem. Then, we make the algorithm opposition-based to improve its speed. Because feature selection is also a multi-objective problem, we then turn it into a multi-objective algorithm. Finally, we use this algorithm to classify psychological texts. To evaluate the performance of the opposition-based, discrete, multi-objective BWO, 23 well-known benchmark functions are used as evaluation criteria. The results show that, compared with the other algorithms studied, it escapes local optima and strikes a balance between the exploitation and exploration stages. For added validity, the proposed solution is applied as a practical example to analyzing the feelings of the author of a text. The simulation results show that the proposed solution is efficient.
The advantages of the proposed algorithm are: Ⅰ- it converges quickly; Ⅱ- because the inputs are opposition-based, it reaches the optimal answer faster; Ⅲ- comparison with other methods in different evaluations shows that the proposed algorithm works better.
The remaining sections of this paper are as follows: Related work is provided in Section Ⅱ. Materials and methods are introduced in section Ⅲ. The proposed Multi-objective Opposition-based Binary BWO (MOBBWO) algorithm is introduced in Section Ⅳ. The performance of the proposed algorithm and evaluation and analysis are presented in Section Ⅴ. Section Ⅵ provides a practical example of the proposed method. Finally, the conclusion is shown in the last section.
II. AN OVERVIEW OF PREVIOUS METHODS
Extensive research has recently been conducted on the topic of sentiment and opinion analysis. Around 2001, there was widespread awareness of the research issues of sentiment analysis and opinion mining, and subsequently thousands of papers in the field were published. In 2001, Das and Chen, and Tong, analyzed market sentiment in separate papers. Subsequently, in 2002, papers published by Turney at the annual meeting of the Association for Computational Linguistics (ACL) and by Pang et al. at the annual conference on Empirical Methods in Natural Language Processing (EMNLP) used these terms [13]. Nasukawa and Yi published a paper in 2003 entitled "Sentiment analysis: Capturing favorability using natural language processing" [14]. Some meta-heuristic algorithms have been used for feature selection in text psychology; TABLE I shows some of the other meta-heuristic algorithms used in feature selection.
TABLE I
SEVERAL METHODS RELATED TO FEATURE SELECTION
Ref. | Method | Domain | Description |
[15] | A review of feature extraction in sentiment analysis | Sentiment analysis | A brief review of feature selection techniques |
[16] | A review of feature selection techniques in bioinformatics | Bioinformatics | The primary classification of feature selection techniques |
[17] | A Comprehensive Analysis of Nature-Inspired Meta-Heuristic Techniques for Feature Selection Problem | Feature selection problem | A comprehensive analysis for researchers who design or evaluate nature-inspired meta-heuristic methods for solving the feature selection problem |
[18] | An efficient binary social spider algorithm for the feature selection problem | Binary optimization | Social spider algorithm for the feature selection problem |
Before addressing the proposed method, we must recall the challenges in analyzing the emotions of a text's author, none of which have yet been fully solved in natural language processing; the proposed method should try to mitigate these cases. Some of these challenges are:
1. The user does not express his feelings directly and may express his opinion in various ways (negative actions or attributes). Consecutive sentences may complement each other.
"I was not feeling well at all today, or if you think I was feeling well, you are wrong."
2. The user may use sarcasm. Each of these items has an entirely different meaning in the sentence structure from the fundamental concept.
"I was so happy today that a lamb in the slaughterhouse is happy!"
"This computer is not as usable as a piece of brick."
3. Users often use complex sentence structures that do not necessarily fall within the general framework of language structure due to their informality.
4. For the above reasons, a word-for-word review can lead to erroneous results.
In the proposed method, we use a different combination of methods, in three steps. The first is preprocessing and normalization. The second is selecting the compelling features, with the smallest number, for categorizing emotions, which is done using the meta-heuristic BWO algorithm. The last step is the labeling and categorization of emotions. This paper uses BWO because, compared with other meta-heuristic methods, it has a high convergence speed. Moreover, it performs well in feature selection and can therefore mitigate a share of the existing challenges.
III. Materials and Methods
In this section, the BWO algorithm will be described, which is used as the basic algorithm, then the data set used in this paper is introduced.
1. The BWO algorithm
The BWO algorithm, like other evolutionary algorithms, begins with an initial population of spiders, such that each spider represents a potential solution. These early spiders, in pairs, try to reproduce the new generation. The black widow eats the male spider during or after mating; she then carries the stored sperm into her sperm cavities and releases it into the egg sacs. Eleven days after fertilization, the spiderlings emerge from the egg sacs. They live on the mother's web for several days to a week, during which time the weaker siblings are observed being eaten. The main steps of the algorithm are described below [19].
1-1- The initial population
In BWO, a potential solution to the problem is called a "widow": each black widow spider represents the values of the problem variables. To solve an N_var-dimensional benchmark function, this structure is considered as an array; a widow is an array of N_var elements representing the solution of the problem. This array is defined as Equation (1):

widow = [x_1, x_2, ..., x_(N_var)]    (1)

In Equation (1), widow denotes a widow spider represented as an array, and x_i is the position of the widow in the ith dimension. The fitness of a widow is obtained by evaluating the fitness function f at that widow. The evaluation function is therefore calculated as Equation (2):

fitness = f(widow) = f(x_1, x_2, ..., x_(N_var))    (2)

In Equation (2), fitness, or f(widow), is the evaluation function of a widow spider. To start the optimization algorithm, a candidate widow matrix of size N_pop × N_var is generated as the initial population of spiders. Parent pairs are then randomly selected to perform the procreation stage by mating, during or after which the black widow eats the male.
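The representation in Equations (1) and (2) can be sketched as follows; the sphere objective and the search bounds are illustrative assumptions, not the paper's benchmark settings.

```python
import numpy as np

# Each "widow" is a real-valued vector of N_var problem variables
# (Equation 1); its fitness is the objective evaluated at that vector
# (Equation 2). The sphere objective and [-10, 10] bounds are assumed.
def sphere(x):
    return float(np.sum(x ** 2))

n_pop, n_var = 10, 5
lb, ub = -10.0, 10.0

rng = np.random.default_rng(0)
population = rng.uniform(lb, ub, size=(n_pop, n_var))  # N_pop x N_var widow matrix
fitness = np.array([sphere(w) for w in population])    # Equation (2) per widow
best_widow = population[np.argmin(fitness)]
```

The best widow is simply the row of the candidate matrix with the smallest fitness value.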
1-2- Reproduction
Because these pairs mate independently of one another, each pair is separated from the others to reproduce the new generation in parallel, as in nature. In the real world, each mating produces about 1,000 eggs, but in the end only some of the stronger baby spiders survive. In the algorithm, reproduction is modeled with an array called alpha: the alpha array, of the same length as the widow array, is filled with random numbers, and the children are then generated from the parents using Equation (3):

y_1 = α × x_1 + (1 − α) × x_2
y_2 = α × x_2 + (1 − α) × x_1    (3)

In Equation (3), x_1 and x_2 are the parents, and y_1 and y_2 are the children. This process is repeated N_var/2 times, while the randomly selected numbers should not be repeated. Eventually, children and mother are added to an array and sorted by their fitness values. According to the cannibalism rating, the best individuals are added to the newly formed population. These steps apply to all pairs.
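The procreation step above can be sketched as a small function; the parent vectors here are random placeholders.

```python
import numpy as np

# Procreation (Equation 3): a random alpha vector mixes a parent pair
# into two complementary children; repeated n_var // 2 times per pair.
def procreate(x1, x2, rng):
    children = []
    for _ in range(x1.size // 2):
        alpha = rng.random(x1.size)
        children.append(alpha * x1 + (1.0 - alpha) * x2)  # y1
        children.append(alpha * x2 + (1.0 - alpha) * x1)  # y2
    return np.array(children)

rng = np.random.default_rng(0)
p1 = rng.uniform(-10, 10, 6)
p2 = rng.uniform(-10, 10, 6)
kids = procreate(p1, p2, rng)
```

A property worth noting: for each pair of children, y_1 + y_2 = x_1 + x_2, so each child coordinate always lies between the two parent coordinates.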
1-3- Cannibalism
Here there are three types of cannibalism. The first is sexual cannibalism, in which the black widow eats the male during or after mating; in the algorithm, males and females are identified by their fitness values. Another type is sibling cannibalism, in which stronger spiderlings eat their weaker siblings; in the algorithm, a cannibalism rating (CR) is set based on the number of survivors. In some cases a third type is seen, in which baby spiders eat their mother. The fitness value is used to determine strong or weak spiders [19].
1-4- Mutation
At this point, a number of solutions, Mutepop, is randomly selected from the population. Each selected solution randomly exchanges two elements of its array. Mutepop is calculated from the mutation rate [19].
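The mutation operator can be sketched as below; the example vector is arbitrary.

```python
import numpy as np

# Mutation: a selected solution swaps two randomly chosen elements of
# its array. The number of mutants (Mutepop) would be derived from the
# mutation rate times the population size.
def mutate(solution, rng):
    mutant = solution.copy()
    i, j = rng.choice(mutant.size, size=2, replace=False)  # two distinct indices
    mutant[i], mutant[j] = mutant[j], mutant[i]
    return mutant

rng = np.random.default_rng(0)
original = np.array([4.0, 8.0, 15.0, 16.0, 23.0, 42.0])
mutant = mutate(original, rng)
```

Since the operator only permutes entries, the multiset of values in the solution is preserved.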
1-5- Convergence
Like other evolutionary algorithms, three-stop conditions can be considered[19]: (A) Pre-defined number of iterations. (B) Observe no change in the value of the best widow evaluation function for multiple iterations. (C) Achieve a certain level of accuracy.
1-6- Parameter setting
In the BWO algorithm, some parameters are necessary to achieve better results. These parameters include the reproduction rate (PP), cannibalism rate (CR), and mutation rate (PM). The parameters must be adjusted appropriately to improve the algorithm's success in finding superior solutions: the better the parameters are tuned, the more likely the algorithm is to escape local optima in the search space. Hence, suitable parameter values can ensure a controlled balance between the exploitation and exploration stages.
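Putting the steps above together, a minimal BWO loop might look like the following sketch. The sphere objective and the PP/CR/PM values are placeholders, not the paper's tuned settings, and the survivor selection is a simple elitist scheme.

```python
import numpy as np

# Minimal BWO loop (a sketch, not the paper's exact implementation):
# breed from the fitter fraction (pp), keep the fitter share of each
# brood (cr, cannibalism), swap-mutate some survivors (pm), and apply
# elitist survivor selection each generation.
def sphere(x):
    return float(np.sum(x ** 2))

def bwo(f, n_var=5, n_pop=20, iters=40, pp=0.6, cr=0.5, pm=0.4, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-10.0, 10.0, (n_pop, n_var))
    history = []
    for _ in range(iters):
        fit = np.array([f(w) for w in pop])
        history.append(fit.min())
        breeders = pop[np.argsort(fit)][: max(2, int(pp * n_pop))]
        brood = []
        for _ in range(len(breeders) // 2):
            i, j = rng.choice(len(breeders), size=2, replace=False)
            for _ in range(n_var // 2):                      # Equation (3)
                alpha = rng.random(n_var)
                brood.append(alpha * breeders[i] + (1 - alpha) * breeders[j])
                brood.append(alpha * breeders[j] + (1 - alpha) * breeders[i])
        brood = np.array(brood)
        # cannibalism: keep only the fitter fraction of the brood
        brood = brood[np.argsort([f(w) for w in brood])][: max(1, int(cr * len(brood)))]
        # mutation: swap two elements in a few randomly picked survivors
        for idx in rng.choice(len(brood), max(1, int(pm * len(brood))), replace=False):
            a, b = rng.choice(n_var, size=2, replace=False)
            brood[idx, [a, b]] = brood[idx, [b, a]]
        pool = np.vstack([pop, brood])
        pop = pool[np.argsort([f(w) for w in pool])][:n_pop]  # elitism
    best_fit = min(f(w) for w in pop)
    return history, best_fit

history, best_fit = bwo(sphere)
```

Because the previous population always enters the survivor pool, the best fitness is non-increasing across generations, which is easy to verify on the recorded history.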
2. Data set
Two sets of data are used in this paper: the details of each are as follows.
2-1- ISEAR data set
This dataset was developed by a large group of psychologists around the world during the 1990s. This 40-feature dataset includes psychological questions designed under the supervision of Klaus Scherer and Harald Wallbott. In the study, student respondents (both psychologists and non-psychologists) were asked to report situations in which they had experienced seven primary emotions (joy, fear, anger, sadness, disgust, shame, and guilt). In each case, the questions covered how the respondents appraised the situation and how they reacted. The final data set is based on reports of the seven primary emotions from approximately 3,000 respondents in 37 countries on five continents.
2-2- Sentiment polarity datasets v2.0
This collection, named Movie Review Data, is used in its second version, called Sentiment polarity datasets v2.0. It is a data set of film reviews from an emotional perspective, comprising movie review texts in two classes, positive and negative. The data set consists of 2,000 labeled texts: one thousand with a positive view and one thousand with a negative view [20].
IV. The proposed method
The psychology of texts is based on dynamic analysis and recognition of the author's behavior and is one of the new research areas that has recently been considered in some languages, especially English. One of the most critical challenges in distinguishing the author's mood and feeling from the text, compared with other natural language processing tasks, is that personal situations and their reflection in speech and writing depend very strongly on the author's culture and nationality. The advantage of this task, however, is that the keywords, key sentences, and punctuation marks that signal these behavioral states in the relevant language are easy to identify, determine, and compile, and based on them the author's state at the time of writing can be recognized. In this paper, we first turn the BWO algorithm into a discrete algorithm so that it can be applied to feature selection, which is a discrete problem. We then make it opposition-based to speed up reaching the optimal answer. The resulting algorithm is then made multi-objective, because the feature selection problem is a multi-objective problem pursuing both the number of features and the classification accuracy. Finally, it is used for the psychology of the text.
The structure of the proposed method is as follows: first, in subsection 1, the BWO algorithm is made binary; then, in subsection 2, it is made opposition-based to improve its speed; and in subsection 3, the algorithm obtained from the above steps is made multi-objective. In Section Ⅵ, the resulting algorithm is used in text psychology.
1. Binary BWO based on the sigmoid transfer function
This section introduces a new binary variant of the BWO algorithm based on the sigmoid function. As stated in subsection Ⅲ.1, the BWO algorithm moves in continuous space, and therefore all the solutions in its population consist of real numbers. Since feature selection is a matter of selecting or not selecting each feature, a binary solution must contain only 0s and 1s, where 1 indicates that a feature is selected for the new dataset and 0 indicates that it is not. To do this, we use the sigmoid (S-shaped) transfer function [21, 22] to move the operations of the BWO algorithm into binary space. In this proposed model, the sigmoid function converts the continuous position of a solution in the BWO population to a value between 0 and 1, as in Equation (4):

S(x_i^d(t)) = 1 / (1 + e^(−x_i^d(t)))    (4)

In Equation (4), x_i^d(t) is the continuous value of solution i in the BWO population in the dth dimension at iteration t. The output of the sigmoid transfer function is still a continuous value between 0 and 1, so a threshold must be set to convert it to a binary value; the random threshold in Equation (5) converts the solution to a binary value for feature selection:

x_i^d(t+1) = 1 if rand < S(x_i^d(t)), and 0 otherwise    (5)

In Equation (5), x_i^d(t+1) represents the position of solution i in the BWO population at iteration t+1 in dimension d, and rand is a number drawn from a uniform distribution between zero and one. Thus, the solutions in the BWO population are forced to move in a binary search space using Equations (4) and (5). Next, we place these relations in the BWO algorithm in more detail.
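The two-step binarization of Equations (4) and (5) can be sketched directly; the example input vector is arbitrary.

```python
import numpy as np

# Sigmoid binarization: squash each continuous coordinate to a value
# in (0, 1) via Equation (4), then compare it with a uniform random
# number (Equation 5) to obtain a 0/1 feature mask.
def binarize(x, rng):
    s = 1.0 / (1.0 + np.exp(-x))                  # Equation (4)
    return (rng.random(x.shape) < s).astype(int)  # Equation (5)

rng = np.random.default_rng(0)
mask = binarize(np.array([-50.0, -50.0, 50.0, 50.0]), rng)
```

Strongly negative coordinates map to probabilities near 0 (the feature is almost surely dropped), strongly positive coordinates to probabilities near 1 (almost surely kept).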
2. Opposition-based binary improved BWO algorithm
In the next step, the inputs of the algorithm are made opposition-based according to Equation (6). In opposition-based methods, an opposite member is produced for each member of the original population. If the opposite member's cost function is lower than the original member's, it replaces the original; otherwise, the original is kept. Members and their opposites are therefore evaluated simultaneously, and the fitter of the two proceeds. Assuming that x is the position of a spider in the interval [a, b], its opposition-based counterpart is defined as Equation (6):

x̃ = a + b − x    (6)

In Equation (6), x is the position of the spider between a and b, and x̃ is its opposition-based counterpart. This strategy helps the inherently continuous BWO algorithm start from a fitter initial population when applied to the discrete feature selection problem. After being made opposition-based, the binary algorithm is referred to as OBBWO.
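The opposition-based initialization can be sketched as follows; the sphere objective and the [2, 10] bounds are assumptions chosen to make the effect visible.

```python
import numpy as np

# Opposition-based initialization (Equation 6): the opposite of x in
# [a, b] is a + b - x. Each member competes with its opposite, and the
# fitter of the two enters the starting population.
def sphere(x):
    return float(np.sum(x ** 2))

def opposition_init(f, pop, a, b):
    opp = a + b - pop                                 # Equation (6), element-wise
    pop_fit = np.array([f(x) for x in pop])
    opp_fit = np.array([f(x) for x in opp])
    return np.where((pop_fit <= opp_fit)[:, None], pop, opp)

rng = np.random.default_rng(0)
pop = rng.uniform(2.0, 10.0, size=(8, 4))             # all coordinates in [2, 10]
better = opposition_init(sphere, pop, 2.0, 10.0)
```

By construction, every row of the returned population is at least as fit as the corresponding original row, and opposite points never leave the search interval.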
3. Multi-objective opposition-based binary BWO algorithm
This section describes the feature selection objective function used for the proposed algorithm and the other meta-heuristic algorithms in this paper. Feature selection can be considered a multi-objective optimization problem with two conflicting goals: a minimum number of selected features and a higher classification accuracy. To define the objective function of the feature selection problem, a classification algorithm is needed; we use the simplest classification method, the KNN classifier, to evaluate the selected features of the proposed algorithm and the other algorithms more precisely. Each solution is evaluated by the proposed multi-objective function, which depends on the KNN classifier. To balance the number of features selected in each solution (minimum) and the classification accuracy (maximum), the fitness function in Equation (7) is used to evaluate a solution in all of the meta-heuristic algorithms.
TABLE Ⅱ
COMPARISON OF TEST FUNCTIONS BETWEEN SEVERAL OPTIMIZATION METHODS
The fitness function is defined as Equation (7):

fitness = α × γ_R(D) + β × |R| / |N|    (7)

In Equation (7), γ_R(D) indicates the classification error rate of the classification method on the selected feature subset R. |R| is the number of selected features, and |N| is the total number of features in the dataset. The parameter α expresses the importance of classification quality and β the importance of the subset length, with α ∈ [0, 1] and β = 1 − α. In this study, the value of α is set to 0.99, so the value of β is 0.01. Since the base algorithm is called BWO, we name its discrete version BBWO and its opposition-based version OBBWO, and finally we call the multi-objective method MOBBWO.
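The trade-off in Equation (7) can be sketched as below. In the paper the error rate would come from a KNN classifier; here it is passed in directly so the weighting itself is visible.

```python
import numpy as np

# Feature-selection objective (Equation 7):
#   fitness = alpha * error_rate + beta * |R| / |N|,  beta = 1 - alpha.
# The error_rate argument stands in for the KNN classification error.
def fs_fitness(error_rate, mask, alpha=0.99):
    beta = 1.0 - alpha
    return alpha * error_rate + beta * np.sum(mask) / mask.size

mask_small = np.array([1, 0, 0, 0, 0, 0, 0, 0])  # 1 of 8 features selected
mask_large = np.array([1, 1, 1, 1, 1, 1, 1, 1])  # all 8 features selected
```

With α = 0.99, classification error dominates: a subset with fewer features only wins when its error rate is (almost) as good, which matches the stated priority of category quality over subset length.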
V. Simulation and evaluation of results
1. Results with 23 evaluation functions
Twenty-three standard benchmarks have been considered to evaluate the performance of the proposed method. These test functions vary in size and complexity. Table Ⅲ shows the main characteristics of the test functions used: the name of each function, its mathematical formula, the boundary defining the search space, and the function's dimensions. A category is also provided for each benchmark: unimodal (U) or multimodal (M).
It should be noted that unimodal test functions have a single optimum, while multimodal test functions have more than one. Unimodal test functions are used to evaluate the exploitation ability of optimization algorithms, while multimodal test functions are used to evaluate their exploration ability [23]. As shown in Table Ⅲ, F1-F7 are classified as unimodal test functions, while F8-F23 are classified as multimodal test functions. The dimensions of the F14-F23 test functions are fixed.
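As a concrete illustration, two representatives of this suite under their commonly used definitions (an assumption; the paper's exact formulas are in its Table Ⅲ) are the unimodal sphere function (F1) and the multimodal Rastrigin function.

```python
import numpy as np

# F1 (sphere): a single optimum at the origin; tests exploitation.
def f1_sphere(x):
    return float(np.sum(x ** 2))

# Rastrigin: many local optima around a global optimum at the origin;
# tests exploration.
def rastrigin(x):
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

origin = np.zeros(30)
```

Both functions attain their global minimum of 0 at the origin, but Rastrigin's cosine term creates a grid of local minima that a purely exploitative search easily gets trapped in.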
In general, Table Ⅱ shows, based on average values, the improvement of the proposed method over the compared methods in obtaining the exact value of the optimal point and in evaluating the function, which demonstrates the efficiency of the proposed method without any reduction in its accuracy. The improved method is simulated in MATLAB, and the results of its comparison with the Grey Wolf Optimizer (GWO) [4], Dragonfly Algorithm (DA) [24], and Bat Algorithm (BA) [25] are shown in Table Ⅱ. As can be seen, in most cases the proposed method performs better than the compared methods.
TABLE Ⅲ Features of 23 BENCHMARKS. (n: dimension, c: Category U: unimodal, M: multimodal)
Accuracy, one of the most important parameters for evaluating a classifier, indicates the proportion of correct classifications. Sensitivity, or recall, also called the TP rate, refers to the probability of correct positive diagnoses out of all positive cases. Specificity, also known as the TN rate, is the probability of correct negative diagnoses. Precision, also called the positive prediction rate, refers to the percentage of predictions identified as relevant that actually are relevant. F-measure is a combination of precision and recall, and the closer it is to 1, the better.

2. Evaluation of the practical example

The proposed algorithm is simulated in MATLAB R2014a on a computer with a 64-bit i5 CPU and 4 GB of memory. Two datasets, ISEAR and Sentiment polarity datasets v2.0, are used for this purpose. We use 80% of the data for training and the remaining 20% for testing. Table Ⅴ shows the text classification performance of the proposed method against the existing basic algorithms; it should be noted that the comparable algorithms were run using their MATLAB implementations. From Table Ⅴ, it can be seen that the proposed improved BWO algorithm achieves better performance on several objectives in feature selection. Table Ⅵ below shows the results of the proposed algorithm and the other algorithms according to the average number of selected features.
Table Ⅵ presents the results of the MOBBWO approach and the other algorithms in terms of the average number of selected features, where MOBBWO performs very well and proves its superiority on most datasets. Note that the objective function considers both the number of features and the classification accuracy, so the smallest feature subset cannot be obtained on every dataset. Table Ⅶ compares the MOBBWO approach with the other algorithms in terms of classification accuracy.
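An objective function that trades off classification error against subset size, as described above, is commonly written as a weighted sum. The sketch below uses the conventional weight α = 0.99; this value and the function name are our assumptions, not necessarily the paper's exact formulation.

```python
def feature_selection_fitness(solution, classification_error, alpha=0.99):
    """Weighted feature-selection objective: lower is better.
    `solution` is a binary vector where 1 marks a selected feature;
    `classification_error` is 1 - classification accuracy."""
    n_selected = sum(solution)
    n_total = len(solution)
    return alpha * classification_error + (1 - alpha) * n_selected / n_total

# Illustrative call: 3 of 5 features selected, 12% classification error
fit = feature_selection_fitness([1, 0, 1, 1, 0], classification_error=0.12)
```

Because α is close to 1, the error term dominates: a solution may keep extra features whenever they improve accuracy, which is why the smallest subset is not always the fittest one.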
Table Ⅶ shows the results of the MOBBWO approach and the other algorithms in terms of classification accuracy. In addition to the better feature selection shown in Table Ⅴ, the MOBBWO approach also achieves high classification accuracy, performing better on most datasets. These results follow from the fact that the objective function considers both feature selection and classification accuracy. Table Ⅷ reports the results of the proposed algorithm and the other algorithms in terms of the average number of selected features.
VII. General evaluation and conclusion

In this paper, BWO was first transformed into a discrete (binary) algorithm, then made opposition-based, and finally, after being extended to a multi-objective algorithm, applied to text analysis and psychology. The simulation results show that MOBBWO, as a new algorithm, was evaluated on 23 benchmark functions and performed well over fifty independent runs. The average number of function evaluations across these fifty runs was used to assess the degree of convergence. The results show that the improved convergence was achieved by reducing the number of evaluations of the optimization function, an improvement that generally exceeded twenty percent across the twenty-three benchmarks. The evaluation of the applied section, the psychological analysis of texts, also showed that the proposed method's classification accuracy is better than that of the compared methods and that it performs well in feature selection. In summary, the new method's contributions are: discretizing BWO, converting BWO into a new opposition-based, multi-objective method, and applying it to the selection of psychological features for sentiment analysis.

References
1. Gharehchopogh, F.S. and H. Gholizadeh, A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm and Evolutionary Computation, 2019. 48: p. 1-24.
2. Gharehchopogh, F.S., I. Maleki, and Z.A. Dizaji, Chaotic vortex search algorithm: metaheuristic algorithm for feature selection. Evolutionary Intelligence, 2021: p. 1-32.
3. Abdollahzadeh, B. and F.S. Gharehchopogh, A multi-objective optimization algorithm for feature selection problems. Engineering with Computers, 2021: p. 1-19.
4. Jafari, N. and F. Soleimanian Gharehchopogh, An Improved Bat Algorithm with Grey Wolf Optimizer for Solving Continuous Optimization Problems. Journal of Advances in Computer Engineering and Technology, 2020. 6(3): p. 119-130.
5. Mohmmadzadeh, H. and F.S. Gharehchopogh, An efficient binary chaotic symbiotic organisms search algorithm approaches for feature selection problems. The Journal of Supercomputing, 2021: p. 1-43.
6. Rahnema, N. and F.S. Gharehchopogh, An improved artificial bee colony algorithm based on whale optimization algorithm for data clustering. Multimedia Tools and Applications, 2020. 79(43): p. 32169-32194.
7. Hosseinalipour, A., et al., A novel binary farmland fertility algorithm for feature selection in analysis of the text psychology. Applied Intelligence: p. 1-36.
8. Sayed, S.A.-F., E. Nabil, and A. Badr, A binary clonal flower pollination algorithm for feature selection. Pattern Recognition Letters, 2016. 77: p. 21-27.
9. Gharehchopogh, F.S., H. Shayanfar, and H. Gholizadeh, A comprehensive survey on symbiotic organisms search algorithms. Artificial Intelligence Review, 2019: p. 1-48.
10. Zorarpacı, E. and S.A. Özel, A hybrid approach of differential evolution and artificial bee colony for feature selection. Expert Systems with Applications, 2016. 62: p. 91-103.
11. Hosseinalipour, A., et al., Toward text psychology analysis using social spider optimization algorithm. Concurrency and Computation: Practice and Experience. n/a(n/a): p. e6325.
12. Dong, H., et al., A novel hybrid genetic algorithm with granular information for feature selection and optimization. Applied Soft Computing, 2018. 65: p. 33-46.
13. Liu, B. and L. Zhang, A survey of opinion mining and sentiment analysis, in Mining Text Data. 2012, Springer. p. 415-463.
14. Nasukawa, T. and J. Yi, Sentiment analysis: Capturing favorability using natural language processing, in Proceedings of the 2nd International Conference on Knowledge Capture. 2003.
15. Asghar, M.Z., et al., A review of feature extraction in sentiment analysis. Journal of Basic and Applied Scientific Research, 2014. 4(3): p. 181-186.
16. Saeys, Y., I. Inza, and P. Larrañaga, A review of feature selection techniques in bioinformatics. Bioinformatics, 2007. 23(19): p. 2507-2517.
17. Sharma, M. and P. Kaur, A Comprehensive Analysis of Nature-Inspired Meta-Heuristic Techniques for Feature Selection Problem. Archives of Computational Methods in Engineering, 2020: p. 1-25.
18. Emine, B. and E. Ülker, An efficient binary social spider algorithm for feature selection problem. Expert Systems with Applications, 2020. 146: p. 113185.
19. Hayyolalam, V. and A.A.P. Kazem, BWO algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 2020. 87: p. 103249.
20. Pang, B., L. Lee, and S. Vaithyanathan, Thumbs up? Sentiment classification using machine learning techniques. arXiv preprint cs/0205070, 2002.
21. Arora, S. and P. Anand, Binary butterfly optimization approaches for feature selection. Expert Systems with Applications, 2019. 116: p. 147-160.
22. Hussien, A.G., et al., S-shaped binary whale optimization algorithm for feature selection, in Recent Trends in Signal and Image Processing. 2019, Springer. p. 79-87.
23. Bennasar, M., Y. Hicks, and R. Setchi, Feature selection using joint mutual information maximisation. Expert Systems with Applications, 2015. 42(22): p. 8520-8532.
24. Mirjalili, S., Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Computing and Applications, 2016. 27(4): p. 1053-1073.
25. Yang, X.S. and A.H. Gandomi, Bat algorithm: a novel approach for global engineering optimization. Engineering Computations, 2012.
26. Leskovec, J., A. Rajaraman, and J.D. Ullman, Mining of Massive Data Sets. 2020: Cambridge University Press.
27. Mafarja, M.M. and S. Mirjalili, Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing, 2017. 260: p. 302-312.
28. Liao, T.W. and R. Kuo, Five discrete symbiotic organisms search algorithms for simultaneous optimization of feature subset and neighborhood size of kNN classification models. Applied Soft Computing, 2018. 64: p. 581-595.
29. Mafarja, M., et al., Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems. Knowledge-Based Systems, 2018. 145: p. 25-45.
30. Rajamohana, S. and K. Umamaheswari, Hybrid approach of improved binary particle swarm optimization and shuffled frog leaping for feature selection. Computers & Electrical Engineering, 2018. 67: p. 497-508.
31. Azar, A.T., et al., A random forest classifier for lymph diseases. Computer Methods and Programs in Biomedicine, 2014. 113(2): p. 465-473.