Abstract
The Manta Ray Foraging Optimization algorithm (MRFO) is a metaheuristic algorithm for solving real-world optimization problems. However, MRFO suffers from slow convergence, low solution precision, and a tendency to become trapped in local optima. To overcome these deficiencies, this paper proposes an Improved MRFO algorithm (IMRFO) that employs Tent chaotic mapping, a bidirectional search strategy, and the Levy flight strategy. Tent chaotic mapping distributes the manta ray population more uniformly and improves the quality of the initial solutions, the bidirectional search strategy expands the search area, and the Levy flight strategy strengthens the algorithm’s ability to escape from local optima. To verify IMRFO’s performance, the algorithm is compared with 10 other algorithms on 23 benchmark functions, the CEC2017 and CEC2022 benchmark suites, and five engineering problems, with statistical analysis illustrating the superiority of IMRFO and the significance of its differences from the other algorithms. The results indicate that IMRFO outperforms the competitor optimization algorithms.
Introduction
Under certain constraints, an optimization problem involves finding the best fitness of an objective function. As real-world optimization problems have become more complex in recent years, deterministic algorithms have encountered significant performance limitations1. In contrast, stochastic algorithms perform much better on these problems. Metaheuristic (MH) algorithms in particular have proven highly effective in tackling complex problems2. This is because MH algorithms generate a feasible solution space stochastically, search that space in each iteration, evaluate individual fitness through the fitness function, and perform updates to produce the optimal solution3. MH algorithms have shown advantages in many fields, including global optimization4,5, feature selection6,7, sentiment classification8,9,10, and case forecasting11. According to the no free lunch (NFL) theorem, no single algorithm performs well on every optimization problem12, so continued study of MH algorithms remains worthwhile.
Several MH algorithms were inspired by behaviors or patterns formed by the natural evolution of organisms. For instance, the Genetic Algorithm (GA)13 adopts Darwinian biological evolution, natural selection, and the genetic mechanism of the evolutionary process. The Particle Swarm Optimization algorithm (PSO)14 originated from biologists' observations of birds' foraging behavior. Bacterial Foraging Optimization (BFO)15 mimics the eating habits of E. coli in the human gut. Glowworm Swarm Optimization (GSO)16 is inspired by the behavior of fireflies, which are attracted and moved by light during foraging, courtship, and vigilance. The Cuckoo Search algorithm (CS)17 is inspired by the parasitic habit of some cuckoos, which lay their eggs in other birds' nests until the young hatch. Grey wolf hunting activities inspired the Grey Wolf Optimizer (GWO)18. The Whale Optimization Algorithm (WOA)19 is inspired by humpback whale hunting behavior, while Pigeon-inspired Optimization (PIO)20 is inspired by pigeons' intelligence and collaborative abilities in spatial navigation and social behavior. The Slime Mould Algorithm (SMA)21 is inspired by slime moulds' ability to efficiently find food and establish communication networks in their environment. Furthermore, the Marine Predators Algorithm (MPA)22 is inspired by predator behavior in marine life. The Bald Eagle Search (BES) algorithm23 is inspired by the predatory behavior of bald eagles, and the Grasshopper Optimization Algorithm (GOA)24 is inspired by the mobile foraging behavior of grasshoppers. The Mayfly Algorithm (MA)25 is inspired by the mayfly's short life span and genetic behavior. The multiple population hybrid equalization optimizer (MHEO)26 is inspired by a population-distributed mechanism.
The Manta Ray Foraging Optimization algorithm (MRFO)27 is a meta-heuristic algorithm that imitates the chain, cyclone, and somersault foraging modes of manta rays during group foraging. Figure 1a depicts a manta ray, and Fig. 1b illustrates its body structure. In nature, manta ray foraging comprises three main behaviors. First, the mantas line up, forming an orderly chain; the smaller male rests on the back of the female and moves in tandem with the beat of her pectoral fins, a mechanism that maximizes their foraging efficiency. Second, manta rays cluster into cyclones to filter the prey layer when plankton concentrations are high. Finally, somersault foraging is conducted once the densest food spot is found; because the somersault phase combines randomness and periodicity, it helps the mantas control their food intake. MRFO has certain advantages, such as fast convergence and a strong global search ability, and is widely used in various fields, such as economic load dispatching28, image segmentation29, minimization of energy consumption30, and radial distribution networks31. Although MRFO performs well compared to other algorithms, defects emerge from the lack of disturbance in the exploration and exploitation phases, such as low solving precision and easy entrapment in local optima.
Spurred by the above deficiencies, this paper extends MRFO by incorporating Tent chaotic mapping, bidirectional search, and the Levy flight strategy. The bidirectional search strategy starts from a given point and expands one node in each of two directions at a time. Searching from both directions simultaneously reduces the search space and finds a solution faster. It has been employed in algorithm improvement32, tourism demand forecasting33, and web crawling models34, but it has never been used in the MRFO algorithm. In the IMRFO algorithm, the bidirectional search strategy searches not only along the direction of decreasing fitness but also in the opposite direction. During the solving process, especially on some multimodal and composite functions, this strategy can prevent the algorithm from being trapped in local optima and enhance its global search ability. Moreover, we introduce the Levy flight strategy into MRFO to help the algorithm jump out of local optima during exploitation.
The contributions of this paper are as follows. (1) During the algorithm’s initialization phase, Tent chaotic mapping provides initial solutions with better ergodicity, uniformity, and randomness in the search space. (2) After the cyclone foraging phase, the bidirectional search strategy lets the algorithm search in both directions, which enlarges the search scope and helps the manta rays jump out of local optima. (3) During the somersault foraging stage, the Levy flight strategy strengthens the algorithm’s ability to escape from local optima. (4) The proposed algorithm is evaluated on 23 benchmark functions, the CEC2017 and CEC2022 benchmark suites, and five engineering problems. (5) Various evaluation measurements illustrate the superiority of the proposed IMRFO.
The remainder of this paper is organized as follows. "MRFO algorithm" section briefly presents MRFO. "Improved strategy for MRFO" section introduces three improvement strategies and the proposed IMRFO. "Experimental results and discussion" section presents the experimental results on 23 benchmark functions and the CEC2017 and CEC2022 benchmark suites. "IMRFO for engineering problems" section solves the engineering problems, and "Conclusion" section concludes this work.
MRFO algorithm
The manta ray’s chain, cyclone, and somersault foraging behaviors correspond to the three stages of the manta ray foraging algorithm.
Chain foraging
During the manta rays’ chain foraging, they line up to form an orderly chain, and the mathematical model is:
where \(r \in \left( {0,1} \right)\) is a random number, \({\varvec{x}}_{i}^{d} \left( t \right)\) is the current position of the \(d{\text{-th}}\) dimension of the \(i{\text{-th}}\) individual, and \({\varvec{x}}_{best}^{d} \left( t \right)\) is the best position in the \(d\)th dimension at the \(t\)th iteration, i.e., the position with the highest concentration of plankton. The update of the current individual position \({\varvec{x}}_{i}^{d} \left( t \right)\) is determined by the current optimal position \({\varvec{x}}_{best}^{d} \left( t \right)\) and the previous individual's position \({\varvec{x}}_{i - 1}^{d} \left( t \right)\); \(\alpha\) is the weight coefficient, and \(N\) is the population size. Figure 2 illustrates a sectional drawing of the chain foraging behavior.
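The chain-foraging update can be sketched in NumPy as follows. This is a minimal illustration following the update rule of the original MRFO paper27 (first individual follows the best position, each subsequent individual follows its predecessor, with \(\alpha = 2r\sqrt{|\log r|}\)); the function and variable names are ours.

```python
import numpy as np

def chain_foraging(X, x_best):
    """One chain-foraging step for the whole population X (shape N x D),
    following the update rule of the original MRFO paper."""
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = np.random.rand(D)                       # r in (0, 1), per dimension
        alpha = 2 * r * np.sqrt(np.abs(np.log(r)))  # weight coefficient
        prev = x_best if i == 0 else X[i - 1]       # first ray follows the best
        X_new[i] = X[i] + r * (prev - X[i]) + alpha * (x_best - X[i])
    return X_new
```

If the whole population already sits at the best position, the update leaves it unchanged, since every difference term vanishes.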
Cyclone foraging
In the cyclone foraging phase, the plankton concentration is high, and individual manta rays follow the previous individual and move along the cyclone path toward food. This is mathematically modeled as follows:
where \(r \in \left( {0,1} \right)\) is a random number, \(\beta\) is the inertia weight, \(r_{1} \in \left[ {0,1} \right]\) is a uniformly distributed random number, and \(t\) and \(T\) are the current and maximum numbers of iterations, respectively. When \(\frac{t}{T} < r\), to ensure the diversity of individuals, each individual is randomly assigned a new position in the whole search space as its reference position, formulated as follows:
where \({\varvec{x}}_{rand}^{d} \left( t \right)\) is a randomly generated position, and \({\varvec{L}}_{b}^{d}\) and \({\varvec{U}}_{b}^{d}\) are the lower and upper bounds of the search space, respectively. A sectional drawing of the cyclone foraging behavior is presented in Fig. 3.
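The cyclone-foraging step can be sketched as follows, following the original MRFO formulation27 with \(\beta = 2e^{r_{1}(T-t+1)/T}\sin(2\pi r_{1})\): when \(t/T\) is smaller than a random number, a random reference position drives exploration, otherwise the current best drives exploitation. Names and array shapes are our illustrative choices.

```python
import numpy as np

def cyclone_foraging(X, x_best, t, T, lb, ub):
    """One cyclone-foraging step (MRFO): explore around a random reference
    early on (t/T < rand), exploit around the best position later."""
    N, D = X.shape
    X_new = np.empty_like(X)
    r1 = np.random.rand(D)
    beta = 2 * np.exp(r1 * (T - t + 1) / T) * np.sin(2 * np.pi * r1)
    if t / T < np.random.rand():
        ref = lb + np.random.rand(D) * (ub - lb)    # x_rand in [L_b, U_b]
    else:
        ref = x_best
    for i in range(N):
        r = np.random.rand(D)
        prev = ref if i == 0 else X[i - 1]          # spiral: follow predecessor
        X_new[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
    return X_new
```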
Somersault foraging
When the manta rays find the densest spot of food, they start to forage, forming a somersault, and the mathematical model is as follows:
where \(s\) is the somersault factor, representing the manta ray's somersault intensity; generally, \(s = 2\)35. \(r_{2} ,r_{3} \in \left( {0,1} \right)\) are random numbers. Figure 4 depicts a sectional drawing of the somersault foraging behavior. The MRFO pseudo code of the above model is presented in MRFO pseudo code.
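The somersault step treats the best-food position as a pivot around which each ray flips, per the original MRFO rule27 \({\varvec{x}}_{i}(t+1) = {\varvec{x}}_{i}(t) + s(r_{2}{\varvec{x}}_{best} - r_{3}{\varvec{x}}_{i}(t))\). A minimal vectorized sketch:

```python
import numpy as np

def somersault_foraging(X, x_best, s=2.0):
    """Somersault step: each ray moves around the best-food pivot,
    with per-individual random weights r2, r3 in (0, 1)."""
    N, D = X.shape
    r2 = np.random.rand(N, 1)
    r3 = np.random.rand(N, 1)
    return X + s * (r2 * x_best - r3 * X)
```

As the population converges on \({\varvec{x}}_{best}\), the term \(r_{2}{\varvec{x}}_{best} - r_{3}{\varvec{x}}_{i}\) shrinks, so the somersault range adaptively decreases over iterations.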
Improved strategy for MRFO
Tent mapping
Owing to its strong randomness and ergodicity, chaos theory has been widely applied in the optimization processes of various algorithms36, as it covers the search space better than purely random initialization. Tent chaotic mapping37 is a chaotic mathematical model with uniform ergodicity, making the population more uniform and improving the quality of the initial solutions. It is mathematically expressed as follows:
The Tent map is in a chaotic state in the range \((0,1)\), but for \(\mu = 0.5\) it degenerates into a periodic sequence. To ensure the randomness and ergodicity of the Tent map, \(\mu \ne 0.5\) is taken. Figure 5 presents the distribution of the Tent mapping for \(\mu = 0.509\). In this paper, the steps for initializing the manta rays by Tent chaotic mapping are as follows:
Step 1 Set the manta ray population size \(N\), dimension \(D\), and maximum number of iterations \(k\); randomly generate the initial population value \(x(i,j)\), and generate \(\mu (j) \in rand\left( {0,1} \right)\) with \(\mu \ne 0.5\). The initial value of \(i,j,k\) is 1.
Step 2 Iterate according to formula (9), \(j \to j + 1\), \(k \to k + 1\), and generate the \(x_{k,j}\) sequence. The initial population sequence \(x_{i,j}\) is generated by iterating \(i \to i + 1\) according to Eq. (8).
Step 3 Check the iteration count. If the maximum \(k\) is reached, output the \(x\) sequence; otherwise, return to Step 2 and continue the iteration.
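The steps above can be sketched as follows, assuming the common piecewise Tent map \(x_{k+1} = x_{k}/\mu\) for \(x_{k} < \mu\) and \((1-x_{k})/(1-\mu)\) otherwise, with the chaotic sequence scaled into the search bounds; the function name and scaling are our illustrative choices.

```python
import numpy as np

def tent_init(N, D, lb, ub, mu=0.509):
    """Tent-chaotic initialization sketch: iterate the tent map per
    dimension and scale the chaotic values into [lb, ub]."""
    assert mu != 0.5, "mu = 0.5 degenerates into a periodic sequence"
    X = np.empty((N, D))
    x = np.random.rand(D)          # random seeds, one per dimension
    for i in range(N):
        # one tent-map step: x/mu below the peak, (1-x)/(1-mu) above it
        x = np.where(x < mu, x / mu, (1 - x) / (1 - mu))
        X[i] = lb + x * (ub - lb)  # map chaotic value into the search space
    return X
```

Compared with `np.random.rand`, the chaotic sequence visits the unit interval more evenly, which is the motivation given above for a better-distributed initial population.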
The bidirectional search strategy
We employ the bidirectional search strategy38 to enlarge the search scope and prevent the algorithm from searching along a fixed direction. This strategy is presented in Fig. 6 and is formulated as follows:
where \({\varvec{x}}_{best}^{d}\) and \({\varvec{x}}_{worst}^{d}\) are the current optimal and worst solutions, respectively. In the IMRFO algorithm, after the chain foraging stage ends, the fitness value of the \(t{\text{-th}}\) iteration is \(f\left( {{\varvec{x}}_{i}^{d} \left( t \right)} \right)\), the fitness value of the \(\left( {t + 1} \right)\)th iteration is \(f\left( {{\varvec{x}}_{i}^{d} \left( {t + 1} \right)} \right)\), and the optimal solution is \(f_{\min } = f({\varvec{x}}_{best}^{d} )\). Without the bidirectional search strategy, when \(f\left( {{\varvec{x}}_{i}^{d} \left( t \right)} \right) > f\left( {{\varvec{x}}_{i}^{d} \left( {t + 1} \right)} \right)\), the search proceeds only along the Search Direction (the direction of the arrows in Fig. 6a), and the algorithm settles on a local optimum at the \(\left( {t + 1} \right)\)th iteration. With the bidirectional search strategy, the algorithm searches along both directions, jumps out of the local optimum, and finds the optimal solution \(f({\varvec{x}}_{best}^{d} )\), as shown in Fig. 6b. Thus, the bidirectional search strategy effectively expands the search scope.
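The idea can be sketched as follows: probe a candidate along the improving direction (toward \({\varvec{x}}_{best}^{d}\)) and one in the opposite direction (away from \({\varvec{x}}_{worst}^{d}\)), then keep whichever has the better fitness. The exact IMRFO update formula is the equation above; the step coefficients and greedy selection here are our illustrative assumptions.

```python
import numpy as np

def bidirectional_search(x, x_best, x_worst, f):
    """Bidirectional probe sketch: try both search directions and keep
    the candidate with the lowest fitness (minimization assumed)."""
    r = np.random.rand(x.size)
    forward = x + r * (x_best - x)      # usual (improving) search direction
    backward = x - r * (x_worst - x)    # opposite direction
    candidates = [x, forward, backward]
    return min(candidates, key=f)       # greedy selection by fitness
```

Because the current position is kept among the candidates, the step never worsens the individual's fitness, while the backward probe gives it a chance to leave a local basin.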
Levy flight strategy
The Levy flight is related to chaos theory39 and has a wide range of applications in the measurement and simulation of random and pseudo-random natural phenomena40. The Levy flight is a random walk whose trajectory combines many short steps with occasional long jumps, and is essentially a non-Gaussian random process. In the somersault foraging stage of the IMRFO algorithm, the Levy flight is added to renew the population, thereby improving the diversity of solutions, increasing the search scope, and enhancing the optimization ability.
The Levy flight is formulated as follows:
where \({\varvec{x}}_{i}^{t}\) is the position at the \(t\)th iteration, \(\oplus\) indicates element-wise multiplication, \(\alpha\) is the step control parameter, and \(Levy\left( \lambda \right)\) is the random search path. The following conditions should be met:
The random search step size of the Levy Flight is:
where \(s\) is the flight search step size, \(\beta \in \left( {1,2} \right]\) (usually \(\beta = 1.5\)), and \(\mu\) and \(\upsilon\) follow normal distributions, with \(\mu \sim N(0,\sigma_{\mu }^{2} )\) and \(\upsilon \sim N(0,\sigma_{\upsilon }^{2} )\). The IMRFO pseudo code of the above model is presented in IMRFO pseudo code. Figure 7 depicts the path of the Levy flight, and Fig. 8 shows the IMRFO algorithm flowchart.
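The Levy step size above is commonly generated with Mantegna's algorithm, which is sketched below: \(s = \mu / |\upsilon|^{1/\beta}\) with \(\sigma_{\upsilon} = 1\) and \(\sigma_{\mu}\) given by the standard closed form.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(D, beta=1.5):
    """Levy-flight step via Mantegna's algorithm: s = mu / |v|^(1/beta),
    where mu ~ N(0, sigma_mu^2) and v ~ N(0, 1)."""
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2)
                / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = np.random.normal(0.0, sigma_mu, D)
    v = np.random.normal(0.0, 1.0, D)
    return mu / np.abs(v) ** (1 / beta)   # heavy-tailed: mostly small, rarely huge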
Exploitation and exploration analysis
The operators \(\alpha ,\beta ,r,s\) in the original MRFO allow the search agents to update their positions based on the locations of \({\varvec{x}}_{i}^{d} \left( t \right)\) and \({\varvec{x}}_{i - 1}^{d} \left( t \right)\). However, MRFO easily falls into local solutions. Therefore, we introduce Tent chaos mapping into the algorithm’s initialization phase to give the initial solutions better ergodicity, uniformity, and randomness in the search space. Furthermore, the bidirectional search strategy helps the algorithm search bidirectionally and thus enlarges the search scope. These two strategies strengthen IMRFO’s exploitation ability when \(t/T < 0.5\). In IMRFO’s somersault foraging stage, the search agents have reached the highest concentration of food, and the Levy flight strategy exploits the randomness of the search step size to escape from local optima. Moreover, the location update strategy \(\left| {{\varvec{Levy}} \cdot {\varvec{x}}_{i}^{d} \left( {t + 1} \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right|\) enhances IMRFO’s exploration ability.
Experimental results and discussion
This section evaluates the performance of IMRFO against three classes of widely recognized algorithms: (1) classical algorithms, namely PSO14, GWO18, and GA41; (2) advanced algorithms, such as SO42, SMA21, BWO43, SFO44, and Chimp45; and (3) a new meta-heuristic algorithm, namely AOA46.
To increase the experiment's credibility, we tested IMRFO and 10 other state-of-the-art algorithms under the same test environment on 23 benchmark functions and the CEC 2017 and CEC 2022 benchmark suites. Additionally, we tested three variants of IMRFO against MRFO on the 23 benchmark functions to analyze the contribution of each modification. On the 23 benchmark functions, F1–F13 use Dim = 30, and F14–F23 have fixed dimensions. For the CEC 2017 and CEC 2022 benchmark suites, all functions use Dim = 30. All algorithms were run 30 times, and the best (Best), average (Mean), and standard deviation (Std) values were recorded. The best results and standard deviation values are highlighted in bold.
The last three lines in Tables 1, 2, 3, 4 and 5 provide symbol analysis and Friedman mean analysis to obtain meaningful statistical results. The symbol analysis (W|T|L) counts the number of test functions on which each algorithm wins, ties, and loses. The Friedman mean analysis includes Friedman's mean value and the resulting ranking, which indicate the comprehensive ability of each algorithm across the test functions.
Comparisons and results
Comparison with benchmark functions
The unimodal functions F1–F7 can be used to test local search ability. Table 1 highlights that the IMRFO algorithm obtained 5 of the best results on F1–F7, while the MRFO algorithm obtained 2, showing an obvious advantage over MRFO and the other competitors. The results indicate that the chaotic mapping strategy in the initialization phase and the bidirectional search strategy after the chain foraging phase effectively enhance the optimization ability of IMRFO.
The F8–F23 functions have many local optima, making the global optimum more difficult to find. According to Table 1, IMRFO finds 14 of the best results on the F8–F23 benchmark functions and MRFO obtains 8, while SMA, the best among the other competitors, obtains 7 best results and 11 global optimal values. The results indicate that the bidirectional search strategy after the chain foraging phase and the Levy flight in the somersault foraging phase effectively enhance the ability of IMRFO to jump out of local optima.
The (W|T|L) result of IMRFO is (20|3|0), the best among all algorithms; the second best is the SO algorithm, and the third best is MRFO. IMRFO’s Friedman mean value is 1.30, ranking first. The iterative convergence behaviors on all 23 functions are depicted in Fig. 9, which reveals that IMRFO performs well on most functions.
Figure 10 presents the convergence behaviors on the 23 benchmark functions to demonstrate the convergence of IMRFO. The first column presents the two-dimensional shape of each benchmark function. The second column presents the convergence curve, which is approximately linear or stepped; a stepped convergence curve shows that, for some benchmark functions, the algorithm must jump out of local optima to reach the global value, and IMRFO demonstrates convergence in both cases. The third column presents the search agent’s trajectory in the first dimension: the search agent fluctuates in the early iterations and stabilizes once it converges, demonstrating that IMRFO performs well. The fourth column shows the average fitness value throughout the iterations, which is initially large and unstable but becomes small and steady by around iteration \(T = 150\), indicating the quick convergence of IMRFO. In the last column, the red point marks the best solution, and the search agents gradually approach it. Figure 10 demonstrates IMRFO’s good performance.
Additionally, a box plot analysis is used to examine the distribution characteristics of the results of all 11 algorithms on the above functions. Figure 11 reveals that IMRFO performs well on most functions compared to the competitor methods. The median, maximum, and minimum values of the objective functions obtained by IMRFO are almost the same as the optimal solutions, especially for functions F3, F9, F10, F11, and F14–F23.
Comparison with enhanced versions of different algorithms
For better credibility of the proposed IMRFO, we compared it with enhanced versions of different algorithms on the classic benchmark functions. Table 2 shows that IMRFO has the best performance.
Comparison with CEC 2017 benchmark suite
Table 3 reports the results of the 11 algorithms on the CEC 2017 benchmark suite, further evaluating the performance of IMRFO. Specifically, Table 3 highlights that IMRFO achieves the most wins without any losses, obtaining 15 best results and 20 global optimal values on the F1–F29 CEC 2017 functions. The Friedman mean ranking value of IMRFO is 1.79, ranking first. Figure 12 presents the iterative convergence curves on all 29 functions, revealing that IMRFO performs well on most of them. The effectiveness and superiority of IMRFO are thus confirmed.
Moreover, a box plot analysis is conducted to study the distribution characteristics of the CEC 2017 results of IMRFO and the other competitors. Figure 13 reveals that the proposed IMRFO performs well on most functions compared to its competitors. The values of the objective functions obtained by IMRFO are almost identical to the optimal solutions, especially for functions F2, F3, F4, F11, F14 and F19.
Comparison with CEC 2022 benchmark suite
The results of the 11 algorithms on the CEC 2022 benchmark suite are listed in Table 4, indicating that IMRFO achieves the most wins without any losses, obtaining 7 global optimal values. The Friedman mean ranking value is 2.33, ranking first. The iterative convergence curves on all 12 functions are depicted in Fig. 14, which shows that IMRFO performs well on most functions. The effectiveness and superiority of IMRFO are thus confirmed.
Moreover, a box plot analysis is conducted, with the corresponding results illustrated in Fig. 15. The valid values of the objective functions obtained by IMRFO are almost the same as the optimal solutions, especially for functions F1 and F3.
Impact analysis of the modifications
Next, to verify the effectiveness of the proposed strategies in the IMRFO algorithm, we compared IMRFO with MRFO and three other modifications on the 23 benchmark functions: IMRFO1 (using the reverse search strategy and Tent chaos mapping), IMRFO2 (using the bidirectional search strategy), IMRFO3 (using the bidirectional search strategy and the Levy flight strategy), and IMRFO (using Tent chaos mapping, the bidirectional search strategy, and the Levy flight strategy simultaneously).
According to Table 5, IMRFO finds 20 best results and 21 global optimal values on the F1–F23 benchmark functions, and MRFO obtains 10 best results and 11 global optimal values. Among the other modifications, IMRFO1 obtains 11 best results, IMRFO2 obtains 10, and IMRFO3 finds 11 best results and 12 global optimal values. The results reveal that IMRFO, which uses Tent chaos mapping, the bidirectional search strategy, and the Levy flight improvements simultaneously, has clear advantages over MRFO and the other modifications, achieving a good balance between global search and accelerated convergence.
According to Table 5, the (W|T|L) result of IMRFO is (20|3|1), which is the best result. The Friedman mean ranking value of IMRFO is 1.22, ranking first; IMRFO3 scores 2.09, ranking second; IMRFO1 scores 2.17, ranking third; and IMRFO2 and MRFO score 2.52 and 2.57, ranking fourth and last, respectively. To further compare the convergence of the variants during optimization, we draw the iterative convergence curves on all 23 functions, as depicted in Fig. 16, which suggests that IMRFO performs well on most functions among the modifications.
Wilcoxon rank sum test
Wilcoxon’s rank sum test evaluates the difference between IMRFO and the competitor methods56. The significance level is set at 0.05, and Table 6 reports significant differences between the proposed IMRFO and the other algorithms on most functions. The results are 116/14/0, 79/20/1, 280/3/7, and 117/1/2.
As all the results indicate, the proposed IMRFO differs significantly from the other 10 algorithms. Combined with the above tables and figures, the proposed IMRFO is superior to the competitor algorithms, especially for global optimization problems.
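A per-function significance check of this kind can be sketched with SciPy's rank-sum test; the helper name and the synthetic run data below are ours, for illustration only.

```python
import numpy as np
from scipy.stats import ranksums

def significant(runs_a, runs_b, alpha=0.05):
    """Wilcoxon rank-sum test between two algorithms' independent runs;
    True when the difference is significant at level alpha."""
    _, p = ranksums(runs_a, runs_b)
    return bool(p < alpha)

# e.g. 30 best-fitness values per algorithm (synthetic data for illustration)
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1e-3, 30)   # hypothetical algorithm A results
b = rng.normal(1.0, 1e-3, 30)   # hypothetical algorithm B results
print(significant(a, b))        # well-separated samples are flagged significant
```

Running this test once per benchmark function and tallying the outcomes yields win/tie/loss counts like those reported in Table 6.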
Detailed analysis of the experimental results
Among these experimental results, the unimodal functions of the benchmark set, CEC2017, and CEC2022 can be used to test local search ability. The IMRFO algorithm obtains the most optimal results, indicating that the chaotic mapping strategy in the initialization phase and the bidirectional search strategy after the chain foraging phase effectively enhance the optimization ability of IMRFO.
The other benchmark, CEC2017, and CEC2022 functions have several local optima, making the global optimum more difficult to find. IMRFO finds the most optimal results, indicating that the bidirectional search strategy after the chain foraging phase and the Levy flight in the somersault foraging phase effectively enhance IMRFO’s ability to jump out of local optima. Tables 1, 2, 3, 4, 5 and 6 illustrate that the three strategies effectively improve IMRFO’s search ability.
IMRFO for engineering problems
This section employs IMRFO to solve 5 engineering problems, comparing it with the 10 aforementioned algorithms and other recent algorithms. The 5 engineering problems aim to find the minimum objective value under certain constraints. Each problem is solved with \(T = 500\) and a population of \(N = 50\). Table 7 presents the experimental results of the 11 algorithms on the engineering problems. IMRFO finds the optimal solution in all five engineering problems; compared with the other algorithms, its advantage is very obvious. The iterative convergence curves of IMRFO and the competitors are presented in Fig. 17.
Engineering problem 1: Tension/compression spring design problem (TCSD)
TCSD involves finding the minimum weight of a tension/compression spring, as depicted in Fig. 18a. The mathematical description of TCSD is presented below. Table 8 presents the optimal results on the TCSD problem compared with 10 new optimization algorithms, namely MVO57, EEGWO58, CSA59, GSA60, SO42, OBLGOA61, SMONM62, BGWO63, MGPEA64, and TTAO65. The result is \(\overrightarrow {x} = \left[ {x_{1} \, x_{2} \, x_{3} } \right] = \left[ {d \, D \, N} \right] = \left[ {0.05200,0.36500,10.82120} \right]\), and the minimum weight is \(f\left( {\vec{\user2{x}}} \right) = 0.012665\).
Consider: \(\overrightarrow {x} = \left[ {x_{1} \, x_{2} \, x_{3} } \right] = \left[ {d \, D \, N} \right]\) Minimize: \(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = \left( {x_{3} + 2} \right)x_{2} x_{1}^{2}\).
Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }} \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566\left( {x_{2} x_{1}^{3} - x_{1}^{4} } \right)}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = 1 - \frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }} \le 0\), \(g_{4} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0\).
Parameters range: \(0 \le x_{1} \le 2\), \(0.25 \le x_{2} \le 1.3\), \(2 \le x_{3} \le 15\).
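To hand this constrained problem to a population-based optimizer, a common approach is a static penalty on violated constraints. The sketch below uses the standard TCSD constraint set; the penalty weight is our illustrative choice.

```python
def tcsd_penalized(x, penalty=1e6):
    """TCSD objective with a static penalty for constraint violations,
    so an unconstrained optimizer such as IMRFO can handle it."""
    d, D, N = x   # wire diameter, coil diameter, number of active coils
    f = (N + 2) * D * d ** 2
    g = [
        1 - D ** 3 * N / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
            + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1,
    ]
    # quadratic penalty on each violated constraint (g_i > 0)
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

At a feasible point the penalty term vanishes and the function returns the spring weight \((x_3+2)x_2x_1^2\) unchanged.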
Engineering problem 2: Pressure vessel design problem (PVD)
The PVD problem66 considers minimizing the manufacturing cost of pressure vessels (Fig. 18b). Table 9 shows the optimal results on the PVD problem compared with 8 new optimization algorithms, namely GA267, CPSO68, MPA22, MRFO27, AO46, SFS69, AAA70, and iDEaSm71. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {T_{s} \, T_{h} \, R \, L} \right] = \left[ {0.7787 \, 0.3854 \, 40.3410 \, 199.9296} \right]\), and the minimum manufacturing cost is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 5885.5000\).
Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {T_{s} \, T_{h} \, R \, L} \right]\).
Minimize: \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 0.6224x_{1} x_{3} x_{4} + 1.7781x_{2} x_{3}^{2} + 3.1661x_{1}^{2} x_{4} + 19.84x_{1}^{2} x_{3}\).
Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = - x_{1} + 0.0193x_{3} \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = - x_{2} + 0.00954x_{3} \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = - \pi x_{3}^{2} x_{4} - \frac{4}{3}\pi x_{3}^{3} + 1296000 \le 0\), \(g_{4} \left( {\overrightarrow {{\varvec{x}}} } \right) = x_{4} - 240 \le 0\).
Parameters range: \(0 \le x_{1} ,x_{2} \le 99\), \(0 \le x_{3} ,x_{4} \le 200\).
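The PVD formulation above can be penalized in the same way as the spring problem; this sketch uses the standard PVD constraint set, with an illustrative penalty weight.

```python
import math

def pvd_penalized(x, penalty=1e6):
    """Pressure-vessel cost with a static penalty for constraint violations."""
    Ts, Th, R, L = x   # shell thickness, head thickness, radius, length
    f = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
         + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = [
        -Ts + 0.0193 * R,                                       # shell thickness
        -Th + 0.00954 * R,                                      # head thickness
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3
            + 1296000,                                          # volume
        L - 240,                                                # length limit
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

Evaluated at the (rounded) solution reported above, the point is feasible and the cost lands near the reported value.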
Engineering problem 3: Three-bar truss design problem (TBTD)
The three-bar truss design problem (TBTD) is to minimize the total weight of the structure (Fig. 18c). Table 10 shows the optimal results of the TBTD problem compared with 9 new optimization algorithms, namely DSS-MDE72, HEA-ACT73, DELC74, MDPEA64, WSA75, SSA76, GOA24, TTAO65, and MRFO27. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} } \right] = \left[ {0.788613317782741 \, 0.408423616744838} \right]\), and the minimum total weight of the structure is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 263.895833882434\).
Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} } \right]\), the cross-sectional areas of the bars. Minimize:\(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = \left( {2\sqrt 2 x_{1} + x_{2} } \right) \cdot l\).
Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{\sqrt 2 x_{1} + x_{2} }}{{\left( {\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} } \right)}}P - \sigma \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{x_{2} }}{{\left( {\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} } \right)}}P - \sigma \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{1}{{\left( {\sqrt 2 x_{2} + x_{1} } \right)}}P - \sigma \le 0\). Parameters range: \(0 \le x_{1} ,x_{2} \le 1\).
Parameters \(l = 100\;{\text{cm}},P = \sigma = 2\;{\text{kN}}/\left( {{\text{cm}}^{2} } \right)\).
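With the parameters above, the TBTD objective and stress constraints can be evaluated directly; the penalty handling below is our illustrative choice.

```python
from math import sqrt

def tbtd_penalized(x, l=100.0, P=2.0, sigma=2.0, penalty=1e6):
    """Three-bar truss weight with a static penalty on the stress constraints."""
    x1, x2 = x
    f = (2 * sqrt(2) * x1 + x2) * l
    g = [
        (sqrt(2) * x1 + x2) / (sqrt(2) * x1 ** 2 + 2 * x1 * x2) * P - sigma,
        x2 / (sqrt(2) * x1 ** 2 + 2 * x1 * x2) * P - sigma,
        1 / (sqrt(2) * x2 + x1) * P - sigma,
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

At a strictly feasible point such as \((0.8, 0.42)\), all three stress constraints are satisfied and the function returns the plain weight \((2\sqrt 2 x_1 + x_2)l\).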
Engineering problem 4: Welded beam design problem (WBD)
The welded beam design problem (WBD) is to minimize the fabrication cost of a welded beam (Fig. 19). Table 11 reports the optimal results of the WBD problem compared with 11 new optimization algorithms, namely EO77, LFD78, SHO79, HGSO80, AOS81, CDE82, OMGSCA83, RO84, PFA85, MRFO27, and SO42. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {h \, l \, t \, b} \right] = \left[ {0.20572964 \, 3.23491935 \, 9.03662391 \, 0.20572964} \right]\), and the minimum fabrication cost is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 1.6927682655\).
Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} {\text{ x}}_{{4}} } \right] = \left[ {h \, l \, t \, b} \right]\). Minimize: \(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4} \left( {14 + x_{2} } \right)\).
Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = \tau \left( {\overrightarrow {{\varvec{x}}} } \right) - \tau_{\max } \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = \sigma \left( {\overrightarrow {{\varvec{x}}} } \right) - \sigma_{\max } \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = \delta \left( {\overrightarrow {{\varvec{x}}} } \right) - \delta_{\max } \le 0\),\(g_{4} \left( {\overrightarrow {{\varvec{x}}} } \right) = x_{1} - x_{4} \le 0\), \(g_{5} \left( {\overrightarrow {{\varvec{x}}} } \right) = P - P_{c} \left( {\overrightarrow {{\varvec{x}}} } \right) \le 0\), \(g_{6} \left( {\overrightarrow {{\varvec{x}}} } \right) = 0.125 - x_{1} \le 0\), \(g_{7} \left( {\overrightarrow {{\varvec{x}}} } \right) = 1.10471x_{1}^{2} + 0.04811x_{3} x_{4} \left( {14 + x_{2} } \right) - 5 \le 0\). Parameters range: \(0.1 \le x_{1} ,x_{4} \le 2,0.1 \le x_{2} ,x_{3} \le 10\).
Parameters: \(\tau(\vec{x}) = \sqrt{(\tau')^2 + \tau'\tau''\frac{x_2}{R} + (\tau'')^2}\), \(\tau' = \frac{P}{\sqrt{2}\,x_1 x_2}\), \(\tau'' = \frac{MR}{J}\), \(M = P\left(L + \frac{x_2}{2}\right)\), \(R = \sqrt{\frac{x_2^2 + (x_1 + x_3)^2}{4}}\), \(J = \sqrt{2}\,x_1 x_2\left(\frac{x_2^2}{6} + \frac{(x_1 + x_3)^2}{2}\right)\), \(\sigma(\vec{x}) = \frac{6PL}{x_4 x_3^2}\), \(\delta(\vec{x}) = \frac{4PL^3}{E x_4 x_3^3}\), \(P_c(\vec{x}) = \frac{4.013E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)\), \(P = 6000\;\text{lb}\), \(L = 14\;\text{in}\), \(\delta_{\max} = 0.25\;\text{in}\), \(E = 3 \times 10^7\;\text{psi}\), \(G = 12 \times 10^6\;\text{psi}\), \(\tau_{\max} = 13600\;\text{psi}\), \(\sigma_{\max} = 30000\;\text{psi}\).
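As a concrete check of the formulation above, the objective and constraints can be transcribed directly into code. The sketch below is an illustrative Python version (function names are ours, not the authors'); the load is taken as the standard \(P = 6000\;\text{lb}\) of the classical WBD formulation.

```python
import math

# Illustrative transcription of the WBD objective and constraints;
# decision vector x = [h, l, t, b].
P, L, E, G = 6000.0, 14.0, 3.0e7, 1.2e7
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbd_cost(x):
    """Fabrication cost f(x) = 1.10471*h^2*l + 0.04811*t*b*(14 + l)."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14 + l)

def wbd_constraints(x):
    """Return [g1..g7]; a design is feasible when every g_i <= 0."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2.0) * h * l)                      # tau'
    M = P * (L + l / 2.0)                                     # bending moment
    R = math.sqrt((l**2 + (h + t) ** 2) / 4.0)
    J = math.sqrt(2.0) * h * l * (l**2 / 6.0 + (h + t) ** 2 / 2.0)
    tau_pp = M * R / J                                        # tau''
    tau = math.sqrt(tau_p**2 + tau_p * tau_pp * l / R + tau_pp**2)
    sigma = 6.0 * P * L / (b * t**2)                          # bending stress
    delta = 4.0 * P * L**3 / (E * b * t**3)                   # deflection
    p_c = (4.013 * E * math.sqrt(t**2 * b**6 / 36.0) / L**2
           * (1.0 - t / (2.0 * L) * math.sqrt(E / (4.0 * G))))  # buckling load
    return [tau - TAU_MAX, sigma - SIGMA_MAX, delta - DELTA_MAX,
            h - b, P - p_c, 0.125 - h,
            1.10471 * h**2 + 0.04811 * t * b * (14 + l) - 5.0]
```

Evaluating `wbd_cost` at the solution reported above reproduces the stated minimum cost of about 1.6928.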
Engineering problem 5: Gear train design problem (GTD)
The gear train design problem (GTD) is to minimize the squared deviation of the gear train's transmission ratio from a required ratio (Fig. 20). Table 12 shows the optimal results of the GTD problem compared with 9 new optimization algorithms, namely, MFO86, ABC87, MBA87, CBO88, NNA87, SCA89, SO42, AOA90, MRFO27. The result is \(\vec{x} = \left[ x_1 \; x_2 \; x_3 \; x_4 \right] = \left[ N_a \; N_b \; N_d \; N_f \right] = \left[ 42.91783 \; 18.58480 \; 16.30035 \; 49.23052 \right]\), and the minimum objective value is \(f(\vec{x}) = 2.7009 \times 10^{-12}\).
Consider: \(\vec{x} = \left[ x_1 \; x_2 \; x_3 \; x_4 \right] = \left[ N_a \; N_b \; N_d \; N_f \right]\). Minimize: \(f(\vec{x}) = \left(\frac{1}{6.931} - \frac{x_2 x_3}{x_1 x_4}\right)^2\). Parameter ranges: \(12 \le x_i \le 60,\; i = 1, 2, 3, 4\).
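Because the GTD objective is a single closed-form expression, it is easy to verify numerically. The sketch below (our own code, not the authors') evaluates it at the classic all-integer tooth counts, which attain the minimum reported for this problem in the literature.

```python
def gtd_error(x):
    # Gear train design objective: squared difference between the required
    # ratio 1/6.931 and the ratio realized by the tooth counts
    # x = [Na, Nb, Nd, Nf].
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2

# The classic integer solution attains the reported minimum of ~2.7e-12.
print(gtd_error([49, 16, 19, 43]))
```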
The results in Tables 8, 9, 10, 11 and 12 highlight that IMRFO is generally better than MRFO and the other optimizers: it improves global search ability and solution accuracy, and solves engineering design problems effectively.
Conclusion
This paper addresses the defects of MRFO, namely low solution precision and a tendency to become trapped in local optima, by proposing the IMRFO algorithm, which extends MRFO with Tent chaotic mapping, a bidirectional search strategy, and the Levy flight strategy. The Tent chaotic mapping strategy in the algorithm's initialization phase distributes the manta rays more uniformly and improves the quality of the initial solution. After the cyclone foraging phase, the bidirectional search strategy lets IMRFO search in both directions, expanding the search area and preventing the algorithm from being trapped in a local optimum. During the somersault foraging stage, the Levy flight strategy uses a random step size, strengthening the algorithm's ability to escape from local optima.
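Two of the three strategies have standard textbook forms that can be sketched compactly. The snippet below is a minimal illustration under assumed parameter values (e.g., a tent-map breakpoint of 0.7), not the authors' implementation; the bidirectional search update is specific to IMRFO and is omitted here.

```python
import numpy as np
from math import gamma, sin, pi

def tent_init(n_agents, dim, lb, ub, beta=0.7):
    """Tent chaotic initialization sketch: pass uniform seeds through one
    tent-map iteration (a simplification of the iterated chaotic sequence)
    so the initial population spreads over [lb, ub]."""
    z = np.random.rand(n_agents, dim)
    z = np.where(z < beta, z / beta, (1.0 - z) / (1.0 - beta))
    return lb + z * (ub - lb)

def levy_step(dim, lam=1.5):
    """Levy-flight step sizes via Mantegna's algorithm: mostly small moves
    with occasional long jumps, which help escape a local optimum."""
    sigma = (gamma(1.0 + lam) * sin(pi * lam / 2.0)
             / (gamma((1.0 + lam) / 2.0) * lam
                * 2.0 ** ((lam - 1.0) / 2.0))) ** (1.0 / lam)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1.0 / lam)
```

In a somersault-foraging update, `levy_step` would scale the displacement of each manta ray; the heavy-tailed step distribution is what provides the escape mechanism described above.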
To verify IMRFO's performance, it is evaluated on 23 benchmark functions and the CEC2017 and CEC2022 benchmark suites. The corresponding results highlight that IMRFO achieves higher solution precision and a stronger ability to avoid local optima than its competitors. Secondly, we compared IMRFO with MRFO and three IMRFO variants on the 23 benchmark functions to test the effectiveness of the proposed strategies. The results indicate that introducing the three strategies into MRFO simultaneously improves the algorithm's capability more than one or two strategies alone. Thirdly, we use statistical analyses such as Friedman mean ranking and the Wilcoxon rank-sum test to increase the credibility of the results, which further confirm the proposed IMRFO's superior performance. Moreover, IMRFO and the other algorithms are applied to 5 engineering design problems; the results demonstrate the competitiveness and applicability of IMRFO compared to other advanced algorithms (Supplementary material).
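The Friedman mean ranking mentioned above can be reproduced mechanically: rank the algorithms on each benchmark function (1 = best), then average each algorithm's ranks. A small sketch with made-up fitness values (ties ignored for brevity):

```python
import numpy as np

# Rows: benchmark functions; columns: algorithms (hypothetical best-fitness
# values for, say, IMRFO and two competitors). Lower fitness is better.
results = np.array([
    [1.2e-5, 3.4e-3, 2.2e-4],
    [0.0,    1.1e-2, 5.0e-3],
    [2.3e-1, 2.5e-1, 2.4e-1],
])
# Double argsort yields within-row ranks (1 = best); valid when no row ties.
ranks = results.argsort(axis=1).argsort(axis=1) + 1
mean_ranks = ranks.mean(axis=0)
print(mean_ranks)  # lower mean rank = better overall
```

For the pairwise significance test, SciPy's `scipy.stats.ranksums` implements the Wilcoxon rank-sum test on two algorithms' per-run results.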
Although IMRFO has been proven competitive, it still underperforms in some areas, specifically on some hybrid and composition functions. Thus, future research will consider adding other strategies to improve the algorithm. Moreover, IMRFO will be applied to several real-world problems, such as logistics distribution route planning, laser cutting path planning, and 3D printing layout problems.
Data availability
All data generated or analyzed during this study are included in this published article and its supplementary information files.
References
Abdollahzadeh, B., Soleimanian Gharehchopogh, F. & Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 36(10), 5887–5958 (2021).
Fu, S. et al. Improved dwarf mongoose optimization algorithm using novel nonlinear control and exploration strategies. Expert Syst. Appl. 233, 120904 (2023).
Yang, X. Swarm intelligence-based algorithms: A critical analysis. Evol. Intell. 7(1), 17–28 (2014).
Wu, X. et al. Global and local moth-flame optimization algorithm for UAV formation path planning under multi-constraints. Int. J. Control Autom. Syst. 21(3), 1032–1047 (2023).
Li, X. et al. A partition-based convergence framework for population-based optimization algorithms. Inf. Sci. 627, 169–188 (2023).
Samieiyan, B. et al. Novel optimized crow search algorithm for feature selection. Expert Syst. Appl. 204, 117486 (2022).
Wang, H. et al. Semisupervised bacterial heuristic feature selection algorithm for high-dimensional classification with missing labels. Int. J. Intell. Syst. 2023, 1–20 (2023).
Rahab, H., Haouassi, H. & Laouid, A. Rule-based Arabic sentiment analysis using binary equilibrium optimization algorithm. Arab. J. Sci. Eng. 48(2), 2359–2374 (2023).
Al-Deen, M. S. et al. Study on sentiment classification strategies based on the fuzzy logic with crow search algorithm. Soft Comput. 26(22), 12611–12622 (2022).
Tripathy, A., Anand, A. & Kadyan, V. Sentiment classification of movie reviews using GA and NeuroGA. Multimed. Tools Appl. 82(6), 7991–8011 (2023).
Prakash, N., Vaikundaselvan, B. & Sivaraju, S. S. Short-term load forecasting for smart power systems using swarm intelligence algorithm. J. Circuits Syst. Comput. 31(11), 2250189 (2022).
Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997).
Holland, J. H. Genetic algorithms. Sci. Am. 267(1), 66–73 (1992).
Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, 1942–1948 (1995).
Passino, K. M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 22(3), 52–67 (2002).
Krishnanand, K. N. & Ghose, D. Glowworm swarm optimization: A new method for optimizing multi-modal functions. Int. J. Comput. Intell. Stud. 1(1), 93–119 (2009).
Yang, X. & Deb, S. Cuckoo search via Lévy flights. In World Congress on Nature & Biologically Inspired Computing (NaBIC), 210–214 (2009).
Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
Duan, H. & Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 7(1), 2–37 (2014).
Li, S. et al. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 111, 300–323 (2020).
Faramarzi, A. et al. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377 (2020).
Alsattar, H. A., Zaidan, A. A. & Zaidan, B. B. Novel meta-heuristic bald eagle search optimization algorithm. Artif. Intell. Rev. 53(3), 2237–2264 (2020).
Saremi, S., Mirjalili, S. & Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 105, 30–47 (2017).
Zervoudakis, K. & Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 145, 106559 (2020).
Tang, A., Han, T., Zhou, H. et al. An improved equilibrium optimizer with application in unmanned aerial vehicle path planning. Sensors 21(5), 1814 (2021).
Zhao, W., Zhang, Z. & Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 87, 103300 (2020).
Zhang, X. et al. Manta ray foraging optimization algorithm with mathematical spiral foraging strategies for solving economic load dispatching problems in power systems. Alex. Eng. J. 70, 613–640 (2023).
Houssein, E. H., Emam, M. M. & Ali, A. A. Improved manta ray foraging optimization for multi-level thresholding using COVID-19 CT images. Neural Comput. Appl. 33(24), 16899–16919 (2021).
Feng, J. et al. Minimization of energy consumption by building shape optimization using an improved Manta-Ray Foraging Optimization algorithm. Energy Rep. 7, 1068–1078 (2021).
Hemeida, M. G. et al. Optimal allocation of distributed generators DG based Manta Ray Foraging Optimization algorithm (MRFO). Ain Shams Eng. J. 12(1), 609–619 (2021).
Li, X. et al. Time-series production forecasting method based on the integration of Bidirectional Gated Recurrent Unit (Bi-GRU) network and Sparrow Search Algorithm (SSA). J. Pet. Sci. Eng. 208, 109309 (2022).
Kulshrestha, A., Krishnaswamy, V. & Sharma, M. Bayesian BILSTM approach for tourism demand forecasting. Ann. Tour. Res. 83, 102925 (2020).
Neelakandan, S. et al. An automated word embedding with parameter tuned model for web crawling. Intell. Autom. Soft Comput. 32(3), 1617–1632 (2022).
Li, S. et al. Short-term electrical load forecasting using hybrid model of manta ray foraging optimization and support vector regression. J. Clean. Prod. 388, 135856 (2023).
Andi, T. et al. Chaotic multi-leader whale optimization algorithm. Beijing Hangkong Hangtian Daxue Xuebao 47(7), 1481–1494 (2021).
Chen, L., Song, N. & Ma, Y. Harris hawks optimization based on global cross-variation and tent mapping. J. Supercomput. 79(5), 5576–5614 (2023).
Holte, R. C. et al. MM: A bidirectional search algorithm that is guaranteed to meet in the middle. Artif. Intell. 252, 232–266 (2017).
Wang, Z. et al. A novel particle swarm optimization algorithm with Lévy flight and orthogonal learning. Swarm Evol. Comput. 75, 101207 (2022).
Heidari, A. A. & Pahlavani, P. An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl. Soft Comput. 60, 115–134 (2017).
Holland, J. H. Adaptation in Natural and Artificial Systems (University of Michigan Press, 1975).
Hashim, F. A. & Hussien, A. G. Snake optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 242, 108320 (2022).
Zhong, C., Li, G. & Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 251(5), 109215 (2022).
Shadravan, S., Naji, H. R. & Bardsiri, V. K. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 80, 20–34 (2019).
Khishe, M. & Mosavi, M. R. Chimp optimization algorithm. Expert Syst. Appl. 149, 113338 (2020).
Abualigah, L. et al. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021).
Minh, H. et al. A variable velocity strategy particle swarm optimization algorithm (VVS-PSO) for damage assessment in structures. Eng. Comput.-Germany 39(2), 1055–1084 (2023).
Tan, W. & Mohamad-Saleh, J. A hybrid whale optimization algorithm based on equilibrium concept. Alex. Eng. J. 68, 763–786 (2023).
Wei, H. et al. The Strategic Random Search (SRS)—A new global optimizer for calibrating hydrological models. Environ. Modell. Softw. 172, 105914 (2024).
Hu, G. et al. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Method Appl. Mech. Eng. 394, 114901 (2022).
Zhang, X. et al. Gaussian mutational chaotic fruit fly-built optimization and feature selection. Expert Syst. Appl. 141, 112976 (2020).
Dhawale, D., Kamboj, V. K. & Anand, P. An effective solution to numerical and multi-disciplinary design optimization problems using chaotic slime mold algorithm. Eng. Comput.-Germany 38(S4), 2739–2777 (2022).
Wang, S., Rao, H., Wen, C. et al. Improved remora optimization algorithm with mutualistic strategy for solving constrained engineering optimization problems. Processes 10(12), 2606 (2022).
Hu, G. et al. Hybrid chimp optimization algorithm for degree reduction of ball Said-Ball curves. Artif. Intell. Rev. 56(9), 10465–10555 (2023).
Tang, A. et al. A modified manta ray foraging optimization for global optimization problems. IEEE Access 9, 128702–128721 (2021).
LaTorre, A., Molina, D., Osaba, E. et al. Fairness in bio-inspired optimization research: A prescription of methodological guidelines for comparing meta-heuristics. Preprint (2020).
Mirjalili, S., Mirjalili, S. M. & Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 27(2), 495–513 (2016).
Long, W. et al. An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization. Eng. Appl. Artif. Intell. 68, 63–80 (2018).
Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 169, 1–12 (2016).
Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009).
Ewees, A. A., Abd Elaziz, M. & Houssein, E. H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 112, 156–172 (2018).
Singh, P. R., Elaziz, M. A. & Xiong, S. Modified Spider Monkey Optimization based on Nelder–Mead method for global optimization. Expert Syst. Appl. 110, 264–289 (2018).
Fan, Q. et al. Beetle antenna strategy based grey wolf optimization. Expert Syst. Appl. 165, 113882 (2021).
Xu, X. et al. Multivariable grey prediction evolution algorithm: A new metaheuristic. Appl. Soft Comput. 89, 106086 (2020).
Zhao, S. et al. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst. Appl. 238, 121744 (2024).
Kannan, B. K. & Kramer, S. N. An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 116(2), 405–411 (1994).
Coello, C. A. & Mezura, M. E. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 16(3), 193–203 (2002).
He, Q. & Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 20(1), 89–99 (2007).
Salimi, H. Stochastic fractal search: A powerful metaheuristic algorithm. Knowl.-Based Syst. 75, 1–18 (2015).
Uymaz, S. A., Tezel, G. & Yel, E. Artificial algae algorithm (AAA) for nonlinear global optimization. Appl. Soft Comput. 31, 153–171 (2015).
Awad, N. H., Ali, M. Z., Mallipeddi, R. et al. An improved differential evolution algorithm using efficient adapted surrogate model for numerical optimization. Inf. Sci. 451–452, 326–347 (2018).
Zhang, M., Luo, W. & Wang, X. Differential evolution with dynamic stochastic selection for constrained optimization. Inf. Sci. 178(15), 3043–3074 (2008).
Wang, Y. et al. Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Struct. Multidiscip. Optim. 37(4), 395–413 (2009).
Wang, L. & Li, L. An effective differential evolution with level comparison for constrained engineering design. Struct. Multidiscip. Optim. 41(6), 947–963 (2010).
Kaveh, A. & Dadras, E. A. Water strider algorithm: A new metaheuristic and applications. Structures 25, 520–541 (2020).
Mirjalili, S. et al. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017).
Hashim, F. A. et al. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 51(3), 1531–1551 (2021).
Houssein, E. H. et al. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 94, 103731 (2020).
Dhiman, G. & Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70 (2017).
Hashim, F. A. et al. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 101, 646–667 (2019).
Azizi, M. Atomic orbital search: A novel metaheuristic algorithm. Appl. Math. Model. 93, 657–683 (2021).
Huang, F., Wang, L. & He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 186(1), 340–356 (2007).
Chen, H. et al. Advanced orthogonal learning-driven multi-swarm sine cosine optimization: Framework and case studies. Expert Syst. Appl. 144, 113113 (2020).
Kaveh, A. & Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 112–113, 283–294 (2012).
Yapici, H. & Cetinkaya, N. A new meta-heuristic optimizer: Pathfinder algorithm. Appl. Soft Comput. 78, 545–568 (2019).
Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 89, 228–249 (2015).
Sadollah, A., Sayyaadi, H. & Yadav, A. A dynamic metaheuristic optimization model inspired by biological nervous systems: Neural network algorithm. Appl. Soft. Comput. 71, 747–782 (2018).
Kaveh, A. & Mahdavi, V. R. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 139, 18–27 (2014).
Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 96, 120–133 (2016).
Abualigah, L. et al. The arithmetic optimization algorithm. Comput. Method. Appl. Mech. Eng. 376, 113609 (2021).
Acknowledgements
Thanks for the computing support of the State Key Laboratory of Public Big Data, Guizhou University. The authors would like to express their gratitude to EditSprings (https://www.editsprings.cn) for the expert linguistic services provided.
Funding
This research was funded by the National Natural Science Foundation of China (No. 52065010 and No. 52165063), the Guizhou Provincial Key Technology R&D Program ([2023] No. G094, [2023] No. G125), the Guizhou Provincial Basic Research Program (Natural Science) ([2022] No. G140), the Guizhou Provincial Major Scientific and Technological Program ([2022] No. K024), and the Guizhou Provincial Department of Education Youth Project ([2022] No. 274 and [2018] No. 243).
Author information
Contributions
P.Q.: conceptualization, methodology, writing, data testing, reviewing, software. Q.Y.: conceptualization, supervision, formal analysis. F.D.: conceptualization, resources. Q.G.: conceptualization, reviewing.
Ethics declarations
Competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Qu, P., Yuan, Q., Du, F. et al. An improved manta ray foraging optimization algorithm. Sci Rep 14, 10301 (2024). https://doi.org/10.1038/s41598-024-59960-1