Introduction

Under certain constraints, an optimization problem involves finding the best fitness value of an objective function. As real-world optimization problems have become more complex in recent years, deterministic algorithms have encountered significant performance limitations1, whereas stochastic algorithms perform much better on such problems. In particular, Metaheuristic (MH) algorithms have proven highly effective in tackling complex problems2. This is because MH algorithms generate a feasible solution space stochastically, search that space in each iteration, evaluate individual fitness through the fitness function, and perform updates to produce the optimal solution3. MH algorithms have shown advantages in many fields, including global optimization4,5, feature selection6,7, sentiment classification8,9,10, and case forecasting11. Since the no free lunch (NFL) theorem states that no algorithm performs well on every optimization problem12, studying MH algorithms remains worthwhile.

Several MH algorithms were inspired by the various behaviors or patterns formed by the natural evolution of organisms. For instance, the Genetic Algorithm (GA)13 adopts Darwinian biological evolution, natural selection, and the genetic mechanism of the biological evolution process. The Particle Swarm Optimization algorithm (PSO)14 originated from biologists' observations and studies of birds' foraging behavior. Bacterial Foraging Optimization (BFO)15 mimics the eating habits of E. coli in the human gut. Glowworm Swarm Optimization (GSO)16 is inspired by the behavior of glowworms that attract and move toward one another by light during foraging, courtship, and vigilance. The Cuckoo Search algorithm (CS)17 is inspired by a parasitic habit, i.e., some cuckoos lay eggs in other birds' nests until the young hatch. Grey wolf hunting activities inspired the Grey Wolf Optimizer (GWO)18. The Whale Optimization Algorithm (WOA)19 is inspired by humpback whale hunting behavior, while Pigeon-Inspired Optimization (PIO)20 is inspired by pigeons' intelligence and collaborative abilities in spatial navigation and social behavior. The Slime Mould Algorithm (SMA)21 is inspired by the slime mould's ability to efficiently find food and establish communication networks in its surrounding environment. Furthermore, the Marine Predators Algorithm (MPA)22 is inspired by predator behavior in marine life. The Bald Eagle Search (BES) algorithm23 is inspired by the stages of the bald eagle's predatory behavior, and the Grasshopper Optimization Algorithm (GOA)24 is inspired by the mobile foraging behavior of grasshoppers. The Mayfly Algorithm (MA)25 is inspired by the mayfly's short life span and genetic behavior. The multiple population hybrid equalization optimizer (MHEO)26 is inspired by a population distribution mechanism.

The Manta Ray Foraging Optimization algorithm (MRFO)27 is a meta-heuristic algorithm that imitates the chain, cyclone, and somersault foraging modes of manta rays during group foraging. Figure 1a depicts a manta ray, and Fig. 1b illustrates its body structure. In nature, manta rays exhibit three main behaviors during foraging. Firstly, the mantas line up, forming an orderly chain; the smaller male rests on the back of the female and moves in tandem with the beat of her pectoral fins, which allows them to maximize their foraging efficiency. Secondly, manta rays cluster as cyclones to filter the prey layer when plankton concentrations are high. Finally, somersault foraging is conducted once the densest food spot is found; because the somersault phase combines randomness and periodicity, it helps the mantas control their food intake. MRFO has certain advantages, such as fast convergence and a strong global search ability, and it has been widely used in various fields, such as economic load dispatching28, image segmentation29, minimization of energy consumption30, and radial distribution networks31. Although MRFO performs well compared to other algorithms, defects emerge due to the lack of disturbance in the exploration and exploitation phases, such as low solving precision and a tendency to become trapped in local optima.

Figure 1
figure 1

(a) Picture of a manta ray and (b) its body structure.

Spurred by the above deficiencies, this paper extends MRFO by incorporating Tent chaotic mapping, a bidirectional search strategy, and the Levy flight strategy. The bidirectional search strategy starts from the initial point and expands one node in each of two directions at a time. Searching from both directions simultaneously reduces the search space and finds a solution faster. It has been employed in algorithm improvement32, tourism demand forecasting33, and web crawling models34, but it has never been used in the MRFO algorithm. In the IMRFO algorithm, the bidirectional search strategy searches not only along the direction of decreasing fitness but also in the opposite direction. During the solving process, especially for some multimodal and composite functions, this strategy prevents the algorithm from becoming trapped in local optima and enhances its global search ability. Moreover, we introduce the Levy flight strategy into MRFO to help the algorithm escape local optima during exploitation.

The contributions of this paper are as follows. (1) During the algorithm’s initialization phase, Tent chaos mapping provides an initial solution with better ergodicity, uniformity, and randomness in the search space. (2) After the cyclone foraging phase, the bidirectional search strategy lets the algorithm search in both directions, which enlarges the search scope and helps the manta ray escape local optima. (3) During the somersault foraging stage, the Levy flight strategy strengthens the algorithm’s ability to escape from local optima. (4) The proposed algorithm is evaluated on 23 benchmark functions, the CEC2017 and CEC2022 benchmark suites, and five engineering problems. (5) Various evaluation measures illustrate the superiority of the proposed IMRFO.

The remainder of this paper is organized as follows. "MRFO algorithm" section briefly presents MRFO. "Improved strategy for MRFO" section introduces three improvement strategies and the proposed IMRFO. "Experimental results and discussion" section presents the experimental results on 23 benchmark functions and the CEC2017 and CEC2022 benchmark suites. "IMRFO for engineering problems" section solves the engineering problems, and "Conclusion" section concludes this work.

MRFO algorithm

The manta ray’s chain, cyclone, and somersault foraging behaviors correspond to the three stages of the manta ray foraging algorithm.

Chain foraging

During the manta rays’ chain foraging, they line up to form an orderly chain, and the mathematical model is:

$${\varvec{x}}_{i}^{d} \left( {t + 1} \right) = \left\{ \begin{gathered} {\varvec{x}}_{i}^{d} \left( t \right) + r \cdot \left( {{\varvec{x}}_{best}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right) + \alpha \cdot \left( {{\varvec{x}}_{best}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 1 \hfill \\ {\varvec{x}}_{i}^{d} \left( t \right) + r \cdot \left( {{\varvec{x}}_{i - 1}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right) + \alpha \cdot \left( {{\varvec{x}}_{best}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 2,3, \ldots ,N \hfill \\ \end{gathered} \right.$$
(1)
$$\alpha = 2 \cdot r \cdot \sqrt {\left| {\log \left( r \right)} \right|}$$
(2)

where \(r \in \left( {0,1} \right)\) is a random number, \({\varvec{x}}_{i}^{d} \left( t \right)\) is the current position of the \(d{\text{-th}}\) dimension of the \(i{\text{-th}}\) individual, and \({\varvec{x}}_{best}^{d} \left( t \right)\) is the best position of the \(d{\text{-th}}\) dimension at the \(t{\text{-th}}\) iteration, i.e., the position with the highest concentration of plankton. The update of the current individual position \({\varvec{x}}_{i}^{d} \left( t \right)\) is determined by the current optimal position \({\varvec{x}}_{best}^{d} \left( t \right)\) and the position of the preceding individual \({\varvec{x}}_{i - 1}^{d} \left( t \right)\), \(\alpha\) is the weight coefficient, and \(N\) is the population size. Figure 2 illustrates a sectional drawing of the chain foraging behavior.
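
To make the update concrete, the chain-foraging step of Eqs. (1) and (2) can be sketched in a few lines of NumPy. This is only an illustrative sketch (drawing \(r\) per dimension and the array layout are our assumptions, not the authors' reference implementation):

```python
import numpy as np

def chain_foraging(X, x_best):
    """One chain-foraging update, Eqs. (1)-(2). X has shape (N, D)."""
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = np.random.rand(D)                        # r ~ U(0, 1), drawn per dimension
        alpha = 2 * r * np.sqrt(np.abs(np.log(r)))   # weight coefficient, Eq. (2)
        # the first manta ray follows the best individual; the others follow the one in front
        leader = x_best if i == 0 else X[i - 1]
        X_new[i] = X[i] + r * (leader - X[i]) + alpha * (x_best - X[i])
    return X_new
```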

Figure 2
figure 2

Chain foraging behavior sectional drawing.

Cyclone foraging

In the cyclone foraging phase, the plankton concentration is high, and individual manta rays follow the previous individual and move along the cyclone path toward food. This is mathematically modeled as follows:

$${\varvec{x}}_{i}^{d} \left( {t + 1} \right) = \left\{ \begin{gathered} {\varvec{x}}_{best}^{d} \left( t \right) + r \cdot \left( {{\varvec{x}}_{best}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right) + \beta \cdot \left( {{\varvec{x}}_{best}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 1 \hfill \\ {\varvec{x}}_{best}^{d} \left( t \right) + r \cdot \left( {{\varvec{x}}_{i - 1}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right) + \beta \cdot \left( {{\varvec{x}}_{best}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 2,3, \ldots ,N \hfill \\ \end{gathered} \right.$$
(3)
$$\beta = 2 \cdot e^{{\frac{{r_{1} \left( {T - t + 1} \right)}}{T}}} \cdot \sin \left( {2\pi r_{1} } \right)$$
(4)

where \(r \in \left( {0,1} \right)\) is a random number, \(\beta\) is the inertia weight, \(r_{1} \in \left[ {0,1} \right]\) is a uniformly distributed random number, and \(t\) and \(T\) are the current and maximum numbers of iterations, respectively. When \(t/T < r\), in order to ensure the diversity of individuals, every individual is randomly assigned a new position in the whole search space as its reference position, formulated as follows:

$${\varvec{x}}_{rand}^{d} = {\varvec{L}}_{b}^{d} + r \cdot \left( {{\varvec{U}}_{b}^{d} - {\varvec{L}}_{b}^{d} } \right)$$
(5)
$${\varvec{x}}_{i}^{d} \left( {t + 1} \right) = \left\{ \begin{gathered} {\varvec{x}}_{rand}^{d} \left( t \right) + r \cdot \left( {{\varvec{x}}_{rand}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right) + \beta \cdot \left( {{\varvec{x}}_{rand}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 1 \hfill \\ {\varvec{x}}_{rand}^{d} \left( t \right) + r \cdot \left( {{\varvec{x}}_{i - 1}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right) + \beta \cdot \left( {{\varvec{x}}_{rand}^{d} \left( t \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 2,3, \cdots ,N \hfill \\ \end{gathered} \right.$$
(6)

where \({\varvec{x}}_{rand}^{d} \left( t \right)\) is a randomly generated position, and \({\varvec{L}}_{b}^{d}\) and \({\varvec{U}}_{b}^{d}\) are the lower and upper bounds of the search space, respectively. A sectional drawing of the cyclone foraging behavior is presented in Fig. 3.
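
A compact sketch of the cyclone-foraging step of Eqs. (3)–(6) follows; drawing \(r\) and \(r_1\) per individual and omitting boundary handling are our simplifications:

```python
import numpy as np

def cyclone_foraging(X, x_best, t, T, Lb, Ub):
    """One cyclone-foraging update, Eqs. (3)-(6). X has shape (N, D)."""
    N, D = X.shape
    X_new = np.empty_like(X)
    # explore around a random reference when t/T < rand, otherwise exploit around the best
    if t / T < np.random.rand():
        ref = Lb + np.random.rand(D) * (Ub - Lb)     # random reference position, Eq. (5)
    else:
        ref = x_best
    for i in range(N):
        r = np.random.rand(D)
        r1 = np.random.rand(D)
        beta = 2 * np.exp(r1 * (T - t + 1) / T) * np.sin(2 * np.pi * r1)  # Eq. (4)
        follow = ref if i == 0 else X[i - 1]
        X_new[i] = ref + r * (follow - X[i]) + beta * (ref - X[i])        # Eqs. (3)/(6)
    return X_new
```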

Figure 3
figure 3

Cyclone foraging behavior sectional drawing.

Somersault foraging

When the manta rays find the densest spot of food, they start to forage, forming a somersault, and the mathematical model is as follows:

$${\varvec{x}}_{i}^{d} \left( {t + 1} \right) = {\varvec{x}}_{i}^{d} \left( t \right) + s \cdot \left( {r_{2} \cdot {\varvec{x}}_{best}^{d} - r_{3} \cdot {\varvec{x}}_{i}^{d} \left( t \right)} \right),i = 1,2, \ldots ,N$$
(7)

where \(s\) is the somersault factor, representing the manta ray somersault intensity. Generally, \(s = 2\)35, and \(r_{2} ,r_{3} \in \left( {0,1} \right)\) are random numbers. Figure 4 depicts a sectional drawing of the somersault foraging behavior. The pseudo code of the model above is presented in the MRFO pseudo code.
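
The somersault update of Eq. (7) is even simpler; the sketch below, vectorized over the whole population (our choice), keeps the somersault factor at the recommended value s = 2:

```python
import numpy as np

def somersault_foraging(X, x_best, s=2.0):
    """Somersault update, Eq. (7): oscillate around the best position found so far."""
    N, D = X.shape
    r2 = np.random.rand(N, D)
    r3 = np.random.rand(N, D)
    return X + s * (r2 * x_best - r3 * X)
```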

Figure 4
figure 4

Somersault foraging behavior sectional drawing.

figure a

Improved strategy for MRFO

Tent mapping

Owing to its strong randomness and ergodicity, chaos theory has been widely applied in the optimization processes of various algorithms36, as it covers the search space more thoroughly than purely random initialization. Tent chaos mapping37 is a chaotic mathematical model with uniform ergodicity, which distributes the population more uniformly and improves the quality of the initial solution. It is mathematically expressed as follows:

$$\left\{ \begin{gathered} x_{k + 1} = x_{k} /\mu ,0 < x_{k} < \mu \hfill \\ x_{k + 1} = (1 - x_{k} )/(1 - \mu ),\mu \le x_{k} \le 1 \hfill \\ \end{gathered} \right.$$
(8)
$$x_{i,j} = x_{\min ,j} + x_{k,j} \times (x_{\max ,j} - x_{\min ,j} )$$
(9)

The Tent map is in a chaotic state over the range \((0,1)\), but for \(\mu = 0.5\) it degenerates into a periodic distribution. To ensure the randomness and ergodicity of the Tent map, \(\mu \ne 0.5\) is taken. Figure 5 presents the distribution of the Tent mapping for \(\mu = 0.509\). In this paper, the steps for initializing the manta ray population with Tent chaotic mapping are as follows (a code sketch follows the steps):

  • Step 1 Set the manta ray population size \(N\), dimension \(D\), and maximum number of iterations \(k\); randomly generate the initial value \(x(i,j)\); and generate \(\mu (j) \in rand\left( {0,1} \right)\) with \(\mu \ne 0.5\). The initial values of \(i,j,k\) are 1.

  • Step 2 Iterate according to Eq. (8), \(j \to j + 1\), \(k \to k + 1\), to generate the chaotic sequence \(x_{k,j}\). The initial population sequence \(x_{i,j}\) is then generated by mapping it into the search space according to Eq. (9), iterating \(i \to i + 1\).

  • Step 3 Check the maximum number of iterations. If \(k\) is reached, output the \(x\) sequence; otherwise, return to Step 2 and continue iterating.
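
A minimal sketch of this initialization is given below; vectorizing the Tent iteration over the dimensions and using a single chaotic seed per dimension are our simplifications of the step-by-step description above:

```python
import numpy as np

def tent_init(N, D, lb, ub, mu=0.509):
    """Tent-chaotic population initialization, Eqs. (8)-(9); mu must not equal 0.5."""
    X = np.empty((N, D))
    x = np.random.rand(D)                            # chaotic seed for every dimension
    for i in range(N):
        # one Tent-map iteration per dimension, Eq. (8)
        x = np.where(x < mu, x / mu, (1 - x) / (1 - mu))
        X[i] = lb + x * (ub - lb)                    # map the chaotic value into the bounds, Eq. (9)
    return X
```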

Figure 5
figure 5

Distribution of Tent Mapping when \(\mu = 0.509\).

The bidirectional search strategy

We employ the bidirectional search strategy38 to enlarge the search scope and prevent the algorithm from searching along a fixed direction. This strategy is presented in Fig. 6 and is formulated as follows:

$${\varvec{x}}_{i + 1}^{d} = {\varvec{x}}_{i}^{d} + rand*\left( {{\varvec{x}}_{best}^{d} - {\varvec{x}}_{i}^{d} } \right) - rand*\left( {{\varvec{x}}_{worst}^{d} - {\varvec{x}}_{i}^{d} } \right)$$
(10)
$$\left\{ \begin{gathered} f\left( {{\varvec{x}}_{i}^{d} \left( {t + 1} \right)} \right) < f\left( {{\varvec{x}}_{i}^{d} \left( t \right)} \right),{\varvec{x}}_{i}^{d} \left( {t + 1} \right)\user2{ = x}_{i}^{d} \left( {t + 1} \right) \hfill \\ f\left( {{\varvec{x}}_{i}^{d} \left( {t + 1} \right)} \right) \ge f\left( {{\varvec{x}}_{i}^{d} \left( t \right)} \right),{\varvec{x}}_{i}^{d} \left( {t + 1} \right)\user2{ = x}_{i}^{d} \left( t \right) \hfill \\ \end{gathered} \right.$$
(11)

where \({\varvec{x}}_{best}^{d}\) and \({\varvec{x}}_{worst}^{d}\) are the current best and worst solutions, respectively. In the IMRFO algorithm, after the chain foraging stage ends, the fitness value of the \(t{\text{-th}}\) iteration is \(f\left( {{\varvec{x}}_{i}^{d} \left( t \right)} \right)\), the fitness value of the \(\left( {t + 1} \right)\)-th iteration is \(f\left( {{\varvec{x}}_{i}^{d} \left( {t + 1} \right)} \right)\), and the optimal solution is \(f_{\min } = f({\varvec{x}}_{best}^{d} )\). Without the bidirectional search strategy, when \(f\left( {{\varvec{x}}_{i}^{d} \left( t \right)} \right) > f\left( {{\varvec{x}}_{i}^{d} \left( {t + 1} \right)} \right)\) the search proceeds only along the search direction (the direction of the arrows in Fig. 6a), and the algorithm may settle on a local optimum at the \(\left( {t + 1} \right)\)-th iteration. With the bidirectional search strategy, the algorithm searches in both directions, jumps out of the local optimum, and finds the optimal solution \(f({\varvec{x}}_{best}^{d} )\), as shown in Fig. 6b. Thus, the bidirectional search strategy effectively expands the search scope.
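
The bidirectional move of Eq. (10) combined with the greedy selection of Eq. (11) can be sketched as follows; evaluating the objective once per candidate and ignoring boundary handling are our assumptions:

```python
import numpy as np

def bidirectional_search(X, fitness, func):
    """Bidirectional move (Eq. (10)) kept only when it improves the fitness (Eq. (11))."""
    x_best = X[np.argmin(fitness)]
    x_worst = X[np.argmax(fitness)]
    for i in range(X.shape[0]):
        candidate = (X[i]
                     + np.random.rand() * (x_best - X[i])
                     - np.random.rand() * (x_worst - X[i]))
        f_cand = func(candidate)
        if f_cand < fitness[i]:                      # greedy selection, Eq. (11)
            X[i], fitness[i] = candidate, f_cand
    return X, fitness
```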

Figure 6
figure 6

The bidirectional search strategy.

Levy flight strategy

The Levy flight is related to chaos theory39 and has a wide range of applications in the measurement and simulation of random and pseudo-random natural phenomena40. The Levy flight is a random walk whose trajectory combines frequent short steps with occasional long jumps, and it is essentially a non-Gaussian random process. In the somersault foraging stage of the IMRFO algorithm, the Levy flight is added to the population update, thereby enriching the solutions, enlarging the search scope, and enhancing the optimization ability.

The Levy flight is formulated as follows:

$${\varvec{x}}_{i}^{t + 1} = {\varvec{x}}_{i}^{t} + \alpha \oplus Levy(\lambda )$$
(12)

where \({\varvec{x}}_{i}^{t}\) is the position of the \(t\)th iteration, \(\oplus\) indicates the point-to-point multiplication, \(\alpha\) is the step control parameter, and \(Levy\left( \lambda \right)\) is the random search path. The following conditions should be met:

$$Levy \sim u = t^{ - \lambda } ,1 < \lambda \le 3$$
(13)

The random search step size of the Levy Flight is:

$$s = \frac{\mu }{{\left| \upsilon \right|^{{\frac{1}{\beta }}} }}$$
(14)
$$\sigma_{\mu } = \left\{ {\frac{\Gamma (1 + \beta )\sin (\pi \beta /2)}{{\Gamma \left[ {(1 + \beta )/2} \right]\beta \cdot 2^{{\frac{(\beta - 1)}{2}}} }}} \right\}^{{\frac{1}{\beta }}} ,\quad \sigma_{\upsilon } = 1$$
(15)

where \(s\) is the flight search step size, \(\beta \in \left( {1,2} \right]\) (usually \(\beta = 1.5\)), and \(\mu ,\upsilon\) follow normal distributions, with \(\mu \sim N(0,\sigma_{\mu }^{2} )\) and \(\upsilon \sim N(0,\sigma_{\upsilon }^{2} )\). The pseudo code of the model above is presented in the IMRFO pseudo code. Figure 7 depicts the path of the Levy flight, and Fig. 8 shows the IMRFO algorithm flowchart.
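
The Levy step of Eqs. (13)–(15) is usually generated with Mantegna's algorithm; the sketch below follows that recipe (the step-control value alpha = 0.01 is our assumption, not a value from the paper):

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(D, beta=1.5):
    """Levy-distributed random step via Mantegna's algorithm, Eqs. (14)-(15)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, D)            # mu ~ N(0, sigma_u^2)
    v = np.random.normal(0.0, 1.0, D)                # upsilon ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)               # step s, Eq. (14)

def levy_update(x, alpha=0.01):
    """Position update of Eq. (12): add an element-wise scaled Levy step."""
    return x + alpha * levy_step(x.size)
```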

Figure 7
figure 7

The path of Levy flight when \(\beta = 1.40\).

Figure 8
figure 8

The IMRFO algorithm’s flowchart.

figure b

Exploitation and exploration analysis

The operators \(\alpha ,\beta ,r,s\) in the original MRFO allow the search agents to update their positions based on the locations of \({\varvec{x}}_{i}^{d} \left( t \right)\) and \({\varvec{x}}_{i - 1}^{d} \left( t \right)\). However, MRFO easily falls into local optima. Therefore, we introduce Tent chaos mapping into the algorithm’s initialization phase so that the initial solution has better ergodicity, uniformity, and randomness in the search space. Furthermore, the bidirectional search strategy helps the algorithm search in both directions and thus enlarges the search scope. These two strategies strengthen IMRFO’s exploitation ability when \(t/T < 0.5\). In IMRFO’s somersault foraging stage, the search agents have reached the highest concentration of food, and the Levy flight strategy takes advantage of the randomness of the search step size to escape from local optima. Moreover, the location update strategy \(abs\left| {{\varvec{Levy}} \cdot {\varvec{x}}_{i}^{d} \left( {t + 1} \right) - {\varvec{x}}_{i}^{d} \left( t \right)} \right|\) enhances IMRFO’s exploration ability.

Experimental results and discussion

This section evaluates the performance of IMRFO against three classes of widely recognized algorithms: (1) classical algorithms, namely PSO14, GWO18, and GA41; (2) advanced algorithms, namely SO42, SMA21, BWO43, SFO44, and Chimp45; and (3) a new meta-heuristic algorithm, namely AOA46.

To increase the experiment's credibility, we tested IMRFO and 10 other state-of-the-art algorithms under the same test environment on the 23 benchmark functions and the CEC 2017 and CEC 2022 benchmark suites. Additionally, we tested three variants of IMRFO together with MRFO on the 23 benchmark functions to analyze the contribution of the IMRFO modifications. For the 23 benchmark functions, F1–F13 use Dim = 30 and F14–F23 have fixed dimensions. For the CEC 2017 and CEC 2022 benchmark suites, all functions use Dim = 30. Every algorithm is run 30 times, and the best (Best), average (Mean), and standard deviation (Std) values are recorded. The best results and standard deviation values are highlighted in bold.

The last three lines in Tables 1, 2, 3, 4 and 5 report the symbol analysis and the Friedman mean analysis, which provide meaningful statistical results. The symbol analysis (W|T|L) counts the number of wins, ties, and losses of each algorithm on the tested functions. The Friedman mean analysis includes Friedman's mean value and the corresponding ranking, which indicate the comprehensive ability of each algorithm to solve the test functions.
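
The Friedman mean value reported in the tables is the average rank of an algorithm across all test functions; a minimal sketch of this computation, assuming the mean errors are collected in a functions-by-algorithms array, is:

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(results):
    """results: shape (n_functions, n_algorithms) of mean errors (lower is better).
    Returns the Friedman mean rank of every algorithm."""
    ranks = np.apply_along_axis(rankdata, 1, results)  # rank the algorithms on each function
    return ranks.mean(axis=0)                          # average rank per algorithm
```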

Table 1 Experimental results of 11 algorithms on the benchmark functions.
Table 2 Results of IMRFO compared with other enhanced algorithms.
Table 3 Experimental results of 11 algorithms on CEC 2017 benchmark suite.
Table 4 Experimental results of 11 algorithms on CEC 2022 benchmark suite.
Table 5 The comparison and results on benchmark functions of IMRFO and modifications.

Comparisons and results

Comparison with Benchmark Functions

The unimodal functions F1–F7 can be used to test local search ability. As Table 1 highlights, the IMRFO algorithm obtains 5 of the best results on F1–F7 while the MRFO algorithm obtains 2, showing a clear advantage over MRFO and the other competitors. The results indicate that the chaotic mapping strategy in the initialization phase and the bidirectional search strategy after the chain foraging phase effectively enhance the global optimization ability of IMRFO.

The F8–F23 functions have many local optima, making the global optimum more difficult to find. According to Table 1, IMRFO finds 14 of the best results for the F8–F23 benchmark functions and MRFO obtains 8, while SMA obtains 7 best results and 11 global optimal values among the other competitors. The results indicate that the bidirectional search strategy after the chain foraging phase and the Levy flight in the somersault foraging phase effectively enhance the ability of IMRFO to jump out of local optima.

The (W|T|L) result of IMRFO is (20|3|0), the best among all algorithms. The second best is the SO algorithm, and the third best is MRFO. IMRFO's Friedman mean value is 1.30, ranking first. The iterative convergence behaviors on all 23 functions are depicted in Fig. 9, which reveals that IMRFO performs well on most functions.

Figure 9
figure 9

Iterative convergence curves of benchmark functions F1–F23.

Figure 10 presents the convergence behaviors of IMRFO on the 23 benchmark functions. The first column presents the two-dimensional shape of each benchmark function. The second column presents the convergence curve, which is approximately linear or stepped; a stepped convergence curve shows that, for some benchmark functions, the algorithm must jump out of local optima to reach the global optimum, i.e., the optimal value cannot be reached without successive iterations. IMRFO demonstrates good convergence. The third column presents the search agent's trajectory in the first dimension: the search agent fluctuates in the early iterations and then converges and stabilizes, which demonstrates that IMRFO performs well. The fourth column shows the changes in the average fitness value throughout the iterations; it is initially unstable and large, and becomes small and steady by iteration \(T = 150\), indicating the quick convergence of IMRFO. In the last column, the red point marks the best solution, and the search agents gradually approach it. Figure 10 demonstrates IMRFO's good performance.

Figure 10
figure 10

Convergence behaviors of IMRFO on benchmark functions in the search process.

Additionally, a box plot analysis scheme is used to analyze the distribution characteristics of the above functions by all 11 algorithms. Figure 11 reveals that IMRFO performs well in most functions compared to the competitor methods. The median, maximum, and minimum values of the objective functions obtained by IMRFO are almost the same as the optimal solutions, especially for functions F3, F9, F10, F11, F14, F15, F16, F17, F18, F19, F20, F21, F22 and F23.

Figure 11
figure 11

Box plot analysis for benchmark functions F1–F23.

Comparison with enhanced versions of different algorithms

To further establish the credibility of the proposed IMRFO, we compared it with enhanced versions of different algorithms on the classic benchmark functions. Table 2 shows that IMRFO achieves the best performance.

Comparison with CEC 2017 benchmark suite

Table 3 reports the results of the 11 algorithms on the CEC 2017 benchmark suite, further evaluating the performance of IMRFO. Specifically, Table 3 highlights that IMRFO achieves the most wins without any losses, obtaining 15 best results and 20 global optimal values on the F1–F29 functions of the CEC 2017 benchmark suite. The Friedman mean ranking value of IMRFO is 1.79, ranking first. Figure 12 presents the iterative convergence curves of the optimization process on all 29 functions, revealing that IMRFO performs well for most functions. The effectiveness and superiority of our IMRFO are thus confirmed.

Figure 12
figure 12

Iterative convergence curves of CEC 2017 benchmark suite of IMRFO and competitors.

Moreover, a box plot analysis is conducted to study and analyze the distribution characteristics of the CEC 2017 benchmark suite solved by IMRFO and other competitors. Figure 13 reveals that the proposed IMRFO performs well in most functions compared to competitors. The valid values of the objective functions obtained by IMRFO are almost identical to the optimal solutions, especially for functions F2, F3, F4, F11, F14 and F19.

Figure 13
figure 13

Box plot analysis for CEC 2017 benchmark suite.

Comparison with CEC 2022 benchmark suite

The results of the 11 algorithms on the CEC 2022 benchmark suite are listed in Table 4, indicating that IMRFO achieves the most wins without any losses and obtains 7 global optimal values on the CEC 2022 benchmark suite. Its Friedman mean ranking value is 2.33, ranking first. The iterative convergence curves of the optimization process of all 11 algorithms are depicted in Fig. 14, which shows that IMRFO performs well for most functions. The effectiveness and superiority of our IMRFO are thus confirmed.

Figure 14
figure 14

Iterative convergence curve of CEC 2022 benchmark suite of IMRFO and competitors.

Moreover, a box plot analysis is conducted, with the corresponding results illustrated in Fig. 15. The valid values of the objective functions obtained by IMRFO are almost the same as the optimal solutions, especially for functions F1 and F3.

Figure 15
figure 15

Box plot analysis for CEC 2022 benchmark suite.

Impact analysis of the modifications

Next, to verify the effectiveness of the proposed strategies in the IMRFO algorithm, we compared IMRFO with MRFO and three other modifications on the 23 benchmark functions. Precisely, the variants are IMRFO1 (improved algorithm using the reverse search strategy and Tent chaos mapping), IMRFO2 (improved algorithm using the bidirectional search strategy), IMRFO3 (improved algorithm using the bidirectional search strategy and the Levy flight strategy), and IMRFO (improved algorithm using Tent chaos mapping, the bidirectional search strategy, and the Levy flight strategy simultaneously).

According to Table 5, IMRFO finds 20 best results and 21 global optimal values on the F1–F23 benchmark functions, while MRFO obtains 10 best results and 11 global optimal values. Among the other modifications, IMRFO1 obtains 11 best results, IMRFO2 obtains 10 best results, and IMRFO3 finds 11 best results and 12 global optimal values. The results reveal that IMRFO, which uses Tent chaos mapping, the bidirectional search strategy, and the Levy flight improvements simultaneously, has clear advantages over MRFO and the other modifications, achieving a good balance between global search and fast convergence.

According to Table 5, the (W|T|L) result of IMRFO is (20|3|1), which is the best result. The Friedman mean ranking value of IMRFO is 1.22, ranking first. The value of IMRFO3 is 2.09, ranking second; the value of IMRFO1 is 2.17, ranking third; and the values of IMRFO2 and MRFO are 2.52 and 2.57, ranking fourth and last, respectively. To further compare the convergence of the variants during the optimization process, we draw the iterative convergence curves on all 23 functions, as depicted in Fig. 16, which suggests that IMRFO performs well on most functions among the modifications.

Figure 16
figure 16

Iterative convergence curve of benchmark functions F1–F23 of IMRFO and modifications.

Wilcoxon rank sum test

Wilcoxon’s rank sum test evaluates the difference between IMRFO and the competitor methods56. The significance level is set to 0.05, and Table 6 reports significant differences between our proposed IMRFO and the other algorithms on most functions. The results are 116/14/0, 79/20/1, 280/3/7, and 117/1/2.
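
For reference, such a pairwise comparison can be reproduced with SciPy's rank-sum test; the sketch below assumes the 30 run results of two algorithms on one function are available as arrays:

```python
from scipy.stats import ranksums

def significantly_different(imrfo_runs, other_runs, alpha=0.05):
    """Wilcoxon rank-sum test between two sets of independent run results."""
    stat, p = ranksums(imrfo_runs, other_runs)
    return p, p < alpha          # p-value below alpha indicates a significant difference
```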

Table 6 Wilcoxon’s rank sum test statistical results.

As all the results indicate, the results of our proposed IMRFO differ significantly from those of the other 10 algorithms. Combined with the above tables and figures, our proposed IMRFO is superior to the competitor algorithms, especially for global optimization problems.

Detailed analysis of the experimental results

Among these experimental results, the unimodal functions of the benchmark set, CEC2017, and CEC2022 can be used to test local search ability. The IMRFO algorithm obtains the most optimal results, indicating that the chaotic mapping strategy in the initialization phase and the bidirectional search strategy after the chain foraging phase effectively enhance the global optimization ability of IMRFO.

The other functions of the benchmark set, CEC2017, and CEC2022 have several local optima, making the global optimum more difficult to find. IMRFO finds the most optimal results, indicating that the bidirectional search strategy after the chain foraging phase and the Levy flight in the somersault foraging phase effectively enhance the ability of IMRFO to jump out of local optima. Tables 1, 2, 3, 4, 5 and 6 illustrate that these three strategies effectively improve IMRFO's search ability.

IMRFO for engineering problems

This section employs IMRFO to solve 5 engineering problems and compares it with the 10 algorithms mentioned above and other new algorithms. The 5 engineering problems aim to find the minimum objective value under certain constraints. Each problem is solved by setting \(T = 500\) and the population to \(N = 50\). Table 7 presents the experimental results of the 11 algorithms on the engineering problems. IMRFO finds the optimal solution in all five engineering problems, and its advantage over the other algorithms is obvious. The iterative convergence curves of IMRFO and the competitors are presented in Fig. 17.

Table 7 Experimental results of 11 algorithms on engineering problems.
Figure 17
figure 17

Iterative convergence curve of 5 engineering problems of IMRFO and competitors.

Engineering problem 1: Tension/compression spring design problem (TCSD)

TCSD involves finding the minimum weight of a tension/compression spring, as depicted in Fig. 18a. The mathematical description of TCSD is presented below. Table 8 presents the optimal results on the TCSD problem compared with 10 new optimization algorithms, namely, MVO57, EEGWO58, CSA59, GSA60, SO42, OBLGOA61, SMONM62, BGWO63, MGPEA64, TTAO65. The result is \(\overrightarrow {x} = \left[ {x_{1} \, x_{2} \, x_{3} } \right] = \left[ {d \, D \, N} \right]\)\(= \left[ {0.05200,0.36500,10.82120} \right]\), and the minimum weight is \(f\left( {\vec{\user2{x}}} \right) = 0.012665\).

Figure 18
figure 18

(a) Tension/compression spring design problem, (b) pressure vessel design problem and (c) three-bar truss design problem.

Table 8 The optimal results on TCSD problem.

Consider: \(\overrightarrow {x} = \left[ {x_{1} \, x_{2} \, x_{3} } \right] = \left[ {d \, D \, N} \right]\) Minimize: \(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = \left( {x_{3} + 2} \right)x_{2} x_{1}^{2}\).

Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }} \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566\left( {x_{2} x_{1}^{3} - x_{1}^{4} } \right)}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = 1 - \frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }} \le 0\), \(g_{4} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0\).

Parameters range: \(0 \le x_{1} \le 2\), \(0.25 \le x_{2} \le 1.3\), \(2 \le x_{3} \le 15\).
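
To illustrate how a constrained design problem like TCSD is typically handed to the optimizers compared here, the objective and constraints can be wrapped into a single penalized function; the static penalty weight below is our choice, not a value from the paper:

```python
def tcsd_penalized(x, penalty=1e6):
    """Tension/compression spring design: spring weight plus a static penalty
    for every violated constraint g_i(x) <= 0."""
    d, D, N = x                                   # x1 = d, x2 = D, x3 = N
    f = (N + 2) * D * d ** 2
    g = [1 - (D ** 3 * N) / (71785 * d ** 4),
         (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4)) + 1 / (5108 * d ** 2) - 1,
         1 - 140.45 * d / (D ** 2 * N),
         (d + D) / 1.5 - 1]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```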

Engineering problem 2: Pressure vessel design problem (PVD)

The PVD problem66 considers minimizing the manufacturing cost of pressure vessels (Fig. 18b). Table 9 shows the optimal results on the PVD problem compared with 8 new optimization algorithms, namely, GA267, CPSO68, MPA22, MRFO27, AO46, SFS69, AAA70, iDEaSm71. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {T_{s} \, T_{h} \, R \, L} \right] = \left[ {0.7787 \, 0.3854 \, 40.3410 \, 199.9296} \right]\), and the manufacturing cost is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 5885.5000\).

Table 9 Optimal results on PVD problem.

Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {T_{s} \, T_{h} \, R \, L} \right]\).

Minimize: \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 0.6224x_{1} x_{3} x_{4} + 1.7781x_{2} x_{3}^{2} + 3.1661x_{1}^{2} x_{4} + 19.84x_{1}^{2} x_{3}\).

Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = - x_{1} + 0.0193x_{3} \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = - x_{2} + 0.00954x_{3} \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = - \pi x_{3}^{2} x_{4} - \frac{4}{3}\pi x_{3}^{3} + 1296000 \le 0\), \(g_{4} \left( {\overrightarrow {{\varvec{x}}} } \right) = x_{4} - 240 \le 0\).

Parameters range: \(0 \le x_{1} ,x_{2} \le 99\), \(0 \le x_{3} ,x_{4} \le 200\).

Engineering problem 3: Three-bar truss design problem (TBTD)

The three-bar truss design problem (TBTD) is to minimize the total weight of the structure (Fig. 18c). Table 10 shows the optimal results on the TBTD problem compared with 9 optimization algorithms, namely, DSS-MDE72, HEA-ACT73, DELC74, MDPEA64, WSA75, SSA76, GOA24, TTAO65, MRFO27. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} } \right] = \left[ {0.788613317782741 \, 0.408423616744838} \right]\), and the minimum total weight of the structure is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 263.895833882434\).

Table 10 Optimal results on TBTD Problem.

Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} } \right]\). Minimize: \(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = \left( {2\sqrt 2 x_{1} + x_{2} } \right) \cdot l\).

Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{\sqrt 2 x_{1} + x_{2} }}{{\left( {\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} } \right)}}P - \sigma \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{x_{2} }}{{\left( {\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} } \right)}}P - \sigma \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{1}{{\left( {\sqrt 2 x_{2} + x_{1} } \right)}}P - \sigma \le 0\). Parameters range: \(0 \le x_{1} ,x_{2} \le 1\).

Parameters \(l = 100\;{\text{cm}},P = \sigma = 2\;{\text{kN}}/\left( {{\text{cm}}^{2} } \right)\).

Engineering problem 4: welded beam design problem (WBD)

The welded beam design problem (WBD) is to minimize the fabrication cost of a welded beam (Fig. 19). Table 11 reports the optimal results of the WBD problem compared with 11 new optimization algorithms, namely, EO77, LFD78, SHO79, HGSO80, AOS81, CDE82, OMGSCA83, RO84, PFA85, MRFO27, SO42. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} {\text{ x}}_{{4}} } \right] = \left[ {h \, l \, t \, b} \right] = \left[ {0.20572964 \, 3.23491935 \, 9.03662391 \, 0.20572964} \right]\), and the minimum of the fabrication cost is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 1.6927682655\).

Figure 19
figure 19

Welded beam design problem.

Table 11 Optimal results on WBD problem.

Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} {\text{ x}}_{{4}} } \right] = \left[ {h \, l \, t \, b} \right]\). Minimize: \(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4} \left( {14 + x_{2} } \right)\).

Subject to: \(g_{1} \left( {\overrightarrow {{\varvec{x}}} } \right) = \tau \left( {\overrightarrow {{\varvec{x}}} } \right) - \tau_{\max } \le 0\), \(g_{2} \left( {\overrightarrow {{\varvec{x}}} } \right) = \sigma \left( {\overrightarrow {{\varvec{x}}} } \right) - \sigma_{\max } \le 0\), \(g_{3} \left( {\overrightarrow {{\varvec{x}}} } \right) = \delta \left( {\overrightarrow {{\varvec{x}}} } \right) - \delta_{\max } \le 0\),\(g_{4} \left( {\overrightarrow {{\varvec{x}}} } \right) = x_{1} - x_{4} \le 0\), \(g_{5} \left( {\overrightarrow {{\varvec{x}}} } \right) = P - P_{c} \left( {\overrightarrow {{\varvec{x}}} } \right) \le 0\), \(g_{6} \left( {\overrightarrow {{\varvec{x}}} } \right) = 0.125 - x_{1} \le 0\), \(g_{7} \left( {\overrightarrow {{\varvec{x}}} } \right) = 1.10471x_{1}^{2} + 0.04811x_{3} x_{4} \left( {14 + x_{2} } \right) - 5 \le 0\). Parameters range: \(0.1 \le x_{1} ,x_{4} \le 2,0.1 \le x_{2} ,x_{3} \le 10\).

Parameters: \(\tau \left( {\overrightarrow {{\varvec{x}}} } \right) = \sqrt {\left( {\tau^{\prime} } \right)^{2} + \tau^{\prime} \tau^{\prime\prime} \frac{{x_{2} }}{R} + \left( {\tau^{\prime\prime} } \right)^{2} }\), \(\tau^{\prime} = \frac{P}{{\sqrt 2 x_{1} x_{2} }}\), \(\tau^{\prime\prime} = \frac{MR}{J}\), \(M = P\left( {L + \frac{{x_{2} }}{2}} \right)\), \(R = \sqrt {\frac{{x_{2}^{2} + \left( {x_{1} + x_{3} } \right)^{2} }}{4}}\), \(J = \sqrt 2 x_{1} x_{2} \left( {\frac{{x_{2}^{2} }}{6} + \frac{{\left( {x_{1} + x_{3} } \right)^{2} }}{2}} \right)\), \(\sigma \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{6PL}{{x_{4} x_{3}^{2} }}\), \(\delta \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{4PL^{3} }}{{Ex_{4} x_{3}^{3} }}\), \(P = 6000\;{\text{lb}}\), \(P_{c} \left( {\overrightarrow {{\varvec{x}}} } \right) = \frac{{4.013E\sqrt {\frac{{x_{3}^{2} x_{4}^{6} }}{36}} }}{{L^{2} }}\left( {1 - \frac{{x_{3} }}{2L}\sqrt{\frac{E}{4G}} } \right)\), \(L = 14\;{\text{in}}\), \(\delta_{\max } = 0.25\;{\text{in}}\), \(E = 3 \times 10^{7} \;{\text{psi}}\), \(G = 12 \times 10^{6} \;{\text{psi}}\), \(\tau_{\max } = 13600\;{\text{psi}}\), \(\sigma_{\max } = 30000\;{\text{psi}}\).

Engineering problem 5: Gear train design problem (GTD)

The gear train design problem (GTD) is to minimize the gear-ratio cost of the gear train, i.e., the squared deviation from the required transmission ratio (Fig. 20). Table 12 shows the optimal results of the GTD problem compared with 9 new optimization algorithms, namely, MFO86, ABC87, MBA87, CBO88, NNA87, SCA89, SO42, AOA90, MRFO27. The result is \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {N_{a} \, N_{b} \, N_{d} \, N_{f} } \right] = \left[ {42.91783 \, 18.58480 \, 16.30035 \, 49.23052} \right]\), and the minimum gear-ratio cost is \(f\left( {\overrightarrow {{\varvec{x}}} } \right) = 2.7009 \, E - 12\).

Figure 20
figure 20

Gear train design problem.

Table 12 Optimal results on GTD Problem.

Consider: \(\overrightarrow {{\varvec{x}}} = \left[ {x_{1} \, x_{2} \, x_{3} \, x_{4} } \right] = \left[ {N_{a} \, N_{b} \, N_{d} \, N_{f} } \right]\). Minimize: \(\, f\left( {\overrightarrow {{\varvec{x}}} } \right) = \left( {\frac{1}{6.931} - \frac{{x_{2} x_{3} }}{{x_{1} x_{4} }}} \right)^{2}\). Parameters range: \(12 \le x_{i} \le 60,i = 1,2,3,4\).
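
Because the GTD objective is unconstrained apart from the variable bounds, it reduces to a single expression; a minimal sketch follows (tooth counts are often rounded to integers in practice, although the table reports continuous values):

```python
def gtd_objective(x):
    """Gear train design: squared deviation from the target gear ratio 1/6.931."""
    x1, x2, x3, x4 = x
    return (1 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2
```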

The results in Tables 8, 9, 10, 11 and 12 highlight that IMRFO is generally better than MRFO and the other optimizers. IMRFO improves global search ability and solution accuracy and solves engineering design problems effectively.

Conclusion

This paper addresses the defects of MRFO, such as low solving precision and a tendency to become trapped in local optima, by proposing the IMRFO algorithm, which extends MRFO with Tent chaos mapping, a bidirectional search strategy, and the Levy flight strategy. The Tent chaos mapping strategy in the algorithm’s initialization phase helps the manta rays distribute more uniformly and improves the quality of the initial solution. After the cyclone foraging phase, the bidirectional search strategy lets IMRFO search in both directions, expanding the search area and preventing the algorithm from being trapped in a local optimum. During the somersault foraging stage, the Levy flight strategy uses a random step size, strengthening the algorithm’s ability to escape from a local optimum.

To verify IMRFO’s performance, we first evaluated it on 23 benchmark functions and the CEC2017 and CEC2022 benchmark suites. The corresponding results highlight that IMRFO has high solving precision and a strong ability to avoid local optima compared with the competitors. Secondly, we compared IMRFO with MRFO and three IMRFO modifications on the 23 benchmark functions to test the effectiveness of the proposed strategies. The results indicate that introducing the three strategies into MRFO simultaneously improves the algorithm’s capability more than one or two strategies alone. Thirdly, we used statistical analyses such as the Friedman mean ranking and the Wilcoxon rank sum test to increase the credibility of the results, which further confirm the superior performance of the proposed IMRFO. Moreover, IMRFO and the other algorithms were applied to 5 engineering design problems. The results demonstrate the competitiveness and applicability of IMRFO compared to other advanced algorithms (Supplementary material).

Although IMRFO has been proven competitive, it still underperforms in some areas, specifically on some hybrid and composition functions. Thus, future research will consider adding other strategies to improve the algorithm. Moreover, IMRFO will be used to solve several real-world problems, such as logistics distribution route planning, laser cutting path planning, and 3D printing layout problems.