Rain-Fall Optimization Algorithm with new parallel implementations

The Rainfall Optimization Algorithm (RFO) is a nature-inspired metaheuristic optimization algorithm that mimics the movement of the water drops generated during rainfall to optimize a function. This paper studies new implementations of RFO that offer more reliable results. Moreover, it studies three restarting techniques that can be applied to the algorithm with multithreading. The different implementations of RFO are benchmarked to test and verify the performance and accuracy of the solutions. The paper presents and compares the results on several multidimensional test functions, as well as the visual behavior of the raindrops inside the benchmark functions. The results confirm that the movement of the artificial drops corresponds to the natural behavior of raindrops. They also show the effectiveness of this behavior in minimizing an optimization function, and the advantages of parallel computing and restarting techniques in improving the quality of the solutions.

Received on 29 February 2020; accepted on 06 April 2020; published on 15 April 2020


Introduction
The optimal solution for a specific function can be obtained with different methods. In the last decades, researchers have looked for new methods to find global maximums and minimums without relying on traditional algorithms. They have found that heuristic-based algorithms can locate the global optimum of highly complex functions in less time than traditional approaches, and in real-world applications the efficiency and precision of the algorithm are crucial. Moreover, these algorithms are needed in many different fields, such as physics, engineering, computer science, industrial processes, demography, and economics. Some application models are represented by functions whose global optimums could take a long time to find with traditional methods, which is why researchers started to develop new optimization algorithms.
A vast number of approaches have been suggested, but not all of them are as precise as a real-world problem requires. Some algorithms fail to find the optimal solution for specific functions. To address those problems, researchers design meta-heuristic optimization algorithms. Meta-heuristic algorithms are problem-independent algorithms that can be applied to solve an optimization problem; a well-written meta-heuristic optimization algorithm adapts to a given problem without many modifications [1]. The research and proposal presented in this paper are based on a nature-inspired optimization algorithm named the Rainfall Optimization Algorithm (RFO). The term nature-inspired is used for all algorithms that are inspired by a biological, physical, or chemical process found in nature. Nature-inspired algorithms have been very successful at solving optimization problems in the last decades [2] and have given researchers a large number of sources of inspiration [3].
RFO was written by Kaboli, Selvaraj, and Rahim and was inspired by the behavior of raindrops. The algorithm performs fast on multi-variable functions, but it sometimes fails to find the global optimum of some functions. It may produce results that are less precise than those of other optimization algorithms because it can run out of iterations, or because raindrops start moving in the wrong direction or get stuck in local optima. The main goal of this paper is to introduce new implementations that make RFO more precise. The proposed implementations exploit the advantages of the multi-threaded processors found in modern computers. The RFO of this paper can perform parallel executions of the algorithm with different restarting techniques: restart to the best, genetic restart to the best, and simulated annealing. Restart to the best is a method that can improve the quality of a solution to an optimization problem by producing more precise results within a reasonable computation time through the use of parallel threads. The other techniques are improvements of restart to the best: they make RFO more metaheuristic, and as a result RFO performs better on more functions.
If multi-threaded computers are used correctly, they can offer high computational power which makes them useful for different science and engineering problems. The new approach discussed in this paper presents the advantages of the implementation of parallelization in RFO. These implementations intend to use the benefits of parallel computing by running the algorithm in different parallel processes at the same time with fewer iterations on the threads. The results obtained by each thread can be compared to obtain the global optimum or continue the process with the restarting techniques previously mentioned to find better quality results.
The rest of the paper is structured as follows. First, the description and flowchart of the RFO are presented. In section 3, the parallel implementations with the proposed restart techniques are described. Next, the performance of the different implementations of the algorithm is evaluated by the use of benchmark functions. Finally, the conclusions are outlined in the last section.

Rain-Fall Optimization Algorithm
The RFO is inspired by the behavior of drops during a rainfall. In nature, raindrops drip down from a peak to form streams and rivers that reach the sea or lakes [1]. This process can be implemented in optimization algorithms because the behavior of the raindrops resembles an exploration process that can be used to find the minimums of a mathematical function. Raindrops keep falling until they reach a place where they cannot continue to fall. In nature, they can get stuck before reaching sea level (the global minimum), for example in ponds or lakes, which translate to local minimums in an optimization problem. RFO also simulates the tendency of drops to move towards the steepest slopes [1].
In the first iteration of the algorithm, a population of raindrops is generated in random positions of the optimization function, which can be associated with the geographical terrain. This first process represents the simulation of the new raindrops produced by rainfalls. Next, it is necessary to simulate the natural movement of raindrops. After being generated, the artificial drops must move to the steepest slope of the radius that surrounds them. Their next position can be determined using several methods. For a similar process in other optimization algorithms, gradient descent is used. But in RFO, a method called random search is used. For this method, the algorithm has to generate neighbor points around every artificial drop. The area where the neighbor points are generated can be called the neighborhood of the drop. After generating the neighbor points of a drop, the algorithm evaluates them according to the optimization function. The result of each neighbor point is compared with the previous position of the drop to decide which point corresponds to the lowest position of the neighborhood. For each drop, this process will continue to execute until the drop reaches the lowest point of the terrain (the global optimum) or gets stuck in a puddle (local optima).
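To make this process concrete, the following C# sketch performs one random-search step for a single raindrop. The objective function, helper names, and data layout are illustrative assumptions made here for clarity, not part of the original implementation.

```csharp
// Illustrative sketch of one random-search step for a single raindrop.
// EvaluateFitness and RandomNeighbor are hypothetical helpers.
using System;

static class RandomSearchStep
{
    static readonly Random Rng = new Random();

    // Objective function to minimize (placeholder: the Sphere function).
    static double EvaluateFitness(double[] x)
    {
        double sum = 0.0;
        foreach (double xi in x) sum += xi * xi;
        return sum;
    }

    // Generate one neighbor point uniformly inside the drop's neighborhood.
    static double[] RandomNeighbor(double[] drop, double[] neighborhoodSize)
    {
        var p = new double[drop.Length];
        for (int k = 0; k < drop.Length; k++)
            p[k] = drop[k] + (2.0 * Rng.NextDouble() - 1.0) * neighborhoodSize[k];
        return p;
    }

    // Move the drop to its best neighbor if that neighbor dominates it;
    // return false when no dominant neighbor exists (explosion or deactivation follows).
    public static bool TryMove(double[] drop, double[] neighborhoodSize, int neighborPoints)
    {
        double bestFitness = EvaluateFitness(drop);
        double[] bestPoint = null;
        for (int j = 0; j < neighborPoints; j++)
        {
            double[] candidate = RandomNeighbor(drop, neighborhoodSize);
            double f = EvaluateFitness(candidate);
            if (f < bestFitness) { bestFitness = f; bestPoint = candidate; }
        }
        if (bestPoint == null) return false;          // stuck: no dominant neighbor
        Array.Copy(bestPoint, drop, drop.Length);     // descend to the dominant point
        return true;
    }
}
```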

Description of the algorithm
The flowchart in Figure 1 describes the main algorithm of RFO. It is based on the original algorithm written by Kaboli, Selvaraj, and Rahim, but with some improvements. To understand the algorithm, it is necessary to explain some concepts.
Raindrop: A rainfall generates a population of artificial drops that contain the following attributes:
• X^k: It refers to the position of the raindrop at the k-th dimension of the optimization problem.
• Status: It is a flag that marks the drop as active or inactive. When the artificial drop is inactive, it means that the drop is stuck or far from the global optimum. Hence, the raindrop stops moving.
• Value: It represents the fitness of the solution by evaluating the position of the drop at iteration i.
• Rank: It is an attribute that determines the position of the raindrop in the merit-order list at iteration i, and it is calculated using Equation 1. Where:
- C1_i^t is the rate of change of the value at iteration t for raindrop D_i with respect to its initial value.
- C2_i^t is the current value for raindrop D_i.
- OrderC1_i^t and OrderC2_i^t are C1_i^t and C2_i^t sorted in ascending order.
- w_1 and w_2 are the weighting coefficients, both with a constant value of 0.5 [1].
- Rank_i^t returns the rank for raindrop D_i at iteration t.
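Since Equation 1 itself is not reproduced in this text, a plausible reconstruction from the definitions above (assuming the rank is simply the weighted sum of the two orderings) would be:

```latex
\mathrm{Rank}_i^t = w_1 \cdot \mathrm{Order}\left(C1_i^t\right) + w_2 \cdot \mathrm{Order}\left(C2_i^t\right)
```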

• Step size: Random search is used to define the next position of the drop, and the step size is the property that defines the area of the neighborhood in which neighbor points can be generated. To simulate the descent of a drop from a peak, the neighborhood size must change according to the speed of the drop at iteration i: a larger delta between the fitness values of the last iterations means the drop is moving through a steeper area, and the speed of a drop is directly proportional to the slope of the terrain. Equation 2 was designed to simulate that behavior in the artificial raindrops. Where:
- step_0 is the initial step size given as a parameter of the solution.
- R_s is a random number between 0 and 1.
- step_{i-1} is the step size at the previous iteration.
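Equation 2 is likewise not reproduced here; the sketch below only mirrors the qualitative behavior described above (a step scaled by a random factor and by the previous step, and growing with the change in fitness), so it should be read as an illustration rather than the paper's formula.

```csharp
// Illustrative adaptive step size; NOT the paper's exact Equation 2.
using System;

static class StepSize
{
    // The step shrinks or grows with the change in fitness between iterations,
    // scaled by a random factor R_s in [0, 1] and by the previous step size.
    public static double Next(double previousStep, double previousFitness,
                              double currentFitness, Random rng)
    {
        double rS = rng.NextDouble();                               // R_s in [0, 1]
        double delta = Math.Abs(previousFitness - currentFitness);  // proxy for the slope / drop speed
        double scale = 1.0 + delta / (Math.Abs(previousFitness) + 1e-12);
        return rS * previousStep * scale;
    }
}
```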
Neighborhood: As mentioned before, the neighborhood is the space where the neighbor points of the N-th raindrop can be evaluated. According to Equation 3 [1], the neighborhood is wider when the drop is moving faster. This allows the drop to make bigger jumps in the exploration phase and to move slower in the exploitation phase to produce high-quality solutions [4]. Where:
• N_0^k is the initial neighborhood size of all the drops at the k-th dimension, according to Equation 4.
• D_i^k is the position of the artificial raindrop at the k-th dimension at iteration i.
• D_i^step is the step size of the drop at the current iteration.
For Equation 4:
• up_k is the upper limit of the search space at the k-th dimension.
• low_k is the lower limit of the search space at the k-th dimension.
Neighbor points: These new points, spawned at random positions inside the neighborhood of a drop, represent the possible positions of the drop at iteration i + 1. Each neighbor point generated at iteration i must be evaluated to determine the next dominant position of the drop. The position of the neighbor point NP_j^i of raindrop D_i is generated using Equation 5 [1].
Explosion process: It is carried out when an artificial drop does not have a dominant neighbor point. This situation occurs when the raindrop is stuck in a local minimum or when an insufficient number of neighbor points is generated [1]. The first suggestion to release a raindrop from this situation is to generate more neighbor points, but in practice this technique did not help in all situations. To solve that, a new way to generate the neighbor points can be used during the explosion process. The new method is inspired by the expansion of a blast, as shown in Figure 2, and is simulated with Equation 6. As a consequence, the positions of the neighbor points generated during the explosion process are distributed evenly from the center of the neighborhood (where the raindrop is stuck) to its edges.
Where E_base is given by the next equation (Equation 7), in which:
• a can be defined by Equation 8.
• Ec is the number of explosion processes that have been carried out for the current drop.
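As a sketch of this even spread (the exact form of Equations 6 to 8 is not reproduced here, so the spacing rule below is only an assumption that mirrors the described behavior):

```csharp
// Illustrative explosion process: spread candidate points evenly from the stuck
// drop (the neighborhood center) out to the neighborhood edges. The exact
// weighting of the paper's Equations 6-8 is not reproduced here.
using System;

static class Explosion
{
    public static double[][] Neighbors(double[] drop, double[] neighborhoodSize,
                                       int nPoints, Random rng)
    {
        var points = new double[nPoints][];
        for (int j = 0; j < nPoints; j++)
        {
            double fraction = (j + 1) / (double)nPoints;  // even fractions of the radius
            points[j] = new double[drop.Length];
            for (int k = 0; k < drop.Length; k++)
            {
                double sign = rng.NextDouble() < 0.5 ? -1.0 : 1.0;  // random direction per dimension
                points[j][k] = drop[k] + sign * fraction * neighborhoodSize[k];
            }
        }
        return points;
    }
}
```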
Merit-order list: It is a list that sorts the raindrops at iteration i in ascending order according to their rank. The positional index of a raindrop in the merit-order list can determine its status. For better performance of the algorithm, the drop with the worst rank changes its status to inactive. A raindrop can also become inactive if the explosion process is carried out and fails to find a dominant point. Drops with higher ranks are allowed to iterate more during the explosion processes before becoming inactive.
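A minimal sketch of this bookkeeping, assuming a hypothetical Raindrop class with the attributes listed above:

```csharp
// Hypothetical Raindrop type with the attributes described above, and the
// merit-order bookkeeping: sort the active drops by rank and deactivate the worst.
using System.Collections.Generic;
using System.Linq;

class Raindrop
{
    public double[] Position;
    public bool Active = true;   // Status flag
    public double Value;         // fitness at the current iteration
    public double Rank;          // per Equation 1
}

static class MeritOrder
{
    public static List<Raindrop> Update(List<Raindrop> drops)
    {
        var meritOrder = drops.Where(d => d.Active)
                              .OrderBy(d => d.Rank)   // ascending: best-ranked drop first
                              .ToList();
        if (meritOrder.Count > 1)
            meritOrder[meritOrder.Count - 1].Active = false;  // worst-ranked drop stops moving
        return meritOrder;
    }
}
```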

Parallel Implementations for the Rain-Fall Optimization Algorithm
The paper proposes parallel computing to obtain better quality and better performance from the algorithm. Instead of initializing one rainfall, multi-threading gives the ability to generate multiple rainfalls, each with an independent population of raindrops. The objective of the parallel implementation is to share information between threads and use it to enhance the route of the raindrops towards the global minimum. As a result, the algorithm needs fewer iterations to find a suitable, high-quality optimum value with the same or lower CPU time.
With the parallel implementations, a new iteration counter called H must be declared. In Figure 3, the H counter indicates the number of cuts at which the algorithm joins all threads to share information about the artificial drops and their fitness values. At every cut, new rainfalls can be generated in each thread.
In the first iteration of RFO (H_1), the raindrops of all threads are generated and positioned randomly inside the search space, according to the original algorithm described in Figure 1. But if H is greater than 1, the positions of the drops are based on the results that the threads obtained at H_{K-1}. Furthermore, the way the new drops are generated may vary depending on the restarting technique chosen. The paper suggests three ways to handle the cuts: restart to the best, genetic restart to the best, and annealing restart to the best.
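A minimal sketch of this outer loop, assuming a rainfall delegate that executes the single-threaded RFO of Figure 1 and a restart delegate that implements one of the three techniques (all names are hypothetical):

```csharp
// Outer loop over the H cuts: run independent rainfalls in parallel, join them
// at every cut, and let a restarting technique reseed the threads.
using System;
using System.Linq;
using System.Threading.Tasks;

static class ParallelRfo
{
    public static double[] Run(int hMax, int nThreads,
                               Func<double[], double[]> runRainfall,       // seed -> best drop of the rainfall
                               Func<double[][], int, double[][]> restart,  // (best drops, cut) -> new seeds
                               Func<double[]> randomSeed,
                               Func<double[], double> fitness)
    {
        double[][] seeds = Enumerable.Range(0, nThreads).Select(_ => randomSeed()).ToArray();
        var bestPerThread = new double[nThreads][];

        for (int h = 1; h <= hMax; h++)
        {
            // Each thread runs one rainfall until the cut is reached.
            Parallel.For(0, nThreads, t => bestPerThread[t] = runRainfall(seeds[t]));
            // Share information between threads and compute the seeds for the next cut.
            seeds = restart(bestPerThread, h);
        }
        // Return the best drop found across all threads.
        return bestPerThread.OrderBy(fitness).First();
    }
}
```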

Restart to the best
Restart to the best is a technique that is easy to implement. The algorithm compares all the best-positioned drops of the threads, according to their fitness, at H_{K-1} and picks the best drop as the starting position, as shown in Algorithm 1. In other words, after selecting the best drop, instead of generating new random positions for the H_K iteration, all raindrops of all threads take advantage of the results of H_{K-1} and start at the position of the best drop across all threads of the last H iteration.
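A minimal sketch of Algorithm 1 under the interface assumed in the previous sketch:

```csharp
// Restart to the best (Algorithm 1 sketch): every thread restarts at the
// position of the best drop found by any thread at the previous cut.
using System;
using System.Linq;

static class RestartToBest
{
    public static double[][] Apply(double[][] bestPerThread, Func<double[], double> fitness)
    {
        double[] best = bestPerThread.OrderBy(fitness).First();
        var seeds = new double[bestPerThread.Length][];
        for (int t = 0; t < seeds.Length; t++)
            seeds[t] = (double[])best.Clone();   // copy so threads do not share state
        return seeds;
    }
}
```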
In practice, this method can be effective and obtain more reliable results than the original RFO. But the restart to the best technique is not the best approach. If the algorithm follows the metaheuristic proposal, it must be adaptive for all optimization problems, and restart to the best can behave as a greedy algorithm because it always chooses the drop that seems to be the best candidate based on its fitness [5]. In some circumstances, or for some optimization functions, a drop with a worse fitness at iteration H might actually be closer to the global optimum of the function than the one with the best fitness value. As a result, the algorithm might regress during some H iterations.

Genetic restart to the best
Another restarting technique that can be implemented is a genetic restart. The method is a hybridization of the genetic algorithm (GA) and the restart to the best technique previously described. This approach is inspired by the way two different genes combine to form a new and better gene. The algorithm uses the crossover of two artificial genes that define the fitness of a raindrop [6,7]. Before every restart, Algorithm 2 is performed. It mixes the positions of some random dimensions of the best drops of each thread into a new artificial drop with better genetics (Figure 4). The quality of the gene is determined by its fitness value inside the optimization function.

Annealing restart to the best
The third restarting technique is based on simulated annealing (SA), which is inspired by the annealing of metals [8]. In nature, this process occurs when the heat source of molten metal is removed from it. Consequently, the temperature of the metal starts to decrease. This process continues until the metal has reached the ambient temperature. At this point, the energy has reached its lowest value and the state of the metal should be fully solid [9,10].
The SA algorithm as a restarting technique uses Equation 9 and a random number R_a between 0 and 1. T represents the temperature of the metal, t_i the time that has elapsed from the beginning of the annealing process, and t_MAX the time that has to pass for the metal to become completely solid. In RFO, the elapsed time t_i is equal to the H_K iteration counter, and t_MAX can be associated with the total number of H iterations that will be carried out according to the given parameter of the solution. Finally, if the temperature T is greater than the random number R_a, the algorithm performs the restart to the best technique. Otherwise, the algorithm does nothing at the H_K cut. When using Equation 9 in SA, the scale of the temperature T must be between 0 and 1.
The equation was designed to behave similarly to the cooldown temperature of the metal at time t multiplied by a constant K. This process produces a large perturbation in the initial stages of the exploration while ensuring that the positions of the raindrops are well-tuned at the final stages of the optimization [11]. This happens because the probability P of fulfilling the condition R_a^i < T_i is higher during the first H iterations, when the temperature of the metal is higher, and P gets lower as the temperature decreases. The SA technique is summarized in Algorithm 3.
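A minimal sketch of Algorithm 3 under the same assumed interface; since Equation 9 is not reproduced in this text, the exponential cooling schedule below is only an assumption that satisfies the stated properties (T between 0 and 1, decreasing with t_i, scaled by a constant K):

```csharp
// Annealing restart to the best (Algorithm 3 sketch): perform the restart only
// while the "metal" is still hot. The cooling schedule below is an assumed
// stand-in for Equation 9 (T in (0, 1], decreasing with t_i, scaled by K).
using System;

static class AnnealingRestart
{
    public static double[][] Apply(double[][] bestPerThread, double[][] currentSeeds,
                                   int tI, int tMax, double k,
                                   Func<double[], double> fitness, Random rng)
    {
        double temperature = Math.Exp(-k * tI / (double)tMax);   // assumed cooling schedule
        double rA = rng.NextDouble();                             // R_a in [0, 1]
        if (temperature > rA)
            return RestartToBest.Apply(bestPerThread, fitness);   // hot: restart to the best drop
        return currentSeeds;                                      // cold: leave the threads untouched
    }
}
```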

Experiments
This section shows the performance results of RFO for continuous optimization problems. The benchmark functions and configurations that were used to obtain the presented results are also disclosed. The experimental results are then analyzed and compared between the different proposed implementations and against other optimization algorithms.

Experimental Setup
To evaluate the performance and quality of the results given by the algorithm, it was tested with 5 different benchmark functions. The testing software was written in C# with Windows Forms. It can run both the single-threaded and the multithreaded RFO, and it has a user interface where the different configurations are set up. Table 1 shows the benchmark functions that were used to test RFO with different configurations: the Rosenbrock function [12], the Ackley function [13], the Sphere function [14], the Griewank function [15], and the Kowalik function [16]. The first four benchmark functions are multidimensional, and the Kowalik function has four dimensions. The 3D shapes of the testing functions can be visualized in Table 5. All four multidimensional functions were optimized with up to 30 dimensions. The only restriction that determined the search space was the suggested range of each benchmark function, and all dimensions had the same range according to the table. The mathematical representations of the benchmark functions are given in Table 3 [17].
The suggested parameters were determined after doing some trials on all functions. The configuration for RFO shown in Table 2 should work with any function.

Experimental results
The search history diagrams that have been drawn in Table 5 were obtained by recording the positions of the artificial raindrops during the first 100 iterations on the 2D versions of the benchmark functions. These visual results show the random distribution that sets the initial positions of the drops and how they move inside the search space towards the coordinates of the global minimum. As expected, the drawings show how almost all raindrops cluster around the global minimum. Table 5 also shows the convergence curves for each benchmark function. The x axis represents the iterations and the y axis shows the fitness value. The fitness history of the drop that had the best fitness value of all threads at the end of the algorithm is saved, and the recorded fitness values after each iteration were plotted with MATLAB in the convergence graphs. The convergence curves of all the testing functions were plotted using the original RFO (single-threaded) and the new algorithm with all the proposed restarting techniques. The complete graphics help to visually compare the behavior of the different variations of the algorithm, but the main fact that can be analyzed is the speed of the algorithm. The convergence curves showed that RFO is a fast algorithm because it does not need many iterations to optimize near the global optimum value. Because of that, it can be inferred that the parallelization only affects the standard deviation and the quality of the optimization results using the configuration chosen for the experimental analysis of this paper. Having more rainfalls decreases the possibility of failing to optimize the function, and the restarting techniques become relevant in the exploitation processes. The convergence curves of the figures of Table 5 also confirm that the behavior of the artificial raindrops moving towards the steepest slope is fulfilled. The curves clearly show a descending behavior in all five benchmark functions.
The trajectory of the best raindrop in the first dimension was also drawn in Table 5. This diagram records the position of the first variable of the best drop of all threads after every iteration. As expected, the raindrops move uniformly and without abrupt changes towards the optimum value of the variable. Table 4 shows the performance of RFO compared with other metaheuristic optimization algorithms. The results of RFO shown in the table represent the average global minimum obtained after running the algorithm more than 1000 times for the multidimensional functions and 50 times for the Kowalik function. Almost all the individual results of RFO were better than those of the other algorithms, but the average penalized the global results when the drops of a rainfall got stuck in local minimums. This behavior only occurs with functions that have many local minimums, like the Ackley function, but it can be solved with the parallel implementations because they generate more raindrops, decreasing the probability of all drops getting stuck. Tables 7, 8, and 9 show the average results obtained after running the algorithm several times using the parallel implementations: restart to the best, genetic restart to the best, and annealing restart to the best, respectively. The parameters used for these tests are defined in Table 2, and the maximum value of H was 30 in all runs. All four multidimensional functions were minimized with 10 and 30 dimensions. The population and the number of threads were varied in some runs to determine the population and number of threads that improve the quality of the result without hurting the performance. For each configuration, the algorithm was tested 50 times.
After comparing the quality and iterations between the 10-dimensional functions and the 30-dimensional functions in Table 6, it can be said that the more variables the function has, the harder it is for the algorithm to get near the global minimum. On average, it was found that the algorithm needs twice the number of iterations to find the global minimum when the benchmark functions are set to 30 dimensions. Also, the larger the population of raindrops in a rainfall, the better the quality of the solutions tends to be.

Conclusions
In this work, new implementations for the Rainfall Optimization Algorithm (RFO) are presented. The RFO used for this paper also has some improvements over the original algorithm: it mimics other natural behaviors, such as the flow speed of descending raindrops and the distribution of particles during a blast. These modifications allow a dynamic search radius during the random search process. In addition, the algorithm can run multiple rainfalls in different threads at the same time during the optimization process. Moreover, all the rainfalls during the execution of the algorithm work as a team by sharing information between all the threads. The paper proposed three restarting techniques to share the positions and the fitness values of all raindrops of all threads. As shown in Table 5, RFO is a fast algorithm, and the parallel implementations seek to improve the quality of the results without hurting the performance. Running the algorithm with multiple threads and using the proposed restarting techniques improves the quality of the solutions. To ensure that all the different implementations of the algorithm perform at least as well as the original RFO, the testing results are also presented. All the different variations of the algorithm, with different configurations of the number of threads, dimensions, and populations, were evaluated on different benchmark functions. The chosen functions vary in shape and behavior to ensure that the algorithm is capable of optimizing any continuous optimization problem. The results that were shown validate all the presented variations of RFO as metaheuristic algorithms.
After analyzing the results of the benchmarking experiments given by the different implementations of the algorithm, it can be concluded that parallelization improves the solutions given by RFO. With multithreading, more raindrops can be generated while satisfying the CPU-time condition of Equation 10. Having more artificial drops decreases the proportion of stuck drops and increases the probability of having drops exploring towards the global minimum. And the restarting techniques allowed raindrops that were heading in the wrong direction to restart at a position closer to the global minimum. As a result, all raindrops were exploring near the global minimum at the end of the execution.
if N_threads ≤ CPU_threads, then: t = K · N_drops · N_threads = K · N_drops    (10)

That is, as long as the number of threads does not exceed the number of CPU threads available, the threads run in parallel and processing N_drops · N_threads raindrops costs roughly the same wall-clock time as processing N_drops raindrops on a single thread. In general terms, simulated annealing restart to the best was the restarting technique that returned the best quality solutions. But the behavior of the raindrops in some functions can benefit from the use of the other restarting techniques, especially the genetic restart to the best. For that reason, further investigation into the hybridization of the different restarting techniques presented in this paper is worth pursuing. Studying hybrid restarting techniques as new approaches for RFO could improve the solution accuracy even further.
In conclusion, if the proposed parallel approaches are considered as new implementations for RFO, this nature-inspired algorithm can satisfy the needs of solving real-world optimization problems. The paper demonstrated the benefits of parallelization and restarting techniques in offering reliable solutions for metaheuristic optimization algorithms.