A new heuristic with a multi-threaded implementation of a modified Firefly Algorithm
EAI Endorsed Transactions on Energy Web Online First

In this article, we present a modified version of the Firefly Algorithm implemented in a multi-threaded model to significantly improve the results obtained by the original algorithm. This multi-threaded algorithm lets each thread obtain different results by executing the heuristic method independently; to keep every thread producing significant executions, the algorithm applies crossover techniques, explained in detail in this article, that allow the threads to learn from each other while maintaining their independence. To test the new algorithm, we use the six benchmark functions used in the literature for testing the original Firefly Algorithm, and to prove that the improved results are significant, we apply the Wilcoxon test to the results obtained. The results of this new heuristic proved to be significantly better while taking advantage of today's commercial processors.


Introduction
As optimization problems become more complex, the need for algorithms capable of delivering reliable results has increased. What makes optimization problems harder to solve is the number of variables (dimensions) involved in the formulation of the problem and the fact that these variables take values over a continuous domain [1].
Traditional methods and algorithms fail when trying to find the optimal values for specific functions, or require excessively long execution times. When an optimization problem is so complex that the optimal values cannot be computed by exact algorithms in a reasonable amount of time, heuristics are used to try to approach the best solution [2]. Although heuristic methods are able to produce a very reliable approximation, they do not find the exact parameter values that yield the optimum. Researchers have exhaustively developed new heuristic algorithms that improve the results and execution times.
A widely developed category of heuristic methods is nature-inspired algorithms [3,4], which simulate processes observed in nature. Swarm-Intelligence (SI) based algorithms are inspired by the collective behavior of insects and animals: although the individuals are not good at finding objectives on their own, the swarm as a whole organizes itself to achieve its objective. These algorithms are the most used because of the agents' ability to learn from their own experience and from each other through the iterations.
Bio-inspired algorithms contain SI algorithms as a subset, since not all bio-inspired algorithms use swarm collective behavior. These kinds of algorithms are based on individual biological behaviors such as flower pollination or genetic processes.

Considerations for the Firefly Algorithm
The Firefly Algorithm, presented by Yang in 2009 [15], is a Swarm-Intelligence based algorithm that uses the fireflies' characteristic flashing as inspiration. This flashing helps the fireflies find mating partners (communication) and attract prey. As the defining characteristic of SI algorithms is the swarm's communication and interaction, the first purpose of the flashing is the one used. In some species, fireflies are attracted to each other thanks to the flashing patterns of the males; in other species, the flashing is used as a technique to confuse male fireflies (they believe there is a potential mate) and eat them. In either variant, the flashing clearly represents a form of organization between the members of the swarm. The light intensity I of the fireflies' flashing becomes less visible as the distance r between the observer and the light source increases, obeying the inverse square law: I ∝ 1/r². In addition, light absorption by the environment further decreases the perception of the flashing.
The distance and the light absorption factors are considered while implementing the algorithm, but to simplify its description, three rules are set:
1. Fireflies are unisex, so any firefly can be attracted to any other regardless of sex.
2. The attractiveness of a firefly is proportional to its brightness, so between two fireflies, the less bright one will move towards the brighter one. This also implies that the farther a firefly is from another, the less attractive it is.
3. The brightness of a firefly is determined by the objective function. For minimization problems (as used in this paper), the brightness or intensity of the firefly can be represented as I = −f(x), so that the minimum of the objective function corresponds to the highest intensity.
Considering the above rules, the Firefly Algorithm is formulated as shown in Algorithm 1.
The input parameters for Algorithm 1 are the N number of fireflies that will make the swarm, the randomization parameter α, the attractiveness β 0 when the distance r from the source is 0, the environment light absorption coefficient γ, and the maximum number of iterations of the algorithm.
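As an illustration of how Algorithm 1 operates with these parameters, the following sketch implements the loop in Python. The function name, the uniform random initialization, and the single (low, high) bounds shared by every dimension are our own assumptions, not part of the original pseudocode:

```python
import math
import random

def firefly_algorithm(f, dim, bounds, n=25, alpha=0.5, beta0=1.0, gamma=1.0,
                      max_iterations=100, seed=0):
    """Minimal sketch of the original Firefly Algorithm (minimization).

    f      -- objective function; brightness is I = -f(x)
    bounds -- (low, high) tuple shared by every dimension (an assumption)
    """
    rng = random.Random(seed)
    low, high = bounds
    # Random initial positions for the N fireflies.
    swarm = [[rng.uniform(low, high) for _ in range(dim)] for _ in range(n)]
    for _ in range(max_iterations):
        for i in range(n):
            for j in range(n):
                # Rule 2: firefly i moves towards any brighter firefly j.
                if f(swarm[j]) < f(swarm[i]):
                    r2 = sum((a - b) ** 2 for a, b in zip(swarm[i], swarm[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    swarm[i] = [
                        xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)
                        for xi, xj in zip(swarm[i], swarm[j])
                    ]
        # Rank the fireflies; swarm[0] is the best solution found so far.
        swarm.sort(key=f)
    return swarm[0]
```

For example, `firefly_algorithm(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))` approaches the minimum of the sphere function at the origin.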

Mathematical formulation of the Firefly Algorithm
The attractiveness of a firefly and the light intensity are two very important factors for the algorithm. As we said before, to simplify the process the attractiveness of the firefly is related to its brightness, which is determined by the fitness value of the firefly's position.
For simplicity, we determine the brightness (or light intensity) as I(x) ∝ −f(x), but the attractiveness β is more complicated, since it is relative to the observer's distance from the firefly (the distance r_ij between fireflies i and j). As the light intensity I(r) varies according to the inverse square law, we can say that I(r) = I_s/r², where I_s is the light intensity at the source. But distance is not the only factor affecting the attractiveness: as we said before, light is lost in the environment with a degree of absorption given by the absorption coefficient γ. The effect of both factors can be expressed with the following Gaussian form:

I(r) = I_s e^(−γ r²)    (Equation 1)

Remembering that the attractiveness β of a firefly is related to how other fireflies perceive its light intensity, the attractiveness of a firefly i perceived by a firefly j can be formulated as follows:

β(r_ij) = β_0 e^(−γ r_ij²)    (Equation 2)

where β_0 is the attractiveness of firefly i at r = 0 and r_ij is the distance between both fireflies. This distance is calculated as the Cartesian distance:

r_ij = ‖x_i − x_j‖ = √(Σ_{k=1..d} (x_{i,k} − x_{j,k})²)    (Equation 3)

where x_{i,k} is the k-th component of the position x_i of firefly i and d is the number of dimensions. If a firefly i finds an attractive (brighter) firefly j, the movement of firefly i towards firefly j is defined as follows:

x_i = x_i + β_0 e^(−γ r_ij²)(x_j − x_i) + α ε_i    (Equation 4)

where the constant β_0 is usually set to 1, the random factor α ∈ (0, 1), and ε_i is a vector of random numbers.
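The Cartesian distance and the Gaussian attractiveness decay above can be checked numerically with a small sketch (the helper names are ours):

```python
import math

def distance(x_i, x_j):
    # Cartesian (Euclidean) distance r_ij between fireflies i and j.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_i, x_j)))

def attractiveness(beta0, gamma, r):
    # beta(r) = beta0 * exp(-gamma * r^2): equals beta0 at r = 0
    # and decays with distance and with the absorption coefficient gamma.
    return beta0 * math.exp(-gamma * r ** 2)
```

As expected, the attractiveness is exactly β_0 at r = 0 and strictly decreases as r grows.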

Multi-threaded Firefly Algorithm
Nowadays, parallel architectures are accessible at low costs and the implementation of parallel processes has been made easier with today's tools. This has made researchers develop multi-threaded algorithms to solve different problems [24].
The main purpose of multi-threaded implementations is to improve execution times by running a process simultaneously on several processors instead of a single one [25].

Modified Firefly Algorithm
Our work is based on the original Firefly Algorithm, but we made some modifications to improve its results even when it is not executed in the multi-threaded implementation, especially to improve the algorithm's exploration and exploitation abilities.
By exploration, we mean that the algorithm is capable of searching widely, with flexibility and risk-taking, through the objective function's search space, so that no subspace is left unexplored. Exploitation consists of refining the results once a potential subspace has been found [26].
To achieve this, we propose a modification of the movement equation in which the randomization coefficient is smaller for the first fireflies in the swarm, giving them the possibility to exploit, and increases towards the last ones so that they explore. This is done because the fireflies are sorted after each iteration, so the first fireflies are the ones with the better results and the final ones have the worst results; with a bigger randomization coefficient, the latter can keep exploring to improve their results. With this consideration, our proposed movement equation is as follows:

x_i = x_i + β_0 e^(−γ r_ij²)(x_j − x_i) + α (i/N) ε_i    (Equation 5)

where the randomization coefficient of each firefly is modified according to the number of fireflies in the swarm (N) and the firefly's position in the ordered cluster (i).
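The rank-dependent coefficient can be sketched as below. Note that the linear scaling α·i/N is an assumption of this sketch; the text only states that the coefficient grows from the best-ranked firefly towards the worst-ranked one:

```python
def ranked_alpha(alpha, i, n):
    """Randomization coefficient for the i-th ranked firefly (1-indexed).

    Assumes the linear scaling alpha * i / N: the best-ranked fireflies
    (small i) get a small coefficient and exploit, while the worst-ranked
    ones (i close to N) keep a large coefficient and explore.
    """
    return alpha * i / n
```

With α = 0.5 and N = 10, the best firefly moves with coefficient 0.05 while the worst keeps the full 0.5.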
At the beginning of the algorithm, the fireflies have random positions, so there is no guarantee that the first ones have good results; therefore, we use the original movement equation (Equation 4) for the first iterations and our new movement equation (Equation 5) for the last ones. We use a percentage of iterations (POI) to determine how many iterations should be executed before we start using the new movement equation.
In addition to the previous improvements, we allow the fireflies to explore new solutions but only accept them if they improve: each firefly keeps temporary values (the ones that change during the iteration) and current values, which are only replaced if the temporary ones obtained better results.
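This greedy acceptance rule can be sketched as a small helper (the function name is ours):

```python
def accept_if_better(f, current, candidate):
    """Keep the candidate (temporary) position only when it improves the
    objective function; otherwise keep the current one (minimization)."""
    return candidate if f(candidate) < f(current) else current
```

The firefly's current position therefore never gets worse, which preserves the best result found so far by each firefly.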
The pseudocode in Algorithm 2 details this modified version of the Firefly Algorithm.

Multi-threaded implementation
The modified version of the Firefly Algorithm (Algorithm 2) is the one to be executed simultaneously by the threads, but for taking more advantage of this multi-threaded implementation, we use some crossover techniques that allow the independent executions of the algorithm to communicate between them periodically so that they can learn from each other.
The crossover techniques are executed after a determined number of iterations (CNI) is reached by each of the independent executions. All the processes are paused when they reach the CNI iterations, and their results are compared and modified according to the selected crossover technique. After completing the crossover, the processes resume their execution. We developed two crossover techniques: Crossover with the K best threads and Crossover with Simulated Annealing with the K best threads.
• Crossover with the K best threads. This crossover technique consists of sorting the threads according to the best fitness they obtained for the objective function and selecting the k_best best threads. For each of the remaining threads, a random thread k_best_i is selected to copy its parameter values, so when the thread resumes the execution of the algorithm, it starts with the same parameter values as its assigned k_best_i thread.
• Crossover with Simulated Annealing with the K best threads. This technique also sorts the threads and selects the k_best ones, but uses the Simulated Annealing process [22] to determine whether a thread is going to receive the parameters of a random thread from the k_best ones or not. To determine this, an expression g(t) is calculated as presented in [27], where t is the current iteration and Max_Iterations is the maximum number of iterations for the algorithm.
A random number in the interval [0, 1) is generated for each of the threads outside the set of k_best threads. A thread receives the parameters of its assigned k_best_i thread only if g(t) < random[0, 1).
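Both techniques can be sketched as follows. The helper names, the list-based representation of each thread's parameters, and passing the cooling expression g as a parameter (its exact form from [27] is not reproduced here) are our own assumptions:

```python
import random

def crossover_k_best(params, fitness, k_best, rng):
    """'Crossover with the K best threads': every thread outside the k_best
    group copies the parameters of a randomly chosen k_best thread."""
    order = sorted(range(len(params)), key=lambda t: fitness[t])  # best first
    best = order[:k_best]
    for t in order[k_best:]:
        donor = rng.choice(best)
        params[t] = list(params[donor])  # copy the values, do not alias
    return params

def sa_accepts(g, t, rng):
    """Simulated-Annealing acceptance: the thread receives the parameters of
    its assigned k_best thread only if g(t) < random [0, 1)."""
    return g(t) < rng.random()
```

With 4 threads and k_best = 2, the two best threads keep their parameters and the other two restart from a copy of one of them.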
Having explained this, we can present how the multi-threaded implementation of the modified Firefly Algorithm works. The pseudocode in Algorithm 3 describes the process. Algorithm 3 receives the following parameters:
• NT: the number of threads that will execute the algorithm simultaneously.
• CNI: the number of iterations that each individual process will perform before pausing for the application of the crossover technique.
• k_best: the number of threads that will be selected so that their parameters can be copied to the other threads.
• N : the number of fireflies that will make up the swarm of each individual process.
• α, β_0, γ: the randomization factor, the attractiveness at the source, and the light absorption coefficient used for the Firefly Algorithm (Algorithm 2).
• Max_Iterations: the maximum number of iterations that all the processes will perform.
• POI: the percentage of iterations that will be executed before the new movement equation (Equation 5) is used, as shown in Algorithm 2.
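The synchronization pattern described by these parameters (NT independent runs that pause every CNI iterations so the crossover can mix their results) can be sketched with a barrier. The function names and callback signatures below are our own assumptions, not the paper's pseudocode:

```python
import threading

def run_parallel(worker, crossover, nt, max_iterations, cni):
    """Sketch of Algorithm 3's synchronization.

    worker(thread_id, iterations) -- runs a chunk of Algorithm 2 iterations
    crossover()                   -- applies the selected crossover technique
    Assumes max_iterations is a multiple of cni.
    """
    # When all NT threads reach the barrier, one of them runs the crossover.
    barrier = threading.Barrier(nt, action=crossover)

    def loop(thread_id):
        done = 0
        while done < max_iterations:
            worker(thread_id, cni)   # run a CNI-iteration chunk independently
            done += cni
            barrier.wait()           # pause until every thread reaches here

    threads = [threading.Thread(target=loop, args=(t,)) for t in range(nt)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
```

The `action` callback of `threading.Barrier` is executed by exactly one thread each time the barrier trips, which matches the "pause all, then cross over" step.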

Experiments
To test our algorithm, we based our experiments on the results published by Hashmi et al. in [28].

Benchmark functions
We used the six benchmark functions shown in Table 1 to test our algorithm, since they are the ones used in [28], so that we can compare the results obtained. Table 1 shows all the details about the functions (function number, name, dimensions, and mathematical equation).

Experimental setup
We evaluated the six benchmark functions with different combinations of the number of threads NT and the k_best parameter, varying the number of fireflies N as in [28]. The remaining parameters were kept fixed. We executed the algorithm 20 times in order to obtain an average result, which is compared with the best values obtained by [28]. Hashmi et al. vary the number of fireflies in their experiments, so we did the same, but we also varied the NT and k_best parameters. We ended up with six different experiments to test the objective functions:
1. 10 fireflies using the crossover with the K best technique (Table 2)
2. 10 fireflies using the crossover with Simulated Annealing with the K best technique (Table 3)
3. 20 fireflies using the crossover with the K best technique (Table 4)
4. 20 fireflies using the crossover with Simulated Annealing with the K best technique (Table 5)
5. 40 fireflies using the crossover with the K best technique (Table 6)
6. 40 fireflies using the crossover with Simulated Annealing with the K best technique (Table 7)
Each of these experiments varies the number of threads NT over 7, 8, and 10, and the k_best value over 3 and 4. (The minimum of the Trid 10 function is obtained by f(x)_min = −d(d + 4)(d − 1)/6.)
The experiments were executed on a Dell Precision M6800 laptop with an Intel Core i7-4900MQ (quad core, 2.80 GHz, 3.8 GHz Turbo, 8 MB cache) and 32 GB of RAM.

Results of the experiments
In this subsection, we show the results obtained for the different experiments (Tables 2, 3, 4, 5, 6, and 7). The table headings indicate the number of threads used and the value of the k_best parameter; for example, the heading 8T-3K indicates that the number of threads is NT = 8 and the value of k_best = 3.
The best results obtained for each function are marked in bold. We applied the Wilcoxon test [29] to validate that our results were significantly better than the ones obtained by the original algorithm.
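As an illustration of the statistic behind the Wilcoxon signed-rank test (in practice, a library routine such as scipy.stats.wilcoxon would be used, which also provides the p-value), the statistic W for two paired samples can be computed as follows; the function name and the minimal tie handling are our own simplifications:

```python
def wilcoxon_signed_rank(x, y):
    """Minimal Wilcoxon signed-rank statistic W = min(W+, W-) for paired
    samples. Zero differences are dropped; tied absolute differences
    receive their average rank."""
    diffs = [a - b for a, b in zip(x, y) if a - b != 0]
    order = sorted(range(len(diffs)), key=lambda k: abs(diffs[k]))
    rank = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # Group tied absolute differences and assign their average rank.
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + 1 + j) / 2  # positions i..j-1 hold 1-based ranks i+1..j
        for k in range(i, j):
            rank[order[k]] = avg
        i = j
    w_plus = sum(r for d, r in zip(diffs, rank) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, rank) if d < 0)
    return min(w_plus, w_minus)
```

A small W (relative to the critical value for the sample size) indicates that the paired results differ significantly, which is the criterion used to validate the comparison against the original algorithm.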

Results analysis
To show the behavior of our algorithm, we prepared Figure 2, which shows the convergence curve of the algorithm using 10 fireflies, 10 threads, and the k_best parameter set to 2. The vertical axis shows the fitness of the objective function, while the horizontal axis represents the iterations (from 1 to 10000). The algorithm converges quickly to small values, which is why in future work we will propose modifications to start exploiting the search space once the algorithm converges, although this can also be managed with the POI parameter. Tests varying this parameter are also planned as future work.
We also plotted the trajectory curves of the firefly that obtained the best result over 500 iterations with a CNI of 100, keeping the remaining parameters the same. The red dot shows the last value obtained (the best value). Figure 3 shows the trajectory of the best firefly with the crossover with the K best technique, while Figure 4 uses the crossover with Simulated Annealing with the K best technique. In these graphs, we can see sudden changes in position that are consequences of the crossover between the threads.

Conclusions
Our algorithm obtained significantly better results than the original implementation of the Firefly Algorithm. The tables with the results show that, for every function, regardless of the crossover technique or the NT and k_best parameters, our algorithm obtained the best results. As we mentioned in the analysis section, the algorithm seems to converge quickly to near-optimal values, so we plan to perform tests varying the POI parameter to start exploiting the search space at a better moment.