Proceedings of the 1st International Symposium on Square Bamboos and the Geometree (ISSBG 2022)

Advanced Particle Swarm Optimization Methods for Electromagnetics

1. INTRODUCTION

Optimization is a widely used concept in many fields, such as engineering, economics, management, physical sciences, and social sciences. Its purpose is to identify the global maximum or minimum of a fitness function. Finding all optimal points of an objective function can aid in selecting a robust design that simultaneously considers various constraints and performance criteria.

Designers of microwave and antenna systems face the challenge of finding optimal solutions for electromagnetic problems of increasing complexity. This can be a difficult task as it involves evaluating electromagnetic fields in three dimensions, considering a large number of parameters and complex constraints, and dealing with non-differentiable and discontinuous regions. These optimization problems are often non-linear and more challenging to solve than linear ones, especially when many locally optimal solutions are in the feasible region.

When developing electromagnetic systems, it is essential to carefully consider how the different design elements interact with each other. Instead of relying on brute-force computational techniques, experts use advanced optimization procedures to achieve the best results. These procedures can be grouped into two categories: deterministic and stochastic methods. While deterministic methods have their advantages, stochastic methods have the potential to find the global optima of a problem, no matter where the search begins. Stochastic algorithms are highly valued by electromagnetic engineers for their ability to efficiently find global optima, even when faced with nonlinear and discontinuous problems that involve many variables. They are flexible, adaptable, easy to implement, and can handle complex fitness functions without requiring the computation of derivatives. Unlike traditional searching methods, these algorithms are not overly reliant on the starting point, making them an invaluable tool for optimizing non-differentiable cost functions in complex multimodal search spaces. However, due to their stochastic behavior, these algorithms require many iterations to produce reliable results.

Various swarm intelligence-based optimization algorithms have been developed to address the differing descriptors and unknowns in each optimization problem. These algorithms include particle swarm optimization, ant colony optimization, cuckoo search, firefly algorithm, bat algorithm, artificial fish swarm algorithm, flower pollination algorithm, artificial bee colony, wolf search algorithm, and gray wolf optimization [1]. Choosing the most suitable algorithm is crucial, as there is no general rule for this decision. Key factors to consider include good convergence properties, ease of use, ability to manage complex fitness functions, a limited number of control parameters, and effective use of the parallelism offered by modern computational architecture.

Particle swarm optimization (PSO) has gained popularity among researchers in the electromagnetic community since its inception. Many versions of the original algorithm have been developed using different parameter automation strategies. PSO has proven to be a powerful optimization method for solving various EM and antenna design problems, such as antenna pattern synthesis, reflector antenna shaping, patch antennas, EM absorber designs, and microwave filter design. The parallel implementation of PSO enables the simultaneous evaluation of all agents involved, significantly speeding up the optimization process. Compared to other evolutionary methods, PSO is a more effective and cost-efficient optimization algorithm that provides better results with fewer parameter adjustments [2,3,4].

2. CLASSIC PSO ALGORITHM

The PSO algorithm, created by Kennedy and Eberhart in 1995 [5], is a type of optimizer that mimics the behavior of swarms of animals like bees, fish, or birds. The focus is on the interaction between independent agents and social or swarm intelligence. The swarm consists of particles, each representing a potential solution to an optimization problem. Each particle searches for the global optimal point in a multi-dimensional solution space by adjusting its position based on its own experience and the experiences of other particles. The particle's position changes by altering its velocity and using the best position it has visited so far (called personal best) and the best position found by all particles (called global best).

Let us consider a swarm of M particles in an N-dimensional search space. The swarm can be characterized by two N×M matrices, the position matrix X and the velocity matrix V:

X(t) = \begin{bmatrix} x_{11}(t) & x_{12}(t) & \cdots & x_{1M}(t) \\ x_{21}(t) & x_{22}(t) & \cdots & x_{2M}(t) \\ \vdots & \vdots & & \vdots \\ x_{N1}(t) & x_{N2}(t) & \cdots & x_{NM}(t) \end{bmatrix}   (1)

V(t) = \begin{bmatrix} v_{11}(t) & v_{12}(t) & \cdots & v_{1M}(t) \\ v_{21}(t) & v_{22}(t) & \cdots & v_{2M}(t) \\ \vdots & \vdots & & \vdots \\ v_{N1}(t) & v_{N2}(t) & \cdots & v_{NM}(t) \end{bmatrix}   (2)
where t is the iteration counter (a unit pseudo-time increment), x_{nm}(t) is the n-th position component of the m-th particle at the t-th iteration, v_{nm}(t) is the corresponding velocity component, and every column x_m(t) of X is a candidate solution of the problem. Moreover, the personal best position matrix X^b and the global best position vector G are introduced:
X^b(t) = \begin{bmatrix} x^b_{11}(t) & x^b_{12}(t) & \cdots & x^b_{1M}(t) \\ x^b_{21}(t) & x^b_{22}(t) & \cdots & x^b_{2M}(t) \\ \vdots & \vdots & & \vdots \\ x^b_{N1}(t) & x^b_{N2}(t) & \cdots & x^b_{NM}(t) \end{bmatrix}   (3)

G(t) = \left[\, g_1(t) \;\; g_2(t) \;\; \cdots \;\; g_N(t) \,\right]   (4)
where xnmb(t) is the n-th personal best position element of the m-th particle at the t-th iteration, and gn(t) is the n-th global best position of the swarm at the t-th iteration. The swarm is randomly initialized, and subsequently, the position matrix is updated by the application of a suitable velocity matrix as follows:
X(t+1) = X(t) + V(t+1)   (5)

The velocity field's update rule is the primary operator in the PSO algorithm, as it determines how the swarm moves towards the global optimum. To determine the best position update for the next iteration, the information on both global and local best positions needs to be processed correctly. Striking a balance between cognitive and social perspectives enhances the efficiency of the PSO algorithm. These perspectives can be illustrated as follows:

  • Cognitive perspective: at the t-th iteration, the m-th particle compares its fitness F[x_m(t)] with the fitness F[x_m^b(t−1)] of its personal best at iteration t−1. If F[x_m(t)] < F[x_m^b(t−1)], the algorithm sets x_m^b(t) = x_m(t). In this way, a record of the best position achieved by each individual particle is kept for the velocity update.

  • Social perspective: at the t-th iteration, the m-th particle compares its fitness F[x_m(t)] with the fitness F[G(t)] of the current global best position. If F[x_m(t)] < F[G(t)], the algorithm sets G(t) = x_m(t). So, by updating the global best position, a particle shares its information with the rest of the swarm.

In the classical PSO algorithm, the velocity component is updated so that the contribution due to the cognitive/social perspectives is directly proportional to the difference between the current position of the particle and the previously recorded personal/global best, respectively. More precisely, the velocity vector of each particle is the sum of three vectors: one representing the current motion, one pointing toward the particle's best position, this being steered by the individual knowledge accumulated during the evolution of the swarm, and, finally, one pointing towards the global best position, and modeling the contribution of the social knowledge. In particular, the m-th particle updates its velocity and position according to the following equations:

v_{nm}(t+1) = \underbrace{w\, v_{nm}(t)}_{\text{current motion term}} + \underbrace{c_1 r_1 \left[ x^b_{nm}(t) - x_{nm}(t) \right]}_{\text{individual knowledge term}} + \underbrace{c_2 r_2 \left[ g_n(t) - x_{nm}(t) \right]}_{\text{social knowledge term}}   (6)

x_{nm}(t+1) = x_{nm}(t) + v_{nm}(t+1)   (7)
where w ∈ [0,1] is a constant or variable value called inertia weight, c1 and c2 are two positive constants called the cognitive and social parameters respectively, and r1 and r2 are two random numbers uniformly distributed in the range [0,1]. The objective function evaluated at the new positions is compared with a user-defined error criterion. If this criterion is not satisfied, the random numbers r1 and r2 generate different numerical values in the next update, and the process is iterated until the error criterion is met. The personal best position of each particle is updated using the following equation:
x^b_{nm}(t+1) = \begin{cases} x^b_{nm}(t) & \text{if } F\left[ x_m(t+1) \right] \ge F\left[ x^b_m(t) \right] \\ x_{nm}(t+1) & \text{if } F\left[ x_m(t+1) \right] < F\left[ x^b_m(t) \right] \end{cases}   (8)
where F denotes the fitness function.
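As an illustration, here is a minimal NumPy sketch of the classic PSO loop defined by Eqs. (6)-(8) for a minimization problem; the parameter values (w = 0.7, c1 = c2 = 1.5), the function name, and the sphere test function are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def pso_minimize(F, N, M=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 lo=-10.0, hi=10.0, seed=0):
    """Classic PSO loop implementing Eqs. (6)-(8) for a minimization problem."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (N, M))            # position matrix, Eq. (1)
    V = np.zeros((N, M))                       # velocity matrix, Eq. (2)
    Xb = X.copy()                              # personal best matrix, Eq. (3)
    Fb = np.array([F(Xb[:, m]) for m in range(M)])
    g = Xb[:, np.argmin(Fb)].copy()            # global best vector, Eq. (4)
    for _ in range(iters):
        r1 = rng.random((N, M))
        r2 = rng.random((N, M))
        V = (w * V                             # current motion term
             + c1 * r1 * (Xb - X)              # individual knowledge term
             + c2 * r2 * (g[:, None] - X))     # social knowledge term
        X = X + V                              # Eq. (7)
        Fx = np.array([F(X[:, m]) for m in range(M)])
        improved = Fx < Fb                     # Eq. (8), minimization convention
        Xb[:, improved] = X[:, improved]
        Fb[improved] = Fx[improved]
        g = Xb[:, np.argmin(Fb)].copy()
    return g, Fb.min()

# Example: sphere function, global minimum 0 at the origin
g_best, f_best = pso_minimize(lambda x: float(np.sum(x**2)), N=5)
```

Note that the random numbers r1 and r2 are drawn independently for every component and every iteration, as required by the stochastic update rule.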

Experience shows that the success or failure of the search algorithm and its computational performance strongly depend on the values of the parameters w, c1, and c2. The leading causes for failure are: i) particles move out of the search space since their velocity increases rapidly; ii) particles become immobile since their velocity rapidly decreases; and iii) particles cannot escape from locally optimal solutions. The random variables ϕ1 = c1r1 and ϕ2 = c2r2 model the stochastic exploration of the search space. The weighting constants c1 and c2 regulate the relative importance of the cognitive perspective versus the social perspective. In particular, different weighting constants are used in recent versions of PSO to control the search ability more finely by biasing the new particle position toward its historically best position or globally best position.

High values of c1 and c2 result in new positions relatively far from the former ones, thus leading to a broader global exploration but also, potentially, to a divergence of the particle motion. Small values of c1 and c2 help achieve a more refined local search around the best positions, due to the limited movement of the particles. The condition c1 > c2 biases the search toward the particle's own best experience, whereas c1 < c2 biases it toward the global best experience. The inertia weight w is used to control the algorithm convergence.

Large values of w improve exploration, while a small w results in confinement within an area surrounding the global maximum. Convergence speed is an important aspect to assess in electromagnetic problems since the numerical evaluation of the fitness function generally takes considerable time. Convergence speed can be tuned by adequately setting the swarm size, as well as the initial population and boundary conditions. In particular, the parameters of the algorithm can be varied to adapt to the specific type of problem and, in this way, achieve better search efficiency. Particular attention should be paid to selecting the algorithm parameters to control the particles' divergence and convergence. Recent research has been presented in literature to illustrate these key aspects [2,3,6,7,8].

When the global best position becomes a local optimum, the PSO search performance may suffer and result in premature convergence. This happens because particles near the local optimum become inactive as their velocities approach zero. This limitation is more pronounced when PSO is applied to complex optimization problems with large search spaces and multiple local optima. To prevent the phenomenon of swarm explosion and facilitate convergence, specific solutions must be implemented [4,8,9,10]. The goal is to balance global and local searching abilities for an optimal PSO algorithm. Without constraints on the velocity field, particles may fly out of the physically meaningful solution space, leading to swarm divergence in large search spaces. To prevent this, a clamping rule based on upper and lower limits can be enforced on the velocity as follows:

v_{nm}(t+1) = \begin{cases} v_{\max} & \text{if } v_{nm}(t+1) > v_{\max} \\ -v_{\max} & \text{if } v_{nm}(t+1) < -v_{\max} \end{cases}   (9)
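In practice, the clamping rule of Eq. (9) is an element-wise saturation of the velocity matrix; a minimal NumPy sketch (the function name and sample values are illustrative):

```python
import numpy as np

def clamp_velocity(V, vmax):
    """Eq. (9): limit every velocity component to the range [-vmax, +vmax]."""
    return np.clip(V, -vmax, vmax)

# Components beyond +/-vmax are saturated; the rest pass through unchanged
V = np.array([[3.0, -0.2],
              [-7.5, 1.1]])
print(clamp_velocity(V, vmax=2.0))
```

The same rule is applied independently along every dimension of the search space.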

The value of the parameter vmax is selected empirically and can affect the behavior of the algorithm significantly. An excessively large value of vmax may cause good solutions to be overlooked, whereas a too small value restricts the motion of the particles and prevents portions of the search domain from being explored efficiently. Proper algorithm settings, instead, can enhance the global exploration ability while avoiding entrapment in local optima. Generally, the threshold velocity is problem-dependent and dimension-dependent [11,12,13]. Modifications to the PSO learning process have been proposed [9,14,15]. In particular, a variant of the learning equation was introduced by M. Clerc in 2002. In this scheme, the velocity is updated by the equation [14]:

v_{nm}(t+1) = \chi \left\{ v_{nm}(t) + \vartheta_1 r_1 \left[ x^b_{nm}(t) - x_{nm}(t) \right] + \vartheta_2 r_2 \left[ g_n(t) - x_{nm}(t) \right] \right\}   (10)
where:
\chi = \frac{2}{\left| 2 - \vartheta - \sqrt{\vartheta^2 - 4\vartheta} \right|}, \qquad \vartheta = \vartheta_1 + \vartheta_2, \quad \vartheta > 4   (11)
is the constriction factor. The use of the update equations in Eq. (10) and Eq. (11) enhances the convergence properties of the algorithm [16]. As a matter of fact, this approach has proved effective in a large number of problems [13,17,18]. Furthermore, it allows a straightforward identification of the algorithm parameters: once the single value ϑ is set, the inertia weight and the cognitive and social constants follow automatically. This aspect makes the method very attractive and usable by non-expert users.
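For the common choice ϑ1 = ϑ2 = 2.05 (so ϑ = 4.1), Eq. (11) yields the well-known value χ ≈ 0.7298; a quick numerical check (function name illustrative):

```python
import math

def constriction_factor(theta1, theta2):
    """Eq. (11): Clerc's constriction factor; requires theta1 + theta2 > 4."""
    theta = theta1 + theta2
    if theta <= 4:
        raise ValueError("theta1 + theta2 must exceed 4")
    return 2.0 / abs(2.0 - theta - math.sqrt(theta**2 - 4.0 * theta))

chi = constriction_factor(2.05, 2.05)
print(round(chi, 4))  # 0.7298
```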

3. QUANTUM-BEHAVED PSO ALGORITHM

Several variants of the PSO algorithm improve the convergence speed and accuracy by implementing a velocity threshold and constriction factor. However, these variants are only semi-deterministic because the particles follow a deterministic trajectory with two random acceleration coefficients. This can weaken the global search ability, especially during the later stages of the search process. PSO's search pattern heavily relies on the personal and global best positions, and if these particles get stuck, the entire swarm will converge to that trapped position. To overcome this drawback, the quantum-behaved particle swarm optimization algorithm (QPSO) was developed. QPSO uses a probabilistic procedure, allowing particles to move under quantum-mechanical rules instead of classical Newtonian dynamics. QPSO eliminates velocity vectors, has fewer control parameters, and has a faster convergence rate with a stronger search ability, making it easier to implement for complex problems than the original PSO.

Using the δ potential well, QPSO generates new particles around the previous best point and receives feedback from the mean best position to enhance the global search ability.

Assuming that the PSO is a quantum system, the m-th particle can be treated as a spin-less particle moving in an N-dimensional search space, with a δ potential well centered at the point p_{nm}(t), 1 ≤ n ≤ N. So, the quantum state of the n-th component of particle m is characterized by the wave function Ψ instead of position and velocity. In such a framework, the exact values of X and V cannot be determined simultaneously, since only the probability of the particle appearing at position X can be evaluated. It is defined by the probability density function |Ψ(r_{nm},t)|², satisfying the general time-dependent Schrödinger equation:

j\hbar \frac{\partial \Psi(r_{nm},t)}{\partial t} = \hat{H}(r_{nm})\, \Psi(r_{nm},t)   (12)
where ħ is the reduced Planck constant and Ĥ is the time-independent Hamiltonian operator given by:
\hat{H}(r_{nm}) = -\frac{\hbar^2}{2m} \nabla^2 - \delta(r_{nm})   (13)
where m is the mass of the particle, δ(r_{nm}) is the potential energy distribution, and r_{nm} = x_{nm} − p_{nm} is the n-th component of the vector difference between the m-th particle position and the corresponding δ potential well position. Applying the separation of variables method, the time dependence of the wave function can be separated from the spatial dependence, obtaining:
\Psi(r_{nm},t) = \varphi(r_{nm}) \exp(-jEt/\hbar)   (14)
where E is the energy of the particle and φ(rnm) satisfies the stationary Schrödinger equation:
\frac{\partial^2 \varphi(r_{nm})}{\partial r_{nm}^2} + \frac{2m}{\hbar^2} \left[ E + \delta(r_{nm}) \right] \varphi(r_{nm}) = 0   (15)
and the normalization condition:
\int_{\mathbb{R}^N} \left| \Psi(r_{nm},t) \right|^2 \mathrm{d}^N r = 1   (16)

Solving Eq. (15) and taking into account Eq. (16) the probability density function is found to be:

\left| \Psi(r_{nm},t) \right|^2 = \frac{1}{L_{nm}(t)} \exp\left[ -2 \left| x_{nm}(t) - p_{nm}(t) \right| / L_{nm}(t) \right]   (17)
where Lnm(t) is the standard deviation of the distribution. Employing the Monte Carlo inverse method [19] it is possible to show that the update equation relevant to the n-th position component of particle m is:
x_{nm}(t+1) = \begin{cases} p_{nm}(t) + \dfrac{L_{nm}(t)}{2} \ln\left( \dfrac{1}{u_{nm}} \right) & \text{if } s_{nm} \ge 0.5 \\[1.5ex] p_{nm}(t) - \dfrac{L_{nm}(t)}{2} \ln\left( \dfrac{1}{u_{nm}} \right) & \text{if } s_{nm} < 0.5 \end{cases}   (18)
where:
p_{nm}(t) = \xi_{nm}(t)\, x^b_{nm}(t) + \left[ 1 - \xi_{nm}(t) \right] g_n(t)   (19)
and u_{nm}, ξ_{nm} and s_{nm} are independent random numbers generated according to a uniform probability distribution in the range [0,1]. In order to improve the efficiency of the QPSO algorithm, the mean best position of the population, \bar{x}^b_n(t), is defined as the mean of the personal best positions of all particles:
\bar{x}^b_n(t) = \frac{1}{M} \sum_{m=1}^{M} x^b_{nm}(t)   (20)

In this way, the value of Lnm(t) is given by:

L_{nm}(t) = 2\beta \left| \bar{x}^b_n(t) - x_{nm}(t) \right|   (21)
with β being the contraction-expansion coefficient. Considering that the population size and the number of iterations are set in advance, β is the only parameter of the QPSO algorithm that can be tuned to control its speed and convergence [20]. In particular, to balance the local and global search of the algorithm, a dynamic adjustment of the contraction-expansion coefficient, linearly decreasing from 1 to 0.5, can be used:
\beta(t) = 1 - \frac{t}{2\, t_{\max}}   (22)
with tmax being the maximum number of iterations.
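A single QPSO position update, combining Eqs. (18)-(22), can be sketched as follows in NumPy (function and variable names are illustrative; Xb holds the personal bests, g the global best):

```python
import numpy as np

def qpso_step(X, Xb, g, t, t_max, rng):
    """One QPSO iteration for an N x M swarm, per Eqs. (18)-(22)."""
    N, M = X.shape
    beta = 1.0 - t / (2.0 * t_max)                 # Eq. (22)
    mbest = Xb.mean(axis=1, keepdims=True)         # Eq. (20), N x 1
    xi = rng.random((N, M))
    p = xi * Xb + (1.0 - xi) * g[:, None]          # Eq. (19)
    L = 2.0 * beta * np.abs(mbest - X)             # Eq. (21)
    u = rng.random((N, M))
    s = rng.random((N, M))
    sign = np.where(s >= 0.5, 1.0, -1.0)
    return p + sign * (L / 2.0) * np.log(1.0 / u)  # Eq. (18)

rng = np.random.default_rng(1)
X = rng.uniform(-10, 10, (5, 30))
Xb, g = X.copy(), X[:, 0].copy()
X_next = qpso_step(X, Xb, g, t=1, t_max=500, rng=rng)
```

Note that no velocity matrix appears: the particle positions are resampled directly from the Laplace-shaped density of Eq. (17).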

The illustrated QPSO algorithm has proven to be more effective than the traditional PSO algorithm in various standard optimization problems [20,21,22,23,24,25]. However, from Eq. (20) it can be seen that every particle contributes equally to the value of \bar{x}^b_n(t), since the mean best position is simply the average of the personal best positions of all particles. This approach accounts for the search scope of each particle and can be reasonable in some cases. It should be noticed, however, that by analogy with real-life social behavior, the equally weighted mean position may not be the best choice. To this aim, a control method based on promoting particle importance has been developed [26]. In such an approach, elitism is associated with the particle's fitness value: the better the fitness value, the more important the particle. Each particle is assigned a weighting coefficient αm decreasing linearly from the best particle to the worst: the closer the fitness to the optimal value, the larger the weight of the particle. So, the mean best position is calculated as:

\bar{x}^b_n(t) = \frac{1}{M} \sum_{m=1}^{M} \alpha_m\, x^b_{nm}(t)   (23)
where the weighting coefficient ranges linearly from 1.5, for the best particle, down to 0.5, for the worst one. The corresponding QPSO algorithm is called weighted QPSO (WQPSO). To further improve the convergence rate of the WQPSO algorithm, we developed an enhanced weighting methodology in which the computation of the mean best position \bar{x}^b_n(t) directly embeds the information associated with the error function E. Convergence speed is essential in electromagnetic problems, since every evaluation of the objective function takes a considerable amount of time, so any effort that reduces the computational time, and thereby shortens the design process, is especially valuable. The resulting enhanced weighted QPSO (EWQPSO) algorithm is based on the following adaptive update equation:
\bar{x}^b_n(t) = \frac{\sum_{m=1}^{M} \Lambda_m(t)\, x^b_{nm}(t)}{\sum_{m=1}^{M} \Lambda_m(t)}   (24)
where:
\Lambda_m(t) = \begin{cases} 1 - \dfrac{E\left[ x^b_m(t) \right]}{\max\left\{ E\left[ x^b_1(t) \right], E\left[ x^b_2(t) \right], \ldots, E\left[ x^b_M(t) \right] \right\}} & \text{minimization problem} \\[3ex] 1 - \dfrac{\min\left\{ E\left[ x^b_1(t) \right], E\left[ x^b_2(t) \right], \ldots, E\left[ x^b_M(t) \right] \right\}}{E\left[ x^b_m(t) \right]} & \text{maximization problem} \end{cases}   (25)

The swarm moves towards positions close to the best value through a stochastic process. The EWQPSO algorithm includes an absorbing boundary condition, which means that any particle that goes beyond the search range in one dimension will be brought back to the boundary in that dimension. The flowchart in Fig. 1 shows how the algorithm solves the minimization problem.
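The adaptive mean best of Eqs. (24)-(25) can be sketched as follows for the minimization case (function name and sample data are illustrative; E_pb holds the error-function values of the M personal bests):

```python
import numpy as np

def ewqpso_mean_best(Xb, E_pb):
    """Eqs. (24)-(25), minimization branch: error-weighted mean best position.

    Xb   : N x M matrix of personal best positions
    E_pb : length-M vector of error values E[x_m^b], not all identical
    """
    Lam = 1.0 - E_pb / E_pb.max()              # Eq. (25): worst particle gets weight 0
    return (Xb * Lam).sum(axis=1) / Lam.sum()  # Eq. (24)

# Hypothetical example: the weighted mean is pulled toward the fittest particle
Xb = np.array([[0.0, 1.0, 4.0],
               [0.0, 1.0, 4.0]])
E_pb = np.array([0.1, 0.5, 1.0])     # particle 0 has the smallest error
print(ewqpso_mean_best(Xb, E_pb))    # close to particle 0; plain mean would be ~1.67
```

In contrast to the equally weighted mean of Eq. (20), particles with smaller error dominate the mean best position, accelerating the contraction toward promising regions.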

Figure 1

Flowchart of the EWQPSO algorithm regarding the minimization problem.

4. BENCHMARK TESTS FOR THE EWQPSO ALGORITHM

Several tests have been carried out to verify the effectiveness and performance of the proposed EWQPSO algorithm. In particular, the minimum-search problems for the Alpine and De Jong test functions are considered, changing both the domain dimension N and the number of particles M [8,23]. The maximum generation number is set to tmax = 500 + 10N. For each function, the results calculated using the EWQPSO, WQPSO, and QPSO algorithms have been compared. Each search algorithm is applied 100 times to each test function, and the mean and standard deviation values relevant to the best particle have been calculated.

4.1. Alpine Test Function

The Alpine function is defined as follows:

f(x) = \sum_{i=1}^{N} \left| x_i \sin(x_i) + 0.1\, x_i \right|   (26)
where the global minimum f(xmin) = 0 corresponds to xmin = (0,0,…, 0) coordinates. In this case, the hypercube searching domain is xi ∈ (−10,10), i = 1,2,…,N. Fig. 2 shows the evolution of the mean value corresponding to the best particle position considering a population of M=30 particles and N=10, N=15. In this case, the EWQPSO exhibits remarkable performance compared to the other PSO approaches since it is more accurate and faster.

Figure 2

Evolution of the mean value corresponding to the best particle position of the EWQPSO, WQPSO and QPSO applied to the Alpine function for N=10 (left) and N=15 (right). A swarm composed of M = 30 particles is considered.

4.2. De Jong Test Function

The de Jong function is defined as follows:

f(x) = \sum_{i=1}^{N} i\, x_i^4   (27)
where the global minimum f(xmin) = 0 corresponds to xmin = (0,0,…, 0) coordinates. For the test, the following search domain xi ∈ (−100,100), i = 1,2,…,N, is set. Fig. 3 shows the evolution of the mean value corresponding to the best particle position considering a population of M=30 particles and N=5, N=10. Also, in this case, the EWQPSO is faster and more accurate than the alternative QPSO algorithms.
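The two benchmarks of Eqs. (26)-(27) are straightforward to code; both attain their global minimum 0 at the origin (function names are illustrative):

```python
import numpy as np

def alpine(x):
    """Alpine function, Eq. (26): multimodal, global minimum 0 at the origin."""
    return float(np.sum(np.abs(x * np.sin(x) + 0.1 * x)))

def de_jong(x):
    """De Jong function, Eq. (27): index-weighted quartic, minimum 0 at the origin."""
    i = np.arange(1, len(x) + 1)
    return float(np.sum(i * x**4))

z = np.zeros(10)
print(alpine(z), de_jong(z))  # 0.0 0.0
```

The Alpine function is multimodal over xi ∈ (−10, 10), while the De Jong quartic is unimodal over xi ∈ (−100, 100), which explains the vastly smaller residuals reported for the latter in Table 1.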

Figure 3

Evolution of the mean value corresponding to the best particle position of the EWQPSO, WQPSO and QPSO applied to the de Jong function for N=5 (left) and N=10 (right). A swarm composed of M = 30 particles is considered.

Table 1 lists the mean and standard deviation (STD) values of the minimum associated with the best particle found in each of the 100 runs, for different problem dimensions and test functions. Inspection of the results shows that the developed EWQPSO algorithm achieves improved accuracy.

Test Function   N    QPSO                      WQPSO                     EWQPSO
                     Mean       STD            Mean       STD            Mean        STD
Alpine          5    3.63e-5    1.64e-4        7.99e-5    4.70e-4        1.42e-6     8.31e-6
                10   6.94e-5    3.10e-4        6.35e-5    3.61e-4        2.05e-10    1.74e-9
                15   7.46e-5    7.45e-4        9.23e-6    6.11e-5        1.08e-10    1.08e-9
                20   5.50e-4    5.50e-3        5.09e-4    3.63e-3        2.51e-4     1.77e-3
de Jong         5    1.27e-169  6.48e-169      9.32e-165  6.56e-164      6.37e-193   6.05e-192
                10   3.58e-133  2.73e-132      1.00e-130  9.31e-130      1.04e-142   1.00e-141
                15   3.31e-104  1.44e-103      1.50e-101  7.46e-101      2.35e-108   1.92e-107
                20   3.72e-81   2.59e-80       3.40e-80   1.80e-79       2.29e-84    1.34e-83
Table 1

Mean and STD values of the global best particle calculated using the EWQPSO, WQPSO and QPSO algorithms and considering different test functions.

5. EWQPSO FOR SUPERSHAPED LENS ANTENNA SYNTHESIS

Researchers and engineers have shown extensive interest in dielectric lens antennas due to their potential use in various fields, such as wireless communication systems, smart antennas and radar systems. These antennas are attractive because they are easy to integrate and capable of shaping and collimating beams. In the past, most research focused on 3D dielectric lenses with simple geometries, but recent studies have explored complex shapes using Gielis’ superformula [27,28,29,30]. This formula generates a wide range of 3D shapes by changing a few parameters, which can be optimized using an automated procedure based on the QPSO algorithm.

The surface of the lens can be described by the following Gielis’ equations in a Cartesian coordinate system:

x(\nu,\mu) = R(\nu) \cos\nu \; R(\mu) \cos\mu   (28)

y(\nu,\mu) = R(\nu) \sin\nu \; R(\mu) \cos\mu   (29)

z(\mu) = R(\mu) \sin\mu   (30)

R(\nu) = \left[ \left| \frac{\cos(m_1 \nu/4)}{a_1} \right|^{n_1} + \left| \frac{\sin(m_2 \nu/4)}{a_2} \right|^{n_2} \right]^{-1/b_1}   (31)

R(\mu) = \left[ \left| \frac{\cos(m_3 \mu/4)}{a_3} \right|^{n_3} + \left| \frac{\sin(m_4 \mu/4)}{a_4} \right|^{n_4} \right]^{-1/b_2}   (32)
where m_p, a_p, n_p (p = 1,2,3,4) are positive real numbers and b_q (q = 1,2) are strictly positive real numbers, selected in such a way that the surface of the lens is closed and characterized, at any point, by a curvature radius larger than the working wavelength. The parameters ν ∈ (−π,π) and μ ∈ (0,π/2) denote suitable angle values, whereas the spherical angles are obtained from the equations:
\theta = \arccos\left( \frac{z}{r} \right)   (33)

\varphi = \arctan\left( \frac{y}{x} \right)   (34)
where r = \sqrt{x^2 + y^2 + z^2}.
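A sketch of the surface parameterization of Eqs. (28)-(32), taking the superformula exponent as −1/b as in the usual Gielis convention; with m = 4, unit a, n1 = n2 = 2 and b = 1, the radius reduces to 1 and Eqs. (28)-(30) describe a sphere (function and variable names are illustrative):

```python
import numpy as np

def gielis_radius(angle, m1, m2, a1, a2, n1, n2, b):
    """Superformula radius of Eqs. (31)-(32), with exponent -1/b."""
    term = (np.abs(np.cos(m1 * angle / 4.0) / a1) ** n1
            + np.abs(np.sin(m2 * angle / 4.0) / a2) ** n2)
    return term ** (-1.0 / b)

def lens_surface(nu, mu, pv, pm):
    """Eqs. (28)-(30): Cartesian point of the supershaped surface.

    pv, pm: parameter tuples (m1, m2, a1, a2, n1, n2, b) for R(nu) and R(mu).
    """
    Rv = gielis_radius(nu, *pv)
    Rm = gielis_radius(mu, *pm)
    x = Rv * np.cos(nu) * Rm * np.cos(mu)
    y = Rv * np.sin(nu) * Rm * np.cos(mu)
    z = Rm * np.sin(mu)
    return x, y, z

# Sanity check: m = 4, a = 1, n1 = n2 = 2, b = 1 gives R = 1, i.e. a unit sphere
sphere = (4, 4, 1.0, 1.0, 2.0, 2.0, 1.0)
x, y, z = lens_surface(np.pi / 3, np.pi / 6, sphere, sphere)
print(x**2 + y**2 + z**2)  # ~1.0
```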

Fig. 4 shows the antenna structure used in the synthesis procedure. It consists of a large dielectric lens placed in the center of a circular plate made of electrically conductive material. The plate acts as a ground plane and also helps to reduce back-scattered radiation. The lens is illuminated by the electromagnetic field emitted by an open-ended circular waveguide filled with the same dielectric material as the lens. The propagation of the electromagnetic waves inside the lens is modeled using the tube tracing approach based on the combined geometrical optics/physical optics (GO/PO) approximation. This approximation allows significant simplification of the mathematical model making the simulation of electrically large structures possible with a lower computational effort than full-wave numerical methods. In fact, by virtue of the GO/PO approximation, the traveling electromagnetic wave can be approximated by a set of tubes propagating over a rectilinear path inside the lens [31,32,33]. The accuracy of the method can be further improved by considering the effects of multiple internal reflections occurring within the lens.

Figure 4

Structure of a supershaped dielectric lens antenna.

The developed GO/PO tube-tracing algorithm has been validated by comparison with the full-wave finite integration technique (FIT) adopted in the commercially available electromagnetic solver CST Microwave Studio [31,32,33]. A dedicated novel synthesis procedure based on EWQPSO is adopted to design a particular lens antenna showing a fixed 3D radiation pattern at frequency f=60 GHz. Such antennas could improve the channel capacity in communication systems implementing spatial-division multiplexing. The lens is made from a dielectric material with relative permittivity equal to εr= 2, the cylindrical open-ended waveguide and the metal plate have diameter dw = 2.3 mm and d = 20 cm, respectively. A swarm of M=48 particles has been launched over a maximum pseudo-time tmax = 40. The position vector relevant to the m-th particle is xm=[ n1n2n3n4b1b2]T. The multidimensional search space has been restricted by assuming that all the components of the vector position can range from 1 to 5, while the remaining Gielis’ parameters are a1 = a2 = a3 = a4 = 1 and m1 = m2 = m3 = m4 = 4.

The developed modeling technique is adopted to synthesize a lens antenna featuring a radiation pattern with four main lobes at the frequency f = 60 GHz, a configuration useful when the positions of the receivers are known. The fitness function value is evaluated as:

F(x_m) = \sum_{p=1}^{N_\theta} \sum_{q=1}^{N_\phi} \left| \frac{\hat{D}^T_{p,q} - \hat{D}^m_{p,q}}{1 + \hat{D}^T_{p,q}} \right|   (35)
where D̂^T is the target normalized directivity, expressed in dB, D̂^m is the normalized directivity, in dB, relevant to the m-th particle, and N_θ and N_φ denote the numbers of points in which the polar and azimuthal coordinates are discretized, respectively. The directivity of the considered radiating system can be obtained by the following expression:
D(\theta_{FF}, \phi_{FF}) = \frac{4\pi\, r_{FF}^2 \left| E_{FF} \right|^2}{\eta_0\, P_{tot}}   (36)
where EFF is the electric far-field radiated by the lens antenna at the observation point PFF(rFF,θFF,ϕFF), η0 is the characteristic impedance of the vacuum, rFF is the distance between the observation point and the origin of the coordinate system, and Ptot is the total power radiated by the lens. Under these assumptions, by using the EWQPSO procedure, the optimal lens parameters are found to be: n1 = 4.161, n2 = 4.004, b1 = 1.017, b2 = 1.001, n3 = 2.837, and n4 = 1.010. The radiation solid and the corresponding current density distribution on the lens surface are shown in Fig. 5. As can be seen in Fig. 5, the antenna features four main lobes along the azimuthal directions ϕ = 45°, ϕ = 135°, ϕ = 225° and ϕ = 315° with a directivity of about 10 dBi.
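The mask-matching cost of Eq. (35) is a normalized L1 distance between directivity patterns sampled on an Nθ × Nφ angular grid; a sketch, with the function name and the flat 0 dB mask as hypothetical illustrations:

```python
import numpy as np

def lens_fitness(D_target_dB, D_candidate_dB):
    """Eq. (35): accumulated normalized deviation between directivity masks.

    Both arguments are N_theta x N_phi arrays of normalized directivity in dB.
    Note the denominator 1 + D_target_dB, as written in Eq. (35).
    """
    return float(np.sum(np.abs((D_target_dB - D_candidate_dB)
                               / (1.0 + D_target_dB))))

# Hypothetical 10 x 20 grid: a uniform 0.5 dB offset from a flat 0 dB mask
target = np.zeros((10, 20))
candidate = target + 0.5
print(lens_fitness(target, candidate))  # 100.0
```

A perfect match yields F = 0, so the EWQPSO procedure drives this cost toward zero as the swarm converges on the target mask.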

Figure 5

Radiation solid (left) generated by the current density distribution (right) on the lens surface.

Fig. 6 shows both the target and EWQPSO recovered polar section of the radiation solids. It is worth noting that the synthesized radiation patterns are in excellent agreement with the target masks. Moreover, it is apparent that accounting for the multiple wave reflections occurring within the lens is instrumental to the enhancement of the modeling accuracy of the procedure. Fig. 7 shows the convergence rate of the new optimization procedure (EWQPSO) when applied to the synthesis of the lens antenna illustrated in Fig. 5. The convergence of the average value of the fitness function of the whole swarm to the one corresponding to the global best demonstrates the capability of the entire swarm to search the optimal solution effectively.

Figure 6

Comparison between the target directivity and the directivity of the Gielis’ lens antenna synthesized by means of the EWQPSO procedure: ϕ = 0° (left) and ϕ = 45° (right).

Figure 7

Convergence rate of the EWQPSO procedure.

The lens antenna structures shown in the illustrations have specific radiation pattern properties that can be beneficial for the Wi-Fi IEEE 802.11ad communication protocol operating at 60 GHz. Previously published articles by the authors [20,34] have demonstrated the effectiveness of the proposed EWQPSO model in designing a particular class of dielectric lens antennas defined by the Gielis formula. The EWQPSO technique outperforms classical PSO and genetic algorithms (GAs), as well as the conventional WQPSO procedure, in terms of convergence rate, accuracy and population size. These properties are desirable in the considered context, as the computational burden associated with the evaluation of the fitness function is significant.

6. CONCLUSION

This research study has illustrated an optimization algorithm, EWQPSO, based on the quantum-behaved PSO approach, to solve complex electromagnetic problems. Comparative analysis with conventional QPSO and WQPSO algorithms indicates that EWQPSO is faster and more accurate.

The EWQPSO algorithm is applied to the solution of inverse problems concerning the determination of the Gielis parameters of supershaped dielectric lens antennas with multi-beam characteristics at mm-wave frequencies. The obtained results demonstrate the effectiveness of the proposed approach in identifying optimal solutions. Additionally, the algorithm is easy to implement, as it does not require the evaluation of complicated evolutionary operators or a large number of synthesis parameters. Even when using the generalized Gielis formula [35], with additional design parameters, the same algorithm can be applied [36]. This makes it an appealing and efficient alternative tool for designing and characterizing these types of antennas.

REFERENCES

[1] A.E. Hassanien, E. Emary. Swarm Intelligence: Principles, Advances, and Applications. Boca Raton: CRC Press, 2016. https://doi.org/10.1201/9781315222455
[2] T. Huang, A.S. Mohan. A Hybrid Boundary Condition for Robust Particle Swarm Optimization. IEEE Antennas and Wireless Propagation Letters, 2005, 4: 112–117. https://doi.org/10.1109/LAWP.2005.846166
[3] J.F. Schutte, A.A. Groenwold. A Study of Global Optimization Using Particle Swarms. Journal of Global Optimization, 2005, 31(1): 93–108. https://doi.org/10.1007/s10898-003-6454-x
[4] N. Jin, Y. Rahmat-Samii. Advances in Particle Swarm Optimization for Antenna Designs: Real-Number, Binary, Single-Objective and Multi-Objective Implementations. IEEE Transactions on Antennas and Propagation, 2007, 55(3): 556–567. https://doi.org/10.1109/TAP.2007.891552
[5] J. Kennedy, R. Eberhart. Particle Swarm Optimization. Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948. https://doi.org/10.1109/ICNN.1995.488968
[6] N. Iwasaki, K. Yasuda, G. Ueno. Dynamic Parameter Tuning of Particle Swarm Optimization. IEEJ Transactions on Electrical and Electronic Engineering, 2006, 1(4): 353–363. https://doi.org/10.1002/tee.20078
[7] A. Mahanfar, S. Bila, M. Aubourg, S. Verdeyme. Cooperative Particle Swarm Optimization of Passive Microwave Devices. International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, 2008, 21(1–2): 151–168. https://doi.org/10.1002/jnm.655
[8] K. Yasuda, N. Iwasaki, G. Ueno, E. Aiyoshi. Particle Swarm Optimization: A Numerical Stability Analysis and Parameter Adjustment Based on Swarm Activity. IEEJ Transactions on Electrical and Electronic Engineering, 2008, 3(6): 642–659. https://doi.org/10.1002/tee.20326
[9] Y. Shi, R.C. Eberhart. Empirical Study of Particle Swarm Optimization. Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), Washington D.C., United States, 1999, Vol. 3, pp. 1945–1950. https://doi.org/10.1109/CEC.1999.785511
[10] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.-C. Hernandez, R.G. Harley. Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems. IEEE Transactions on Evolutionary Computation, 2008, 12(2): 171–195. https://doi.org/10.1109/TEVC.2007.896686
[11] K.E. Parsopoulos, M.N. Vrahatis. Particle Swarm Optimization and Intelligence: Advances and Applications. Hershey: IGI Global, 2010. https://doi.org/10.4018/978-1-61520-666-7
[12] F. van den Bergh, A.P. Engelbrecht. A Study of Particle Swarm Optimization Particle Trajectories. Information Sciences, 2006, 176(8): 937–971. https://doi.org/10.1016/j.ins.2005.02.003
[13] F. van den Bergh, A.P. Engelbrecht. A Cooperative Approach to Particle Swarm Optimization. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 225–239. https://doi.org/10.1109/TEVC.2004.826069
[14] M. Clerc, J. Kennedy. The Particle Swarm: Explosion, Stability, and Convergence in a Multi-Dimensional Complex Space. IEEE Transactions on Evolutionary Computation, 2002, 6(1): 58–73. https://doi.org/10.1109/4235.985692
[15] A. Ratnaweera, S.K. Halgamuge, H.C. Watson. Self-Organizing Hierarchical Particle Swarm Optimizer With Time-Varying Acceleration Coefficients. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 240–255. https://doi.org/10.1109/TEVC.2004.826071
[16] R.C. Eberhart, Y. Shi. Comparing Inertia Weights and Constriction Factors in Particle Swarm Optimization. Proceedings of the 2000 Congress on Evolutionary Computation (CEC00), San Diego, United States, 2000, pp. 84–88. https://doi.org/10.1109/CEC.2000.870279
[17] G. Fornarelli, L. Mescia (Eds.). Swarm Intelligence for Electric and Electronic Engineering. Hershey: IGI Global, 2013. https://doi.org/10.4018/978-1-4666-2666-9
[18] G. Fornarelli, A. Giaquinto, L. Mescia. Optimum Design and Characterization of Rare Earth-Doped Fibre Amplifiers by Means of Particle Swarm Optimization Approach. In: G. Fornarelli, L. Mescia (Eds.). Swarm Intelligence for Electric and Electronic Engineering, Ch. 7, pp. 127–147. Hershey: IGI Global, 2013. https://doi.org/10.4018/978-1-4666-2666-9.ch007
[19] M.M. Woolfson, G.J. Pert. An Introduction to Computer Simulation. New York: Oxford University Press, 1999.
[20] S.N. Omkar, R. Khandelwal, T.V.S. Ananth, G. Narayana Naik, S. Gopalakrishnan. Quantum Behaved Particle Swarm Optimization (QPSO) for Multi-Objective Design Optimization of Composite Structures. Expert Systems with Applications, 2009, 36(8): 11312–11322. https://doi.org/10.1016/j.eswa.2009.03.006
[21] L. dos Santos Coelho. A Quantum Particle Swarm Optimizer With Chaotic Mutation Operator. Chaos, Solitons & Fractals, 2008, 37(5): 1409–1418. https://doi.org/10.1016/j.chaos.2006.10.028
[22] X. Fu, W. Liu, B. Zhang, H. Deng. Quantum Behaved Particle Swarm Optimization With Neighborhood Search for Numerical Optimization. Mathematical Problems in Engineering, 2013: 469723. https://doi.org/10.1155/2013/469723
[23] J. Sun, C.-H. Lai, X.-J. Wu. Particle Swarm Optimisation: Classical and Quantum Perspectives. Boca Raton: CRC Press, 2012. https://doi.org/10.1201/b11579
[24] J. Sun, W. Fang, V. Palade, X. Wu, W. Xu. Quantum-Behaved Particle Swarm Optimization With Gaussian Distributed Local Attractor Point. Applied Mathematics and Computation, 2011, 218(7): 3763–3775. https://doi.org/10.1016/j.amc.2011.09.021
[25] D. Yumin, Z. Li. Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm. Mathematical Problems in Engineering, 2014: 592682. https://doi.org/10.1155/2014/592682
[26] M. Xi, J. Sun, W. Xu. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm With Weighted Mean Best Position. Applied Mathematics and Computation, 2008, 205(2): 751–759. https://doi.org/10.1016/j.amc.2008.05.135
[27] J. Gielis. A Generic Geometric Transformation That Unifies a Wide Range of Natural and Abstract Shapes. American Journal of Botany, 2003, 90(3): 333–338. https://doi.org/10.3732/ajb.90.3.333
[28] J. Gielis, B. Beirinckx, E. Bastiaens. Superquadrics With Rational and Irrational Symmetry. In: G. Elber, V. Shapiro (Eds.), Proceedings of the Eighth ACM Symposium on Solid Modeling and Applications (SM ’03), Seattle, United States, pp. 262–265. New York: Association for Computing Machinery, 2003. https://doi.org/10.1145/781606.781647
[29] J. Gielis, P. Shi, B. Beirinckx, D. Caratelli, P.E. Ricci. Lamé-Gielis Curves in Biology and Geometry. In: A. Mihai, I. Mihai (Eds.), Proceedings of the 2021 International Conference Riemannian Geometry and Applications (RIGA 2021), Bucharest, Romania, 2021.
[30] M. Simeoni, R. Cicchetti, A. Yarovoy, D. Caratelli. Plastic-Based Supershaped Dielectric Resonator Antennas for Wide-Band Applications. IEEE Transactions on Antennas and Propagation, 2011, 59(12): 4820–4825. https://doi.org/10.1109/TAP.2011.2165477
[31] P. Bia, D. Caratelli, L. Mescia, J. Gielis. Analysis and Synthesis of Supershaped Dielectric Lens Antennas. IET Microwaves, Antennas & Propagation, 2015, 9(14): 1497–1504. https://doi.org/10.1049/iet-map.2015.0091
[32] P. Bia, D. Caratelli, L. Mescia, J. Gielis. Electromagnetic Characterization of Supershaped Lens Antennas for High-Frequency Applications. Proceedings of the 2013 European Microwave Conference, Nuremberg, Germany, 2013, pp. 1679–1682. https://doi.org/10.23919/EuMC.2013.6686998
[33] L. Mescia, P. Bia, D. Caratelli, M.A. Chiapperino, O. Stukach, J. Gielis. Electromagnetic Mathematical Modeling of 3D Supershaped Dielectric Lens Antennas. Mathematical Problems in Engineering, 2016: 8130160. https://doi.org/10.1155/2016/8130160
[34] P. Bia, D. Caratelli, L. Mescia, R. Cicchetti, G. Maione, F. Prudenzano. A Novel FDTD Formulation Based on Fractional Derivatives for Dispersive Havriliak–Negami Media. Signal Processing, 2015, 107: 312–318. https://doi.org/10.1016/j.sigpro.2014.05.031
[35] J. Gielis, P. Natalini, P.E. Ricci. A Note About Generalized Forms of the Gielis Formula. In: J. Gielis, P.E. Ricci, I. Tavkhelidze (Eds.), Modeling in Mathematics: Proceedings of the Second Tbilisi-Salerno Workshop on Modeling in Mathematics. Atlantis Transactions in Geometry, Vol. 2, pp. 107–116. Paris: Atlantis Press, 2017. https://doi.org/10.2991/978-94-6239-261-8_8
[36] A. Facchini, F.P. Chietera, R. Colella, L. Catarinucci, P. Bia, L. Mescia. Lens Antenna Design Tool Based on Generalized Superformula: Preliminary Results. Proceedings of the 8th International Conference on Smart and Sustainable Technologies (SpliTech), Split/Bol, Croatia, 2023, pp. 1–5. https://doi.org/10.23919/SpliTech58164.2023.10193578

Cite This Article

TY  - CONF
AU  - Luciano Mescia
AU  - Pietro Bia
AU  - Johan Gielis
AU  - Diego Caratelli
PY  - 2023
DA  - 2023/11/29
TI  - Advanced Particle Swarm Optimization Methods for Electromagnetics
BT  - Proceedings of the 1st International Symposium on Square Bamboos and the Geometree (ISSBG 2022)
PB  - Athena Publishing
SP  - 109
EP  - 122
SN  - 2949-9429
UR  - https://doi.org/10.55060/s.atmps.231115.010
DO  - https://doi.org/10.55060/s.atmps.231115.010
ID  - Mescia2023
ER  -