Particle swarm optimization can be applied to global optimization, as follows.
The goal is to search a space of many dimensions for the best solution. A swarm of particles is initially distributed randomly throughout the space.
Certain particles have two-way communication with certain others.
In each iteration, each particle moves randomly to a new position, but with a higher probability of moving toward a communicating particle that currently occupies a good solution.
After many iterations, the best solution found is selected.
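
To make the described process concrete, the sketch below implements a conventional particle swarm in Python. It is only an illustration under assumed choices: the ring communication topology, the randomized velocity update with its constants, the function name pso, and the Rastrigin test objective are illustrative assumptions, not details given in the description above.

import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200,
        bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimize `objective` over a box using a ring-topology particle swarm.
    # Each particle communicates with its two ring neighbors, so the "social"
    # pull draws it toward the best solution found in that small neighborhood.
    rng = np.random.default_rng(seed)
    lo, hi = bounds

    # Random initial positions and small random initial velocities.
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim)) * 0.1

    # Best position and value found so far by each particle.
    pbest_pos = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])

    for _ in range(n_iter):
        # Neighborhood best: the best personal best among each particle's
        # communicating partners (left neighbor, itself, right neighbor).
        idx = np.arange(n_particles)
        nbr_val = np.stack([np.roll(pbest_val, 1), pbest_val, np.roll(pbest_val, -1)])
        nbr_idx = np.stack([np.roll(idx, 1), idx, np.roll(idx, -1)])
        best_nbr = nbr_idx[np.argmin(nbr_val, axis=0), idx]
        nbest_pos = pbest_pos[best_nbr]

        # Randomized move: inertia plus random pulls toward the particle's own
        # best position and toward its neighborhood's best position.
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (nbest_pos - pos)
        pos = np.clip(pos + vel, lo, hi)

        # Update personal bests where the new position improves on them.
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved] = pos[improved]
        pbest_val[improved] = vals[improved]

    # Return the best solution found by any particle.
    best = np.argmin(pbest_val)
    return pbest_pos[best], pbest_val[best]

if __name__ == "__main__":
    # Example: a simple multimodal test function (Rastrigin).
    rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    x_best, f_best = pso(rastrigin, dim=5)
    print(x_best, f_best)

In this sketch the "communication" of the description is the neighborhood-best lookup, and the biased random move is the stochastic velocity update; no bound on the objective is ever computed, only sampled values.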
How can this process be viewed as enumerating a sequence of problem restrictions? Why is there no role for relaxation bounding here?