Genetic Algorithms

A genetic algorithm can be used to find a near-optimal solution to an NP-complete problem, for which the resources required grow exponentially with problem size. The algorithm mainly entails the following steps, repeated over successive generations: selecting parents, breeding offspring, and mutating some individuals. In this implementation, parents are selected from the top half of the population with a 90% chance. Why not 100%? The hope is that occasionally letting weaker individuals breed helps the search escape local maxima.
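The biased selection step could be sketched as follows. This is my own Python illustration, not the article's code; the function name and the sorted-pool approach are assumptions, but the 90% top-half bias matches the text.

```python
import random

def select_parent(population, scores, top_bias=0.9):
    """Pick a parent, favouring the fitter half of the population.

    With probability `top_bias` (90%, as in the text) the parent is drawn
    from the top half by score; otherwise from the whole population, so
    weaker individuals occasionally breed and local maxima can be escaped.
    """
    # Rank individuals by descending score.
    ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    if random.random() < top_bias:
        pool = ranked[: len(ranked) // 2]   # fitter half only
    else:
        pool = ranked                        # anyone may be chosen
    return population[random.choice(pool)]
```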

As the population nears the optimum, it stands to reason that improved, rather than purely random, mutations and offspring should be promoted. In the next generation, though, survival of the fittest should have a similar effect.
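One simple way to promote improved mutations is to keep a swap only when it does not worsen the score. The article does not specify its exact scheme, so this retry-based version is an assumption sketched in Python:

```python
import random

def improved_mutation(individual, score_fn, attempts=5):
    """Swap two genes, keeping only a swap that scores at least as well.

    Tries a few random swaps and returns the first non-worsening one;
    if none is found, the original individual is returned unchanged.
    The number of attempts is an arbitrary choice here.
    """
    base_score = score_fn(individual)
    for _ in range(attempts):
        cand = list(individual)
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if score_fn(cand) >= base_score:
            return cand
    return list(individual)
```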

The specific problem chosen for illustration is allocating exactly one resource to each task. (This problem seems to have been proposed in the early 80s while presenting a new type of neural network.) The goal is to maximise performance, though it could equally well have been to minimise a cost function. Here is the performance table:

                  Task
              1   2   3   4   5   6
Resource  1  10   5   4   6   5   1
          2   6   4   9   7   3   2
          3   1   8   3   6   4   6
          4   5   3   7   2   1   4
          5   3   2   5   6   8   7
          6   7   6   4   1   3   2
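For a problem this small, the optimum quoted below can be verified by brute force. This Python sketch (my own, not the article's code) encodes the performance table and scores an assignment, represented as a permutation mapping tasks to resources:

```python
from itertools import permutations

# Performance table from the article: PERF[r][t] is the performance
# of resource r+1 on task t+1.
PERF = [
    [10, 5, 4, 6, 5, 1],
    [ 6, 4, 9, 7, 3, 2],
    [ 1, 8, 3, 6, 4, 6],
    [ 5, 3, 7, 2, 1, 4],
    [ 3, 2, 5, 6, 8, 7],
    [ 7, 6, 4, 1, 3, 2],
]

def score(assignment):
    """Total performance when task t is handled by resource assignment[t]."""
    return sum(PERF[r][t] for t, r in enumerate(assignment))

# With 6! = 720 possible assignments, exhaustive search is trivial here;
# the GA only earns its keep as the problem grows.
best = max(permutations(range(6)), key=score)
print(score(best))  # → 44
```

The brute-force check confirms the optimum of 44 that the genetic algorithm converges to.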

Within a few generations (for this size of problem), the optimum of 44 is found, given a random initial population of 52. The average population score is about 40 with random offspring and mutations, while it reached a peak of 43.88 with improved descendants and mutations. The few extra lines of code required to promote improved population characteristics are therefore well worthwhile, since it is harder to breed better children near the optimum.

[Figure: Improved vs. random mutations/offspring]

In this context, a mutation involves swapping the resources allocated to two tasks. Breeding also entails swapping: in a copy of the second parent, a resource is moved to the position it occupies in the first parent, exchanging places with the resource already there, so the offspring remains a valid allocation.
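Both operators can be sketched in a few lines of Python. The code is my own illustration (how many positions are exchanged per breeding is not stated in the article, so swapping a single randomly chosen task is an assumption); individuals are permutations of resource indices:

```python
import random

def mutate(assignment):
    """Mutation: swap the resources allocated to two random tasks."""
    child = list(assignment)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def breed(parent_a, parent_b):
    """Breeding by swapping, as described in the text.

    Pick a task, find where the second parent placed the resource that
    the first parent uses for that task, and swap it into that task's
    slot. The child stays a valid permutation.
    """
    child = list(parent_b)
    t = random.randrange(len(child))
    r = parent_a[t]         # resource the first parent uses for task t
    j = child.index(r)      # where the second parent placed that resource
    child[t], child[j] = child[j], child[t]
    return child
```

Because both operators only ever exchange positions, every offspring still allocates exactly one resource to every task; no repair step is needed.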

The gatask program at another site solves the same problem. If you understand programming but are not primarily a programmer, though, you may not be very fond of C.

