Sample Rolling Horizon Evolutionary Algorithm Controller

Raluca D. Gaina edited this page Feb 12, 2018 · 2 revisions

The sample RHEA controller implements an online (rolling horizon) evolutionary algorithm to decide the next move to make. At every game step, a (new) small population of individuals is evolved during the 40 ms given. Each individual represents a sequence of actions, and its fitness is calculated with a heuristic that evaluates the state reached after applying all these actions. The move returned is the first action of the best individual found.
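The rolling-horizon loop described above can be sketched in a self-contained toy form. This is not the framework's Agent.java: the fitness function, population handling, and iteration budget below are illustrative stand-ins (a simple replace-the-worst hill climb instead of the full crossover/tournament machinery), just to show the shape of "evolve action sequences, return the first action of the best one".

```java
import java.util.Random;

public class RheaSketch {
    static final int POP_SIZE = 10, DEPTH = 5, NUM_ACTIONS = 4;
    static final Random rnd = new Random();

    // Toy fitness: stands in for rolling the forward model ahead DEPTH steps
    // and scoring the reached state with a heuristic.
    static double fitness(int[] actions) {
        double s = 0;
        for (int a : actions) s += a;
        return s;
    }

    static int[] randomIndividual() {
        int[] ind = new int[DEPTH];
        for (int i = 0; i < DEPTH; i++) ind[i] = rnd.nextInt(NUM_ACTIONS);
        return ind;
    }

    // Mutation: resample one gene (one action in the sequence).
    static int[] mutate(int[] parent) {
        int[] child = parent.clone();
        child[rnd.nextInt(DEPTH)] = rnd.nextInt(NUM_ACTIONS);
        return child;
    }

    // One decision: evolve for a fixed number of iterations (standing in for
    // the 40 ms budget) and return the FIRST action of the best individual.
    static int nextMove(int iterations) {
        int[][] pop = new int[POP_SIZE][];
        double[] fit = new double[POP_SIZE];
        for (int i = 0; i < POP_SIZE; i++) {
            pop[i] = randomIndividual();
            fit[i] = fitness(pop[i]);
        }
        for (int it = 0; it < iterations; it++) {
            int worst = 0, best = 0;
            for (int i = 1; i < POP_SIZE; i++) {
                if (fit[i] < fit[worst]) worst = i;
                if (fit[i] > fit[best]) best = i;
            }
            int[] child = mutate(pop[best]);
            double f = fitness(child);
            if (f > fit[worst]) { pop[worst] = child; fit[worst] = f; }
        }
        int best = 0;
        for (int i = 1; i < POP_SIZE; i++) if (fit[i] > fit[best]) best = i;
        return pop[best][0];
    }
}
```

Note that a fresh population is evolved at every game tick: only the first action of the plan is ever executed, and the rest of the horizon is re-planned next step.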

The complete code for this agent is contained in a single class, Agent.java (it also uses a heuristic class from the framework, WinScoreHeuristic.java). Here we highlight some of the more interesting parts of this agent.

First, the function that evaluates an individual, from Agent.java:

private double evaluate(Individual individual, StateHeuristic heuristic, StateObservation state) {

        ElapsedCpuTimer elapsedTimerIterationEval = new ElapsedCpuTimer();

        // Roll the state forward, applying the individual's actions in order.
        StateObservation st = state.copy();
        double acum = 0, avg;
        int i;
        for (i = 0; i < SIMULATION_DEPTH; i++) {
            if (! st.isGameOver()) {
                ElapsedCpuTimer elapsedTimerIteration = new ElapsedCpuTimer();
                st.advance(action_mapping.get(individual.actions[i]));

                // Track the average cost of one advance() call and stop early
                // if the remaining budget could not safely fit another one.
                acum += elapsedTimerIteration.elapsedMillis();
                avg = acum / (i+1);
                remaining = timer.remainingTimeMillis();
                if (remaining < 2*avg || remaining < BREAK_MS) break;
            } else {
                break;
            }
        }

        // Score the state reached at the end of the rollout.
        StateObservation first = st.copy();
        double value = heuristic.evaluateState(first);

        // Apply the discount factor: states reached further in the
        // future contribute less to the fitness.
        value *= Math.pow(DISCOUNT,i);

        individual.value = value;

        // Bookkeeping: number of evaluations and average time per evaluation.
        numEvals++;
        acumTimeTakenEval += (elapsedTimerIterationEval.elapsedMillis());
        avgTimeTakenEval = acumTimeTakenEval / numEvals;
        remaining = timer.remainingTimeMillis();

        return value;
    }
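The early-exit check inside the rollout loop (`remaining < 2*avg`) is a general budget-safety pattern worth noting: stop iterating once the remaining time could not safely fit another iteration of average cost. The sketch below reproduces that pattern in isolation with `System.currentTimeMillis()` in place of the framework's ElapsedCpuTimer; the names here are illustrative, not the framework's.

```java
public class BudgetSketch {
    // Run work repeatedly, stopping once the remaining budget drops below
    // twice the average cost of one iteration (the same safety margin used
    // in the evaluate() rollout loop above).
    static int runWithBudget(long budgetMs, Runnable work) {
        long start = System.currentTimeMillis();
        long acum = 0;   // total time spent inside work so far
        int iters = 0;
        while (true) {
            long t0 = System.currentTimeMillis();
            work.run();
            acum += System.currentTimeMillis() - t0;
            iters++;
            long avg = acum / iters;
            long remaining = budgetMs - (System.currentTimeMillis() - start);
            if (remaining < 2 * avg || remaining <= 0) break;
        }
        return iters;
    }
}
```

The factor of two leaves headroom for iterations that run slower than average, which matters when overrunning the budget disqualifies the move (as it does in GVGAI).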

Here is the code of the heuristic that evaluates a given state, used at the end of the function described above, from WinScoreHeuristic.java. If the game was won (resp. lost), it returns a huge positive (resp. negative) number as fitness. Otherwise, it returns the score of the game at that state.

public class WinScoreHeuristic extends StateHeuristic {

    public WinScoreHeuristic(StateObservation stateObs) {}

    //The StateObservation stateObs received is the state of the game to be evaluated.
    public double evaluateState(StateObservation stateObs) {
        boolean gameOver = stateObs.isGameOver();       //true if the game has finished.
        Types.WINNER win = stateObs.getGameWinner();    //player loses, wins, or no winner yet.
        double score = stateObs.getGameScore();

        if(gameOver && win == Types.WINNER.PLAYER_WINS)       return score + 10000000.0;  //We won, this is good.
        if(gameOver && win == Types.WINNER.PLAYER_LOSES)      return score - 10000000.0; //We lost, this is bad.

        return score; //Neither won nor lost, let's use the current score as fitness.
    }
}
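The same win/lose bonus logic can be checked in isolation. In this sketch the game-over flag, winner, and score are passed in directly instead of being read from a StateObservation, and the winner encoding (1 / -1 / 0) is our own, not the framework's Types.WINNER enum:

```java
public class WinScoreSketch {
    static final double HUGE = 10000000.0;

    // Same shape as WinScoreHeuristic.evaluateState():
    // winner: 1 = player wins, -1 = player loses, 0 = no winner yet.
    static double evaluate(boolean gameOver, int winner, double score) {
        if (gameOver && winner == 1)  return score + HUGE; // won: huge bonus
        if (gameOver && winner == -1) return score - HUGE; // lost: huge penalty
        return score;                                      // ongoing: raw score
    }
}
```

Adding the score to the win/lose constants (rather than returning the constants alone) keeps a tie-break between two winning or two losing outcomes: a win with a higher score is still preferred.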

There are several customizable parameters:

  • Population size
  • Individual length
  • Discount factor
  • Crossover type (uniform or 1-point crossover)
  • Reevaluation of previous individuals
  • Number of genes mutated
  • Tournament size
  • Elitism
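To illustrate one of the parameters above, here is what the two supported crossover types look like on action sequences. This is a generic sketch, not the agent's actual implementation; individuals are plain int arrays of action indices:

```java
import java.util.Random;

public class CrossoverSketch {
    static final Random rnd = new Random();

    // Uniform crossover: each gene (action) is taken from either parent
    // with probability 0.5.
    static int[] uniform(int[] a, int[] b) {
        int[] child = new int[a.length];
        for (int i = 0; i < a.length; i++) child[i] = rnd.nextBoolean() ? a[i] : b[i];
        return child;
    }

    // 1-point crossover: genes before a random cut come from the first
    // parent, the rest from the second.
    static int[] onePoint(int[] a, int[] b) {
        int[] child = new int[a.length];
        int cut = 1 + rnd.nextInt(a.length - 1); // cut strictly inside the sequence
        for (int i = 0; i < a.length; i++) child[i] = (i < cut) ? a[i] : b[i];
        return child;
    }
}
```

For action sequences, 1-point crossover preserves contiguous sub-plans from each parent, while uniform crossover mixes the plans gene by gene; which works better tends to depend on the game.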
