Chapter 8: Game AI Agent Development

Game AI is one of the most important areas of artificial intelligence research. It not only provides powerful technical support for the game industry but also offers valuable experience and insight for AI applications in other fields. In this chapter we explore the development of game AI Agents in depth, covering this fascinating area from foundational theory to practical implementation.

8.1 Game AI Fundamentals

Before developing a game AI Agent, we need to understand a few fundamental concepts and techniques. This background will help us understand and design game AI systems more effectively.

8.1.1 Game State Representation

The game state is the collection of all relevant information at a particular moment in a game. For an AI Agent, representing the game state accurately and efficiently is critical.

Common ways to represent game states include:

  1. Arrays or matrices: suitable for board games and grid-based games.
  2. Graph structures: suitable for complex strategy games or role-playing games.
  3. Feature vectors: encode the game state as a series of numerical features.
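
As a quick illustration of option 3, the sketch below is a minimal, hypothetical encoding that turns a Tic-Tac-Toe board into a numeric feature vector that a learning algorithm could consume; the +1/-1/0 cell encoding and the turn flag are assumptions for illustration, not a fixed standard.

import numpy as np

def board_to_features(board, current_player='X'):
    """Encode a 9-cell Tic-Tac-Toe board as a numeric feature vector.

    Cells are mapped to +1 (current player), -1 (opponent), 0 (empty),
    and a final feature marks whose turn it is.
    """
    opponent = 'O' if current_player == 'X' else 'X'
    cells = [1 if c == current_player else -1 if c == opponent else 0 for c in board]
    turn_flag = 1 if current_player == 'X' else -1
    return np.array(cells + [turn_flag], dtype=np.float32)

# Example: an empty board seen from X's perspective -> nine zeros followed by 1
print(board_to_features([' '] * 9))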

Here is a simple example of a Tic-Tac-Toe game state representation:

class TicTacToeState:
    def __init__(self):
        self.board = [' ' for _ in range(9)]
        self.current_player = 'X'

    def make_move(self, position):
        if self.board[position] == ' ':
            self.board[position] = self.current_player
            self.current_player = 'O' if self.current_player == 'X' else 'X'
            return True
        return False

    def is_terminal(self):
        # Check whether any player has won
        winning_combinations = [
            [0, 1, 2], [3, 4, 5], [6, 7, 8],  # rows
            [0, 3, 6], [1, 4, 7], [2, 5, 8],  # columns
            [0, 4, 8], [2, 4, 6]  # diagonals
        ]
        for combo in winning_combinations:
            if self.board[combo[0]] == self.board[combo[1]] == self.board[combo[2]] != ' ':
                return True

        # Check for a draw (no empty cells left)
        if ' ' not in self.board:
            return True
        
        return False

    def get_legal_actions(self):
        return [i for i, val in enumerate(self.board) if val == ' ']

    def __str__(self):
        return '\n'.join([' '.join(self.board[i:i+3]) for i in range(0, 9, 3)])

8.1.2 Evaluation Function Design

The evaluation function is one of the core components of a game AI. It estimates how favorable a given game state is for the current player. A good evaluation function should reflect the quality of a position accurately while remaining fast enough to compute.

Evaluation functions are usually designed around the following factors:

  1. Material value: for example, the values of different pieces in chess.
  2. Positional value: the effect of a piece's location on the board on the overall position.
  3. Mobility: the number and quality of available moves.
  4. Structure: for example, how pieces defend one another and the overall formation.
  5. King safety: in some games, the importance of protecting key units.
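
In practice, these factors are typically combined into a weighted sum. The sketch below is only an illustration: the weights are made up, and state.material_balance() and state.mobility() are hypothetical helpers, not part of the classes defined in this chapter.

def weighted_evaluation(state, weights=None):
    """Combine several hand-crafted features into a single score.

    `state` is assumed (for illustration only) to expose material_balance()
    and mobility() methods returning numeric feature values.
    """
    if weights is None:
        weights = {'material': 1.0, 'mobility': 0.1}  # hypothetical weights
    score = 0.0
    score += weights['material'] * state.material_balance()  # e.g. pawn units
    score += weights['mobility'] * state.mobility()          # e.g. legal-move count
    return score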

Here is a simple Tic-Tac-Toe evaluation function:

def evaluate(state):
    for combo in [[0, 1, 2], [3, 4, 5], [6, 7, 8], [0, 3, 6], [1, 4, 7], [2, 5, 8], [0, 4, 8], [2, 4, 6]]:
        if state.board[combo[0]] == state.board[combo[1]] == state.board[combo[2]] != ' ':
            return 1 if state.board[combo[0]] == 'X' else -1
    return 0

8.1.3 Search Algorithms

Search algorithms are how a game AI finds the best choice among the available moves. Common search algorithms include:

  1. Minimax: suitable for two-player zero-sum games.
  2. Alpha-Beta pruning: an optimized version of Minimax that prunes branches to reduce the search space.
  3. Monte Carlo Tree Search (MCTS): particularly well suited to games with very large search spaces.

Here is a simple Minimax implementation; an Alpha-Beta variant is sketched right after it:

from copy import deepcopy

def minimax(state, depth, maximizing_player):
    if depth == 0 or state.is_terminal():
        return evaluate(state)

    if maximizing_player:
        max_eval = float('-inf')
        for action in state.get_legal_actions():
            new_state = deepcopy(state)
            new_state.make_move(action)
            eval = minimax(new_state, depth - 1, False)
            max_eval = max(max_eval, eval)
        return max_eval
    else:
        min_eval = float('inf')
        for action in state.get_legal_actions():
            new_state = deepcopy(state)
            new_state.make_move(action)
            eval = minimax(new_state, depth - 1, True)
            min_eval = min(min_eval, eval)
        return min_eval

def get_best_move(state, depth):
    best_eval = float('-inf')
    best_move = None
    for action in state.get_legal_actions():
        new_state = deepcopy(state)
        new_state.make_move(action)
        eval = minimax(new_state, depth - 1, False)
        if eval > best_eval:
            best_eval = eval
            best_move = action
    return best_move
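
Alpha-Beta pruning (item 2 in the list above) keeps the same recursive structure but cuts off branches that cannot affect the final decision. The following is a minimal sketch built on the same TicTacToeState and evaluate helpers defined earlier in this section.

def alphabeta(state, depth, alpha, beta, maximizing_player):
    if depth == 0 or state.is_terminal():
        return evaluate(state)

    if maximizing_player:
        value = float('-inf')
        for action in state.get_legal_actions():
            new_state = deepcopy(state)
            new_state.make_move(action)
            value = max(value, alphabeta(new_state, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never allow this branch
        return value
    else:
        value = float('inf')
        for action in state.get_legal_actions():
            new_state = deepcopy(state)
            new_state.make_move(action)
            value = min(value, alphabeta(new_state, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff
        return value

# Usage: alphabeta(TicTacToeState(), 9, float('-inf'), float('inf'), True)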

These foundational concepts and techniques lay the groundwork for developing more sophisticated game AI Agents. In the following sections we will look at how to apply them in practice and how to use more advanced techniques to improve AI performance.

8.2 Rule-Based Game AI

Rule-based AI is one of the most direct and easiest-to-implement approaches in game development. Although it lacks the flexibility and learning ability of more advanced AI, a carefully designed rule-based system can perform very well in many situations.

8.2.1 Finite State Machines

A finite state machine (FSM) is a simple but powerful tool for modeling systems with distinct states. In game AI, FSMs are commonly used to control the behavior of AI characters.

Here is a simple FSM implementation that models the behavior of a patrolling guard:

from enum import Enum

class GuardState(Enum):
    PATROL = 1
    CHASE = 2
    ATTACK = 3

class Guard:
    def __init__(self):
        self.state = GuardState.PATROL
        self.position = 0
        self.health = 100

    def update(self, player_position):
        if self.state == GuardState.PATROL:
            self.patrol()
            if abs(self.position - player_position) < 5:
                self.state = GuardState.CHASE
        elif self.state == GuardState.CHASE:
            self.chase(player_position)
            if abs(self.position - player_position) < 1:
                self.state = GuardState.ATTACK
            elif abs(self.position - player_position) > 10:
                self.state = GuardState.PATROL
        elif self.state == GuardState.ATTACK:
            self.attack()
            if abs(self.position - player_position) > 1:
                self.state = GuardState.CHASE

    def patrol(self):
        self.position += 1 if self.position < 10 else -1

    def chase(self, player_position):
        self.position += 1 if player_position > self.position else -1

    def attack(self):
        print("Guard attacks!")

# Usage example
guard = Guard()
for _ in range(20):
    guard.update(5)  # assume the player stays fixed at position 5
    print(f"Guard state: {guard.state}, position: {guard.position}")

8.2.2 Behavior Trees

A behavior tree is a more flexible and modular decision-making tool. It builds complex behaviors by composing simple tasks.

Here is a simple behavior tree implementation:

from enum import Enum

class NodeStatus(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Node:
    def tick(self):
        pass

class Sequence(Node):
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != NodeStatus.SUCCESS:
                return status
        return NodeStatus.SUCCESS

class Selector(Node):
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != NodeStatus.FAILURE:
                return status
        return NodeStatus.FAILURE

class Action(Node):
    def __init__(self, action):
        self.action = action

    def tick(self):
        return self.action()

# Behavior implementations
def is_enemy_visible():
    # In a real game this would inspect the game state; a condition node
    # must return a NodeStatus rather than a plain bool
    return NodeStatus.SUCCESS

def aim():
    print("Aiming at enemy")
    return NodeStatus.SUCCESS

def shoot():
    print("Shooting at enemy")
    return NodeStatus.SUCCESS

def move_to_cover():
    print("Moving to cover")
    return NodeStatus.SUCCESS

# Build the behavior tree
root = Selector([
    Sequence([
        Action(is_enemy_visible),
        Action(aim),
        Action(shoot)
    ]),
    Action(move_to_cover)
])

# Run the behavior tree
root.tick()

8.2.3 Implementing a Rule System

A rule system is a decision-making system based on a set of "if-then" rules. It can be used to implement complex game logic and AI decisions.

Here is a simple rule system implementation:

class Rule:
    def __init__(self, condition, action):
        self.condition = condition
        self.action = action

class RuleSystem:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def evaluate(self, game_state):
        for rule in self.rules:
            if rule.condition(game_state):
                rule.action(game_state)
                return

# Rule definitions
def low_health_condition(game_state):
    return game_state['health'] < 30

def use_health_potion(game_state):
    print("Using health potion")
    game_state['health'] += 50

def enemy_nearby_condition(game_state):
    return game_state['enemy_distance'] < 5

def attack_enemy(game_state):
    print("Attacking enemy")

# Create the rule system
rule_system = RuleSystem()
rule_system.add_rule(Rule(low_health_condition, use_health_potion))
rule_system.add_rule(Rule(enemy_nearby_condition, attack_enemy))

# Use the rule system
game_state = {'health': 20, 'enemy_distance': 3}
rule_system.evaluate(game_state)

Although rule-based AI systems are simple, they remain very effective in many games. They are easy to implement and debug, and they let you build a playable AI opponent quickly. However, the approach has its limits, such as a lack of learning ability and adaptability. In the next section we look at how machine learning techniques can overcome these limitations and create smarter, more flexible game AI.

8.3 Reinforcement Learning in Game AI

Reinforcement learning is a machine learning approach that learns an optimal policy by interacting with an environment. In game AI, reinforcement learning lets an agent learn complex strategies without every decision rule being programmed explicitly.

8.3.1 The Q-learning Algorithm

Q-learning is a classic reinforcement learning algorithm. It learns an action-value function (the Q function) that estimates the long-term return of taking a given action in a given state.
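
The core of the algorithm is the following update rule, applied after every transition (s, a, r, s'), where alpha is the learning rate and gamma the discount factor; this is exactly what the update method in the code below implements:

Q(s, a) <- Q(s, a) + alpha * [ r + gamma * max_{a'} Q(s', a') - Q(s, a) ]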

Here is a simple Q-learning implementation for learning to play Tic-Tac-Toe:

import numpy as np
import random

class TicTacToeEnv:
    def __init__(self):
        self.board = [' ' for _ in range(9)]
        self.current_player = 'X'

    def reset(self):
        self.board = [' ' for _ in range(9)]
        self.current_player = 'X'
        return self._get_state()

    def step(self, action):
        if self.board[action] == ' ':
            self.board[action] = self.current_player
            done = self._check_win() or ' ' not in self.board
            reward = 1 if self._check_win() else 0
            self.current_player = 'O' if self.current_player == 'X' else 'X'
            return self._get_state(), reward, done
        else:
            return self._get_state(), -1, True

    def _get_state(self):
        return ''.join(self.board)

    def _check_win(self):
        win_conditions = [
            [0, 1, 2], [3, 4, 5], [6, 7, 8],
            [0, 3, 6], [1, 4, 7], [2, 5, 8],
            [0, 4, 8], [2, 4, 6]
        ]
        return any(self.board[i] == self.board[j] == self.board[k] != ' '
                   for i, j, k in win_conditions)

class QLearningAgent:
    def __init__(self, alpha=0.1, epsilon=0.1, gamma=0.9):
        self.q_table = {}
        self.alpha = alpha
        self.epsilon = epsilon
        self.gamma = gamma

    def get_action(self, state):
        if random.random() < self.epsilon:
            return random.choice([i for i, s in enumerate(state) if s == ' '])
        else:
            q_values = [self.q_table.get((state, action), 0) for action in range(9)]
            max_q = max(q_values)
            best_actions = [i for i, q in enumerate(q_values) if q == max_q]
            return random.choice(best_actions)

    def update(self, state, action, reward, next_state):
        current_q = self.q_table.get((state, action), 0)
        next_max_q = max([self.q_table.get((next_state, a), 0) for a in range(9)])
        new_q = current_q + self.alpha * (reward + self.gamma * next_max_q - current_q)
        self.q_table[(state, action)] = new_q

# Training: the single agent picks moves for both X and O, a common simplification
env = TicTacToeEnv()
agent = QLearningAgent()

for episode in range(10000):
    state = env.reset()
    done = False
    while not done:
        action = agent.get_action(state)
        next_state, reward, done = env.step(action)
        agent.update(state, action, reward, next_state)
        state = next_state

# Testing: play against a human
def play_game(agent, env):
    state = env.reset()
    done = False
    while not done:
        action = agent.get_action(state)
        state, reward, done = env.step(action)
        print(env.board)
        if done:
            break
        # Human player's turn
        human_action = int(input("Enter your move (0-8): "))
        state, reward, done = env.step(human_action)
        print(env.board)

play_game(agent, env)

8.3.2 Deep Q-Networks (DQN)

A Deep Q-Network (DQN) extends Q-learning by using a deep neural network to approximate the Q function. This allows DQN to handle more complex games and much larger state spaces.
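
Each stored transition (s, a, r, s') is used to regress Q(s, a; theta) toward a bootstrapped target. The full DQN algorithm also uses a separate target network with frozen parameters, which the simplified code below omits:

y = r + gamma * max_{a'} Q(s', a'; theta)    (y = r if the episode ended)
loss = (y - Q(s, a; theta))^2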

Here is a simplified DQN implemented with PyTorch for playing CartPole (the code uses the classic gym API, where env.step returns four values):

import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import random
from collections import deque

class DQN(nn.Module):
    def __init__(self, state_size, action_size):
        super(DQN, self).__init__()
        self.fc1 = nn.Linear(state_size, 24)
        self.fc2 = nn.Linear(24, 24)
        self.fc3 = nn.Linear(24, action_size)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

class DQNAgent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000)
        self.gamma = 0.95
        self.epsilon = 1.0
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.995
        self.learning_rate = 0.001
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model = DQN(state_size, action_size).to(self.device)
        self.optimizer = optim.Adam(self.model.parameters(), lr=self.learning_rate)

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)
        state = torch.FloatTensor(state).unsqueeze(0).to(self.device)
        act_values = self.model(state)
        return np.argmax(act_values.cpu().data.numpy())

    def replay(self, batch_size):
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            target = reward
            if not done:
                next_state = torch.FloatTensor(next_state).unsqueeze(0).to(self.device)
                target = (reward + self.gamma * np.amax(self.model(next_state).cpu().data.numpy()))
            state = torch.FloatTensor(state).unsqueeze(0).to(self.device)
            # Build the training target: keep the network's own predictions for the
            # other actions and overwrite only the chosen action; detach so that no
            # gradient flows through the target values
            target_f = self.model(state).detach()
            target_f[0][action] = target
            loss = nn.MSELoss()(self.model(state), target_f)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

# Training loop
env = gym.make('CartPole-v1')
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
agent = DQNAgent(state_size, action_size)
batch_size = 32

for e in range(1000):
    state = env.reset()
    for time in range(500):
        action = agent.act(state)
        next_state, reward, done, _ = env.step(action)
        reward = reward if not done else -10
        agent.remember(state, action, reward, next_state, done)
        state = next_state
        if done:
            print(f"episode: {e}/{1000}, score: {time}")
            break
        if len(agent.memory) > batch_size:
            agent.replay(batch_size)

# Testing
state = env.reset()
for t in range(200):
    env.render()
    action = agent.act(state)
    state, reward, done, _ = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t+1))
        break
env.close()

8.3.3 Policy Gradient Methods

Policy gradient methods learn a policy function directly instead of deriving the policy indirectly from a value function. In some settings this makes learning complex policies more effective.
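
REINFORCE, the simplest policy gradient method, updates the policy parameters theta in the direction of the following gradient estimate, where G_t is the discounted return from time step t (the simplified code below uses the whole-episode discounted return R for every step):

grad_theta J(theta) ~= sum_t grad_theta log pi_theta(a_t | s_t) * G_t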

Here is an example that uses the REINFORCE algorithm (a simple policy gradient method) to play CartPole:

import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Categorical
from collections import deque

class Policy(nn.Module):
    def __init__(self, state_size, action_size):
        super(Policy, self).__init__()
        self.fc1 = nn.Linear(state_size, 128)
        self.fc2 = nn.Linear(128, action_size)
        
    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return torch.softmax(x, dim=1)
    
    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        m = Categorical(probs)
        action = m.sample()
        return action.item(), m.log_prob(action)

def reinforce(policy, optimizer, n_episodes, max_t):
    scores_deque = deque(maxlen=100)
    scores = []
    for i_episode in range(1, n_episodes+1):
        saved_log_probs = []
        rewards = []
        state = env.reset()
        for t in range(max_t):
            action, log_prob = policy.act(state)
            saved_log_probs.append(log_prob)
            state, reward, done, _ = env.step(action)
            rewards.append(reward)
            if done:
                break 
        scores_deque.append(sum(rewards))
        scores.append(sum(rewards))
        
        discounts = [gamma**i for i in range(len(rewards)+1)]
        R = sum([a*b for a,b in zip(discounts, rewards)])
        
        policy_loss = []
        for log_prob in saved_log_probs:
            policy_loss.append(-log_prob * R)
        policy_loss = torch.cat(policy_loss).sum()
        
        optimizer.zero_grad()
        policy_loss.backward()
        optimizer.step()
        
        if i_episode % 100 == 0:
            print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
        if np.mean(scores_deque)>=195.0:
            print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
            break
    return scores

env = gym.make('CartPole-v1')
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
policy = Policy(state_size, action_size)
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

scores = reinforce(policy, optimizer, n_episodes=1000, max_t=1000)

# Testing
state = env.reset()
for t in range(1000):
    action, _ = policy.act(state)
    env.render()
    state, reward, done, _ = env.step(action)
    if done:
        break
env.close()

These reinforcement learning methods give game AI powerful learning capabilities, letting an agent learn and adapt in complex game environments. They also bring challenges, such as long training times, large data requirements, and occasional instability. In practice we often combine rule-based and learning-based methods to get the best mix of performance and controllability.

8.4 AlphaGo: Principles and Implementation

AlphaGo was a major milestone: it marked the first time an AI defeated top human players in a complex strategy game. Its success rests on the combination of several key techniques, including deep neural networks, Monte Carlo Tree Search (MCTS), and reinforcement learning.

8.4.1 Monte Carlo Tree Search

Monte Carlo Tree Search (MCTS) is a heuristic search algorithm for decision making, particularly well suited to problems with large state spaces. Its basic steps are selection, expansion, simulation, and backpropagation.
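
During selection, a child node is typically chosen with the UCT rule, which balances exploitation (average value so far) against exploration (how rarely the child has been visited); c is an exploration constant, and the code below corresponds to c = sqrt(2):

UCT(child) = Q(child) / N(child) + c * sqrt( ln N(parent) / N(child) )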

Here is a simplified MCTS implementation:

import math
import random

class MCTSNode:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0

def uct_select(node):
    # Visit unvisited children first to avoid dividing by a zero visit count
    unvisited = [child for child in node.children if child.visits == 0]
    if unvisited:
        return random.choice(unvisited)
    return max(node.children, key=lambda c: c.value / c.visits + 
               math.sqrt(2 * math.log(node.visits) / c.visits))

def mcts(root, iterations):
    for _ in range(iterations):
        node = root
        # Selection
        while node.children:
            node = uct_select(node)
        
        # Expansion
        if node.visits > 0:
            node = expand(node)
        
        # Simulation
        result = simulate(node.state)
        
        # Backpropagation
        while node:
            node.visits += 1
            node.value += result
            node = node.parent
    
    return max(root.children, key=lambda c: c.visits)

def expand(node):
    actions = get_legal_actions(node.state)
    for action in actions:
        new_state = apply_action(node.state, action)
        child = MCTSNode(new_state, parent=node)
        node.children.append(child)
    return random.choice(node.children)

def simulate(state):
    while not is_terminal(state):
        action = random.choice(get_legal_actions(state))
        state = apply_action(state, action)
    return evaluate(state)

# These functions must be implemented for the specific game
def get_legal_actions(state):
    pass

def apply_action(state, action):
    pass

def is_terminal(state):
    pass

def evaluate(state):
    pass

8.4.2 Value Network and Policy Network

AlphaGo uses two deep neural networks: a value network and a policy network. The value network estimates the probability of winning from a given position, while the policy network predicts a probability distribution over the best next moves.

Here is a simplified value and policy network implemented with PyTorch (combined into a single two-headed network, in the style of the later AlphaGo Zero):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GoNeuralNetwork(nn.Module):
    def __init__(self, board_size=19):
        super(GoNeuralNetwork, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        
        # Policy head
        self.policy_conv = nn.Conv2d(128, 2, kernel_size=1)
        self.policy_fc = nn.Linear(2 * board_size * board_size, board_size * board_size + 1)
        
        # Value head
        self.value_conv = nn.Conv2d(128, 1, kernel_size=1)
        self.value_fc1 = nn.Linear(board_size * board_size, 256)
        self.value_fc2 = nn.Linear(256, 1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        
        # Policy output
        policy = F.relu(self.policy_conv(x))
        policy = policy.view(policy.size(0), -1)
        policy = self.policy_fc(policy)
        policy = F.softmax(policy, dim=1)
        
        # Value output
        value = F.relu(self.value_conv(x))
        value = value.view(value.size(0), -1)
        value = F.relu(self.value_fc1(value))
        value = torch.tanh(self.value_fc2(value))
        
        return policy, value

# Usage example
model = GoNeuralNetwork()
board = torch.randn(1, 3, 19, 19)  # batch size 1; 3 channels (black stones, white stones, side to move); 19x19 board
policy, value = model(board)
print(f"Policy shape: {policy.shape}, Value: {value.item()}")

8.4.3 Self-Play and Reinforcement Learning

AlphaGo improves its policy continuously through self-play. In this process, the current best policy and value networks guide the MCTS search, and the search results are then used to update the networks.
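
For reference, in the later AlphaGo Zero formulation each self-play position s is labeled with the MCTS search policy pi and the final game outcome z, and the two-headed network (p, v) = f_theta(s) is trained with a combined loss (c is an L2 regularization coefficient); the simplified code below follows the same idea without regularization:

l = (z - v)^2 - pi^T log p + c * ||theta||^2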

Here is a simplified self-play training loop:

import torch
import torch.nn.functional as F
import torch.optim as optim
from copy import deepcopy

def self_play(model, num_games=100):
    optimizer = optim.Adam(model.parameters())
    for _ in range(num_games):
        game_states = []
        mcts_policies = []
        
        state = initial_state()
        while not is_game_over(state):
            mcts_policy = mcts_search(state, model)
            action = select_action(mcts_policy)
            
            game_states.append(state)
            mcts_policies.append(mcts_policy)
            
            state = apply_action(state, action)
        
        winner = determine_winner(state)
        
        # Update the model on the recorded positions
        for game_state, mcts_policy in zip(game_states, mcts_policies):
            board_tensor = state_to_tensor(game_state)
            policy, value = model(board_tensor)
            
            policy_loss = F.cross_entropy(policy, mcts_policy)
            value_loss = F.mse_loss(value, torch.tensor([[winner]]))
            loss = policy_loss + value_loss
            
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def mcts_search(state, model):
    # Run MCTS guided by the neural network
    # and return a probability distribution over actions
    pass

def select_action(policy):
    # Choose an action according to the policy
    pass

def initial_state():
    # Return the game's initial state
    pass

def is_game_over(state):
    # Check whether the game has ended
    pass

def apply_action(state, action):
    # Apply an action to the state and return the resulting state
    pass

def determine_winner(state):
    # Determine the winner of the finished game
    pass

def state_to_tensor(state):
    # Convert a game state into the neural network's input tensor
    pass

# Train the model
model = GoNeuralNetwork()
self_play(model)

This training process repeats, and each iteration produces a stronger model. AlphaGo's success showed that combining deep learning with tree search can achieve breakthrough results in complex strategy games.

8.5 Multi-Agent Systems

Many games involve interactions between multiple AI agents. Multi-agent systems research studies how to coordinate the behavior of several agents so that they can cooperate or compete effectively.

8.5.1 Cooperative and Competitive Behavior

In a multi-agent environment, agents may need to learn cooperative or competitive strategies. The following simple example shows how to implement basic cooperative and competitive behavior:

import numpy as np

class Agent:
    def __init__(self, agent_id, is_cooperative):
        self.agent_id = agent_id
        self.is_cooperative = is_cooperative
        self.position = np.random.randint(0, 10, 2)
        self.resource = 0

    def move(self, environment):
        if self.is_cooperative:
            # Cooperative behavior: move toward the nearest teammate
            teammates = [agent for agent in environment.agents if agent.is_cooperative and agent != self]
            if teammates:
                closest = min(teammates, key=lambda a: np.linalg.norm(self.position - a.position))
                direction = np.sign(closest.position - self.position)
                self.position += direction
        else:
            # Competitive behavior: move toward the nearest resource
            resource_positions = np.array(environment.resources)
            if len(resource_positions) > 0:
                closest = resource_positions[np.argmin(np.linalg.norm(resource_positions - self.position, axis=1))]
                direction = np.sign(closest - self.position)
                self.position += direction

    def collect_resource(self, environment):
        for i, resource in enumerate(environment.resources):
            if np.array_equal(self.position, resource):
                self.resource += 1
                environment.resources.pop(i)
                break

class Environment:
    def __init__(self, num_cooperative, num_competitive, num_resources):
        self.agents = [Agent(i, True) for i in range(num_cooperative)] + \
                      [Agent(i+num_cooperative, False) for i in range(num_competitive)]
        self.resources = [np.random.randint(0, 10, 2) for _ in range(num_resources)]

    def step(self):
        for agent in self.agents:
            agent.move(self)
            agent.collect_resource(self)

        if len(self.resources) < 5:
            self.resources.append(np.random.randint(0, 10, 2))

    def run(self, num_steps):
        for _ in range(num_steps):
            self.step()

        cooperative_resources = sum(agent.resource for agent in self.agents if agent.is_cooperative)
        competitive_resources = sum(agent.resource for agent in self.agents if not agent.is_cooperative)
        
        print(f"Cooperative agents collected: {cooperative_resources} resources")
        print(f"Competitive agents collected: {competitive_resources} resources")

# Run the simulation
env = Environment(num_cooperative=5, num_competitive=5, num_resources=20)
env.run(num_steps=100)

8.5.2 Multi-Agent Reinforcement Learning

Multi-agent reinforcement learning (MARL) extends reinforcement learning to multi-agent systems. MARL faces many challenges, such as non-stationarity, partial observability, and credit assignment.

Here is a simple MARL example using independent Q-learning:

import numpy as np
import random

class MAQLAgent:
    def __init__(self, state_size, action_size, learning_rate=0.1, discount_factor=0.95, epsilon=0.1):
        self.q_table = np.zeros((state_size, action_size))
        self.lr = learning_rate
        self.gamma = discount_factor
        self.epsilon = epsilon

    def get_action(self, state):
        if random.uniform(0, 1) < self.epsilon:
            return random.randint(0, self.q_table.shape[1] - 1)
        else:
            return np.argmax(self.q_table[state, :])

    def learn(self, state, action, reward, next_state):
        current_q = self.q_table[state, action]
        max_next_q = np.max(self.q_table[next_state, :])
        new_q = current_q + self.lr * (reward + self.gamma * max_next_q - current_q)
        self.q_table[state, action] = new_q

class MultiAgentEnvironment:
    def __init__(self, num_agents, grid_size):
        self.num_agents = num_agents
        self.grid_size = grid_size
        self.agent_positions = np.random.randint(0, grid_size, size=(num_agents, 2))
        self.goal = np.random.randint(0, grid_size, size=2)

    def step(self, actions):
        rewards = np.zeros(self.num_agents)
        for i, action in enumerate(actions):
            if action == 0:  # up
                self.agent_positions[i, 0] = max(0, self.agent_positions[i, 0] - 1)
            elif action == 1:  # down
                self.agent_positions[i, 0] = min(self.grid_size - 1, self.agent_positions[i, 0] + 1)
            elif action == 2:  # left
                self.agent_positions[i, 1] = max(0, self.agent_positions[i, 1] - 1)
            elif action == 3:  # right
                self.agent_positions[i, 1] = min(self.grid_size - 1, self.agent_positions[i, 1] + 1)

            if np.array_equal(self.agent_positions[i], self.goal):
                rewards[i] = 1
                self.goal = np.random.randint(0, self.grid_size, size=2)

        # Return a copy so callers keep a snapshot of the previous positions
        return self.agent_positions.copy(), rewards

    def reset(self):
        self.agent_positions = np.random.randint(0, self.grid_size, size=(self.num_agents, 2))
        self.goal = np.random.randint(0, self.grid_size, size=2)
        # Return a copy so the caller's state is not aliased with internal storage
        return self.agent_positions.copy()

def train_maql(num_episodes=1000):
    env = MultiAgentEnvironment(num_agents=2, grid_size=5)
    agents = [MAQLAgent(state_size=25, action_size=4) for _ in range(env.num_agents)]

    for episode in range(num_episodes):
        states = env.reset()
        total_reward = np.zeros(env.num_agents)
        done = False

        while not done:
            actions = [agent.get_action(state[0] * 5 + state[1]) for agent, state in zip(agents, states)]
            next_states, rewards = env.step(actions)

            for i, agent in enumerate(agents):
                agent.learn(states[i][0] * 5 + states[i][1], actions[i], rewards[i], next_states[i][0] * 5 + next_states[i][1])

            states = next_states
            total_reward += rewards
            done = np.any(rewards)

        if episode % 100 == 0:
            print(f"Episode {episode}, Average Reward: {np.mean(total_reward)}")

train_maql()

8.5.3 Swarm Intelligence Algorithms

Swarm intelligence algorithms are inspired by collective behavior in nature, such as ant colonies, bird flocks, and schools of fish. These algorithms produce complex collective behavior from simple individual rules.

Here is a simple swarm intelligence example that simulates flocking birds (boids):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

class Boid:
    def __init__(self, x, y):
        self.position = np.array([x, y])
        self.velocity = np.random.randn(2) * 0.1
        self.acceleration = np.zeros(2)
        self.max_force = 0.1
        self.max_speed = 2

    def update(self):
        self.velocity += self.acceleration
        self.velocity = np.clip(self.velocity, -self.max_speed, self.max_speed)
        self.position += self.velocity
        self.acceleration = np.zeros(2)

    def apply_force(self, force):
        self.acceleration += force

def align(boid, boids, perception):
    steering = np.zeros(2)
    total = 0
    avg_vec = np.zeros(2)
    for other in boids:
        if np.linalg.norm(boid.position - other.position) < perception:
            avg_vec += other.velocity
            total += 1
    if total > 0:
        avg_vec /= total
        avg_vec = (avg_vec / np.linalg.norm(avg_vec)) * boid.max_speed
        steering = avg_vec - boid.velocity
    return steering

def cohesion(boid, boids, perception):
    steering = np.zeros(2)
    total = 0
    center_of_mass = np.zeros(2)
    for other in boids:
        if np.linalg.norm(boid.position - other.position) < perception:
            center_of_mass += other.position
            total += 1
    if total > 0:
        center_of_mass /= total
        vec_to_com = center_of_mass - boid.position
        if np.linalg.norm(vec_to_com) > 0:
            vec_to_com = (vec_to_com / np.linalg.norm(vec_to_com)) * boid.max_speed
        steering = vec_to_com - boid.velocity
        if np.linalg.norm(steering) > boid.max_force:
            steering = (steering / np.linalg.norm(steering)) * boid.max_force
    return steering

def separation(boid, boids, perception):
    steering = np.zeros(2)
    total = 0
    avg_vector = np.zeros(2)
    for other in boids:
        distance = np.linalg.norm(boid.position - other.position)
        if distance > 0 and distance < perception:
            diff = boid.position - other.position
            diff /= distance
            avg_vector += diff
            total += 1
    if total > 0:
        avg_vector /= total
        if np.linalg.norm(avg_vector) > 0:
            avg_vector = (avg_vector / np.linalg.norm(avg_vector)) * boid.max_speed
        steering = avg_vector - boid.velocity
        if np.linalg.norm(steering) > boid.max_force:
            steering = (steering / np.linalg.norm(steering)) * boid.max_force
    return steering

class Flock:
    def __init__(self, n):
        self.boids = [Boid(np.random.rand() * 100, np.random.rand() * 100) for _ in range(n)]

    def run(self):
        for boid in self.boids:
            self.apply_behavior(boid)
            boid.update()

    def apply_behavior(self, boid):
        # Use distinct local names so we do not shadow the align/cohesion/separation functions
        alignment_force = align(boid, self.boids, 25)
        cohesion_force = cohesion(boid, self.boids, 25)
        separation_force = separation(boid, self.boids, 25)

        boid.apply_force(alignment_force)
        boid.apply_force(cohesion_force)
        boid.apply_force(separation_force)

def update(frame, flock, scatter):
    flock.run()
    scatter.set_offsets(np.array([b.position for b in flock.boids]))
    return scatter,

flock = Flock(50)
fig, ax = plt.subplots()
scatter = ax.scatter([b.position[0] for b in flock.boids],
                     [b.position[1] for b in flock.boids])
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)

anim = FuncAnimation(fig, update, frames=200, fargs=(flock, scatter),
                     interval=50, blit=True)
plt.show()

This example shows how simple rules (alignment, cohesion, and separation) can produce complex group behavior. The same approach can be applied to group behavior in games, such as moving army units or simulating wildlife populations.

To summarize, this chapter covered many aspects of game AI development, from basic rule-based systems to sophisticated reinforcement learning and multi-agent systems. We saw how different techniques can be used to build intelligent, adaptive game AI.

Key points include:

  1. Game AI fundamentals, including state representation, evaluation functions, and search algorithms.
  2. Rule-based AI systems such as finite state machines, behavior trees, and rule systems.
  3. Reinforcement learning for game AI, including Q-learning, deep Q-networks, and policy gradient methods.
  4. The core principles behind AlphaGo, showing how deep learning and tree search can be combined to solve complex game problems.
  5. Multi-agent systems, including cooperative and competitive behavior, multi-agent reinforcement learning, and swarm intelligence algorithms.

These techniques provide powerful tools for building high-quality game AI. In practice, however, we usually have to select and combine them based on the needs and constraints of a specific game. For example, in a real-time strategy game we might combine the fast response of a rule-based system with the adaptability of reinforcement learning, while using swarm intelligence algorithms to control the movement of large numbers of units.

In addition, game AI development needs to take the following into account:

  1. Performance optimization: game AI usually has to run within tight computational budgets, so algorithmic efficiency is critical.
  2. Tunability: it should be easy to adjust the AI's difficulty level to suit players of different skill levels (a small sketch follows this list).
  3. Balancing predictability and randomness: the AI should behave consistently without becoming too predictable.
  4. Human-likeness: letting the AI exhibit human-like behavior and mistakes makes the game feel more believable and fun.
  5. Fairness: the AI should not use information or abilities that human players cannot access.
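
As a quick illustration of points 2 and 3, the sketch below tunes difficulty by limiting search depth and occasionally playing a random move. It builds on the TicTacToeState and get_best_move helpers from section 8.1, and the exact mapping from difficulty to depth is an arbitrary example, not a recommended setting.

import random

def choose_move(state, difficulty):
    """Pick a move whose strength scales with `difficulty` in [0, 1].

    Lower difficulty means shallower search and more random moves.
    """
    legal_actions = state.get_legal_actions()
    # Occasionally play a random move so the AI is not perfectly predictable
    if random.random() > difficulty:
        return random.choice(legal_actions)
    # Otherwise search deeper as difficulty increases
    depth = 1 + int(difficulty * 8)  # 1..9 plies for Tic-Tac-Toe
    return get_best_move(state, depth)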

As technology advances, we can expect game AI that is smarter and more adaptive. Likely directions for future development include:

  1. More sophisticated multimodal learning that combines visual, audio, and text information.
  2. Meta-learning techniques that let AI adapt faster to new game rules or environments.
  3. Explainable AI that helps developers and players understand the AI's decision process.
  4. Deeper human-AI collaboration, producing AI teammates that coordinate smoothly with human players.
  5. Combining procedural content generation with AI to create dynamic, personalized game experiences.

As game AI developers, we should keep up with this fast-moving field and continue learning and practicing new techniques, while remembering that the ultimate purpose of a game is to give players a fun and challenging experience. By balancing technical innovation with game design, we can build game AI that is both intelligent and enjoyable, and push the whole game industry forward.