proj5 report initialized
This commit is contained in:
8246
proj2/main.pdf
Normal file
File diff suppressed because it is too large
430
proj2/sample.typ
@@ -1,430 +0,0 @@
#import "labtemplate.typ": *
#show: nudtlabpaper.with(
  author: "程景愉",
  id: "202302723005",
  title: "经典搜索",
  training_type: "普通本科生",
  grade: "2023级",
  major: "网络工程",
  department: "计算机学院",
  advisor: "胡罡",
  jobtitle: "教授",
  lab: "306-707",
  date: "2025.12.02",
  header_str: "经典搜索实验报告",
  simple_cover: true,
)

#set page(header: [
  #set par(spacing: 6pt)
  #align(center)[#text(size: 11pt)[人工智能实验报告]]
  #v(-0.3em)
  #line(length: 100%, stroke: (thickness: 1pt))
],)
#show heading: it => if it.level == 1 [
  #v(1em)
  #set text(font: hei, size: 16pt)
  #counter(heading).display()
  #h(0.5em)
  #it.body
  #v(0.5em)
] else [
  #v(0.8em)
  #set text(font: "New Computer Modern", weight: "bold", style: "italic")
  #counter(heading).display(it.numbering)
  #h(0.5em)
  #it.body
  #v(0.5em)
]

#outline(title: "目录", depth: 2, indent: 1em)

#set enum(indent: 0.5em, body-indent: 0.5em)
#pagebreak()
= Experiment Introduction
#para[
This project is the first project of UC Berkeley's CS188 Artificial Intelligence course. Its main task is to implement several general-purpose search algorithms and apply them to path-planning problems in the classic game Pacman. The project ranges from uninformed search algorithms (DFS, BFS) to informed ones (UCS, A\*), and includes designing state representations and heuristic functions for specific problems such as visiting all four corners or eating all the food. Through this project we gain a deep understanding of how search algorithms work and learn to apply them to practical problems.
]

= Experiment Contents
#para[
This experiment consists of eight progressive programming tasks:
]
+ *Q1: Depth-First Search (DFS)*: use depth-first search to find a fixed food dot in a maze.
+ *Q2: Breadth-First Search (BFS)*: use breadth-first search to find a path that reaches the goal in the fewest actions.
+ *Q3: Uniform-Cost Search (UCS)*: implement uniform-cost search to handle paths whose actions have different costs and find the path with the lowest total cost.
+ *Q4: A\* Search*: implement A\* search, which combines the actual path cost with a heuristic estimate; test it with the Manhattan-distance heuristic.
+ *Q5: Corners Problem*: formulate a new search problem in which Pacman must visit all four corners of the maze along a shortest path; this requires choosing a suitable state space.
+ *Q6: Corners Heuristic*: design a consistent, admissible heuristic for the corners problem to speed up A\* search.
+ *Q7: Food Heuristic*: design an efficient heuristic for the harder "eat all the food" problem so that a good path can be found in acceptable time.
+ *Q8: Closest-Dot Search*: implement a suboptimal greedy strategy in which Pacman always heads for the nearest food dot, as a fast approximation for the harder problem.
= Experiment Requirements
#para[
This project requires completing the code in `search.py` and `searchAgents.py` following the provided guidance. The core requirements are:
]
+ *General search algorithms*: implement the four core algorithms `depthFirstSearch`, `breadthFirstSearch`, `uniformCostSearch`, and `aStarSearch` in `search.py`. Every search function must return a list of actions that leads Pacman from the start state to the goal.
+ *Data structures*: the `Stack`, `Queue`, and `PriorityQueue` provided by the framework's `util.py` must be used, to stay compatible with the autograder.
+ *Problem modeling*: in `searchAgents.py`, choose and implement an efficient state representation for the corners problem (`CornersProblem`) that carries no redundant information.
+ *Heuristic design*: the heuristics for the corners and food problems (`cornersHeuristic` and `foodHeuristic`) must be non-trivial, non-negative, and consistent. They are graded by the number of nodes A\* expands.
+ *Goal test*: complete the goal test `isGoalState` for `AnyFoodSearchProblem` so that it correctly detects reaching any square that contains food.
= Experiment Steps and Implementation

== Q1: Depth-First Search (DFS)
*Implementation approach*: Depth-first search always expands the deepest node first. We use the graph-search version, recording visited states to avoid redundant work and infinite loops. A stack provides the required last-in, first-out (LIFO) expansion order.

*Core code* (`search.py`):
```python
def depthFirstSearch(problem: SearchProblem):
    fringe = util.Stack()  # LIFO fringe
    visited = set()        # closed set for graph search
    startState = problem.getStartState()
    fringe.push((startState, []))

    while not fringe.isEmpty():
        currentState, actions = fringe.pop()

        if currentState in visited:
            continue

        visited.add(currentState)

        if problem.isGoalState(currentState):
            return actions

        successors = problem.getSuccessors(currentState)

        for successor, action, cost in successors:
            if successor not in visited:
                newActions = actions + [action]
                fringe.push((successor, newActions))

    return []
```

*Test commands*:
```fish
python autograder.py -q q1
python pacman.py -l mediumMaze -p SearchAgent -a fn=dfs
```
== Q2: Breadth-First Search (BFS)
*Implementation approach*: Breadth-first search expands nodes level by level, which guarantees a path with the fewest actions. We again use the graph-search version. A queue provides the first-in, first-out (FIFO) expansion order.

*Core code* (`search.py`):
```python
def breadthFirstSearch(problem: SearchProblem):
    fringe = util.Queue()  # FIFO fringe
    visited = set()
    startState = problem.getStartState()
    fringe.push((startState, []))

    while not fringe.isEmpty():
        currentState, actions = fringe.pop()

        if currentState in visited:
            continue

        visited.add(currentState)

        if problem.isGoalState(currentState):
            return actions

        successors = problem.getSuccessors(currentState)

        for successor, action, cost in successors:
            if successor not in visited:
                newActions = actions + [action]
                fringe.push((successor, newActions))

    return []
```

*Test commands*:
```fish
python autograder.py -q q2
python pacman.py -l mediumMaze -p SearchAgent -a fn=bfs
```
== Q3: Uniform-Cost Search (UCS)
*Implementation approach*: Uniform-cost search always expands the node with the smallest accumulated path cost, which guarantees an optimal (lowest total cost) path. It is implemented with a priority queue in which a node's priority is its cumulative path cost.

*Core code* (`search.py`):
```python
def uniformCostSearch(problem: SearchProblem):
    fringe = util.PriorityQueue()
    visited = {}  # best cost found so far for each state
    startState = problem.getStartState()
    fringe.push((startState, [], 0), 0)

    while not fringe.isEmpty():
        currentState, actions, currentCost = fringe.pop()

        if currentState in visited and currentCost >= visited[currentState]:
            continue

        visited[currentState] = currentCost

        if problem.isGoalState(currentState):
            return actions

        successors = problem.getSuccessors(currentState)

        for successor, action, stepCost in successors:
            newCost = currentCost + stepCost
            newActions = actions + [action]
            fringe.push((successor, newActions, newCost), newCost)

    return []
```

*Test commands*:
```fish
python autograder.py -q q3
python pacman.py -l mediumMaze -p SearchAgent -a fn=ucs
```
== Q4: A\* Search
*Implementation approach*: A\* search extends UCS: when ranking a node it considers not only the cost already paid, $g(n)$, but also an estimate of the remaining cost given by the heuristic $h(n)$. Node priorities are given by $f(n) = g(n) + h(n)$, which steers the search more intelligently toward the goal.

*Core code* (`search.py`):
```python
def aStarSearch(problem: SearchProblem, heuristic=nullHeuristic):
    fringe = util.PriorityQueue()
    visited = {}  # best cost found so far for each state
    startState = problem.getStartState()
    startHeuristic = heuristic(startState, problem)
    fringe.push((startState, [], 0), startHeuristic)

    while not fringe.isEmpty():
        currentState, actions, currentCost = fringe.pop()

        if currentState in visited and currentCost >= visited[currentState]:
            continue

        visited[currentState] = currentCost

        if problem.isGoalState(currentState):
            return actions

        successors = problem.getSuccessors(currentState)

        for successor, action, stepCost in successors:
            newCost = currentCost + stepCost
            newHeuristic = heuristic(successor, problem)
            fValue = newCost + newHeuristic  # f(n) = g(n) + h(n)
            newActions = actions + [action]
            fringe.push((successor, newActions, newCost), fValue)

    return []
```

*Test commands*:
```fish
python autograder.py -q q4
python pacman.py -l bigMaze -z .5 -p SearchAgent -a fn=astar,heuristic=manhattanHeuristic
```
== Q5: Corners Problem
*Implementation approach*: To visit all four corners, we need a state representation that tracks both Pacman's position and which corners have already been visited. A suitable state is `(position, visited_corners)`, where `position` is the `(x, y)` coordinate and `visited_corners` is a tuple of four booleans, one per corner.

*Core code* (`searchAgents.py`):
```python
# State representation: (current position, tuple of per-corner visited flags)

# getStartState
def getStartState(self):
    cornersVisited = tuple([self.startingPosition == corner for corner in self.corners])
    return (self.startingPosition, cornersVisited)

# isGoalState: have all corners been visited?
def isGoalState(self, state: Any):
    _, cornersVisited = state
    return all(cornersVisited)

# getSuccessors: generate successor states, updating the corner flags
def getSuccessors(self, state: Any):
    successors = []
    currentPosition, cornersVisited = state

    for action in [Directions.NORTH, Directions.SOUTH, Directions.EAST, Directions.WEST]:
        x, y = currentPosition
        dx, dy = Actions.directionToVector(action)
        nextx, nexty = int(x + dx), int(y + dy)

        if not self.walls[nextx][nexty]:
            nextPosition = (nextx, nexty)
            newCornersVisited = list(cornersVisited)

            for i, corner in enumerate(self.corners):
                if nextPosition == corner and not newCornersVisited[i]:
                    newCornersVisited[i] = True

            successorState = (nextPosition, tuple(newCornersVisited))
            successors.append((successorState, action, 1))

    self._expanded += 1
    return successors
```

*Test commands*:
```fish
python autograder.py -q q5
python pacman.py -l mediumCorners -p SearchAgent -a fn=bfs,prob=CornersProblem
```
== Q6: Corners Heuristic
*Implementation approach*: The key to a consistent, admissible corners heuristic is estimating the minimum cost from the current state to the goal (all corners visited). Our strategy: take the larger of "the Manhattan distance from the current position to the farthest unvisited corner" and "the largest pairwise Manhattan distance between unvisited corners". Both quantities are lower bounds on the distance that must still be traveled, so the heuristic never overestimates and is admissible.

*Core code* (`searchAgents.py`):
```python
def cornersHeuristic(state: Any, problem: CornersProblem):
    currentPosition, cornersVisited = state
    corners = problem.corners

    if all(cornersVisited):
        return 0

    unvisitedCorners = []
    for i, corner in enumerate(corners):
        if not cornersVisited[i]:
            unvisitedCorners.append(corner)

    if not unvisitedCorners:
        return 0

    # distance to the farthest unvisited corner
    maxDistance = 0
    for corner in unvisitedCorners:
        distance = util.manhattanDistance(currentPosition, corner)
        maxDistance = max(maxDistance, distance)

    # largest pairwise distance between unvisited corners
    maxCornerDistance = 0
    for i in range(len(unvisitedCorners)):
        for j in range(i + 1, len(unvisitedCorners)):
            distance = util.manhattanDistance(unvisitedCorners[i], unvisitedCorners[j])
            maxCornerDistance = max(maxCornerDistance, distance)

    return max(maxDistance, maxCornerDistance)
```

*Test commands*:
```fish
python autograder.py -q q6
python pacman.py -l mediumCorners -p AStarCornersAgent -z 0.5
```
== Q7: Food Heuristic
*Implementation approach*: Designing a heuristic for the "eat all the food" problem is more challenging. The state consists of the current position and the food grid. A good heuristic must tightly estimate the minimum number of steps needed to eat all remaining food. We combine several bounds to build a stronger heuristic: the maximum of "the Manhattan distance from the current position to the farthest food", "the largest pairwise Manhattan distance between remaining foods", and "the number of remaining foods". Each of these is a lower bound on the true cost, so their maximum remains admissible and consistent.

*Core code* (`searchAgents.py`):
```python
def foodHeuristic(state: Tuple[Tuple, List[List]], problem: FoodSearchProblem):
    position, foodGrid = state
    foodList = foodGrid.asList()

    if not foodList:
        return 0

    # distance to the farthest remaining food
    maxDistance = 0
    for food in foodList:
        distance = util.manhattanDistance(position, food)
        maxDistance = max(maxDistance, distance)

    # largest pairwise distance between remaining foods
    maxFoodDistance = 0
    if len(foodList) > 1:
        for i in range(len(foodList)):
            for j in range(i + 1, len(foodList)):
                distance = util.manhattanDistance(foodList[i], foodList[j])
                maxFoodDistance = max(maxFoodDistance, distance)

    foodCount = len(foodList)

    return max(maxDistance, maxFoodDistance, foodCount)
```

*Test commands*:
```fish
python autograder.py -q q7
python pacman.py -l trickySearch -p AStarFoodSearchAgent
```
== Q8: Closest-Dot Search
*Implementation approach*: This is a greedy algorithm: it does not guarantee a globally optimal solution, but it usually finds a good one quickly. The strategy is to repeatedly find and walk to the food dot closest to the current position until all food has been eaten. The core of this task is implementing `findPathToClosestDot`. We define an `AnyFoodSearchProblem` whose goal is reaching any food dot, then run BFS on it to obtain the shortest path to the nearest food.

*Core code* (`searchAgents.py`):
```python
# Goal test for AnyFoodSearchProblem
def isGoalState(self, state: Tuple[int, int]):
    x, y = state
    return self.food[x][y]

# findPathToClosestDot implementation
def findPathToClosestDot(self, gameState: pacman.GameState):
    problem = AnyFoodSearchProblem(gameState)
    # BFS guarantees the path with the fewest steps, i.e. to the closest food
    path = search.bfs(problem)
    return path
```
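
The outer greedy loop that repeatedly calls this routine can be sketched in a self-contained form. The grid model, `bfs_to_closest_food`, and `eat_all_food_greedily` below are illustrative stand-ins for the framework's `AnyFoodSearchProblem` machinery, not project code:

```python
from collections import deque

MOVES = {"North": (0, 1), "South": (0, -1), "East": (1, 0), "West": (-1, 0)}

def bfs_to_closest_food(start, foods, walls, width, height):
    """Shortest action list from start to any food cell (plain BFS)."""
    fringe = deque([(start, [])])
    visited = {start}
    while fringe:
        (x, y), actions = fringe.popleft()
        if (x, y) in foods:
            return actions
        for action, (dx, dy) in MOVES.items():
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in visited):
                visited.add(nxt)
                fringe.append((nxt, actions + [action]))
    return []  # no reachable food

def eat_all_food_greedily(start, foods, walls, width, height):
    """Repeatedly walk to the nearest food until none is left."""
    pos, plan, foods = start, [], set(foods)
    while foods:
        foods.discard(pos)          # eat the food under Pacman, if any
        if not foods:
            break
        path = bfs_to_closest_food(pos, foods, walls, width, height)
        if not path:
            break                   # remaining food is unreachable
        for action in path:
            dx, dy = MOVES[action]
            pos = (pos[0] + dx, pos[1] + dy)
        plan += path
    return plan
```

On a 3×1 corridor with food at (1, 0) and (2, 0), the plan is simply two moves east.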

*Test commands*:
```fish
python autograder.py -q q8
python pacman.py -l bigSearch -p ClosestDotSearchAgent -z .5
```
= Experiment Results
#para[
All eight tasks were implemented successfully and passed every autograder test case, for a final score of *25/25*. The autograder's final grade summary:
]

```
Provisional grades
==================
Question q1: 3/3
Question q2: 3/3
Question q3: 3/3
Question q4: 3/3
Question q5: 3/3
Question q6: 3/3
Question q7: 4/4
Question q8: 3/3
------------------
Total: 25/25
```

#para[
Detailed results for each task:
]
+ *Q1: Depth-first search*
  - All test cases passed.
  - In `mediumMaze`, found a path of length 130 while expanding 146 nodes.
+ *Q2: Breadth-first search*
  - All test cases passed.
  - In `mediumMaze`, found the shortest path of length 68 while expanding 269 nodes.
+ *Q3: Uniform-cost search*
  - All test cases passed.
  - Found optimal paths in mazes with varying action costs (`mediumDottedMaze`, `mediumScaryMaze`).
+ *Q4: A\* search*
  - All test cases passed.
  - In `bigMaze`, A\* expanded fewer nodes (about 549) than UCS (about 620), demonstrating the value of the heuristic.
+ *Q5: Corners problem*
  - All test cases passed.
  - On `mediumCorners`, BFS expanded close to 2000 nodes.
+ *Q6: Corners heuristic*
  - All test cases passed; the heuristic is consistent.
  - On `mediumCorners`, A\* with this heuristic cut the expansion count to *961* nodes, meeting the top grading threshold.
+ *Q7: Food heuristic*
  - All 18 test cases passed; the heuristic is consistent.
  - On `trickySearch`, the expansion count is *8763* nodes, meeting the full-score (4/4) threshold.
+ *Q8: Closest-dot search*
  - All test cases passed; the greedy strategy successfully eats all the food.

#pagebreak()
= Experiment Summary
#para[
In this experiment I implemented several classic search algorithms, including depth-first search (DFS), breadth-first search (BFS), uniform-cost search (UCS), and A\* search, and applied them to path-planning problems in the Pacman game.
]
#para[
The main challenge was designing efficient, consistent, admissible heuristics for the harder problems, in particular the corners problem of Q6 and the food-collection problem of Q7. By abstracting the problems and iterating on the heuristic strategy, the final heuristics significantly reduced the number of expanded nodes while remaining consistent, meeting the grading requirements. In Q7, for example, taking the maximum of "distance to the farthest food", "largest distance between two foods", and "number of remaining foods" kept the expansion count under 9000.
]
#para[
Through this project I not only mastered the principles, implementation details, and applicable scenarios of these search algorithms, but also came to appreciate how decisively state-space representation and heuristic design (admissibility and consistency) affect search performance. This experience carries over to a much wider range of AI planning problems. In the future, more advanced heuristic strategies (such as one based on a minimum spanning tree) or other search variants could be explored to further improve efficiency.
]
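#para[
As a concrete illustration of the minimum-spanning-tree idea mentioned above (a sketch assuming Manhattan distances; `mst_food_heuristic` and its helpers are hypothetical names, not code from this project):
]

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def mst_weight(points):
    """Total weight of a minimum spanning tree over the food positions
    (Prim's algorithm on the complete Manhattan-distance graph)."""
    if not points:
        return 0
    in_tree = {points[0]}
    remaining = set(points[1:])
    total = 0
    while remaining:
        # cheapest edge crossing the cut between tree and non-tree points
        d, nxt = min((manhattan(p, q), q) for p in in_tree for q in remaining)
        total += d
        in_tree.add(nxt)
        remaining.discard(nxt)
    return total

def mst_food_heuristic(position, food_list):
    """Distance to the nearest food plus the MST weight over all food:
    a lower bound on any tour that eats every dot."""
    if not food_list:
        return 0
    return min(manhattan(position, f) for f in food_list) + mst_weight(food_list)
```

#para[
This bound dominates the pairwise-distance heuristic used in Q7, at the cost of the extra MST computation per evaluated state.
]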
BIN
proj5/machinelearning/__pycache__/backend.cpython-313.pyc → proj5/__pycache__/backend.cpython-313.pyc
Executable file → Normal file
Binary file not shown.
BIN
proj5/machinelearning/__pycache__/models.cpython-313.pyc → proj5/__pycache__/models.cpython-313.pyc
Executable file → Normal file
Binary file not shown.
177
proj5/labtemplate.typ
Executable file
@@ -0,0 +1,177 @@
#let times = "Times LT Pro"
#let times = "Times New Roman"
#let song = (times, "Noto Serif CJK SC")
#let hei = (times, "Noto Sans CJK SC")
#let kai = (times, "Noto Serif CJK SC")
#let xbsong = (times, "Noto Serif CJK SC")
#let fsong = (times, "Noto Serif CJK SC")
#let code = (times, "JetBrains Mono")

#let nudtlabpaper(title: "",
  author: "",
  id: "",
  training_type: "",
  grade: "",
  major: "",
  department: "",
  advisor: "",
  jobtitle: "",
  lab: "",
  date: "",
  header_str: "",
  simple_cover: false,
  body) = {
  // Set the document's basic properties.
  set document(author: author, title: title)
  set page(
    margin: (left: 30mm, right: 30mm, top: 30mm, bottom: 30mm),
  )

  // Title row.
  v(158pt)
  align(center)[
    #block(text(weight: 700, size: 30pt, font: hei, tracking: 15pt, "人工智能实验报告"))
  ]
  // align(center)[
  //   #block(text(weight: 700, size: 30pt, font: song, tracking: 15pt, "本科实验报告"))
  // ]

  v(83pt)
  pad(
    left: 1em,
    right: 1em,
    grid(
      columns: (80pt, 1fr),
      rows: (17pt, auto),
      text(weight: 700, size: 16pt, font: song, "实验名称:"),
      align(center, text(weight: "regular", size: 16pt, font: song, title)),
      text(""),
      line(length: 100%)
    )
    // #block(text(weight: 700, 1.75em, title))
    // underline(text(weight: 700, size: 16pt, font: song, title))
  )

  // Author information.
  v(82.5pt)

  if simple_cover {
    pad(
      left: 10%,
      right: 10%,
      grid(
        columns: (100pt, 1fr),
        rows: (15pt, 8pt, 15pt, 8pt, 15pt, 8pt),
        text(size: 14pt, font: song, "学员姓名:"),
        align(center, text(size: 14pt, font: song, author)),
        text(""),
        line(length: 100%),
        text(size: 14pt, font: song, "学 号:"),
        align(center, text(size: 14pt, font: times, id)),
        text(""),
        line(length: 100%),
        text(size: 14pt, font: song, "实验日期:"),
        align(center, text(size: 14pt, font: song, date)),
        text(""),
        line(length: 100%),
      )
    )
  } else {
    grid(
      columns: (0.25fr, 0.25fr, 0.25fr, 0.25fr),
      rows: (15pt, 8pt, 15pt, 8pt, 15pt, 8pt, 15pt, 8pt, 15pt),
      text(size: 14pt, font: song, tracking: 10pt, "学员姓名"),
      align(center, text(size: 14pt, font: song, author)),
      text(size: 14pt, font: song, tracking: 54pt, "学号"),
      align(center, text(size: 14pt, font: times, id)),
      text(""),
      line(length: 100%),
      text(""),
      line(length: 100%),
      text(size: 14pt, font: song, tracking: 9pt, "培养类型"),
      align(center, text(size: 14pt, font: song, training_type)),
      text(size: 14pt, font: song, tracking: 54pt, "年级"),
      align(center, text(size: 14pt, font: times, grade)),
      text(""),
      line(length: 100%),
      text(""),
      line(length: 100%),
      text(size: 14pt, font: song, tracking: 54pt, "专业"),
      align(center, text(size: 14pt, font: song, major)),
      text(size: 14pt, font: song, tracking: 9pt, "所属学院"),
      align(center, text(size: 14pt, font: song, department)),
      text(""),
      line(length: 100%),
      text(""),
      line(length: 100%),
      text(size: 14pt, font: song, tracking: 9pt, "指导教员"),
      align(center, text(size: 14pt, font: song, advisor)),
      text(size: 14pt, font: song, tracking: 54pt, "职称"),
      align(center, text(size: 14pt, font: song, jobtitle)),
      text(""),
      line(length: 100%),
      text(""),
      line(length: 100%),
      text(size: 14pt, font: song, tracking: 20pt, "实验室"),
      align(center, text(size: 14pt, font: song, lab)),
      text(size: 14pt, font: song, tracking: 9pt, "实验时间"),
      align(center, text(size: 14pt, font: song, date)),
      text(""),
      line(length: 100%),
      text(""),
      line(length: 100%),
    )
  }

  v(50.5pt)
  align(center, text(font: hei, size: 15pt, "国防科技大学教育训练部制"))

  pagebreak()

  set page(
    margin: (left: 30mm, right: 30mm, top: 30mm, bottom: 30mm),
    numbering: "i",
    number-align: center,
  )

  v(14pt)
  align(center)[
    #block(text(font: hei, size: 14pt, "《本科实验报告》填写说明"))
  ]

  v(14pt)
  text("")
  par(first-line-indent: 2em, text(font: song, size: 12pt, "实验报告内容编排应符合以下要求:"))

  par(first-line-indent: 2em, text(font: fsong, size: 12pt, "(1)采用A4(21cm×29.7cm)白色复印纸,单面黑字。上下左右各侧的页边距均为3cm;缺省文档网格:字号为小4号,中文为宋体,英文和阿拉伯数字为Times New Roman,每页30行,每行36字;页脚距边界为2.5cm,页码置于页脚、居中,采用小5号阿拉伯数字从1开始连续编排,封面不编页码。"))

  par(first-line-indent: 2em, text(font: fsong, size: 12pt, "(2)报告正文最多可设四级标题,字体均为黑体,第一级标题字号为4号,其余各级标题为小4号;标题序号第一级用“一、”、“二、”……,第二级用“(一)”、“(二)” ……,第三级用“1.”、“2.” ……,第四级用“(1)”、“(2)” ……,分别按序连续编排。"))

  par(first-line-indent: 2em, text(font: fsong, size: 12pt, "(3)正文插图、表格中的文字字号均为5号。"))

  pagebreak()

  set page(
    margin: (left: 30mm, right: 30mm, top: 30mm, bottom: 30mm),
    numbering: "1",
    number-align: center,
  )

  set heading(numbering: "1.1")
  // set text(font: hei, lang: "zh")

  show heading: it => box(width: 100%)[
    #v(0.50em)
    #set text(font: hei)
    #counter(heading).display()
    // #h(0.5em)
    #it.body
  ]
  // Main body.
  set par(justify: true)

  body
}

#let para(t) = par(first-line-indent: 2em, text(font: song, size: 10.5pt, t))
76
proj5/main.typ
Executable file
@@ -0,0 +1,76 @@
#import "labtemplate.typ": *
#show: nudtlabpaper.with(
  author: "程景愉",
  id: "202302723005",
  title: "机器学习",
  training_type: "普通本科生",
  grade: "2023级",
  major: "网络工程",
  department: "计算机学院",
  advisor: "胡罡",
  jobtitle: "教授",
  lab: "306-707",
  date: "2025.12.10",
  header_str: "机器学习实验报告",
  simple_cover: true,
)

#set page(header: [
  #set par(spacing: 6pt)
  #align(center)[#text(size: 11pt)[人工智能实验报告]]
  #v(-0.3em)
  #line(length: 100%, stroke: (thickness: 1pt))
],)

#show heading: it => if it.level == 1 [
  #v(1em)
  #set text(font: hei, size: 16pt)
  #counter(heading).display()
  #h(0.5em)
  #it.body
  #v(0.5em)
] else [
  #v(0.8em)
  #set text(font: "New Computer Modern", weight: "bold", style: "italic")
  #counter(heading).display(it.numbering)
  #h(0.5em)
  #it.body
  #v(0.5em)
]

#outline(title: "目录", depth: 2, indent: 1em)

#set enum(indent: 0.5em, body-indent: 0.5em)
#pagebreak()

= Experiment Introduction
#para[
]

= Experiment Contents
#para[
]
+
+
+

= Experiment Requirements
#para[
]

= Experiment Steps and Implementation

= Experiment Results
#para[
]

#para[
]
#pagebreak()
= Experiment Summary
#para[
]
#para[
]
#para[
]
266
proj5/sample.typ
Executable file
@@ -0,0 +1,266 @@
#import "labtemplate.typ": *
#show: nudtlabpaper.with(
  author: "程景愉",
  id: "202302723005",
  title: "博弈搜索",
  training_type: "普通本科生",
  grade: "2023级",
  major: "网络工程",
  department: "计算机学院",
  advisor: "胡罡",
  jobtitle: "教授",
  lab: "306-707",
  date: "2025.12.02",
  header_str: "博弈搜索实验报告",
  simple_cover: true,
)

#set page(header: [
  #set par(spacing: 6pt)
  #align(center)[#text(size: 11pt)[人工智能实验报告]]
  #v(-0.3em)
  #line(length: 100%, stroke: (thickness: 1pt))
],)

#show heading: it => if it.level == 1 [
  #v(1em)
  #set text(font: hei, size: 16pt)
  #counter(heading).display()
  #h(0.5em)
  #it.body
  #v(0.5em)
] else [
  #v(0.8em)
  #set text(font: "New Computer Modern", weight: "bold", style: "italic")
  #counter(heading).display(it.numbering)
  #h(0.5em)
  #it.body
  #v(0.5em)
]

#outline(title: "目录", depth: 2, indent: 1em)

#set enum(indent: 0.5em, body-indent: 0.5em)
#pagebreak()
= Experiment Introduction
#para[
This project is the second project of UC Berkeley's CS188 Artificial Intelligence course, focused on adversarial search. We write agents that control Pacman and must survive and win in an environment containing not only static walls and food but also ghosts that actively chase Pacman. The core algorithms are the Reflex Agent, Minimax, Alpha-Beta pruning, and Expectimax; in addition, an efficient evaluation function must be designed so that the agent makes good decisions under depth-limited search. The experiment deepens our understanding of game-tree search, pruning, and the modeling of imperfectly rational opponents.
]

= Experiment Contents
#para[
This experiment consists of five core programming tasks:
]
+ *Q1: Reflex Agent*: implement a reflex agent that reacts immediately to features of the current state (such as food distances and ghost positions).
+ *Q2: Minimax*: implement a general minimax algorithm so that Pacman plans optimally under the assumption that the ghosts also play optimally (i.e., try to minimize Pacman's score).
+ *Q3: Alpha-Beta Pruning*: add alpha-beta pruning on top of minimax to skip unnecessary node expansions, improving efficiency and enabling deeper searches.
+ *Q4: Expectimax*: implement expectimax, which models the ghosts as random rather than optimal opponents and maximizes the expected score.
+ *Q5: Evaluation Function*: design a stronger evaluation function that combines multiple state features so the agent judges positions accurately even at limited search depth.
= Experiment Requirements
#para[
This project requires completing the code in `multiAgents.py` following the provided guidance. The core requirements are:
]
+ *File changes*: only `multiAgents.py` may be modified; other files (such as `pacman.py` and `game.py`) must stay untouched for autograder compatibility.
+ *Generality*: the Minimax and Alpha-Beta implementations must support any number of ghosts (MIN layers) and any specified search depth.
+ *Pruning correctness*: in Q3 the alpha-beta logic must be exact: children must be visited in the order returned by `getLegalActions`, and pruning on equality is not allowed, so that the expanded-node counts match the autograder's checks exactly.
+ *Performance*: the evaluation functions for Q1 and Q5 must reach the required win rates (Q1: more than 5 wins; Q5: win all 10 games on smallClassic with an average score above 1000) within the allotted running time.

= Experiment Steps and Implementation
== Q1: Reflex Agent
*Implementation approach*: The reflex agent performs no tree search; it chooses an action by directly evaluating the resulting state. Our `evaluationFunction` uses a simple linear scoring scheme built on two main factors: being closer to food is better (rewarded via a reciprocal-distance bonus), and a nearby ghost that is not scared must be avoided (a huge penalty).

*Core code* (`multiAgents.py`):
```python
def evaluationFunction(self, currentGameState: GameState, action):
    successorGameState = currentGameState.generatePacmanSuccessor(action)
    newPos = successorGameState.getPacmanPosition()
    newFood = successorGameState.getFood()
    newGhostStates = successorGameState.getGhostStates()
    newScaredTimes = [ghostState.scaredTimer for ghostState in newGhostStates]

    # distance to the closest food
    foodList = newFood.asList()
    if not foodList:
        return 999999
    minFoodDist = min([util.manhattanDistance(newPos, food) for food in foodList])

    # avoid dangerous (non-scared) ghosts
    for i, ghostState in enumerate(newGhostStates):
        dist = util.manhattanDistance(newPos, ghostState.getPosition())
        if dist < 2 and newScaredTimes[i] == 0:
            return -999999

    # combined score: base game score + food incentive
    return successorGameState.getScore() + 1.0 / (minFoodDist + 1)
```

*Test commands*:
```fish
python autograder.py -q q1
python pacman.py -p ReflexAgent -l testClassic
```
== Q2: Minimax
*Implementation approach*: Minimax builds a game tree in which Pacman (the MAX layer) tries to maximize the score while every ghost (a MIN layer) tries to minimize it. Each level of recursion is one agent's move. One unit of search depth corresponds to all agents moving once, so the depth counter is incremented only when control returns to agent 0 (Pacman).

*Core code* (`multiAgents.py`):
```python
def minimax(agentIndex, depth, gameState):
    if gameState.isWin() or gameState.isLose() or depth == self.depth:
        return self.evaluationFunction(gameState)

    # MAX agent (Pacman)
    if agentIndex == 0:
        bestValue = float("-inf")
        for action in gameState.getLegalActions(agentIndex):
            succ = gameState.generateSuccessor(agentIndex, action)
            bestValue = max(bestValue, minimax(agentIndex + 1, depth, succ))
        return bestValue
    # MIN agents (ghosts)
    else:
        bestValue = float("inf")
        for action in gameState.getLegalActions(agentIndex):
            succ = gameState.generateSuccessor(agentIndex, action)
            if agentIndex == gameState.getNumAgents() - 1:
                bestValue = min(bestValue, minimax(0, depth + 1, succ))
            else:
                bestValue = min(bestValue, minimax(agentIndex + 1, depth, succ))
        return bestValue
```
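
The agent-cycling logic described above can be checked on a hand-built game tree. The sketch below is a self-contained toy (leaves are plain numbers, not Pacman states); `toy_minimax` is an illustrative name, not project code:

```python
def toy_minimax(node, agent_index, num_agents):
    """Minimax over a nested-list game tree: leaves are numbers,
    internal nodes are lists of children. Agent 0 is MAX; all other
    agents are MIN, cycling in index order as in the report."""
    if isinstance(node, (int, float)):
        return node
    next_agent = (agent_index + 1) % num_agents
    values = [toy_minimax(child, next_agent, num_agents) for child in node]
    return max(values) if agent_index == 0 else min(values)

# Two agents (Pacman + one ghost): Pacman picks the branch whose
# worst case is best.
tree = [[3, 5], [2, 9]]
print(toy_minimax(tree, 0, 2))  # -> 3
```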

*Test commands*:
```fish
python autograder.py -q q2
python pacman.py -p MinimaxAgent -l minimaxClassic -a depth=4
```
== Q3: Alpha-Beta Pruning
*Implementation approach*: On top of minimax we maintain two bounds: $alpha$ (the lower bound at MAX nodes) and $beta$ (the upper bound at MIN nodes). At a MAX node, once a branch's value exceeds $beta$ we prune, since the MIN parent would never choose this node; at a MIN node, once a branch's value drops below $alpha$ we prune, since the MAX parent would never choose it.

*Core code* (`multiAgents.py`):
```python
def maxValue(agentIndex, depth, gameState, alpha, beta):
    v = float("-inf")
    for action in gameState.getLegalActions(agentIndex):
        succ = gameState.generateSuccessor(agentIndex, action)
        v = max(v, alphaBeta(agentIndex + 1, depth, succ, alpha, beta))
        if v > beta: return v  # pruning
        alpha = max(alpha, v)
    return v

def minValue(agentIndex, depth, gameState, alpha, beta):
    v = float("inf")
    for action in gameState.getLegalActions(agentIndex):
        succ = gameState.generateSuccessor(agentIndex, action)
        if agentIndex == gameState.getNumAgents() - 1:
            v = min(v, alphaBeta(0, depth + 1, succ, alpha, beta))
        else:
            v = min(v, alphaBeta(agentIndex + 1, depth, succ, alpha, beta))
        if v < alpha: return v  # pruning
        beta = min(beta, v)
    return v
```

*Test commands*:
```fish
python autograder.py -q q3
python pacman.py -p AlphaBetaAgent -a depth=3 -l smallClassic
```

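The effect of pruning can be made concrete by counting how many leaves are actually visited on a small self-contained tree (invented data); a sketch:

```python
# Toy alpha-beta on a nested-list game tree, recording every leaf visited.
# Illustration only: the tree data is invented, not from the Pacman project.
def ab(node, is_max, alpha, beta, visited):
    if isinstance(node, (int, float)):
        visited.append(node)  # record each evaluated leaf
        return node
    if is_max:
        v = float("-inf")
        for child in node:
            v = max(v, ab(child, False, alpha, beta, visited))
            if v > beta:
                return v  # prune: skip the remaining siblings
            alpha = max(alpha, v)
        return v
    v = float("inf")
    for child in node:
        v = min(v, ab(child, True, alpha, beta, visited))
        if v < alpha:
            return v  # prune: skip the remaining siblings
        beta = min(beta, v)
    return v

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
visited = []
value = ab(tree, True, float("-inf"), float("inf"), visited)
print(value, len(visited))  # prints: 3 7  (plain minimax would visit all 9 leaves)
```

The root value is unchanged, but the leaves 4 and 6 are never evaluated: once the second MIN subtree sees the leaf 2, its value can only fall below the $alpha$ of 3 already guaranteed at the root.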
== Q4: Expectimax

*Approach*: Expectimax no longer assumes the ghosts are optimal adversaries; instead it models them as acting uniformly at random. The MIN layers therefore become chance layers that compute the average (expected value) of the successor scores. This lets Pacman take calculated risks for higher payoffs when facing suboptimal ghosts.

*Core code* (`multiAgents.py`):
```python
def expValue(agentIndex, depth, gameState):
    v = 0
    actions = gameState.getLegalActions(agentIndex)
    prob = 1.0 / len(actions)  # uniform random: every legal action is equally likely
    for action in actions:
        succ = gameState.generateSuccessor(agentIndex, action)
        if agentIndex == gameState.getNumAgents() - 1:
            v += prob * expectimax(0, depth + 1, succ)
        else:
            v += prob * expectimax(agentIndex + 1, depth, succ)
    return v
```

*Test commands*:
```fish
python autograder.py -q q4
python pacman.py -p ExpectimaxAgent -l minimaxClassic -a depth=3
```

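Replacing the min with an average changes the root value; a minimal self-contained sketch on a toy tree (invented data):

```python
# Same toy-tree representation, but opponent layers are uniform-random chance layers.
# Illustration only: the tree data is invented, not from the Pacman project.
def toy_expectimax(node, is_max):
    if isinstance(node, (int, float)):
        return node
    values = [toy_expectimax(child, not is_max) for child in node]
    if is_max:
        return max(values)
    return sum(values) / len(values)  # chance layer: uniform expectation

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(toy_expectimax(tree, True))  # prints 23/3 ≈ 7.67; the Minimax value of this tree is 3
```

Against a random opponent the first subtree, whose worst case is only 3, has the highest expected payoff, which is exactly the kind of calculated risk the pessimistic Minimax model would forgo.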
== Q5: Evaluation Function

*Approach*: To perform well under depth-limited search, `betterEvaluationFunction` considers richer features. Beyond the base game score, we reward proximity to the nearest food (weight +10), to capsules (weight +20), and to scared ghosts worth chasing (weight +100); we heavily penalize proximity to non-scared ghosts (weight -1000), and we penalize the counts of remaining food and capsules to push Pacman to clear the board quickly.

*Core code* (`multiAgents.py`):
```python
def betterEvaluationFunction(currentGameState: GameState):
    pos = currentGameState.getPacmanPosition()
    score = currentGameState.getScore()
    # Extract the features used below (standard GameState accessors)
    foodList = currentGameState.getFood().asList()
    capsules = currentGameState.getCapsules()
    ghostStates = currentGameState.getGhostStates()
    scaredTimes = [ghost.scaredTimer for ghost in ghostStates]

    # Distance features
    foodDist = min([util.manhattanDistance(pos, f) for f in foodList]) if foodList else 0
    score += 10.0 / (foodDist + 1)

    # Ghost features
    for i, ghost in enumerate(ghostStates):
        dist = util.manhattanDistance(pos, ghost.getPosition())
        if scaredTimes[i] > 0:
            score += 100.0 / (dist + 1)  # encourage chasing scared ghosts
        elif dist < 2:
            score -= 1000.0  # avoid contact at all costs

    # Count features (fewer is better, hence subtracted)
    score -= len(foodList) * 4
    score -= len(capsules) * 20

    return score
```

*Test commands*:
```fish
python autograder.py -q q5
python autograder.py -q q5 --no-graphics
```

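The reciprocal form `10.0 / (dist + 1)` used above rewards closeness smoothly while staying finite at distance zero; a quick sketch with made-up distances:

```python
# Reciprocal distance shaping: the bonus decays with distance and
# cannot divide by zero. Distances here are invented for illustration.
def food_bonus(dist):
    return 10.0 / (dist + 1)

for d in [0, 1, 4, 9]:
    print(d, food_bonus(d))  # bonuses: 10.0, 5.0, 2.0, 1.0
```

Compared with a raw `-dist` term, this shape makes the gradient steepest near the food, so the feature dominates exactly when acting on it matters most.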
= Experimental Results

#para[
All five tasks were implemented successfully and pass every autograder test case, for a full score of *25/25*.
]

```
Provisional grades
==================
Question q1: 4/4
Question q2: 5/5
Question q3: 5/5
Question q4: 5/5
Question q5: 6/6
------------------
Total: 25/25
```

#para[
Key observations from the tests:
]
+ *Q1 (Reflex)*: The agent reliably avoids ghosts while collecting food, scoring consistently above 1000 on average on `testClassic`.
+ *Q2 (Minimax)*: Passes all game-tree tests across depths and ghost counts; the number of expanded nodes matches the reference exactly, confirming the logic is correct.
+ *Q3 (Alpha-Beta)*: Produces results identical to Minimax while expanding significantly fewer nodes; a depth-3 search on `smallClassic` completes almost instantly.
+ *Q5 (Eval)*: The evaluation function performs strongly: Pacman wins all 10 randomized runs on `smallClassic` with an average score well above the required 1000, validating the feature selection and weighting.

#pagebreak()
= Summary

#para[
Through this project I gained a solid grasp of the core ideas of adversarial search and their application in multi-agent environments.
]

#para[
First, implementing Minimax clarified how MAX and MIN nodes constrain each other in a zero-sum game. Implementing Alpha-Beta pruning then drove home how much pruning matters for search efficiency: it can shrink the search space exponentially without changing the final decision, making deeper searches feasible.
]

#para[
Second, implementing Expectimax showed that against random or suboptimal opponents, a probabilistic model often outperforms the pessimistic Minimax model. Finally, the most challenging part was designing the evaluation function. I learned to abstract a game state into a feature vector (food distances, ghost states, and so on) and to quantify state quality through a linear combination of those features, which tests not only coding ability but also understanding of the problem and feature engineering skill.
]