This article presents a variant of Tic-Tac-Toe in which the board size is set via xsize and ysize and the winning run length via winnum. Two deep learning models, one playing the human side and one the computer side, play against each other automatically; a Q-learning table records every move, and the game's outcome decides how each model is rewarded. The code below walks through the full training process: iterating over games, making moves, checking for a win or draw, and updating the models.

Tic-Tac-Toe (井字棋) is a connection game played on a 3x3 grid, similar in spirit to Gomoku. Because the board is usually drawn without an outer border, the grid lines form the Chinese character 井 ("well"), which gives the game its Chinese name. All it needs is pen and paper: two players, one marking O and one marking X, take turns leaving their mark in an empty cell (by convention, X moves first). The first player to connect three of their marks along any straight line wins.
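The win check itself is encapsulated in the VictoryRule.Rule class, which the script below imports but the article never shows. As a rough sketch of what such a check might look like (the function name check_win, its signature, and the scan logic here are assumptions, not the article's actual code), placing a mark and then scanning the four line directions from that cell is enough:

import numpy as np

def check_win(board, xsize, ysize, winnum, last, mark):
    """Hypothetical win check: True if the mark just placed at flat
    index `last` completes a run of `winnum` in any line direction."""
    grid = board.reshape(xsize, ysize)
    r, c = divmod(last, ysize)
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):  # row, column, both diagonals
        count = 1                     # the mark just placed
        for sign in (1, -1):          # walk both ways from the last move
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < xsize and 0 <= cc < ysize and grid[rr, cc] == mark:
                count += 1
                rr += sign * dr
                cc += sign * dc
        if count >= winnum:
            return True
    return False

b = np.zeros(9, dtype=int)
b[[0, 4, 8]] = 1
print(check_win(b, 3, 3, 3, 8, 1))    # True: three marks on the main diagonal

The training script below wires this rule together with two Paddle models and a Q table: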
import numpy as np
import paddle
from Model import Model
from VictoryRule import Rule
from QLearning import QLearning
from visualdl import LogWriter
log_writer = LogWriter(logdir="./log")
Max_Epoch = 200        # maximum number of training iterations
xsize = 3              # number of board rows
ysize = 3              # number of board columns
winnum = 3             # run length needed to win
learning_rate = 1e-3   # learning rate
decay_rate = 0.6       # per-step decay applied to the Q-step records
player = 1             # the player's mark: a non-zero, non-negative integer
computer = 2           # the computer's mark: a non-zero, non-negative integer
remain = []            # board positions still free
rule = Rule(xsize, ysize, winnum)               # game rules and win checking
Qchart = QLearning(xsize * ysize, decay_rate)   # Q table recording each move
player_model = Model(xsize * ysize, xsize * ysize)
player_model.train()
computer_model = Model(xsize * ysize,xsize * ysize)
computer_model.train()
player_optimizer = paddle.optimizer.SGD(parameters=player_model.parameters(),
                                        learning_rate=learning_rate)
computer_optimizer = paddle.optimizer.SGD(parameters=computer_model.parameters(),
                                          learning_rate=learning_rate)

def restart():
    "Reset the board, the Q table, and the list of free positions"
    Qchart.clear()
    remain.clear()
    rule.map = np.zeros(xsize * ysize, dtype=int)
    for i in range(xsize * ysize):
        remain.append(i)

def modelupdate(player_loss, computer_loss):
"模型更新"
log_writer.add_scalar(tag="player/loss", step=epoch, value=player_loss.numpy())
log_writer.add_scalar(tag="computer/loss", step=epoch, value=computer_loss.numpy()) # 梯度更新
    player_loss.backward()
    computer_loss.backward()
    player_optimizer.step()
    player_optimizer.clear_grad()
    computer_optimizer.step()
    computer_optimizer.clear_grad()
    paddle.save(player_model.state_dict(), 'player_model')
    paddle.save(computer_model.state_dict(), 'computer_model')
for i in range(xsize * ysize):
    remain.append(i)

for epoch in range(Max_Epoch):
    while True:
        # player's turn: score every cell, then play the best one still free
        player_predict = player_model(paddle.to_tensor(rule.map, dtype='float32', stop_gradient=False))
        for pred in np.argsort(-player_predict.numpy()):
            if pred in remain:
                remain.remove(pred)
                break
        rule.map[pred] = player
        Qchart.update(pred, 'player')
        print('player down at {}'.format(pred))
        overcode = rule.checkover(pred, player)
        if overcode == player:
            # the player wins: the winner's label is its step record,
            # the loser's label is the negation of its step record
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(-1 * Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Player Victory!")
            print(rule.map.reshape(xsize, ysize))
            modelupdate(player_loss, computer_loss)
            restart()
            break
        elif overcode == 0:
            # draw: both labels are the sides' own step records
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Draw!")
            print(rule.map.reshape(xsize, ysize))
            modelupdate(player_loss, computer_loss)
            restart()
            break
        # computer's turn, mirroring the player's
        computer_predict = computer_model(paddle.to_tensor(rule.map, dtype='float32', stop_gradient=False))
        for pred in np.argsort(-computer_predict.numpy()):
            if pred in remain:
                remain.remove(pred)
                break
        rule.map[pred] = computer
        Qchart.update(pred, 'computer')
        print('computer down at {}'.format(pred))
        overcode = rule.checkover(pred, computer)
        if overcode == computer:
            # the computer wins: the winner/loser labels are swapped accordingly
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(-1 * Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Computer Victory!")
            print(rule.map.reshape(xsize, ysize))
            modelupdate(player_loss, computer_loss)
            restart()
            break
        elif overcode == 0:
            # draw after the computer's move (on a 3x3 board the player always
            # fills the last cell, so this branch guards larger board sizes)
            player_loss = paddle.nn.functional.mse_loss(player_predict, paddle.to_tensor(Qchart.playerstep, dtype='float32', stop_gradient=False))
            computer_loss = paddle.nn.functional.mse_loss(computer_predict, paddle.to_tensor(Qchart.computerstep, dtype='float32', stop_gradient=False))
            print("Draw!")
            print(rule.map.reshape(xsize, ysize))
            modelupdate(player_loss, computer_loss)
            restart()
            break

A sample game from the training log:

player down at 7
computer down at 3
player down at 1
computer down at 8
player down at 6
computer down at 2
player down at 0
computer down at 5
Computer Victory!
[[1 1 2]
 [2 0 2]
 [1 1 2]]
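The Model and QLearning classes come from local modules the article does not include. Below is a minimal sketch of implementations that would satisfy the call signatures used above; the hidden-layer width, the decayed-step encoding of moves, and every other internal detail are assumptions rather than the article's actual code:

import numpy as np
import paddle

class Model(paddle.nn.Layer):
    "A small MLP: input is the flattened board, output is one score per cell."
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = paddle.nn.Sequential(
            paddle.nn.Linear(in_dim, 64),   # hidden width 64 is a guess
            paddle.nn.ReLU(),
            paddle.nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class QLearning:
    "Records each side's moves as a decayed vector: the newest move is worth 1."
    def __init__(self, size, decay_rate):
        self.size = size
        self.decay_rate = decay_rate
        self.clear()

    def clear(self):
        self.playerstep = np.zeros(self.size, dtype=np.float32)
        self.computerstep = np.zeros(self.size, dtype=np.float32)

    def update(self, pos, who):
        steps = self.playerstep if who == 'player' else self.computerstep
        steps *= self.decay_rate    # older moves fade by decay_rate per step
        steps[pos] = 1.0            # the move just made gets full weight

Encoding each side's moves as a geometrically decayed vector makes the regression target emphasize recent moves, which is consistent with how the script uses decay_rate; the real QLearning module may weight steps differently.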
That concludes this detailed walkthrough of Tic-Tac-Toe (井字棋).