
[Reinforcement Learning] Advantage Actor-Critic (A2C) for the Cart-Pole Problem + PyTorch Code in Practice


Contents

1. The Cart-Pole Problem
2. A Brief Introduction to the Advantage Actor-Critic Algorithm
3. Further Reading
4. Python Code in Practice
  4.1 Setup Before Running
  4.2 Main Code
    4.2.1 Version without shared network parameters
    4.2.2 Version with a shared network
  4.3 Visualization Settings

1. The Cart-Pole Problem

The agent must decide between two actions - pushing the cart to the left or to the right - so that the pole attached to it stays upright. In Gym's CartPole-v0 the agent receives a reward of +1 for every step the pole stays up, and an episode ends when the pole falls over, the cart leaves the track, or 200 steps are reached, so the maximum episode return is 200.
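To make the setting concrete, here is a minimal random-policy rollout (my own illustration, not part of the original scripts). It uses the classic Gym API (gym < 0.26, where env.step returns four values), which is also the API the training code in Section 4 assumes:

```python
import gym

# Minimal random-policy rollout on CartPole-v0 using the classic Gym API,
# where env.step returns (next_state, reward, done, info).
env = gym.make('CartPole-v0')
state = env.reset()
ep_reward = 0
done = False
while not done:
    action = env.action_space.sample()           # 0 = push left, 1 = push right
    state, reward, done, _ = env.step(action)    # +1 reward for every step the pole stays up
    ep_reward += reward
print(f"random policy episode return: {ep_reward}")
env.close()
```

A random policy usually survives only a few dozen steps, which is the baseline the A2C agent below has to beat.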

2. A Brief Introduction to the Advantage Actor-Critic Algorithm

The advantage actor-critic algorithm works as follows. We start with a policy $\pi$: an initial actor interacts with the environment and collects data. In the plain policy gradient method, that data would be used to update the policy directly. In the actor-critic algorithm, however, we do not update the policy directly from the data. We first use the data to estimate the value function, which can be done with temporal-difference (TD) or Monte Carlo (MC) methods. Then, based on the value function, we update $\pi$ with the following equation.

$$
\nabla \bar{R}_\theta \approx \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_n}\left(r_t^n+V_\pi\left(s_{t+1}^n\right)-V_\pi\left(s_t^n\right)\right) \nabla \log p_\theta\left(a_t^n \mid s_t^n\right)
$$

With the new $\pi$, we interact with the environment again, collect new data, and estimate the value function once more. The new value function is then used to update the policy, i.e., the actor. This is how the whole advantage actor-critic algorithm operates.
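To connect the equation to code, the following minimal sketch (my own illustration, not the full implementation in Section 4) computes the one-step advantage $r + \gamma V(s') - V(s)$ and the corresponding actor and critic losses in PyTorch. Here `actor`, `critic`, `state`, `next_state`, `action`, `reward` and `done` are assumed to be defined as in Section 4.2.1 (batched state tensors, an integer action, float reward and done flag):

```python
import torch
import torch.nn.functional as F

# A2C losses for a single transition (s, a, r, s'):
#   advantage A(s, a) = r + gamma * V(s') - V(s)   (the TD error)
#   actor loss  = -log pi(a|s) * A                 (policy gradient weighted by the advantage)
#   critic loss = squared error between V(s) and the TD target
def a2c_losses(actor, critic, state, action, reward, next_state, done, gamma=0.99):
    value = critic(state)                              # V(s), keeps its gradient
    with torch.no_grad():
        next_value = critic(next_state)                # V(s'), no gradient through the target
        td_target = reward + gamma * next_value * (1.0 - done)
    advantage = td_target - value                      # TD error = advantage estimate
    log_prob = torch.log(actor(state).squeeze(0)[action])
    actor_loss = -(log_prob * advantage.detach()).mean()   # detach: policy grad only
    critic_loss = F.mse_loss(value, td_target)              # value regression toward TD target
    return actor_loss, critic_loss
```

Detaching the advantage in the actor loss keeps the policy gradient from flowing into the critic, while the critic is regressed toward the TD target; the full scripts below follow the same idea but accumulate a whole episode of transitions before updating.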

3. Further Reading

For a more detailed introduction to the advantage actor-critic algorithm, see my earlier post: 【EasyRL学习笔记】第九章 Actor-Critic 演员-评论员算法 (EasyRL study notes, Chapter 9: the Actor-Critic algorithm).

Before studying the advantage actor-critic algorithm, you should ideally be familiar with the following topics (the two value targets they provide for the critic are recalled right after this list):

- Deep Q-Network (DQN)
- Temporal-difference (TD) methods
- Monte Carlo (MC) methods
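For reference, the two standard targets the critic can regress toward are (standard definitions, not specific to this post):

$$
\text{TD(0) target: } r_t + \gamma V_\pi(s_{t+1}) \qquad\qquad \text{MC target: } G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots
$$

The TD target bootstraps from the current value estimate of the next state (lower variance, some bias), while the Monte Carlo target uses the full discounted return of the episode (unbiased, higher variance).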

4. Python Code in Practice

4.1 Setup Before Running

Prepare an RL_Utils.py file; its contents can be obtained from another of my posts: 【RL工具类】强化学习常用函数工具类(Python代码) (a utility module of common reinforcement learning helper functions).

This step is important: the scripts below import RL_Utils.py and call its helpers all_seed, save_args, save_results and plot_rewards.
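If you cannot access that post, the sketch below is a hypothetical minimal stand-in for RL_Utils.py, inferred only from how its helpers are called in the scripts of Section 4.2; the author's actual file may differ:

```python
# Hypothetical minimal stand-in for RL_Utils.py, inferred from the call sites in the
# scripts below (all_seed, save_args, save_results, plot_rewards). Not the author's file.
import json
import os
import random
from pathlib import Path

import numpy as np
import torch
import matplotlib.pyplot as plt


def all_seed(env, seed=1):
    # seed the environment and the common RNGs for reproducibility (classic Gym API)
    env.seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed)


def save_args(arg_dict, path):
    Path(path).mkdir(parents=True, exist_ok=True)
    with open(os.path.join(path, "args.json"), "w", encoding="utf-8") as f:
        json.dump({k: str(v) for k, v in arg_dict.items()}, f, ensure_ascii=False, indent=2)


def save_results(res_dic, tag, path):
    Path(path).mkdir(parents=True, exist_ok=True)
    np.save(os.path.join(path, f"{tag}_rewards.npy"), np.array(res_dic['rewards']))


def plot_rewards(rewards, arg_dict, path, tag):
    Path(path).mkdir(parents=True, exist_ok=True)
    plt.figure()
    plt.plot(rewards)
    plt.xlabel("episode")
    plt.ylabel("reward")
    plt.title(f"{arg_dict['algo_name']} on {arg_dict['env_name']} ({tag})")
    plt.savefig(os.path.join(path, f"{tag}_rewards.png"))
    if arg_dict.get('show_fig'):
        plt.show()
    plt.close()
```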

4.2 Main Code

4.2.1 Version without shared network parameters

In this version the actor (ActorSoftmax) and the critic (Critic) are two separate networks, each trained with its own Adam optimizer.

import argparse
import datetime
import os
import time
from collections import deque
from pathlib import Path

import gym
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn

# Change this to the path of your own RL_Utils.py file
from Python.ReinforcementLearning.EasyRL.RL_Utils import *


# Experience replay buffer
class PGReplay:
    def __init__(self):
        self.buffer = deque()

    def push(self, transitions):
        self.buffer.append(transitions)

    def sample(self):
        batch = list(self.buffer)
        return zip(*batch)

    def clear(self):
        self.buffer.clear()

    def __len__(self):
        return len(self.buffer)


# Actor: for discrete actions, output a probability distribution over actions (softmax);
# for continuous actions, output the action directly (sigmoid)
class ActorSoftmax(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim=256):
        super(ActorSoftmax, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, state):
        dist = F.relu(self.fc1(state))
        dist = F.softmax(self.fc2(dist), dim=1)
        return dist


# Critic: outputs V_{\pi}(s)
class Critic(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim=256):
        super(Critic, self).__init__()
        assert output_dim == 1
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, state):
        value = F.relu(self.fc1(state))
        value = self.fc2(value)
        return value


# A2C agent
class A2C:
    def __init__(self, models, memory, arg_dict):
        self.n_actions = arg_dict['n_actions']
        self.gamma = arg_dict['gamma']
        self.device = torch.device(arg_dict['device'])
        self.memory = memory
        self.actor = models['Actor'].to(self.device)
        self.critic = models['Critic'].to(self.device)
        self.actor_optim = torch.optim.Adam(self.actor.parameters(), lr=arg_dict['actor_lr'])
        self.critic_optim = torch.optim.Adam(self.critic.parameters(), lr=arg_dict['critic_lr'])

    def sample_action(self, state):
        # unsqueeze(): add a batch dimension
        state = torch.tensor(state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
        # the actor returns a probability distribution over actions for the current state
        dist = self.actor(state)
        # the critic returns the state value V(s); note that 'dist' needs requires_grad=True
        value = self.critic(state)
        value = value.detach().numpy().squeeze(0)[0]  # squeeze(): drop the batch dimension
        # sample an action according to the probability distribution; shape of p: (n_actions,)
        action = np.random.choice(self.n_actions, p=dist.detach().numpy().squeeze(0))
        return action, value, dist
    # Sampling instead of always taking the most probable action keeps the policy stochastic,
    # which helps exploration and makes the agent harder to predict.

    def predict_action(self, state):
        state = torch.tensor(state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
        dist = self.actor(state)
        value = self.critic(state)  # note that 'dist' needs requires_grad=True
        value = value.detach().numpy().squeeze(0)[0]
        action = np.random.choice(self.n_actions, p=dist.detach().numpy().squeeze(0))  # shape of p: (n_actions,)
        return action, value, dist

    def update(self, next_state, entropy):
        value_pool, log_prob_pool, reward_pool = self.memory.sample()
        next_state = torch.tensor(next_state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
        next_value = self.critic(next_state)
        returns = np.zeros_like(reward_pool)
        for t in reversed(range(len(reward_pool))):
            next_value = reward_pool[t] + self.gamma * next_value  # G(s_t, a_t) = r_{t+1} + gamma * V(s_{t+1})
            returns[t] = next_value
        returns = torch.tensor(returns, device=self.device)
        # note: the stored values were detached in sample_action, so critic_loss below
        # carries no gradient into the critic network
        value_pool = torch.tensor(value_pool, device=self.device)
        advantages = returns - value_pool
        log_prob_pool = torch.stack(log_prob_pool)
        actor_loss = (-log_prob_pool * advantages).mean()
        critic_loss = 0.5 * advantages.pow(2).mean()
        tot_loss = actor_loss + critic_loss + 0.001 * entropy
        self.actor_optim.zero_grad()
        self.critic_optim.zero_grad()
        tot_loss.backward()
        self.actor_optim.step()
        self.critic_optim.step()
        self.memory.clear()

    def save_model(self, path):
        Path(path).mkdir(parents=True, exist_ok=True)
        torch.save(self.actor.state_dict(), f"{path}/actor_checkpoint.pt")
        torch.save(self.critic.state_dict(), f"{path}/critic_checkpoint.pt")

    def load_model(self, path):
        self.actor.load_state_dict(torch.load(f"{path}/actor_checkpoint.pt"))
        self.critic.load_state_dict(torch.load(f"{path}/critic_checkpoint.pt"))


# Training function
def train(arg_dict, env, agent):
    # start the timer
    startTime = time.time()
    print(f"环境名: {arg_dict['env_name']}, 算法名: {arg_dict['algo_name']}, Device: {arg_dict['device']}")
    print("开始训练智能体......")
    rewards = []
    steps = []
    for i_ep in range(arg_dict['train_eps']):
        ep_reward = 0
        ep_step = 0
        ep_entropy = 0
        state = env.reset()
        # collect up to ep_max_steps transitions before updating the model
        for _ in range(arg_dict['ep_max_steps']):
            # render
            if arg_dict['train_render']:
                env.render()
            # sample an action (exploration)
            action, value, dist = agent.sample_action(state)
            # take the action and observe the next state and reward
            next_state, reward, done, _ = env.step(action)
            log_prob = torch.log(dist.squeeze(0)[action])
            entropy = -np.sum(np.mean(dist.detach().numpy()) * np.log(dist.detach().numpy()))
            # store the transition
            agent.memory.push((value, log_prob, reward))
            # move to the next state
            state = next_state
            ep_reward += reward
            ep_entropy += entropy
            ep_step += 1
            if done:
                break
        # update the agent's parameters at the end of the episode
        agent.update(next_state, ep_entropy)
        rewards.append(ep_reward)
        steps.append(ep_step)
        if (i_ep + 1) % 10 == 0:
            print(f'Episode: {i_ep + 1}/{arg_dict["train_eps"]}, Reward: {ep_reward:.2f}, Steps:{ep_step}')
    print('训练结束 , 用时: ' + str(time.time() - startTime) + " s")
    # close the environment
    env.close()
    return {'episodes': range(len(rewards)), 'rewards': rewards}


# Testing function
def test(arg_dict, env, agent):
    startTime = time.time()
    print("开始测试智能体......")
    print(f"环境名: {arg_dict['env_name']}, 算法名: {arg_dict['algo_name']}, Device: {arg_dict['device']}")
    rewards = []
    steps = []
    for i_ep in range(arg_dict['test_eps']):
        ep_reward = 0
        ep_step = 0
        state = env.reset()
        for _ in range(arg_dict['ep_max_steps']):
            # render
            if arg_dict['test_render']:
                env.render()
            # predict an action
            action, _, _ = agent.predict_action(state)
            next_state, reward, done, _ = env.step(action)
            state = next_state
            ep_reward += reward
            ep_step += 1
            if done:
                break
        rewards.append(ep_reward)
        steps.append(ep_step)
        print(f"Episode: {i_ep + 1}/{arg_dict['test_eps']}, Steps:{ep_step}, Reward: {ep_reward:.2f}")
    print("测试结束 , 用时: " + str(time.time() - startTime) + " s")
    env.close()
    return {'episodes': range(len(rewards)), 'rewards': rewards}


# Create the environment and the agent
def create_env_agent(arg_dict):
    # create the environment
    env = gym.make(arg_dict['env_name'])
    # set random seeds
    all_seed(env, seed=arg_dict["seed"])
    # number of states
    try:
        n_states = env.observation_space.n
    except AttributeError:
        n_states = env.observation_space.shape[0]
    # number of actions
    n_actions = env.action_space.n
    print(f"状态数: {n_states}, 动作数: {n_actions}")
    # add the state and action dimensions to the parameter dictionary
    arg_dict.update({"n_states": n_states, "n_actions": n_actions})
    # instantiate the agent
    models = {
        'Actor': ActorSoftmax(arg_dict['n_states'], arg_dict['n_actions'], hidden_dim=arg_dict['actor_hidden_dim']),
        'Critic': Critic(arg_dict['n_states'], 1, hidden_dim=arg_dict['critic_hidden_dim'])
    }
    # experience replay buffer
    memory = PGReplay()
    agent = A2C(models, memory, arg_dict)
    # return the environment and the agent
    return env, agent


if __name__ == '__main__':
    # avoid the error: OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
    os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
    # current path
    curr_path = os.path.dirname(os.path.abspath(__file__))
    # current time
    curr_time = datetime.datetime.now().strftime("%Y_%m_%d-%H_%M_%S")
    # hyperparameter settings
    parser = argparse.ArgumentParser(description="hyper parameters")
    parser.add_argument('--algo_name', default='A2C', type=str, help="name of algorithm")
    parser.add_argument('--env_name', default='CartPole-v0', type=str, help="name of environment")
    parser.add_argument('--train_eps', default=1600, type=int, help="episodes of training")
    parser.add_argument('--test_eps', default=20, type=int, help="episodes of testing")
    parser.add_argument('--ep_max_steps', default=100000, type=int,
                        help="steps per episode, much larger value can simulate infinite steps")
    parser.add_argument('--gamma', default=0.99, type=float, help="discounted factor")
    parser.add_argument('--actor_lr', default=3e-4, type=float, help="learning rate of actor")
    parser.add_argument('--critic_lr', default=1e-3, type=float, help="learning rate of critic")
    parser.add_argument('--actor_hidden_dim', default=256, type=int, help="hidden of actor net")
    parser.add_argument('--critic_hidden_dim', default=256, type=int, help="hidden of critic net")
    parser.add_argument('--device', default='cpu', type=str, help="cpu or cuda")
    parser.add_argument('--seed', default=520, type=int, help="seed")
    parser.add_argument('--show_fig', default=False, type=bool, help="if show figure or not")
    parser.add_argument('--save_fig', default=True, type=bool, help="if save figure or not")
    parser.add_argument('--train_render', default=False, type=bool,
                        help="Whether to render the environment during training")
    parser.add_argument('--test_render', default=True, type=bool,
                        help="Whether to render the environment during testing")
    args = parser.parse_args()
    default_args = {
        'result_path': f"{curr_path}/outputs/{args.env_name}/{curr_time}/results/",
        'model_path': f"{curr_path}/outputs/{args.env_name}/{curr_time}/models/",
    }
    # convert the arguments into a dict
    arg_dict = {**vars(args), **default_args}
    print("算法参数字典:", arg_dict)
    # create the environment and the agent
    env, agent = create_env_agent(arg_dict)
    # pass in the parameters, the environment and the agent, then start training
    res_dic = train(arg_dict, env, agent)
    print("算法返回结果字典:", res_dic)
    # save the model, arguments and results
    agent.save_model(path=arg_dict['model_path'])
    save_args(arg_dict, path=arg_dict['result_path'])
    save_results(res_dic, tag='train', path=arg_dict['result_path'])
    plot_rewards(res_dic['rewards'], arg_dict, path=arg_dict['result_path'], tag="train")
    # =============================================================================================
    # create a fresh environment and agent for testing
    print("=" * 300)
    env, agent = create_env_agent(arg_dict)
    # load the saved agent
    agent.load_model(path=arg_dict['model_path'])
    res_dic = test(arg_dict, env, agent)
    save_results(res_dic, tag='test', path=arg_dict['result_path'])
    plot_rewards(res_dic['rewards'], arg_dict, path=arg_dict['result_path'], tag="test")

Run results

Since the full output is very long, only part of it is shown below.

状态数: 4, 动作数: 2环境名: CartPole-v0, 算法名: A2C, Device: cpu开始训练智能体......Episode: 10/1600, Reward: 25.00, Steps:25Episode: 20/1600, Reward: 12.00, Steps:12Episode: 30/1600, Reward: 20.00, Steps:20Episode: 40/1600, Reward: 14.00, Steps:14Episode: 50/1600, Reward: 24.00, Steps:24Episode: 60/1600, Reward: 37.00, Steps:37Episode: 70/1600, Reward: 40.00, Steps:40Episode: 80/1600, Reward: 13.00, Steps:13Episode: 90/1600, Reward: 23.00, Steps:23Episode: 100/1600, Reward: 14.00, Steps:14Episode: 110/1600, Reward: 25.00, Steps:25Episode: 120/1600, Reward: 25.00, Steps:25Episode: 130/1600, Reward: 22.00, Steps:22Episode: 140/1600, Reward: 20.00, Steps:20Episode: 150/1600, Reward: 94.00, Steps:94Episode: 160/1600, Reward: 19.00, Steps:19Episode: 170/1600, Reward: 25.00, Steps:25Episode: 180/1600, Reward: 11.00, Steps:11Episode: 190/1600, Reward: 36.00, Steps:36Episode: 200/1600, Reward: 33.00, Steps:33Episode: 210/1600, Reward: 20.00, Steps:20Episode: 220/1600, Reward: 17.00, Steps:17Episode: 230/1600, Reward: 12.00, Steps:12Episode: 240/1600, Reward: 15.00, Steps:15Episode: 250/1600, Reward: 31.00, Steps:31Episode: 260/1600, Reward: 12.00, Steps:12Episode: 270/1600, Reward: 27.00, Steps:27Episode: 280/1600, Reward: 40.00, Steps:40Episode: 290/1600, Reward: 20.00, Steps:20Episode: 300/1600, Reward: 60.00, Steps:60Episode: 310/1600, Reward: 38.00, Steps:38Episode: 320/1600, Reward: 10.00, Steps:10Episode: 330/1600, Reward: 23.00, Steps:23Episode: 340/1600, Reward: 34.00, Steps:34Episode: 350/1600, Reward: 55.00, Steps:55Episode: 360/1600, Reward: 24.00, Steps:24Episode: 370/1600, Reward: 45.00, Steps:45Episode: 380/1600, Reward: 24.00, Steps:24Episode: 390/1600, Reward: 32.00, Steps:32Episode: 400/1600, Reward: 92.00, Steps:92Episode: 410/1600, Reward: 53.00, Steps:53Episode: 420/1600, Reward: 40.00, Steps:40Episode: 430/1600, Reward: 77.00, Steps:77Episode: 440/1600, Reward: 44.00, Steps:44Episode: 450/1600, Reward: 32.00, Steps:32Episode: 460/1600, Reward: 51.00, Steps:51Episode: 470/1600, Reward: 91.00, Steps:91Episode: 480/1600, Reward: 51.00, Steps:51Episode: 490/1600, Reward: 66.00, Steps:66Episode: 500/1600, Reward: 27.00, Steps:27Episode: 510/1600, Reward: 66.00, Steps:66Episode: 520/1600, Reward: 37.00, Steps:37Episode: 530/1600, Reward: 29.00, Steps:29Episode: 540/1600, Reward: 38.00, Steps:38Episode: 550/1600, Reward: 82.00, Steps:82Episode: 560/1600, Reward: 33.00, Steps:33Episode: 570/1600, Reward: 79.00, Steps:79Episode: 580/1600, Reward: 78.00, Steps:78Episode: 590/1600, Reward: 26.00, Steps:26Episode: 600/1600, Reward: 80.00, Steps:80Episode: 610/1600, Reward: 85.00, Steps:85Episode: 620/1600, Reward: 92.00, Steps:92Episode: 630/1600, Reward: 35.00, Steps:35Episode: 640/1600, Reward: 88.00, Steps:88Episode: 650/1600, Reward: 157.00, Steps:157Episode: 660/1600, Reward: 35.00, Steps:35Episode: 670/1600, Reward: 60.00, Steps:60Episode: 680/1600, Reward: 42.00, Steps:42Episode: 690/1600, Reward: 55.00, Steps:55Episode: 700/1600, Reward: 51.00, Steps:51Episode: 710/1600, Reward: 65.00, Steps:65Episode: 720/1600, Reward: 61.00, Steps:61Episode: 730/1600, Reward: 125.00, Steps:125Episode: 740/1600, Reward: 162.00, Steps:162Episode: 750/1600, Reward: 19.00, Steps:19Episode: 760/1600, Reward: 120.00, Steps:120Episode: 770/1600, Reward: 34.00, Steps:34Episode: 780/1600, Reward: 115.00, Steps:115Episode: 790/1600, Reward: 66.00, Steps:66Episode: 800/1600, Reward: 114.00, Steps:114Episode: 810/1600, Reward: 130.00, Steps:130Episode: 820/1600, Reward: 71.00, Steps:71Episode: 830/1600, Reward: 52.00, 
Steps:52Episode: 840/1600, Reward: 128.00, Steps:128Episode: 850/1600, Reward: 24.00, Steps:24Episode: 860/1600, Reward: 101.00, Steps:101Episode: 870/1600, Reward: 39.00, Steps:39Episode: 880/1600, Reward: 33.00, Steps:33Episode: 890/1600, Reward: 111.00, Steps:111Episode: 900/1600, Reward: 159.00, Steps:159Episode: 910/1600, Reward: 131.00, Steps:131Episode: 920/1600, Reward: 73.00, Steps:73Episode: 930/1600, Reward: 54.00, Steps:54Episode: 940/1600, Reward: 178.00, Steps:178Episode: 950/1600, Reward: 200.00, Steps:200Episode: 960/1600, Reward: 82.00, Steps:82Episode: 970/1600, Reward: 63.00, Steps:63Episode: 980/1600, Reward: 113.00, Steps:113Episode: 990/1600, Reward: 68.00, Steps:68Episode: 1000/1600, Reward: 151.00, Steps:151Episode: 1010/1600, Reward: 160.00, Steps:160Episode: 1020/1600, Reward: 135.00, Steps:135Episode: 1030/1600, Reward: 135.00, Steps:135Episode: 1040/1600, Reward: 200.00, Steps:200Episode: 1050/1600, Reward: 200.00, Steps:200Episode: 1060/1600, Reward: 141.00, Steps:141Episode: 1070/1600, Reward: 101.00, Steps:101Episode: 1080/1600, Reward: 200.00, Steps:200Episode: 1090/1600, Reward: 191.00, Steps:191Episode: 1100/1600, Reward: 200.00, Steps:200Episode: 1110/1600, Reward: 89.00, Steps:89Episode: 1120/1600, Reward: 198.00, Steps:198Episode: 1130/1600, Reward: 162.00, Steps:162Episode: 1140/1600, Reward: 175.00, Steps:175Episode: 1150/1600, Reward: 149.00, Steps:149Episode: 1160/1600, Reward: 110.00, Steps:110Episode: 1170/1600, Reward: 200.00, Steps:200Episode: 1180/1600, Reward: 129.00, Steps:129Episode: 1190/1600, Reward: 161.00, Steps:161Episode: 1200/1600, Reward: 137.00, Steps:137Episode: 1210/1600, Reward: 200.00, Steps:200Episode: 1220/1600, Reward: 200.00, Steps:200Episode: 1230/1600, Reward: 200.00, Steps:200Episode: 1240/1600, Reward: 190.00, Steps:190Episode: 1250/1600, Reward: 166.00, Steps:166Episode: 1260/1600, Reward: 163.00, Steps:163Episode: 1270/1600, Reward: 127.00, Steps:127Episode: 1280/1600, Reward: 137.00, Steps:137Episode: 1290/1600, Reward: 60.00, Steps:60Episode: 1300/1600, Reward: 156.00, Steps:156Episode: 1310/1600, Reward: 97.00, Steps:97Episode: 1320/1600, Reward: 115.00, Steps:115Episode: 1330/1600, Reward: 200.00, Steps:200Episode: 1340/1600, Reward: 200.00, Steps:200Episode: 1350/1600, Reward: 200.00, Steps:200Episode: 1360/1600, Reward: 200.00, Steps:200Episode: 1370/1600, Reward: 200.00, Steps:200Episode: 1380/1600, Reward: 200.00, Steps:200Episode: 1390/1600, Reward: 200.00, Steps:200Episode: 1400/1600, Reward: 154.00, Steps:154Episode: 1410/1600, Reward: 174.00, Steps:174Episode: 1420/1600, Reward: 114.00, Steps:114Episode: 1430/1600, Reward: 157.00, Steps:157Episode: 1440/1600, Reward: 191.00, Steps:191Episode: 1450/1600, Reward: 65.00, Steps:65Episode: 1460/1600, Reward: 200.00, Steps:200Episode: 1470/1600, Reward: 200.00, Steps:200Episode: 1480/1600, Reward: 155.00, Steps:155Episode: 1490/1600, Reward: 107.00, Steps:107Episode: 1500/1600, Reward: 27.00, Steps:27Episode: 1510/1600, Reward: 200.00, Steps:200Episode: 1520/1600, Reward: 200.00, Steps:200Episode: 1530/1600, Reward: 132.00, Steps:132Episode: 1540/1600, Reward: 142.00, Steps:142Episode: 1550/1600, Reward: 99.00, Steps:99Episode: 1560/1600, Reward: 171.00, Steps:171Episode: 1570/1600, Reward: 172.00, Steps:172Episode: 1580/1600, Reward: 147.00, Steps:147Episode: 1590/1600, Reward: 182.00, Steps:182Episode: 1600/1600, Reward: 200.00, Steps:200训练结束 , 用时: 81.30708861351013 
s============================================================================================================================================================================================================================================================================================================状态数: 4, 动作数: 2开始测试智能体......环境名: CartPole-v0, 算法名: A2C, Device: cpuEpisode: 1/20, Steps:161, Reward: 161.00Episode: 2/20, Steps:150, Reward: 150.00Episode: 3/20, Steps:93, Reward: 93.00Episode: 4/20, Steps:169, Reward: 169.00Episode: 5/20, Steps:200, Reward: 200.00Episode: 6/20, Steps:168, Reward: 168.00Episode: 7/20, Steps:25, Reward: 25.00Episode: 8/20, Steps:171, Reward: 171.00Episode: 9/20, Steps:200, Reward: 200.00Episode: 10/20, Steps:200, Reward: 200.00Episode: 11/20, Steps:188, Reward: 188.00Episode: 12/20, Steps:200, Reward: 200.00Episode: 13/20, Steps:87, Reward: 87.00Episode: 14/20, Steps:200, Reward: 200.00Episode: 15/20, Steps:200, Reward: 200.00Episode: 16/20, Steps:200, Reward: 200.00Episode: 17/20, Steps:200, Reward: 200.00Episode: 18/20, Steps:200, Reward: 200.00Episode: 19/20, Steps:198, Reward: 198.00Episode: 20/20, Steps:200, Reward: 200.00测试结束 , 用时: 28.915676593780518 s

4.2.2 Version with a shared network

In this version the actor and the critic are wrapped in a single ActorCriticSoftMax module trained by one Adam optimizer with a combined loss; the hidden layers of the two heads are still separate.

import argparse
import datetime
import os
import time
from collections import deque
from pathlib import Path

import gym
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn

# Change this to the path of your own RL_Utils.py file
from Python.ReinforcementLearning.EasyRL.RL_Utils import *


# Experience replay buffer
class PGReplay:
    def __init__(self):
        self.buffer = deque()

    def push(self, transitions):
        self.buffer.append(transitions)

    def sample(self):
        batch = list(self.buffer)
        return zip(*batch)

    def clear(self):
        self.buffer.clear()

    def __len__(self):
        return len(self.buffer)


# Actor: for discrete actions, output a probability distribution over actions (softmax);
# for continuous actions, output the action directly (sigmoid)
# Critic: outputs V_{\pi}(s)
# Actor-critic combined into a single network module (one module and one optimizer;
# the hidden layers of the two heads are still separate)
class ActorCriticSoftMax(nn.Module):
    def __init__(self, input_dim, output_dim, actor_hidden_dim=256, critic_hidden_dim=256):
        super(ActorCriticSoftMax, self).__init__()
        self.critic_fc1 = nn.Linear(input_dim, critic_hidden_dim)
        self.critic_fc2 = nn.Linear(critic_hidden_dim, 1)
        self.actor_fc1 = nn.Linear(input_dim, actor_hidden_dim)
        self.actor_fc2 = nn.Linear(actor_hidden_dim, output_dim)

    def forward(self, state):
        value = F.relu(self.critic_fc1(state))
        value = self.critic_fc2(value)
        policy_dist = F.relu(self.actor_fc1(state))
        policy_dist = F.softmax(self.actor_fc2(policy_dist), dim=1)
        return value, policy_dist


# A2C agent
class A2C:
    def __init__(self, models, memory, cfg):
        self.n_actions = cfg['n_actions']
        self.gamma = cfg['gamma']
        self.device = torch.device(cfg['device'])
        self.memory = memory
        self.ac_net = models['ActorCritic'].to(self.device)
        self.ac_optimizer = torch.optim.Adam(self.ac_net.parameters(), lr=cfg['lr'])

    def sample_action(self, state):
        state = torch.tensor(state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
        value, dist = self.ac_net(state)  # note that 'dist' needs requires_grad=True
        value = value.detach().numpy().squeeze(0)[0]
        action = np.random.choice(self.n_actions, p=dist.detach().numpy().squeeze(0))  # shape of p: (n_actions,)
        return action, value, dist

    def predict_action(self, state):
        with torch.no_grad():
            state = torch.tensor(state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
            value, dist = self.ac_net(state)
            value = value.numpy().squeeze(0)[0]  # shape(value) = (1,)
            action = np.random.choice(self.n_actions, p=dist.numpy().squeeze(0))  # shape of p: (n_actions,)
            return action, value, dist

    def update(self, next_state, entropy):
        value_pool, log_prob_pool, reward_pool = self.memory.sample()
        next_state = torch.tensor(next_state, device=self.device, dtype=torch.float32).unsqueeze(dim=0)
        next_value, _ = self.ac_net(next_state)
        returns = np.zeros_like(reward_pool)
        for t in reversed(range(len(reward_pool))):
            next_value = reward_pool[t] + self.gamma * next_value  # G(s_t, a_t) = r_{t+1} + gamma * V(s_{t+1})
            returns[t] = next_value
        returns = torch.tensor(returns, device=self.device)
        # note: the stored values were detached in sample_action, so critic_loss below
        # carries no gradient into the critic head
        value_pool = torch.tensor(value_pool, device=self.device)
        advantages = returns - value_pool
        log_prob_pool = torch.stack(log_prob_pool)
        actor_loss = (-log_prob_pool * advantages).mean()
        critic_loss = 0.5 * advantages.pow(2).mean()
        ac_loss = actor_loss + critic_loss + 0.001 * entropy
        self.ac_optimizer.zero_grad()
        ac_loss.backward()
        self.ac_optimizer.step()
        self.memory.clear()

    def save_model(self, path):
        Path(path).mkdir(parents=True, exist_ok=True)
        torch.save(self.ac_net.state_dict(), f"{path}/a2c_checkpoint.pt")

    def load_model(self, path):
        self.ac_net.load_state_dict(torch.load(f"{path}/a2c_checkpoint.pt"))


# Training function
def train(arg_dict, env, agent):
    # start the timer
    startTime = time.time()
    print(f"环境名: {arg_dict['env_name']}, 算法名: {arg_dict['algo_name']}, Device: {arg_dict['device']}")
    print("开始训练智能体......")
    rewards = []
    steps = []
    for i_ep in range(arg_dict['train_eps']):
        ep_reward = 0
        ep_step = 0
        ep_entropy = 0
        state = env.reset()
        # collect up to ep_max_steps transitions before updating the model
        for _ in range(arg_dict['ep_max_steps']):
            # render
            if arg_dict['train_render']:
                env.render()
            # sample an action (exploration)
            action, value, dist = agent.sample_action(state)
            # take the action and observe the next state and reward
            next_state, reward, done, _ = env.step(action)
            log_prob = torch.log(dist.squeeze(0)[action])
            entropy = -np.sum(np.mean(dist.detach().numpy()) * np.log(dist.detach().numpy()))
            # store the transition
            agent.memory.push((value, log_prob, reward))
            # move to the next state
            state = next_state
            ep_reward += reward
            ep_entropy += entropy
            ep_step += 1
            if done:
                break
        # update the agent's parameters at the end of the episode
        agent.update(next_state, ep_entropy)
        rewards.append(ep_reward)
        steps.append(ep_step)
        if (i_ep + 1) % 10 == 0:
            print(f'Episode: {i_ep + 1}/{arg_dict["train_eps"]}, Reward: {ep_reward:.2f}, Steps:{ep_step}')
    print('训练结束 , 用时: ' + str(time.time() - startTime) + " s")
    # close the environment
    env.close()
    return {'episodes': range(len(rewards)), 'rewards': rewards}


# Testing function
def test(arg_dict, env, agent):
    startTime = time.time()
    print("开始测试智能体......")
    print(f"环境名: {arg_dict['env_name']}, 算法名: {arg_dict['algo_name']}, Device: {arg_dict['device']}")
    rewards = []
    steps = []
    for i_ep in range(arg_dict['test_eps']):
        ep_reward = 0
        ep_step = 0
        state = env.reset()
        for _ in range(arg_dict['ep_max_steps']):
            # render
            if arg_dict['test_render']:
                env.render()
            # predict an action
            action, _, _ = agent.predict_action(state)
            next_state, reward, done, _ = env.step(action)
            state = next_state
            ep_reward += reward
            ep_step += 1
            if done:
                break
        rewards.append(ep_reward)
        steps.append(ep_step)
        print(f"Episode: {i_ep + 1}/{arg_dict['test_eps']}, Steps:{ep_step}, Reward: {ep_reward:.2f}")
    print("测试结束 , 用时: " + str(time.time() - startTime) + " s")
    env.close()
    return {'episodes': range(len(rewards)), 'rewards': rewards}


# Create the environment and the agent
def create_env_agent(arg_dict):
    # create the environment
    env = gym.make(arg_dict['env_name'])
    # set random seeds
    all_seed(env, seed=arg_dict["seed"])
    # number of states
    try:
        n_states = env.observation_space.n
    except AttributeError:
        n_states = env.observation_space.shape[0]
    # number of actions
    n_actions = env.action_space.n
    print(f"状态数: {n_states}, 动作数: {n_actions}")
    # add the state and action dimensions to the parameter dictionary
    arg_dict.update({"n_states": n_states, "n_actions": n_actions})
    # instantiate the agent
    models = {
        'ActorCritic': ActorCriticSoftMax(arg_dict['n_states'], arg_dict['n_actions'],
                                          actor_hidden_dim=arg_dict['actor_hidden_dim'],
                                          critic_hidden_dim=arg_dict['critic_hidden_dim'])
    }
    # experience replay buffer
    memory = PGReplay()
    agent = A2C(models, memory, arg_dict)
    # return the environment and the agent
    return env, agent


if __name__ == '__main__':
    # avoid the error: OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
    os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
    # current path
    curr_path = os.path.dirname(os.path.abspath(__file__))
    # current time
    curr_time = datetime.datetime.now().strftime("%Y_%m_%d-%H_%M_%S")
    # hyperparameter settings
    parser = argparse.ArgumentParser(description="hyper parameters")
    parser.add_argument('--algo_name', default='A2C', type=str, help="name of algorithm")
    parser.add_argument('--env_name', default='CartPole-v0', type=str, help="name of environment")
    parser.add_argument('--train_eps', default=2000, type=int, help="episodes of training")
    parser.add_argument('--test_eps', default=20, type=int, help="episodes of testing")
    parser.add_argument('--ep_max_steps', default=100000, type=int,
                        help="steps per episode, much larger value can simulate infinite steps")
    parser.add_argument('--gamma', default=0.99, type=float, help="discounted factor")
    parser.add_argument('--lr', default=3e-4, type=float, help="learning rate")
    parser.add_argument('--actor_hidden_dim', default=256, type=int)
    parser.add_argument('--critic_hidden_dim', default=256, type=int)
    parser.add_argument('--device', default='cpu', type=str, help="cpu or cuda")
    parser.add_argument('--seed', default=520, type=int, help="seed")
    parser.add_argument('--show_fig', default=False, type=bool, help="if show figure or not")
    parser.add_argument('--save_fig', default=True, type=bool, help="if save figure or not")
    parser.add_argument('--train_render', default=False, type=bool,
                        help="Whether to render the environment during training")
    parser.add_argument('--test_render', default=True, type=bool,
                        help="Whether to render the environment during testing")
    args = parser.parse_args()
    default_args = {
        'result_path': f"{curr_path}/outputs/{args.env_name}/{curr_time}/results/",
        'model_path': f"{curr_path}/outputs/{args.env_name}/{curr_time}/models/",
    }
    # convert the arguments into a dict
    arg_dict = {**vars(args), **default_args}
    print("算法参数字典:", arg_dict)
    # create the environment and the agent
    env, agent = create_env_agent(arg_dict)
    # pass in the parameters, the environment and the agent, then start training
    res_dic = train(arg_dict, env, agent)
    print("算法返回结果字典:", res_dic)
    # save the model, arguments and results
    agent.save_model(path=arg_dict['model_path'])
    save_args(arg_dict, path=arg_dict['result_path'])
    save_results(res_dic, tag='train', path=arg_dict['result_path'])
    plot_rewards(res_dic['rewards'], arg_dict, path=arg_dict['result_path'], tag="train")
    # =============================================================================================
    # create a fresh environment and agent for testing
    print("=" * 300)
    env, agent = create_env_agent(arg_dict)
    # load the saved agent
    agent.load_model(path=arg_dict['model_path'])
    res_dic = test(arg_dict, env, agent)
    save_results(res_dic, tag='test', path=arg_dict['result_path'])
    plot_rewards(res_dic['rewards'], arg_dict, path=arg_dict['result_path'], tag="test")

Run results

Since the full output is very long, only part of it is shown below.

状态数: 4, 动作数: 2环境名: CartPole-v0, 算法名: A2C, Device: cpu开始训练智能体......Episode: 10/2000, Reward: 12.00, Steps:12Episode: 20/2000, Reward: 21.00, Steps:21Episode: 30/2000, Reward: 13.00, Steps:13Episode: 40/2000, Reward: 14.00, Steps:14Episode: 50/2000, Reward: 19.00, Steps:19Episode: 60/2000, Reward: 22.00, Steps:22Episode: 70/2000, Reward: 50.00, Steps:50Episode: 80/2000, Reward: 19.00, Steps:19Episode: 90/2000, Reward: 18.00, Steps:18Episode: 100/2000, Reward: 28.00, Steps:28Episode: 110/2000, Reward: 20.00, Steps:20Episode: 120/2000, Reward: 28.00, Steps:28Episode: 130/2000, Reward: 76.00, Steps:76Episode: 140/2000, Reward: 22.00, Steps:22Episode: 150/2000, Reward: 70.00, Steps:70Episode: 160/2000, Reward: 20.00, Steps:20Episode: 170/2000, Reward: 85.00, Steps:85Episode: 180/2000, Reward: 17.00, Steps:17Episode: 190/2000, Reward: 49.00, Steps:49Episode: 200/2000, Reward: 21.00, Steps:21Episode: 210/2000, Reward: 65.00, Steps:65Episode: 220/2000, Reward: 54.00, Steps:54Episode: 230/2000, Reward: 85.00, Steps:85Episode: 240/2000, Reward: 48.00, Steps:48Episode: 250/2000, Reward: 22.00, Steps:22Episode: 260/2000, Reward: 34.00, Steps:34Episode: 270/2000, Reward: 22.00, Steps:22Episode: 280/2000, Reward: 29.00, Steps:29Episode: 290/2000, Reward: 77.00, Steps:77Episode: 300/2000, Reward: 30.00, Steps:30Episode: 310/2000, Reward: 115.00, Steps:115Episode: 320/2000, Reward: 62.00, Steps:62Episode: 330/2000, Reward: 45.00, Steps:45Episode: 340/2000, Reward: 102.00, Steps:102Episode: 350/2000, Reward: 93.00, Steps:93Episode: 360/2000, Reward: 27.00, Steps:27Episode: 370/2000, Reward: 31.00, Steps:31Episode: 380/2000, Reward: 27.00, Steps:27Episode: 390/2000, Reward: 30.00, Steps:30Episode: 400/2000, Reward: 30.00, Steps:30Episode: 410/2000, Reward: 61.00, Steps:61Episode: 420/2000, Reward: 61.00, Steps:61Episode: 430/2000, Reward: 56.00, Steps:56Episode: 440/2000, Reward: 120.00, Steps:120Episode: 450/2000, Reward: 87.00, Steps:87Episode: 460/2000, Reward: 66.00, Steps:66Episode: 470/2000, Reward: 30.00, Steps:30Episode: 480/2000, Reward: 65.00, Steps:65Episode: 490/2000, Reward: 72.00, Steps:72Episode: 500/2000, Reward: 64.00, Steps:64Episode: 510/2000, Reward: 93.00, Steps:93Episode: 520/2000, Reward: 159.00, Steps:159Episode: 530/2000, Reward: 21.00, Steps:21Episode: 540/2000, Reward: 31.00, Steps:31Episode: 550/2000, Reward: 126.00, Steps:126Episode: 560/2000, Reward: 176.00, Steps:176Episode: 570/2000, Reward: 116.00, Steps:116Episode: 580/2000, Reward: 131.00, Steps:131Episode: 590/2000, Reward: 156.00, Steps:156Episode: 600/2000, Reward: 158.00, Steps:158Episode: 610/2000, Reward: 125.00, Steps:125Episode: 620/2000, Reward: 39.00, Steps:39Episode: 630/2000, Reward: 52.00, Steps:52Episode: 640/2000, Reward: 67.00, Steps:67Episode: 650/2000, Reward: 110.00, Steps:110Episode: 660/2000, Reward: 95.00, Steps:95Episode: 670/2000, Reward: 33.00, Steps:33Episode: 680/2000, Reward: 188.00, Steps:188Episode: 690/2000, Reward: 29.00, Steps:29Episode: 700/2000, Reward: 58.00, Steps:58Episode: 710/2000, Reward: 60.00, Steps:60Episode: 720/2000, Reward: 131.00, Steps:131Episode: 730/2000, Reward: 132.00, Steps:132Episode: 740/2000, Reward: 169.00, Steps:169Episode: 750/2000, Reward: 189.00, Steps:189Episode: 760/2000, Reward: 109.00, Steps:109Episode: 770/2000, Reward: 70.00, Steps:70Episode: 780/2000, Reward: 200.00, Steps:200Episode: 790/2000, Reward: 157.00, Steps:157Episode: 800/2000, Reward: 178.00, Steps:178Episode: 810/2000, Reward: 181.00, Steps:181Episode: 820/2000, Reward: 112.00, 
Steps:112Episode: 830/2000, Reward: 28.00, Steps:28Episode: 840/2000, Reward: 184.00, Steps:184Episode: 850/2000, Reward: 80.00, Steps:80Episode: 860/2000, Reward: 25.00, Steps:25Episode: 870/2000, Reward: 148.00, Steps:148Episode: 880/2000, Reward: 111.00, Steps:111Episode: 890/2000, Reward: 121.00, Steps:121Episode: 900/2000, Reward: 130.00, Steps:130Episode: 910/2000, Reward: 190.00, Steps:190Episode: 920/2000, Reward: 124.00, Steps:124Episode: 930/2000, Reward: 140.00, Steps:140Episode: 940/2000, Reward: 200.00, Steps:200Episode: 950/2000, Reward: 86.00, Steps:86Episode: 960/2000, Reward: 82.00, Steps:82Episode: 970/2000, Reward: 186.00, Steps:186Episode: 980/2000, Reward: 66.00, Steps:66Episode: 990/2000, Reward: 200.00, Steps:200Episode: 1000/2000, Reward: 193.00, Steps:193Episode: 1010/2000, Reward: 200.00, Steps:200Episode: 1020/2000, Reward: 157.00, Steps:157Episode: 1030/2000, Reward: 150.00, Steps:150Episode: 1040/2000, Reward: 200.00, Steps:200Episode: 1050/2000, Reward: 200.00, Steps:200Episode: 1060/2000, Reward: 115.00, Steps:115Episode: 1070/2000, Reward: 108.00, Steps:108Episode: 1080/2000, Reward: 189.00, Steps:189Episode: 1090/2000, Reward: 126.00, Steps:126Episode: 1100/2000, Reward: 200.00, Steps:200Episode: 1110/2000, Reward: 200.00, Steps:200Episode: 1120/2000, Reward: 200.00, Steps:200Episode: 1130/2000, Reward: 200.00, Steps:200Episode: 1140/2000, Reward: 200.00, Steps:200Episode: 1150/2000, Reward: 200.00, Steps:200Episode: 1160/2000, Reward: 131.00, Steps:131Episode: 1170/2000, Reward: 191.00, Steps:191Episode: 1180/2000, Reward: 200.00, Steps:200Episode: 1190/2000, Reward: 200.00, Steps:200Episode: 1200/2000, Reward: 171.00, Steps:171Episode: 1210/2000, Reward: 200.00, Steps:200Episode: 1220/2000, Reward: 180.00, Steps:180Episode: 1230/2000, Reward: 127.00, Steps:127Episode: 1240/2000, Reward: 94.00, Steps:94Episode: 1250/2000, Reward: 113.00, Steps:113Episode: 1260/2000, Reward: 150.00, Steps:150Episode: 1270/2000, Reward: 200.00, Steps:200Episode: 1280/2000, Reward: 148.00, Steps:148Episode: 1290/2000, Reward: 111.00, Steps:111Episode: 1300/2000, Reward: 200.00, Steps:200Episode: 1310/2000, Reward: 77.00, Steps:77Episode: 1320/2000, Reward: 158.00, Steps:158Episode: 1330/2000, Reward: 200.00, Steps:200Episode: 1340/2000, Reward: 180.00, Steps:180Episode: 1350/2000, Reward: 142.00, Steps:142Episode: 1360/2000, Reward: 142.00, Steps:142Episode: 1370/2000, Reward: 147.00, Steps:147Episode: 1380/2000, Reward: 196.00, Steps:196Episode: 1390/2000, Reward: 200.00, Steps:200Episode: 1400/2000, Reward: 163.00, Steps:163Episode: 1410/2000, Reward: 159.00, Steps:159Episode: 1420/2000, Reward: 170.00, Steps:170Episode: 1430/2000, Reward: 200.00, Steps:200Episode: 1440/2000, Reward: 200.00, Steps:200Episode: 1450/2000, Reward: 200.00, Steps:200Episode: 1460/2000, Reward: 200.00, Steps:200Episode: 1470/2000, Reward: 200.00, Steps:200Episode: 1480/2000, Reward: 200.00, Steps:200Episode: 1490/2000, Reward: 200.00, Steps:200Episode: 1500/2000, Reward: 200.00, Steps:200Episode: 1510/2000, Reward: 200.00, Steps:200Episode: 1520/2000, Reward: 75.00, Steps:75Episode: 1530/2000, Reward: 200.00, Steps:200Episode: 1540/2000, Reward: 200.00, Steps:200Episode: 1550/2000, Reward: 200.00, Steps:200Episode: 1560/2000, Reward: 189.00, Steps:189Episode: 1570/2000, Reward: 194.00, Steps:194Episode: 1580/2000, Reward: 200.00, Steps:200Episode: 1590/2000, Reward: 164.00, Steps:164Episode: 1600/2000, Reward: 200.00, Steps:200Episode: 1610/2000, Reward: 200.00, Steps:200Episode: 1620/2000, 
Reward: 161.00, Steps:161Episode: 1630/2000, Reward: 200.00, Steps:200Episode: 1640/2000, Reward: 135.00, Steps:135Episode: 1650/2000, Reward: 159.00, Steps:159Episode: 1660/2000, Reward: 115.00, Steps:115Episode: 1670/2000, Reward: 197.00, Steps:197Episode: 1680/2000, Reward: 200.00, Steps:200Episode: 1690/2000, Reward: 200.00, Steps:200Episode: 1700/2000, Reward: 157.00, Steps:157Episode: 1710/2000, Reward: 190.00, Steps:190Episode: 1720/2000, Reward: 127.00, Steps:127Episode: 1730/2000, Reward: 64.00, Steps:64Episode: 1740/2000, Reward: 178.00, Steps:178Episode: 1750/2000, Reward: 130.00, Steps:130Episode: 1760/2000, Reward: 142.00, Steps:142Episode: 1770/2000, Reward: 108.00, Steps:108Episode: 1780/2000, Reward: 99.00, Steps:99Episode: 1790/2000, Reward: 130.00, Steps:130Episode: 1800/2000, Reward: 147.00, Steps:147Episode: 1810/2000, Reward: 200.00, Steps:200Episode: 1820/2000, Reward: 60.00, Steps:60Episode: 1830/2000, Reward: 200.00, Steps:200Episode: 1840/2000, Reward: 93.00, Steps:93Episode: 1850/2000, Reward: 163.00, Steps:163Episode: 1860/2000, Reward: 189.00, Steps:189Episode: 1870/2000, Reward: 200.00, Steps:200Episode: 1880/2000, Reward: 200.00, Steps:200Episode: 1890/2000, Reward: 200.00, Steps:200Episode: 1900/2000, Reward: 200.00, Steps:200Episode: 1910/2000, Reward: 200.00, Steps:200Episode: 1920/2000, Reward: 200.00, Steps:200Episode: 1930/2000, Reward: 200.00, Steps:200Episode: 1940/2000, Reward: 102.00, Steps:102Episode: 1950/2000, Reward: 106.00, Steps:106Episode: 1960/2000, Reward: 200.00, Steps:200Episode: 1970/2000, Reward: 200.00, Steps:200Episode: 1980/2000, Reward: 200.00, Steps:200Episode: 1990/2000, Reward: 200.00, Steps:200Episode: 2000/2000, Reward: 200.00, Steps:200训练结束 , 用时: 129.54206490516663 s状态数: 4, 动作数: 2开始测试智能体......环境名: CartPole-v0, 算法名: A2C, Device: cpuEpisode: 1/20, Steps:130, Reward: 130.00Episode: 2/20, Steps:200, Reward: 200.00Episode: 3/20, Steps:200, Reward: 200.00Episode: 4/20, Steps:200, Reward: 200.00Episode: 5/20, Steps:200, Reward: 200.00Episode: 6/20, Steps:200, Reward: 200.00Episode: 7/20, Steps:87, Reward: 87.00Episode: 8/20, Steps:200, Reward: 200.00Episode: 9/20, Steps:68, Reward: 68.00Episode: 10/20, Steps:200, Reward: 200.00Episode: 11/20, Steps:62, Reward: 62.00Episode: 12/20, Steps:200, Reward: 200.00Episode: 13/20, Steps:200, Reward: 200.00Episode: 14/20, Steps:200, Reward: 200.00Episode: 15/20, Steps:200, Reward: 200.00Episode: 16/20, Steps:200, Reward: 200.00Episode: 17/20, Steps:200, Reward: 200.00Episode: 18/20, Steps:200, Reward: 200.00Episode: 19/20, Steps:200, Reward: 200.00Episode: 20/20, Steps:200, Reward: 200.00测试结束 , 用时: 27.40801215171814 s

4.3 Visualization Settings

If you find rendering too time-consuming, you can turn the visualization off.

Conversely, if you want to watch the training or testing process, you can enable it. Both switches are the --train_render and --test_render arguments in the scripts above; see the note below.
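One practical caveat (my note, not from the original post): the render switches are declared with type=bool, and argparse turns any non-empty string, including "False", into True, so the reliable way to toggle rendering is to edit the defaults of --train_render and --test_render in the script, for example:

```python
import argparse

# Example: rendering disabled for both training and testing by changing the defaults.
# Note: with type=bool, a command-line value such as "--test_render False" still parses
# as True (bool("False") is True in Python), so editing the defaults is the reliable switch.
parser = argparse.ArgumentParser(description="hyper parameters")
parser.add_argument('--train_render', default=False, type=bool,
                    help="Whether to render the environment during training")
parser.add_argument('--test_render', default=False, type=bool,
                    help="Whether to render the environment during testing")
```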
