Multi-Agent Actor-Critic with Generative Cooperative Policy Network
October 22, 2018 · Declared Dead · arXiv.org
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Heechang Ryu, Hayong Shin, Jinkyoo Park
arXiv ID
1810.09206
Category
cs.MA: Multiagent Systems
Cross-listed
cs.AI
Citations
13
Venue
arXiv.org
Last Checked
2 months ago
Abstract
We propose an efficient multi-agent reinforcement learning approach to derive equilibrium strategies for multiple agents participating in a Markov game. We focus mainly on obtaining decentralized policies under which the agents maximize the performance of a collaborative task, which is similar to solving a decentralized Markov decision process. We propose to use two different policy networks: (1) a decentralized greedy policy network, used to generate greedy actions during both training and execution, and (2) a generative cooperative policy network (GCPN), used to generate action samples that help other agents improve their objectives during training. We show that the samples generated by the GCPN enable other agents to explore the policy space more effectively and favorably, reaching better policies for the collaborative task.
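The abstract's core idea is that each agent carries two policies: a greedy one for its own actions, and a GCPN whose samples steer the *other* agents' exploration during training. A minimal NumPy sketch of that two-network split is below; the linear-softmax "networks", dimensions, and class names are illustrative assumptions, not from the paper, and the actor-critic training loop itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

class PolicyNet:
    """Linear-softmax stand-in for a policy network (illustrative only)."""
    def __init__(self, obs_dim, n_actions):
        self.W = rng.normal(scale=0.1, size=(n_actions, obs_dim))
    def probs(self, obs):
        return softmax(self.W @ obs)

class Agent:
    """Two policies per agent, mirroring the abstract's setup:
    - a decentralized greedy policy for its own actions (training and execution),
    - a GCPN whose samples are handed to *other* agents during training."""
    def __init__(self, obs_dim, n_actions):
        self.greedy = PolicyNet(obs_dim, n_actions)
        self.gcpn = PolicyNet(obs_dim, n_actions)
    def greedy_action(self, obs):
        # Execution-time behavior: pick the highest-probability action.
        return int(np.argmax(self.greedy.probs(obs)))
    def cooperative_sample(self, obs):
        # Training-time sample offered to other agents for exploration.
        p = self.gcpn.probs(obs)
        return int(rng.choice(len(p), p=p))

# Two agents: during training, each agent can explore using samples drawn
# from the other agent's GCPN rather than only its own greedy policy.
obs_dim, n_actions = 4, 3
agents = [Agent(obs_dim, n_actions) for _ in range(2)]
obs = rng.normal(size=obs_dim)

for i, agent in enumerate(agents):
    other = agents[1 - i]
    exec_action = agent.greedy_action(obs)          # own greedy action
    explore_action = other.cooperative_sample(obs)  # peer's GCPN sample
    print(i, exec_action, explore_action)
```

The point of the split is that greedy self-play alone can get stuck in poor local equilibria on collaborative tasks; sampling from a peer's generative policy injects exploration that is biased toward jointly beneficial behavior.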
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers

In the same crypt: Multiagent Systems
Mean Field Multi-Agent Reinforcement Learning · R.I.P. 👻 Ghosted
A Survey and Critique of Multiagent Deep Reinforcement Learning · R.I.P. 👻 Ghosted
A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity · R.I.P. 👻 Ghosted
Collaborative vehicle routing: a survey · R.I.P. 👻 Ghosted
Deep Reinforcement Learning for Swarm Systems · R.I.P. 👻 Ghosted

Died the same way: 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted