
GRL: Redesign Distributed Reinforcement Learning Training on One GPU
Description
Reinforcement learning (RL) is computationally intensive due to frequent data exchanges between learners and actors, which makes it hard to fully utilize the GPU. To address this, we propose GRL, an RL framework that, for the first time, deploys the complete RL pipeline on a single GPU. Exploiting GPU characteristics, we design a lock-free model queue and fused actors to raise the framework's experience throughput, and we propose an auto-configurator that adjusts the runtime configuration to speed up the whole framework. We evaluate GRL in a variety of RL environments, where it improves throughput by 4x to 200x.
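As a rough illustration of the lock-free model queue idea (a sketch, not GRL's actual implementation), the Python/PyTorch snippet below double-buffers parameter snapshots: the learner writes into the unused buffer and flips a published index, so actors can always read the newest weights without taking a lock or blocking the learner. All names here (ModelQueue, publish, latest) are hypothetical.

    # Hypothetical sketch of a lock-free model queue: the learner publishes
    # parameter snapshots into alternating buffers and flips a version index,
    # so actors read the latest weights without blocking the learner.
    # Names (ModelQueue, publish, latest) are illustrative, not GRL's API.
    import copy
    import torch

    class ModelQueue:
        def __init__(self, model: torch.nn.Module):
            # Two parameter snapshots; `slot` marks the readable one.
            self.buffers = [copy.deepcopy(model.state_dict()) for _ in range(2)]
            self.slot = 0  # a single index flip stands in for an atomic publish

        def publish(self, model: torch.nn.Module):
            # Learner writes into the *unused* buffer, then flips the index.
            free = 1 - self.slot
            for name, param in model.state_dict().items():
                self.buffers[free][name].copy_(param)
            self.slot = free  # reads after this point see the new weights

        def latest(self):
            # Actors read whichever buffer is currently published; no lock taken.
            return self.buffers[self.slot]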
Event Type
Networking
Work-in-Progress Poster
Time
Sunday, June 22, 6:00pm - 7:00pm PDT
Location
Level 3 Lobby