Robotic grasping suffers from catastrophic forgetting: knowledge of previously manipulated objects fades as policies adapt to new items. This paper introduces “replay tail,” a memory-replay technique that uses an RGB-D camera to capture tabletop scenes, converts observations to 3D point clouds, and generates vertically projected heightmaps. Building on deep Q-learning, which combines deep neural networks with reinforcement learning, replay tail replays recent heightmap experiences to maintain adaptation. By emphasizing recent interactions during memory replay, grasping policies continuously recalibrate, preventing performance degradation as novel objects appear. Experiments over 2000 simulated automated grasping attempts show an 89% average success rate with replay tail versus 86% without. These results highlight replay tail’s potential to enable real-world deployment by mitigating catastrophic forgetting through consolidation of recent memories.
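The recency-weighted replay described above can be sketched as a replay buffer that draws part of each minibatch from the newest transitions. This is a minimal illustrative sketch, not the authors' implementation: the abstract does not specify the sampling rule, so the class name `ReplayTailBuffer` and the parameters `tail_size` and `tail_fraction` are assumptions introduced here for illustration.

```python
import random
from collections import deque

class ReplayTailBuffer:
    """Replay buffer that over-samples the most recent ("tail") experiences.

    Hypothetical sketch: a fixed fraction of each minibatch is drawn from
    the newest `tail_size` transitions, and the remainder is drawn
    uniformly from the whole buffer.
    """

    def __init__(self, capacity=10000, tail_size=100, tail_fraction=0.5):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first
        self.tail_size = tail_size
        self.tail_fraction = tail_fraction

    def add(self, transition):
        # transition: (heightmap, action, reward, next_heightmap, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        data = list(self.buffer)
        # Number of samples to draw from the recent tail.
        n_tail = min(int(batch_size * self.tail_fraction), len(data))
        tail = data[-self.tail_size:]
        batch = random.sample(tail, min(n_tail, len(tail)))
        # Fill the rest of the minibatch uniformly from the full buffer.
        batch += random.choices(data, k=batch_size - len(batch))
        return batch
```

In a deep Q-learning loop, each grasp attempt would append one heightmap transition via `add`, and the network update would train on `sample(batch_size)`, so recent interactions are seen more often without discarding older experience.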