Foundation Models in Robotics

1. SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning

2. LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action

3. ViNT: A Foundation Model for Visual Navigation

4. LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models

5. R2R Dataset, ALFRED Dataset, and the AI2-THOR Simulator

6. Code as Policies: Language Model Programs for Embodied Control

7. Segment Anything Model (SAM)

8. Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning

9. VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models

10. HomeRobot: Open-Vocabulary Mobile Manipulation

11. OpenScene: 3D Scene Understanding with Open Vocabularies

12. OpenEQA: Embodied Question Answering in the Era of Foundation Models

13. RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

14. TidyBot: Personalized Robot Assistance with Large Language Models
