MIT has presented a new method for teaching robots that uses large amounts of data, similar to how large language models are trained, TechCrunch reports. Here are the details.
What Happened
The Massachusetts Institute of Technology (MIT) has unveiled a new model to help train robots using massive amounts of data, as is done for large language models (LLMs).
The researchers note that traditional learning methods, particularly imitation learning, are ineffective when unexpected conditions such as changing lighting or new obstacles arise. In such cases, the robots do not have enough data to adapt.
That is why MIT scientists decided to use a large amount of data, as they do for training language models. They developed a new architecture called Heterogeneous Pretrained Transformers (HPT). This system integrates information from various sensors and environments to teach robots to better adapt to change.
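The core idea of combining data from different sensors in one shared model can be illustrated with a minimal sketch. All names, dimensions, and the single-attention-layer "trunk" below are illustrative assumptions, not MIT's actual HPT code: modality-specific "stems" project differently shaped inputs (e.g. camera features and joint states) into a common token space, and a shared transformer-style trunk then processes them together.

```python
import numpy as np

rng = np.random.default_rng(0)

def stem(x, w):
    # modality-specific "stem": project raw input into the shared token width
    return x @ w

def self_attention(tokens):
    # single-head self-attention over the mixed token sequence (toy trunk)
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

d = 16                              # shared token width (assumed)
w_cam = rng.normal(size=(64, d))    # camera feature stem weights
w_prop = rng.normal(size=(7, d))    # proprioception stem (7-DoF arm, assumed)

camera = rng.normal(size=(4, 64))   # four image-patch feature vectors
joints = rng.normal(size=(1, 7))    # one joint-state reading

# both modalities become tokens of the same width, so one trunk handles both
tokens = np.vstack([stem(camera, w_cam), stem(joints, w_prop)])
out = self_attention(tokens)
print(out.shape)  # (5, 16)
```

The design choice this sketch highlights is that heterogeneity is handled at the input stems, so the trunk itself stays sensor-agnostic and can be pretrained on data pooled from many robots and environments.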
What's Next
Associate Professor David Held emphasized that the goal of the research is to create a universal brain for robots that can be downloaded and used without additional training.