Dear MEL Topic Readers,
How filming your chores could train the android butlers of the future
AI is not limited to chatbots in classrooms or at workplaces. Physical
AI enables autonomous machines to perceive, understand, and perform complex actions
in the real, physical world, in areas such as industrial automation, autonomous driving,
drones, caregiving, and household chores. Generative AI like ChatGPT was trained
on billions of words from texts and documents across the Internet to learn language
patterns and generate human-like responses to user prompts, and it is still being
trained. So, how will general-purpose robots be trained to work safely and effectively
in varied, interactive, dynamic environments, such as factories, warehouses, shops,
hospitals, and homes? To learn how to perceive, judge, and move, vast amounts of
visual data covering diverse environments and tasks are now being collected by
first-person-view cameras all over the world. Chatbots are trained on texts and
documents from the Internet. Map apps collect visual and physical data from the
streets, and autonomous vehicles perceive real-time conditions ahead of and around
the vehicle in order to drive. Likewise, physical AI is being trained on visual
data to perceive, judge, and react so that it can perform tasks in real-world
environments. A great deal of data collection and activity goes on behind the AI.
Read the article and learn what it takes to develop humanoid robots.