AI must get better at understanding objects and spaces if it is to make its way into our living rooms. The Allen Institute for AI has developed a huge and varied database of 3D models of everyday objects to help bring the simulations used to train AI models closer to reality.
Simulators are 3D environments that stand in for real locations where an AI or robot might need to navigate. Training simulators, however, are nowhere near as realistic as modern console games: they lack detail, variation, and interactivity.
Objaverse, a name that sounds awkward but sticks, aims to improve the situation with its more than 800,000 3D models and all manner of metadata. Many kinds of items are represented, including food, tables and chairs, and gadgets and appliances: essentially any object you might find in a home, office, or restaurant.
It is meant to supersede aging object libraries such as ShapeNet, which holds about 50,000 less detailed models. How can your AI recognize a cut-glass lamp with an unusual shape or pattern if the only "lamp" it has ever seen is a generic, unadorned one? Objaverse lets a model see variations of common objects so it can identify them despite their differences.
Your AI assistant likely won't need to identify a bookcase as "medieval" or otherwise, but it should be able to tell the difference between a peeled banana and an unpeeled one. You never know what could be important.
Photorealistic imagery, captured via photogrammetry, also brings a kind of realism and variety that is obvious in retrospect. All beds look similar, but unmade ones? All different!
It also helps to have objects that can move and perform their "main job," if you will. It is useful to know what an object looks like when it is open and when it is closed, but how does it get from A to B? Simple as that sounds, AI models cannot invent or intuit this information if they do not have it.