Patent Issued for Systems and methods for labeling 3D models using virtual reality and augmented reality (USPTO 11210851): State Farm Mutual Automobile Insurance Company
2022 JAN 19 (NewsRx) -- By a News Reporter-Staff News Editor -- State Farm Mutual Automobile Insurance Company has been issued U.S. Patent No. 11,210,851, “Systems and methods for labeling 3D models using virtual reality and augmented reality.”
The patent’s inventor is Carnahan, Jeremy.
The patent was filed with the United States Patent and Trademark Office.
From the background information supplied by the inventors, news correspondents obtained the following quote: “Machine learning (ML) and artificial intelligence (AI) are techniques increasingly utilized by computer systems for processing data and carrying out tasks. ML and AI often involve training models which may then be used to process raw data and generate useful outputs. For example, in supervised machine learning, a model may be trained to identify pictures that include an image of a cat. The model may be trained using numerous labeled images, where the labels denote whether or not the image contains a cat. The trained model may then be used to identify if another image contains a cat.
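The patent text contains no source code; purely as an illustration of the supervised-learning workflow the inventors describe, the following Python sketch fits a classifier on labeled examples and then applies it to a new, unseen image. The data, the library choice (scikit-learn), and all names are placeholder assumptions rather than anything disclosed in the patent.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    images = rng.random((200, 32 * 32))      # 200 flattened 32x32 "images" (placeholder data)
    labels = rng.integers(0, 2, size=200)    # 1 = contains a cat, 0 = does not

    model = LogisticRegression(max_iter=1000)
    model.fit(images, labels)                # train on the labeled images

    new_image = rng.random((1, 32 * 32))     # an unseen image
    print("contains a cat?", bool(model.predict(new_image)[0]))

As the background notes, the quality of such a model depends directly on the volume and reliability of the labeled examples it is trained on.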
“As ML and AI become more ubiquitous in technological applications, be it big data analysis, robotics, autonomous vehicles, digital personal assistants, or image recognition systems, the amount of data required for training ML and AI systems is increasing. Without a more efficient way to generate training data, the development of ML and AI systems may be stunted. Efficient methods for generating reliable training data, then, are of great importance for promoting the continued advancement of AI and ML technologies.
“Simultaneous to the evolution of ML and AI systems, virtual reality (VR) and augmented reality (AR) systems are quickly developing as new mediums for interacting with a digital environment. VR systems immerse users in a digital environment (e.g., through a VR headset), such that the user is effectively positioned within a “virtual” world. AR systems overlay real-world environments with digital content (e.g., aspects of a digital environment), such that the user experiences the real world “augmented” by digital data. In addition to applications for user enjoyment (e.g., VR video games), VR and AR allow a user to experience and interact with a digital environment in a new way.
“Currently, conventional systems for generating labeled training data for ML and AI face a number of challenges. First, labeling is time intensive for users manually labeling objects and features in 3D models, especially given difficulties in navigating 3D environments in conventional systems. Additionally, difficulties in labeling may lead to increased user error, such as not labeling unnoticed objects, mislabeling objects, or bounding objects in the wrong position or with the wrong size/orientation boundary. Identified objects and labels in 3D models are also difficult to verify against potential real-world counterparts. As such, a system and process for alleviating these difficulties are desired.”
Supplementing the background information on this patent, NewsRx reporters also obtained the inventors’ summary information for this patent: “The present embodiments may relate to systems and methods for labeling objects in three dimensional (3D) models using virtual reality (VR) and augmented reality (AR). The system may include a VR computing device, a model processing (MP) computing device, a third party computing device, and a database.
“In one aspect, a computer-implemented method for labeling a 3D model using VR may be provided. The method may be implemented by a VR computer system including at least one processor. The method may include: (i) receiving the 3D model; (ii) processing the 3D model using object recognition; (iii) identifying at least one environmental feature within the 3D model; (iv) generating a processed 3D model including the at least one environmental feature; (v) displaying a VR environment based upon the processed 3D model; (vi) receiving user input including labeling data associated with the environmental feature; (vii) generating a labeled 3D model by embedding the labeling data into the processed 3D model; and/or (viii) generating training data based upon the labeled 3D model. The method may include additional, less, or alternate actions, including those discussed elsewhere herein.
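The patent does not disclose an implementation of these steps. The sketch below is a hypothetical Python rendering of the pipeline in steps (i) through (viii), with object recognition, the VR display, and user interaction reduced to stubs; every class, function, and field name is an assumption made only for illustration.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class PointCloud:                            # (i) the received 3D model, assumed to be a point cloud
        points: np.ndarray                       # N x 3 coordinates
        metadata: dict = field(default_factory=dict)

    def detect_features(model: PointCloud) -> list[dict]:
        # (ii)-(iv) stand-in for object recognition: points above a height
        # threshold are grouped into a single candidate environmental feature.
        mask = model.points[:, 2] > 1.0
        return [{"point_indices": np.where(mask)[0].tolist(), "label": None}]

    def label_in_vr(features: list[dict]) -> list[dict]:
        # (v)-(vi) placeholder for displaying the VR environment and collecting
        # the user's labeling input for each highlighted feature.
        for feature in features:
            feature["label"] = "roof"            # e.g., the user tags the feature
        return features

    def embed_labels(model: PointCloud, features: list[dict]) -> PointCloud:
        # (vii) embed the labeling data into the processed 3D model as metadata.
        model.metadata["features"] = features
        return model

    def to_training_data(model: PointCloud) -> list[tuple[np.ndarray, str]]:
        # (viii) extract (points, label) pairs usable as ML training data.
        return [(model.points[f["point_indices"]], f["label"])
                for f in model.metadata.get("features", [])]

    cloud = PointCloud(points=np.random.default_rng(1).random((1000, 3)) * 3.0)
    labeled = embed_labels(cloud, label_in_vr(detect_features(cloud)))
    print(len(to_training_data(labeled)), "training sample(s) generated")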
“In another aspect, a virtual reality (VR) labeling computer system for labeling a three dimensional (3D) model using VR may be provided. The VR labeling computer system may include at least one processor in communication with at least one memory device, and the at least one processor may be configured to: (i) receive the 3D model; (ii) process the 3D model using object recognition; (iii) identify at least one environmental feature within the 3D model; (iv) generate a processed 3D model including the at least one environmental feature; (v) display a VR environment based upon the processed 3D model; (vi) receive user input including labeling data associated with the environmental feature; (vii) generate a labeled 3D model by embedding the labeling data into the processed 3D model; and/or (viii) generate training data based upon the labeled 3D model. The system may include additional, less, or alternate functionality, including that discussed elsewhere herein.
“In another aspect, at least one non-transitory computer-readable storage media having computer-executable instructions embodied thereon for labeling a three dimensional (3D) model using virtual reality (VR) may be provided. When executed by at least one processor, the computer-executable instructions may cause the processor to: (i) receive the 3D model; (ii) process the 3D model using object recognition; (iii) identify at least one environmental feature within the 3D model; (iv) generate a processed 3D model including the at least one environmental feature; (v) display a VR environment based upon the processed 3D model; (vi) receive user input including labeling data associated with the environmental feature; (vii) generate a labeled 3D model by embedding the labeling data into the processed 3D model; and (viii) generate training data based upon the labeled 3D model. The instructions may direct additional, less, or alternate functionality, including that discussed elsewhere herein.
“In yet another aspect, a computer-implemented method for labeling a three dimensional (3D) model using virtual reality (VR) may be provided. The method may be implemented by a VR labeling computer system including a model processing (MP) computing device and a VR computing device. The method may include: (i) receiving, by the MP computing device, the 3D model, the 3D model including at least one unidentified environmental feature; (ii) identifying, by the MP computing device, an environmental feature corresponding to the unidentified environmental feature based upon analysis of the 3D model; (iii) transmitting, by the MP computing device, the 3D model and environmental feature to the VR computing device; (iv) displaying, by the VR computing device, the VR environment based upon the 3D model and environmental feature; (v) receiving, by the VR computing device, user input including labeling data associated with the environmental feature; (vi) transmitting, by the VR computing device, the 3D model, the environmental feature, and the labeling data to the MP computing device as a labeled 3D model; and/or (vii) generating, by the MP computing device, training data comprising the labeled 3D model. The method may include additional, less, or alternate actions, including those discussed elsewhere herein.”
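Again as an illustration only, the handoff between the model processing (MP) computing device and the VR computing device described in this aspect might be sketched as follows, with JSON standing in for whatever transport and data format the system actually uses; all names and values are hypothetical.

    import json

    def mp_prepare(points, features):
        # (i)-(iii) the MP computing device packages the 3D model and the
        # environmental features it identified for transmission.
        return json.dumps({"points": points, "features": features})

    def vr_label(payload):
        # (iv)-(vi) the VR computing device displays the environment (omitted
        # here) and attaches the user's labeling data to each feature.
        data = json.loads(payload)
        for feature in data["features"]:
            feature["label"] = "driveway"        # placeholder user-supplied label
        return json.dumps(data)                  # transmitted back as the labeled 3D model

    def mp_training_data(labeled_payload):
        # (vii) the MP computing device turns the labeled 3D model into training records.
        data = json.loads(labeled_payload)
        return [(feature["point_indices"], feature["label"]) for feature in data["features"]]

    points = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0], [1.1, 2.1, 2.9]]
    features = [{"point_indices": [1, 2], "label": None}]
    print(mp_training_data(vr_label(mp_prepare(points, features))))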
The claims supplied by the inventors are:
“1. A computer-implemented method for labeling a three dimensional (3D) model using virtual reality (VR) techniques, the method implemented by a computer system including at least one processor, the method comprising: receiving, by the processor, the 3D model; processing, by the processor, the 3D model using object recognition; identifying at least one environmental feature within the 3D model, wherein the at least one environmental feature is unidentified and unlabeled; generating, by the processor, a processed 3D model including the at least one environmental feature; displaying, through a VR device, a VR environment to a user based upon the processed 3D model, wherein the VR environment includes the at least one environmental feature; prompting, through the VR environment, the user to input (i) labeling data for the environmental feature through user interaction with the VR device and (ii) boundary data by creating a bounding frame within the VR environment, wherein the labeling data identifies the environmental feature, wherein the bounding frame delineates a 3D region within the 3D model, and wherein the 3D region includes data representing the environmental feature; processing, by the VR device, (i) the data within the 3D region using object recognition and (ii) the user input associated with the environmental feature; identifying an environmental feature corresponding to the environmental feature represented by data within the 3D region; generating a labeled 3D model by embedding the labeling data and the boundary data for the environmental feature into the processed 3D model; extracting the environmental feature and associated labeling data and boundary data from the labeled 3D model; and generating, by the processor, training data based upon the extracted environmental feature of the labeled 3D model.
“2. The computer-implemented method of claim 1, the method further comprising: training a machine learning model using the training data.
“3. The computer-implemented method of claim 1, the method further comprising: training a machine learning model using the labeled 3D model.
“4. The computer-implemented method of claim 1, wherein the 3D model is a point cloud.
“5. The computer-implemented method of claim 4, wherein the point cloud is generated based upon aerial photos of a real-world location.
“6. The computer-implemented method of claim 1, wherein processing the 3D model using object recognition includes processing the 3D model using a segmentation technique.
“7. The computer-implemented method of claim 6, wherein the segmentation technique is a semantic segmentation technique.
“8. The computer-implemented method of claim 1, wherein generating the processed 3D model further comprises updating meta-data of data points representing the environmental feature.
“9. The computer-implemented method of claim 1, wherein generating the processed 3D model further comprises: generating data points representing the environmental feature; and embedding the data points representing the environmental feature into the 3D model.
“10. The computer-implemented method of claim 9, wherein the data points representing the environmental feature are a surface mesh.
“11. The computer-implemented method of claim 10, wherein displaying the VR environment further comprises altering the appearance of the surface mesh.
“12. The computer-implemented method of claim 11, wherein altering the appearance of the environmental feature includes shading the environmental feature.
“13. The computer-implemented method of claim 11, wherein altering the appearance of the environmental feature includes outlining the environmental feature.
“14. The computer-implemented method of claim 10, wherein generating a labeled 3D model by embedding the labeling data into the processed 3D model further comprises updating meta-data associated with the surface mesh.
“15. The computer-implemented method of claim 1, wherein displaying the VR environment further comprises altering the appearance of the environmental feature.
“16. The computer-implemented method of claim 1, wherein the user input comprises hand gestures made within the VR environment.
“17. The computer-implemented method of claim 1, wherein the user input comprises eye movement detected by the processor.
“18. The computer-implemented method of claim 1, wherein the user input comprises spoken commands.
“19. The computer-implemented method of claim 1, wherein generating a labeled 3D model by embedding the labeling data into the processed 3D model further comprises updating meta-data associated with the 3D model.
“20. The computer-implemented method of claim 1, wherein generating a labeled 3D model by embedding the labeling data into the processed 3D model further comprises generating a surface mesh, embedding the surface mesh in the 3D model, and updating meta-data associated with the surface mesh.
“21. The computer-implemented method of claim 1, wherein generating the training data further comprises: receiving a second labeled 3D model; and aggregating the labeled 3D model and the second labeled 3D model as training data.
“22. The computer-implemented method of claim 1, wherein generating the training data includes translating the labeled 3D model from a first file format to a second file format.
“23. The computer-implemented method of claim 1, wherein generating the training data further comprises: capturing a first and second element of the labeled 3D model; and aggregating the first and second elements of the labeled 3D model as training data.
“24. A virtual reality (VR) labeling computer system for labeling a three dimensional (3D) model using VR techniques, the VR labeling computer system including at least one processor in communication with at least one memory device, wherein the at least one processor is configured to: receive the 3D model; process the 3D model using object recognition; identify at least one environmental feature within the 3D model, wherein the at least one environmental feature is unidentified and unlabeled; generate a processed 3D model including the at least one environmental feature; display, through a VR device, a VR environment to a user based upon the processed 3D model, wherein the VR environment includes the at least one environmental feature; prompt, through the VR environment, the user to input (i) labeling data for the environmental feature through user interaction with the VR device and (ii) boundary data by creating a bounding frame within the VR environment, wherein the labeling data identifies the environmental feature, wherein the bounding frame delineates a 3D region within the 3D model, and wherein the 3D region includes data representing the environmental feature; process, by the VR device, (i) the data within the 3D region using object recognition and (ii) the user input associated with the environmental feature; identify an environmental feature corresponding to the environmental feature represented by data within the 3D region; generate a labeled 3D model by embedding the labeling data and the boundary data for the environmental feature into the processed 3D model; extract the environmental feature and associated labeling data and boundary data from the labeled 3D model; and generate training data based upon the extracted environmental feature of the labeled 3D model.
“25. At least one non-transitory computer-readable storage media having computer-executable instructions embodied thereon for labeling a three dimensional (3D) model using a virtual reality (VR) technique, wherein when executed by at least one processor, the computer-executable instructions cause the processor to: receive the 3D model; process the 3D model using object recognition; identify at least one environmental feature within the 3D model, wherein the at least one environmental feature is unidentified and unlabeled; generate a processed 3D model including the at least one environmental feature; display, through a VR device, a VR environment to a user based upon the processed 3D model, wherein the VR environment includes the at least one environmental feature; prompt, through the VR environment, the user to input (i) labeling data for the environmental feature through user interaction with the VR device and (ii) boundary data by creating a bounding frame within the VR environment, wherein the labeling data identifies the environmental feature, wherein the bounding frame delineates a 3D region within the 3D model, and wherein the 3D region includes data representing the environmental feature; process, by the VR device, (i) the data within the 3D region using object recognition and (ii) user input associated with the environmental feature; identify an environmental feature corresponding to the environmental feature represented by data within the 3D region; generate a labeled 3D model by embedding the labeling data and the boundary data for the environmental feature into the processed 3D model; extract the environmental feature and associated labeling data and boundary data from the labeled 3D model; and generate training data based upon the extracted environmental feature of the labeled 3D model.”
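The bounding-frame step recited in claim 1 can be pictured with a short, hypothetical sketch in which the user’s bounding frame is modeled as an axis-aligned box that delineates a 3D region of the point cloud; the selected points, label, and boundary data are then kept together and extracted as a training example. Nothing here is taken from the patent’s actual implementation, and the claim itself does not restrict the frame to a box shape.

    import numpy as np

    def points_in_frame(points, box_min, box_max):
        # Indices of the points inside the user-drawn bounding frame,
        # modeled here as an axis-aligned box.
        inside = np.all((points >= box_min) & (points <= box_max), axis=1)
        return np.where(inside)[0]

    points = np.random.default_rng(2).random((500, 3)) * 10.0   # toy point cloud
    box_min = np.array([2.0, 2.0, 0.0])                         # bounding frame corners
    box_max = np.array([5.0, 5.0, 4.0])

    region = points_in_frame(points, box_min, box_max)
    labeled_feature = {
        "label": "shed",                                        # labeling data from the user
        "boundary": {"min": box_min.tolist(), "max": box_max.tolist()},
        "point_indices": region.tolist(),
    }
    training_example = (points[region], labeled_feature["label"])  # extracted for training
    print(len(region), "points delineated by the bounding frame")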
For additional information on this patent, see: Carnahan, Jeremy. Systems and methods for labeling 3D models using virtual reality and augmented reality. U.S. Patent No. 11,210,851.
(Our reports deliver fact-based news of research and discoveries from around the world.)