Patent Issued for Local physical environment modeling in extended reality environments (USPTO 11710280): United Services Automobile Association
2023 AUG 14 (NewsRx) --
The patent’s assignee for patent number 11710280 is United Services Automobile Association.
News editors obtained the following quote from the background information supplied by the inventors: “Extended reality (XR) is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, or some combination and/or derivatives thereof. XR visualization systems are starting to enter the mainstream consumer electronics marketplace. XR Head-Mounted Display (HMD) devices (“XR-HMD devices”) are one promising use of such technology. These devices may include transparent display elements that enable a user to see virtual content transposed over the user’s view of the real world. Virtual content that appears to be superimposed over the user’s real-world view or that is presented in a virtual reality environment is commonly referred to as XR content. Displayed XR objects are often referred to as “holographic” objects. XR visualization systems can provide users with entertaining, immersive three-dimensional (3D) virtual environments in which they can visually (and sometimes audibly) experience things they might not normally experience in real life.
“The techniques introduced here may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements.”
As a supplement to the background information on this patent, NewsRx correspondents also obtained the inventors’ summary information for this patent: “Aspects of the present disclosure are directed to digital models of local physical environments. The digital models are analyzed for identifications of real-world objects and an evaluation of the physical condition of those objects.
“Initially, an extended reality display device (XR device) is used to execute an environment scanning application (e.g., for coordinating with a service representative). External facing environment cameras positioned on the XR device or in communication with the XR device capture a local physical environment of a user. The external environment cameras make use of depth sensing to create a textured map of a room. As the user looks around, various captures are stitched together to form a model. The stitched together room model is analyzed by a machine learning computer vision model in order to identify specific objects (e.g., walls, chairs, TVs, tables, cars, etc.).
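As a rough illustration of the stitching step described above, the sketch below deprojects depth pixels into 3D points and merges captures taken from different head poses into a single room model. The camera intrinsics, pose format, and frame data are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of the capture-and-stitch loop described above.
# Intrinsics, poses, and frame contents are hypothetical placeholders.

import math

def deproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Turn one depth pixel (u, v) into a 3D point in camera space."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def apply_pose(point, pose):
    """Move a camera-space point into world space using a (yaw, tx, ty, tz) pose."""
    yaw, tx, ty, tz = pose
    x, y, z = point
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    return (xr + tx, y + ty, zr + tz)

def stitch(frames):
    """Accumulate deprojected points from every capture into one model."""
    model = []
    for pose, pixels in frames:
        for (u, v, d) in pixels:
            model.append(apply_pose(deproject(u, v, d), pose))
    return model

# Two overlapping "looks around the room", each a handful of depth pixels.
frames = [
    ((0.0, 0, 0, 0), [(320, 240, 2.0), (400, 240, 2.1)]),
    ((math.pi / 2, 1, 0, 0), [(320, 240, 1.5)]),
]
room_model = stitch(frames)
print(len(room_model))  # 3 points merged into a single world-space model
```

A production pipeline would also deduplicate overlapping points and fit surfaces before handing the model to a computer vision classifier, but the accumulate-and-transform loop is the core of the stitching idea.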
“Once there is a digital model of the local environment, the XR device may add artificial components to the environment through digital renderings. In some implementations, an XR device can display digital renderings without first creating a digital model of the local environment, e.g., by placing digital renderings that are positioned relative to the user without regard to other aspects of the local environment. Included in the artificial components can be an avatar of a second user. The second user does not have to be present with the first user, or even using a corresponding XR device. The avatar may be a true-to-life rendering of the second user, or an animated character. The second user is positioned into the local environment of the user wearing the XR device.
“In some implementations, the positioning of the avatar of the second user is based on the local environment. That is, the avatar of the second user is not placed intersecting a wall or cut in half by a table. In other cases, the positioning of the avatar of the second user can be without regard to the local environment. While present in the local environment, the avatar of the second user interacts with the first user and can deliver instructions via animated renderings or audio data delivered via the XR device. For example, the second user may direct the first user to go look at a particular object within the local environment (thereby enhancing the digital model of that object or generating that object as part of the digital model for the first time). In an illustrative example, an insurance agent who is represented by an avatar can direct a user to go look at their car with the XR device.
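The placement rule described above (an avatar never intersecting a wall or cut in half by a table) amounts to a collision test between a candidate avatar position and the bounding boxes of recognized objects. A minimal sketch follows, assuming axis-aligned boxes and a standing avatar of fixed size; none of these specifics come from the patent.

```python
# Hedged sketch of geometry-aware avatar placement: reject anchor points
# whose bounding box overlaps any recognized object's box. Box layout and
# avatar dimensions are illustrative assumptions.

def boxes_overlap(a, b):
    """Axis-aligned 3D overlap test; each box is a (min_xyz, max_xyz) pair."""
    (ax0, ay0, az0), (ax1, ay1, az1) = a
    (bx0, by0, bz0), (bx1, by1, bz1) = b
    return (ax0 < bx1 and bx0 < ax1 and
            ay0 < by1 and by0 < ay1 and
            az0 < bz1 and bz0 < az1)

def valid_avatar_spot(anchor, obstacles, half_width=0.3, height=1.8):
    """True when a standing avatar at `anchor` clears every obstacle box."""
    x, y, z = anchor
    avatar = ((x - half_width, y, z - half_width),
              (x + half_width, y + height, z + half_width))
    return not any(boxes_overlap(avatar, box) for box in obstacles)

table = ((1.0, 0.0, 1.0), (2.0, 0.8, 2.0))   # recognized table
wall  = ((3.0, 0.0, 0.0), (3.1, 2.5, 4.0))   # recognized wall

print(valid_avatar_spot((0.0, 0.0, 0.0), [table, wall]))  # True: open floor
print(valid_avatar_spot((1.5, 0.0, 1.5), [table, wall]))  # False: inside the table
```

A renderer would typically scan several candidate anchors near the user and pick the first one that passes this test.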
“From the digital model of the local environment, the system can render a digital representation of the specific physical object within the local physical environment of the first user to a display of the second user. In some embodiments, the rendering is a 3-dimensional holographic representation in a corresponding XR device of the second user. In other embodiments, the rendering is one or more photographs/images of the object.”
The claims supplied by the inventors are:
“1. A method comprising: capturing a local physical environment of a first user, via external facing environment cameras of a first head mounted extended reality device (“XR device”); generating a digital model of the local physical environment of the first user; rendering to a display of the first XR device of the first user, a digital avatar of a second user positioned within the local physical environment; receiving instructions from the second user; providing, via the first XR device and based on the instructions from the second user, directions for the first user to aim external facing environment cameras of the first XR device at a physical object within the local physical environment of the first user, wherein multiple images captured by one or more of the cameras are each a depth map frame including a digital representation of the physical object; generating, by combining the depth map frame and a corresponding visible light frame, the digital representation of the physical object within the local physical environment of the first user, the visible light frame being combined with its corresponding depth map frame so as to provide color to a portion of the digital model of the local environment corresponding to the digital representation of the physical object; and transmitting the digital representation of the physical object to a device associated with the second user for display.
“2. The method of claim 1, wherein the device associated with the second user is a second XR device worn by the second user and the digital representation of the physical object is displayed, via the second XR device, as a 3D model.
“3. The method of claim 1, wherein the device associated with the second user is a second XR device worn by the second user; and wherein the method further comprises causing a system to: identify a make and model of the physical object based on image recognition performed on the digital model of the local physical environment; and render a textual display of the make and model of the physical object to the display of the second XR device worn by the second user, wherein the textual display is positioned adjacent to the digital representation of the physical object.
“4. The method of claim 1, further comprising identifying a make and model of the physical object based on image recognition performed on the digital model of the local physical environment.
“5. The method of claim 1, wherein the device associated with the second user is a second XR device worn by the second user; and wherein the method further comprises causing a system to: identify a make and model of the physical object based on image recognition performed on the digital model of the local physical environment; render a textual display of the make and model of the physical object to the display of the second XR device worn by the second user, wherein the textual display is positioned adjacent to the digital representation of the physical object; and update a user account of the first user confirming a status of the physical object in response to input by the second user.
“6. The method of claim 1, further comprising causing a system to: identify one or more characteristics of the physical object based on image recognition performed on the digital model of the local physical environment; render a textual display of the one or more characteristics of the physical object to a display of the device associated with the second user, wherein the textual display is positioned adjacent to the digital representation of the physical object; update a user account of the first user confirming a status of the physical object in response to input by the second user; generate a photograph of the physical object based on the digital model of the local physical environment; and store the photograph with the user account of the first user.
“7. The method of claim 1, further comprising: on a date subsequent to the capturing, and in response to a request from an entity associated with the second user, capturing an updated local physical environment of the first user via the external facing environment cameras of the first XR device; generating, using the captured updated local physical environment, a digital representation of an updated version of the physical object; and transmitting the digital representation of the updated version of the physical object for display; wherein, in response to the digital representation of the updated version of the physical object, an insurance claim based on a difference between the digital representation of the physical object and the digital representation of the updated version of the physical object is established.
“8. The method of claim 1, further comprising: on a date subsequent to the capturing, and in response to a request from an entity associated with the second user, capturing an updated local physical environment of the first user via the external facing environment cameras of the first XR device; generating, using the captured updated local physical environment, a digital representation of an updated version of the physical object; transmitting the digital representation of the updated version of the physical object for display; wherein, in response to the digital representation of the updated version of the physical object, an insurance claim based on a difference between the digital representation of the physical object and the digital representation of the updated version of the physical object is established; and displaying, on the first XR device, replacement items for the physical object.
“9. The method of claim 1, further comprising: causing identification of characteristics of the physical object via computer vision executed with aspects of the digital model of the local physical environment of the first user as input; and receiving, by the first XR device, hand gesture input via the external facing environment cameras, wherein the hand gesture input confirms the identification of the characteristics of the physical object.
“10. The method of claim 1, further comprising: causing identification of a make and model of the physical object via computer vision executed with aspects of the digital model of the local physical environment of the first user as input; and displaying on the first XR device, a monetary value for the physical object, the monetary value determined based on the make and model.
“11. The method of claim 1, further comprising: associating the digital model of the local physical environment of the first user to a user account; and associating an eyeprint image captured by the first XR device as a password that enables access to the user account.
“12. The method of claim 1, further comprising: associating the digital model of the local physical environment of the first user to a user account; and displaying, on the first XR device, user account details.
“13. The method of claim 12, wherein the user account details include identification of a service representative and contact information thereto, the method further comprising: receiving input from the first user that annotates or communicates specific details of an insurance claim; comparing the digital representation of the physical object from the digital model of the local physical environment at a first timestamp and a second timestamp via a machine learning model; and identifying an inconsistency between the insurance claim and the comparison of the digital representation of the physical object at the first timestamp and the second timestamp.
“14. A system comprising: one or more processors; a first head mounted extended reality device (“XR device”) including: an external facing environment camera that captures a local physical environment of a first user of the XR device; a memory including instructions that, when executed by the one or more processors, cause the first XR device to generate a digital model of the local physical environment of the first user; a display that renders a digital avatar of a second user positioned within the local physical environment; and a network interface that: receives instructions, from the second user transmitted to the first user, directing the first user to aim external facing environment cameras of the first XR device at a physical object within the local physical environment of the first user, wherein multiple images captured by one or more of the cameras are each a depth map frame including a digital representation of the physical object; wherein the instructions from the second user are provided, by the system, to the first user; and wherein the memory further includes instructions that, when executed, cause the one or more processors to generate, by combining the depth map frame and a corresponding visible light frame, the digital representation of the physical object within the local physical environment of the first user, the visible light frame being combined with its corresponding depth map frame so as to provide color to a portion of the digital model of the local environment corresponding to the digital representation of the physical object; and transmits the digital representation of the physical object to a device associated with the second user for display; wherein the transmission of the digital representation of the physical object causes a display of the second user to render a digital representation of the physical object within the local physical environment of the first user.
“15. The system of claim 14, wherein the display of the second user is a second XR device worn by the second user.”
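Claim 1’s step of combining a depth map frame with its corresponding visible light frame “so as to provide color” can be read as pairing each valid depth pixel with the RGB value at the same pixel coordinate. The sketch below shows that reading; the intrinsics and toy frame data are assumptions for illustration, not details from the patent.

```python
# Pair aligned depth and visible-light frames into colored 3D points.
# Frames are modeled as {(u, v): value} dicts for simplicity.

def colorize(depth_frame, rgb_frame, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Build colored 3D points from aligned depth and visible-light frames."""
    points = []
    for (u, v), d in depth_frame.items():
        rgb = rgb_frame.get((u, v))
        if rgb is None or d <= 0:
            continue  # skip pixels with no color or no valid depth
        x = (u - cx) * d / fx
        y = (v - cy) * d / fy
        points.append(((x, y, d), rgb))
    return points

depth_frame = {(320, 240): 2.0, (321, 240): 2.0, (322, 240): 0.0}
rgb_frame   = {(320, 240): (200, 30, 30), (321, 240): (198, 32, 31)}

colored = colorize(depth_frame, rgb_frame)
print(len(colored))  # 2 colored points; the zero-depth pixel is dropped
```

Real depth and color cameras sit at slightly different positions on a headset, so a production system would first reproject one frame into the other’s coordinates before pairing pixels like this.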
There are additional claims. Please see the full patent to read further.
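Claim 13’s comparison of the object’s digital representation at two timestamps could, in the simplest reading, score the geometric change between snapshots and flag claims that disagree with it. The point-set distance metric and threshold below are assumptions for illustration; the patent itself specifies a machine learning model for this comparison.

```python
# Rough sketch of claim 13's inconsistency check: measure change between
# two snapshots of the same object and flag a damage claim that does not
# match the observed change. Metric and threshold are hypothetical.

def mean_nearest_distance(points_a, points_b):
    """Average distance from each point in A to its nearest point in B."""
    total = 0.0
    for (ax, ay, az) in points_a:
        total += min(((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
                     for (bx, by, bz) in points_b)
    return total / len(points_a)

def flag_inconsistency(snapshot_t1, snapshot_t2, claimed_damage, threshold=0.05):
    """Flag when a damage claim disagrees with the observed geometric change."""
    change = mean_nearest_distance(snapshot_t1, snapshot_t2)
    observed_damage = change > threshold
    return observed_damage != claimed_damage

unchanged = [(0, 0, 2.0), (0.1, 0, 2.0)]
dented    = [(0, 0, 2.2), (0.1, 0, 2.2)]  # panel pushed in by 0.2 m

print(flag_inconsistency(unchanged, unchanged, claimed_damage=True))  # True: damage claimed, none observed
print(flag_inconsistency(unchanged, dented, claimed_damage=True))     # False: claim matches the change
```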
For additional information on this patent, see: Argumedo, Marta. Local physical environment modeling in extended reality environments. U.S. Patent 11710280.
(Our reports deliver fact-based news of research and discoveries from around the world.)