Patent Issued for Systems and methods for enhancing and developing accident scene visualizations (USPTO 11823337): State Farm Mutual Automobile Insurance Company
2023 DEC 08 (NewsRx) -- By a News Reporter-Staff News Editor -- The assignee for this patent, patent number 11823337, is State Farm Mutual Automobile Insurance Company.
Reporters obtained the following quote from the background information supplied by the inventors: “The traditional description of damage events typically relies on witness statements and, in some instances, two-dimensional (2D) pictures used to describe the location, scene, time, and/or individuals or things involved in the damage event. Damage events can relate to a moving or parked automobile accident, a household fire, a household water damage event, or any other damage event, each of which typically includes damaged items, such as a damaged vehicle or damaged home. Damage events can include damage scenes and damaged items, such as an automobile accident that occurred at an intersection resulting in damaged vehicles, or a house fire that occurred at a homeowner’s household resulting in a damaged room or rooms. Typically, such events happen suddenly and, in some cases, with few or no witnesses, such as a water damage event in a household basement.
“Accordingly, a problem arises in the aftermath of such events, where witness statements, or two-dimensional (2D) pictures taken at different times, such as before, during, or after the event, do not coincide or are otherwise too inconsistent to provide a holistic understanding of the damage event. Such inconsistent witness statements or pictures can make it difficult to understand the timing, scenery, facts, or other circumstances that caused the accident, or to describe or show how the item was damaged. For example, this can create issues for companies or individuals involved in remedial or other post-damage-event services, such as insurance companies or repair services, in determining the cause of damage or in recreating the scene or environment in which, and the time at which, the related damage event occurred.”
In addition to obtaining background information on this patent, NewsRx editors also obtained the inventors’ summary information for this patent: “Accordingly, systems and methods are needed to annotate and visualize damage scenes with additional data or information in order to more accurately portray the environment at the time of damage (or times just before or after the time of damage). In various embodiments disclosed herein, virtual reality (VR) visualizations can be used to visualize a damage scene. The VR visualization can include annotations, including graphics, text, video, or other information that can be used to more accurately describe the timing, characteristics, or other circumstances related to the damage event or one or more damaged items associated with the damage scene or damage event. The VR visualizations can allow individuals, such as an individual associated with remedial or other post-damage-event services, for example, an insurance claims adjuster or other insurance representative, to visualize a damage scene that may be augmented with annotations or other information via multiple point-of-view perspectives to recreate the actual scene at the time of the damage event. In some embodiments, the VR visualizations may be used to create a real-time simulation video of an accident, such as an automobile crash. Accordingly, various embodiments herein allow an individual, including individuals who were not present during the damage event, to visualize and assess the damage scene from a witness perspective. In various embodiments, the visualizations can allow an individual to visualize where a witness was located during the event so as to determine what the witness would have been able to see in the damage scene at the time of the damage event.
“In various embodiments disclosed herein, the visualization systems and methods also provide benefits to an individual associated with a damaged item, such as the owner of a damaged vehicle or damaged household, or a claims adjuster or other insurance representative associated with an insurance claim filed for a damaged item. In some embodiments, the owner may be a customer or policyholder associated with an insurance company. In such embodiments, the visualization systems and methods can enable the owner, customer, insurance representative, or other individual to generate or capture immersive multimedia images associated with a damage scene by taking one or more immersive multimedia images with a computing device at the location of the damage scene. In some aspects, the immersive multimedia images may then be augmented with one or more annotations to associate additional information with the damage event or scene. In various embodiments, the augmented immersive multimedia images may then be analyzed, inspected, viewed, further annotated, or otherwise manipulated by other individuals to further enhance or more accurately represent the damage scene. In some embodiments, the immersive multimedia images, augmented or otherwise, may be visualized in a virtual reality device for analysis, inspection, viewing, further annotation, or other manipulation.
“In some embodiments, for example, the augmented immersive multimedia images may be used to determine an outcome associated with the damage scene or event. For example, in certain embodiments, the augmented immersive multimedia images may be used to visualize the damage scene where an insurance claims adjuster, or other insurance representative, may adjust or otherwise determine a damage amount associated with an insurance claim related to the damage event or damage scene. In such embodiments, for example, the immersive multimedia images may be submitted by the owner as part of an insurance claims filing process.
“In various embodiments, systems and methods are described herein for annotating and visualizing a damage scene. The systems and methods may use one or more processors to generate immersive multimedia image(s), where the one or more immersive multimedia images can be associated with a damage scene, such as a damage scene related to a vehicle accident or a property damage event. The immersive multimedia image(s) may include 360-degree photographs, panoramic photographs, 360-degree videos, panoramic videos, or one or more photographs or videos for creating an immersive multimedia image.
“In some embodiments, the immersive multimedia image(s) may be augmented with annotation(s) to create respective annotated immersive multimedia image(s). In various embodiments, the annotation(s) can include any of a text-based annotation, a voice-based annotation, a graphical annotation, a video-based annotation, an augmented reality annotation, or a mixed reality annotation. For example, the damage scene may be annotated with an augmented reality annotation or a mixed reality annotation that describes the damage scene at or near the time when the related damage event occurred, in order to enhance the damage scene for visualization purposes as described herein. In some embodiments, the annotated immersive multimedia image(s) may be annotated with metadata, which, for example, can include weather or GPS information associated with the damage scene at the time of the damage event. In some embodiments, a user can select from a displayed list of preexisting annotations that may be used to augment immersive multimedia images to create annotated immersive multimedia images.”
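To make the data relationships in the quoted summary concrete, the sketch below models an immersive multimedia image that carries typed annotations and scene metadata such as weather and GPS information. This is a minimal illustration of the concepts described above, not the patent's actual implementation; every class, field, and method name here is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class AnnotationKind(Enum):
    """Annotation types enumerated in the inventors' summary."""
    TEXT = auto()
    VOICE = auto()
    GRAPHIC = auto()
    VIDEO = auto()
    AUGMENTED_REALITY = auto()
    MIXED_REALITY = auto()


@dataclass
class Annotation:
    kind: AnnotationKind
    payload: str                                 # e.g., the annotation text or a media URI
    position: tuple[float, float] = (0.0, 0.0)   # hypothetical placement within the image


@dataclass
class SceneMetadata:
    """Metadata the summary says may accompany a damage scene."""
    captured_at: str                             # time of capture
    gps: Optional[tuple[float, float]] = None    # (latitude, longitude)
    weather: Optional[str] = None                # e.g., "rain", "clear"


@dataclass
class ImmersiveImage:
    """A 360-degree or panoramic photograph or video of a damage scene."""
    media_uri: str
    metadata: SceneMetadata
    annotations: list[Annotation] = field(default_factory=list)

    def annotate(self, annotation: Annotation) -> "ImmersiveImage":
        """Augment the image with an annotation, yielding an annotated image."""
        self.annotations.append(annotation)
        return self
```

A viewer application could then, for example, surface the weather field alongside the rendered scene, or filter annotations by kind, in keeping with the selectable annotation list the summary mentions.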
The claims supplied by the inventors are:
“1. A damage scene visualization system, comprising: a virtual reality (VR) device; and a visualization processor communicatively coupled to the VR device, the visualization processor configured to: receive a digital image depicting a damage item located within a damage scene; receive telematics data associated with the damage item; generate, based on the digital image, a VR image of the damage scene, wherein the VR image illustrates the damage item, and a visual representation of an environment at which the damage item is disposed; augment the VR image based on the telematics data, wherein the augmented VR image provides at least a portion of the telematics data; receive, from the VR device, first information indicative of a head location; based at least in part on the first information, cause the VR device to render a 3D visualization of the damage scene from a first viewpoint corresponding to the head location, the 3D visualization including the augmented VR image; receive second information, the second information being indicative of a value of the damage item in an undamaged condition; and determine a damage amount of the damage item based on the value, and the 3D visualization of the damage scene as rendered from the first viewpoint.
“2. The damage scene visualization system of claim 1, wherein the visualization processor is further configured to: receive an annotation to be applied to the VR image; generate, based on the annotation, a mixed reality VR image comprising the VR image augmented to include the annotation; and cause the VR device to render the mixed reality VR image.
“3. The damage scene visualization system of claim 2, wherein the annotation comprises a graphic illustrating an additional item and identifying a location of the additional item in the environment.
“4. The damage scene visualization system of claim 2, wherein the visualization processor is further configured to: receive third information, from the VR device, indicative of a voice command or body gesture; and modify, based on the third information, the mixed reality VR image, wherein modifying the mixed reality VR image comprises one or more of moving, altering, or highlighting the annotation.
“5. The damage scene visualization system of claim 1, wherein the digital image includes metadata comprising a time of capture and a geographic location of capture, and the visualization processor is further configured to: generate a mixed reality VR image comprising the VR image augmented to include the metadata; and cause the VR device to render the mixed reality VR image.
“6. The damage scene visualization system of claim 5, wherein the visualization processor is further configured to: access a map of the geographic location of capture; and update the mixed reality VR image to include at least one of an image of the map, a link to the map, a location name derived from the map, or an aerial image of the geographic location of capture.
“7. The damage scene visualization system of claim 6, wherein updating the mixed reality VR image comprises causing the VR device to render an overlay of an image of the map on the VR image.
“8. The damage scene visualization system of claim 5, wherein the visualization processor is further configured to: access weather information associated with the geographic location of capture, the weather information indicating a weather condition, of the geographic location of capture, at a time of occurrence of damage; and update the mixed reality VR image to include the weather information.
“9. The damage scene visualization system of claim 1, wherein the digital image is a first digital image and the visualization processor is further configured to: receive a second digital image of the damage item in the undamaged condition; generate a mixed reality VR image comprising the second digital image overlaid on the VR image; and cause the VR device to render the mixed reality VR image.
“10. The damage scene visualization system of claim 1, wherein the visualization processor is further configured to: receive location information indicative of a location of a witness in the environment; and based at least in part on the location information, cause the VR device to render the 3D visualization of the damage scene from a second viewpoint corresponding to the location of the witness.
“11. The damage scene visualization system of claim 1, wherein the visualization processor is further configured to: cause the VR device to render a selectable list of one or more annotations and a visible indicator indicative of a current position on the selectable list; receive third information, from the VR device, of a confirmation associated with the selectable list; and based on receiving the confirmation, associate a particular annotation located at the current position on the selectable list with the VR image.
“12. The damage scene visualization system of claim 1, wherein the damage amount is associated with an insurance claim filed for the damage item and is determined based on damage illustrated in the 3D visualization.
“13. A computer-implemented visualization method, comprising: receiving, at a visualization processor communicatively coupled to a virtual reality (VR) device, a digital image depicting a damage item located within a damage scene; receiving, at the visualization processor, telematics data associated with the damage item; generating, by the visualization processor, based on the digital image, a VR image of the damage scene, the VR image illustrating the damage item, and a visual representation of a geographic location at which the damage item is disposed; augmenting, by the visualization processor, the VR image based on the telematics data, wherein the augmented VR image provides at least a portion of the telematics data; receiving, from the VR device, first information indicative of a head location; based at least in part on the first information, causing the VR device to render a 3D visualization of the damage scene from a first viewpoint corresponding to the head location, the 3D visualization including the augmented VR image; receiving, by the visualization processor, second information, the second information being indicative of a predetermined value of the damage item in an undamaged condition; and determining, by the visualization processor, a damage amount of the damage item based on the predetermined value, and the 3D visualization of the damage scene from the first viewpoint.
“14. The computer-implemented visualization method of claim 13, further comprising: receiving, by the VR device, an input comprising a request to modify the damage amount of the damage item; determining, by the visualization processor, an adjustment amount based on the 3D visualization; and determining, by the visualization processor, a modified damage amount of the damage item based at least in part on the predetermined value of the damage item, the input, and the adjustment amount.
“15. The computer-implemented visualization method of claim 13, wherein the damage item is a vehicle, and the telematics data comprises at least one of a speed of the vehicle, a deployment status of an airbag of the vehicle, or proximity information of another vehicle.
“16. The computer-implemented visualization method of claim 13, wherein the damage item is a vehicle, and the telematics data is captured by a sensor associated with the vehicle or an additional vehicle disposed proximate to the vehicle.
“17. One or more non-transitory computer-readable media storing instructions that, when executed by a visualization processor operable to visualize images in virtual reality (VR), cause the visualization processor to: receive a digital image depicting a damage item located within a damage scene; receive telematics data associated with the damage item; generate, based on the digital image, a VR image of the damage scene, the VR image illustrating the damage item, and a visual representation of a geographic location at which the damage item is disposed; augment the VR image based on the telematics data, wherein the augmented VR image provides at least a portion of the telematics data; receive first information indicative of a head location; based at least in part on the first information, cause a VR device to render a 3D visualization of the damage scene from a first viewpoint corresponding to the head location, the 3D visualization including the augmented VR image; receive second information, the second information being indicative of a predetermined value of the damage item in an undamaged condition; and determine a damage amount of the damage item based on the predetermined value, and the 3D visualization of the damage scene from the first viewpoint.
“18. The one or more non-transitory computer-readable media of claim 17, wherein the visualization processor is further configured to: generate a simulated video of the damage item and the damage scene, based at least in part on the telematics data; and render, within the VR device, the simulated video.
“19. The one or more non-transitory computer-readable media of claim 17, wherein the visualization processor is further configured to: cause the VR device to render a selectable list of one or more annotations; receive, by the VR device, an input comprising a selection of a particular annotation of the one or more annotations; determine an adjustment amount based on the 3D visualization and the particular annotation; and determine a modified damage amount of the damage item based at least in part on the predetermined value of the damage item and the adjustment amount.”
There are additional claims. Please visit full patent to read further.
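Read together, claims 1, 13, and 17 recite the same pipeline in system, method, and computer-readable-media form: receive a digital image and telematics data, generate and augment a VR image, render a 3D visualization from a viewpoint tied to the viewer's head location (or a witness's location, per claim 10), and derive a damage amount from the item's undamaged value. The Python sketch below is one speculative reading of that pipeline; all names, the telematics fields (drawn from claim 15), and the damage-amount formula are invented for illustration and are not taken from the patent's specification.

```python
from dataclasses import dataclass, field


@dataclass
class Telematics:
    """Vehicle telematics fields of the kind claim 15 enumerates (hypothetical names)."""
    speed_mph: float | None = None
    airbag_deployed: bool | None = None
    nearby_vehicle_distance_m: float | None = None


@dataclass
class VRImage:
    scene_uri: str
    overlays: list[str] = field(default_factory=list)


class VisualizationProcessor:
    """Illustrative stand-in for the claimed visualization processor."""

    def generate_vr_image(self, digital_image_uri: str) -> VRImage:
        # A real system would reconstruct a 3D scene from the 2D capture;
        # this sketch simply wraps the source image.
        return VRImage(scene_uri=digital_image_uri)

    def augment(self, vr_image: VRImage, telematics: Telematics) -> VRImage:
        # Surface at least a portion of the telematics data in the scene,
        # as claim 1 requires of the augmented VR image.
        if telematics.speed_mph is not None:
            vr_image.overlays.append(f"speed: {telematics.speed_mph} mph")
        if telematics.airbag_deployed is not None:
            vr_image.overlays.append(f"airbag deployed: {telematics.airbag_deployed}")
        return vr_image

    def render_from(self, vr_image: VRImage,
                    viewpoint: tuple[float, float, float]) -> str:
        # A real system would hand the scene to the VR device's renderer;
        # this stub just reports the chosen viewpoint.
        return f"rendering {vr_image.scene_uri} from viewpoint {viewpoint}"

    def damage_amount(self, undamaged_value: float, adjustment: float) -> float:
        # Claims 14 and 19 describe modifying the amount by an adjustment
        # derived from the 3D visualization; one plausible formula:
        return max(0.0, undamaged_value - adjustment)


# Hypothetical end-to-end use mirroring the steps of claim 1.
processor = VisualizationProcessor()
vr = processor.generate_vr_image("scene_capture_360.jpg")
vr = processor.augment(vr, Telematics(speed_mph=42.0, airbag_deployed=True))
print(processor.render_from(vr, viewpoint=(0.0, 1.7, 0.0)))       # head location
print(processor.damage_amount(undamaged_value=18_000.0, adjustment=6_500.0))
```

Under claim 10, the same render_from call could be given a witness's recorded location instead of the viewer's head location, letting an adjuster see what that witness could have seen.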
For more information, see this patent: U.S. Patent No. 11,823,337, “Systems and methods for enhancing and developing accident scene visualizations.”
(Our reports deliver fact-based news of research and discoveries from around the world.)