Patent Issued for Event information collection system (USPTO 11336955): United Services Automobile Association
2022 JUN 08 (NewsRx) --
The patent’s assignee for patent number 11336955 is United Services Automobile Association.
News editors obtained the following quote from the background information supplied by the inventors: “Insurance companies process many vehicle and personal injury claims from accidents and events. For example, there are approximately six million vehicle accidents every year in the United States.
“Video data and other recorded information of events often exists. Dash cameras and sensors that detect telematics data are generally inexpensive and easy to install. In addition, many people have a video-enabled smart phone that they use to film events. Video from events is often shared electronically via social networking applications and video websites.
“The techniques introduced here may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements.”
As a supplement to the background information on this patent, NewsRx correspondents also obtained the inventors’ summary information for this patent: “Aspects of the present disclosure are directed to locating, identifying, and retrieving video data of an accident or other event that can be used to determine the circumstances of the event. When an event occurs, such as two vehicles colliding, it may be difficult to determine who was at fault. There may be contributing factors associated with the event, such as rate of speed, weather, obstacles such as traffic, pedestrians and bicyclists, etc. Sometimes a vehicle has a dash camera installed that shows one view of the event. Witnesses can be hard or impossible to locate as they drive by the scene without stopping. Some witnesses will have dash camera video data of the scene, while others may record video data on their phone. If available, this video data could help resolve the circumstances around the event. Technological advantages are realized as multiple views of an event, including information from before and after the event, can be helpful when resolving a dispute or determining the circumstances of the event.
“In some embodiments, users can download an application onto their personal device that tracks their position (e.g., using global positioning system (GPS) data) and uploads their location information to a memory. When an event occurs, a location and time associated with an event can be determined, either by a person, GPS data, meta-data associated with video data, etc. From this, a geographic range of interest and a time duration of interest can automatically be determined. The memory storing many users’ GPS data can be searched to identify a match with the event. For example, a user not involved with the event may have driven by the event and have applicable dash camera video data, or the uninvolved user may have been present in the area and filmed the event, either on purpose or inadvertently, such as in the background of another video. An entity can contact the user to ask if they have knowledge or video of the event. This provides a technological improvement by providing a new interface that can temporally and geographically tag recorded data and query against it to identify both people who may have witnessed an event and helpful video data to resolve disputes involving accidents and other events.
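The matching step described above, comparing stored user GPS points against an event's geographic range of interest and time duration of interest, can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation; the function names, the 0.5 km radius, and the 10-minute window are all hypothetical defaults chosen for the example.

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_potential_witnesses(track_points, event_lat, event_lon, event_time,
                             radius_km=0.5, window=timedelta(minutes=10)):
    """Return IDs of users whose logged GPS points fall inside the
    geographic range of interest and the time duration of interest.
    track_points: iterable of (user_id, lat, lon, timestamp)."""
    matches = set()
    for user_id, lat, lon, ts in track_points:
        if abs(ts - event_time) <= window and \
           haversine_km(lat, lon, event_lat, event_lon) <= radius_km:
            matches.add(user_id)
    return sorted(matches)
```

In practice the identified users would then be contacted, as the summary notes, to ask whether they have knowledge or video of the event.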
“In other embodiments, an entity can identify the geographic range of interest and time duration of interest based on a location and time of an event. The entity can then search their own video database as well as others for a match. The entity can further search video websites, networks or aggregators (e.g., YouTube™, etc.), such as by reviewing meta-data associated with video data, to find a match. The entity can submit queries to other businesses that may acquire video data (e.g., fleet-based businesses, smart city infrastructure, etc.) to request video data based on the identified parameters. Furthermore, machine learning can be used to identify whether content in the video data corresponds to the event. This provides a technological improvement as video websites can have millions of posted videos, making it previously impossible for an individual to independently search for and review posted video data to locate a match with the event. However, the data aggregation system described herein, which can use video meta-data and machine learning to search these video websites for event data, introduces this new technical ability to locate video matching an event.
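The metadata search described here, turning an event's location and time into query parameters and checking posted videos' GPS and time meta-data against them, could look roughly like the sketch below. The function names and the degree-based proximity check are assumptions for illustration; real video aggregators expose their own query APIs.

```python
from datetime import datetime, timedelta

def build_query_params(event_lat, event_lon, event_time,
                       radius_km=1.0, window_minutes=15):
    """Translate an event into query parameters: a geographic
    range of interest plus a time duration of interest."""
    return {
        "lat": event_lat,
        "lon": event_lon,
        "radius_km": radius_km,
        "start": event_time - timedelta(minutes=window_minutes),
        "end": event_time + timedelta(minutes=window_minutes),
    }

def matches_query(video_meta, params):
    """Check one video's meta-data against the query parameters.
    Uses a crude ~111 km-per-degree proximity test for brevity."""
    in_window = params["start"] <= video_meta["recorded_at"] <= params["end"]
    close = (abs(video_meta["lat"] - params["lat"]) * 111.0 <= params["radius_km"]
             and abs(video_meta["lon"] - params["lon"]) * 111.0 <= params["radius_km"])
    return in_window and close
```

Videos passing this coarse metadata filter would then be handed to the machine-learning step the summary mentions, to decide whether their content actually depicts the event.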
“In yet other embodiments, the application on the personal device can upload video data and other information associated with the event to the entity. The user may or may not be involved in the event, and the user may or may not be a member of the entity organization. The user can be offered an incentive to upload the video data and other data, improving the possibility that the entity will receive data that contributes to the resolution of, or clarifies circumstances related to, the event. A technological improvement is realized as automated video and other data submissions matching event characteristics can be forwarded to the entity in a timely manner (even in real-time), providing information that can eliminate the need to access alternate data sources (e.g., police reports, witness accounts, etc.).
“Several implementations are discussed below in more detail in reference to the figures. FIG. 1 is a block diagram illustrating an overview of devices on which some implementations of the disclosed technology can operate. The devices can comprise hardware components of a device 100 that can receive video data, record video data, and/or provide information associated with an accident or event to another entity in various formats (e.g., text, audio, video, etc.). Device 100 can include one or more input devices 120 that provide input to the Processor(s) 110 (e.g. CPU(s), GPU(s), HPU(s), etc.), notifying it of actions. The actions can be mediated by a hardware controller that interprets the signals received from the input device 120 and communicates the information to the processors 110 using a communication protocol. Input devices 120 include, for example, a mouse, a keyboard, a touchscreen, an infrared sensor, a touchpad, a wearable input device, a camera- or image-based input device, a microphone, or other user input devices.”
The claims supplied by the inventors are:
“1. A non-transitory computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform a process comprising: determining an event location and an event time associated with an event, wherein the event time includes at least one of: a time or a time range, wherein the event comprises at least one of: damage to property, damage to a vehicle, or injury to one or more persons; identifying A) a geographic range of interest associated with the event location and B) a time duration of interest associated with the event time; determining a quantity of video files to identify, wherein the video files comprise video data associated with the event, the video files located in at least two video storage sources; generating first query parameters for querying meta-data associated with the video data stored in a first video storage source and second query parameters for querying the meta-data associated with the video data stored in a second video storage source, wherein the first and second query parameters comprise GPS data and time data associated with the event; automatically querying, using the first and second query parameters, the first and second video storage sources to obtain the video files of video data that correspond to the geographic range of interest and the time duration of interest, wherein the first video storage source includes at least video data acquired by cameras associated with vehicles, wherein the second video storage source is a publicly available video aggregator that provides a point of access for publicly searching the metadata of a plurality of videos posted by a plurality of entities; in response to obtaining the quantity of the video files, terminating the querying; and automatically analyzing the video data, using a trained machine learning model, to determine whether content in the video data corresponds to the event.
“2. The non-transitory computer-readable storage medium of claim 1, wherein the video storage source further includes at least one of a video website or a video network.
“3. The non-transitory computer-readable storage medium of claim 1, wherein the video storage source further includes a storage storing video data recorded by a plurality of fleet vehicles that include GPS tracking and dash cameras.
“4. The non-transitory computer-readable storage medium of claim 1, wherein the obtaining the video files further comprises obtaining a copy of the video data from the video storage source.
“5. The non-transitory computer-readable storage medium of claim 1, the process further comprising: determining that an initial query did not provide video data depicting the event and in response: performing the identifying the geographic range by increasing a previously determined geographic range of interest or identifying the time duration of interest by increasing a previously determined time duration of interest.
“6. A method for requesting video data associated with an event, the method comprising: determining an event location and an event time associated with an event, wherein the event comprises at least one of: damage to property, damage to a vehicle, or injury to one or more persons; determining, by a processor, A) a geographic range of interest associated with the event location and B) a time duration of interest associated with the event time; automatically identifying, by the processor, one or more users who have recorded information associated with a location within the geographic range of interest and associated with a time within the time duration of interest, wherein the location and the time are stored in a memory; transmitting, by the processor, to the one or more identified users, a request for at least one of information or video data associated with the geographic range of interest and the time duration of interest; receiving, in response to the request, the recorded information; automatically querying GPS data and time data stored in meta-data from at least two publicly searchable and publicly accessible video storage sources to obtain publicly available video data that corresponds to at least a portion of the geographic range of interest and at least a portion of the time duration of interest; downloading or copying the publicly available video data that corresponds to the at least a portion of the geographic range of interest and at least a portion of the time duration of interest; automatically analyzing the recorded information and obtained publicly available video data to identify that the recorded information or the publicly available video data depicts at least part of the event; and in response to receiving a sufficient amount of the recorded information and the publicly available video data, terminating the querying.
“7. The method of claim 6, wherein the event location and event time are determined based on telematics from a vehicle associated with the event.
“8. The method of claim 6, wherein the event location and the event time were specified, in a user interface, by one of the one or more users.
“9. The method of claim 6, further comprising: receiving, from one or more electronic devices, global positioning system (GPS) tracking data associated with the one or more users; and storing, with the processor, the tracking data in the memory, the tracking data including at least one of GPS data and associated time data; wherein the electronic device is one of: a personal electronic device or a dash camera installed in a vehicle; and wherein the automatic identification of the one or more users is based on the GPS tracking data.
“10. The method of claim 6, wherein the transmitting the request further comprises at least one of: transmitting the request to an application on a user device, sending a text, sending an email, or calling a user device.
“11. The method of claim 6, further comprising, in response to the automatic identifying of the one or more users, offering, by the processor to at least one of the one or more users, an incentive to electronically transmit the recorded information to an identified address over a network.
“12. The method of claim 6, wherein the automatic analyzing of the recorded information is performed using machine learning to identify whether content in the video data corresponds to the event.
“13. A computing system comprising: one or more processors; and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to perform a process comprising: causing a selectable icon to be displayed by the computing system, wherein the selectable icon initiates an event reporting process; receiving an identification of the icon being selected; in response to the icon being selected, causing an interactive display, displayed by the computing system, to include at least one of: at least one fillable form or at least a second selectable icon; receiving, through the interactive display, information about the event, wherein the information includes location and time of the event; receiving, through the interactive display, a web address of posted video data associated with the event, wherein the web address links to a publicly searchable and publicly accessible video storage source; wherein the publicly searchable and publicly accessible video storage source is a video aggregator that provides a point of access for publicly searching GPS data and time data associated with a plurality of videos posted by a plurality of entities; and transmitting the web address of the video data to a second computing system.
“14. The computing system of claim 13, further comprising an application, created by an entity that received the transmitted video data and stored in the one or more memories, wherein the application, when executed by the one or more processors, causes the selectable icon and the interactive display to be displayed, and causes the transmitting of the video data.
“15. The computing system of claim 13, further comprising a camera, wherein the video data was acquired with the camera.
“16. The computing system of claim 13, wherein the video data was acquired with a dash camera in a vehicle, wherein the process further comprises receiving the video data from the dash camera.
“17. The computing system of claim 13, wherein the video data further includes meta-data comprising GPS data and time data.
“18. The computing system of claim 13, wherein the information about the event received through the interactive display further comprises at least one of: a text statement, an audio statement, a video statement, or map data associated with the event.”
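Two mechanical steps recur in the claims above: claim 1's "in response to obtaining the quantity of the video files, terminating the querying," and claim 5's widening of the geographic range or time duration when an initial query comes up empty. A minimal sketch of both, with hypothetical names and with each video source modeled as a callable returning candidate videos, might read:

```python
def collect_video_files(sources, query, quantity):
    """Query each video storage source in turn and stop as soon as
    the requested quantity of video files has been obtained
    (claim 1's 'terminating the querying' step)."""
    collected = []
    for source in sources:
        for video in source(query):
            collected.append(video)
            if len(collected) >= quantity:
                return collected  # quantity reached; terminate querying
    return collected

def widen_search(radius_km, window_minutes, factor=2.0):
    """Claim 5's fallback: if the initial query found no video
    depicting the event, increase the previously determined
    geographic range of interest and time duration of interest."""
    return radius_km * factor, window_minutes * factor
```

The widened parameters would simply be fed back into a fresh round of querying until matching video is found or the search is abandoned.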
For additional information on this patent, see: Dahlman, Matthew C. Event information collection system.
(Our reports deliver fact-based news of research and discoveries from around the world.)