Patent Issued for Event information collection system (USPTO 11659235): United Services Automobile Association
2023 JUN 12 (NewsRx) --
The assignee for this patent, patent number 11659235, is United Services Automobile Association.
Reporters obtained the following quote from the background information supplied by the inventors: “Insurance companies process many vehicle and personal injury claims from accidents and events. For example, there are approximately six million vehicle accidents every year in the United States.
“Video data and other recorded information of events often exists. Dash cameras and sensors that detect telematics data are generally inexpensive and easy to install. In addition, many people have a video-enabled smart phone that they use to film events. Video from events is often shared electronically via social networking applications and video websites.
“The techniques introduced here may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements.”
In addition to obtaining background information on this patent, NewsRx editors also obtained the inventors’ summary information for this patent: “Aspects of the present disclosure are directed to locating, identifying, and retrieving video data of an accident or other event that can be used to determine the circumstances of the event. When an event occurs, such as two vehicles colliding, it may be difficult to determine who was at fault. There may be contributing factors associated with the event, such as rate of speed, weather, obstacles such as traffic, pedestrians and bicyclists, etc. Sometimes a vehicle has a dash camera installed that shows one view of the event. Witnesses can be hard or impossible to locate as they drive by the scene without stopping. Some witnesses will have dash camera video data of the scene, while others may record video data on their phone. If available, this video data could help resolve the circumstances around the event. Technological advantages are realized as multiple views of an event, including information from before and after the event, can be helpful when resolving a dispute or determining the circumstances of the event.
“In some embodiments, users can download an application onto their personal device that tracks their position (e.g., using global positioning system (GPS) data) and uploads their location information to a memory. When an event occurs, a location and time associated with an event can be determined, either by a person, GPS data, meta-data associated with video data, etc. From this, a geographic range of interest and a time duration of interest can automatically be determined. The memory storing many users’ GPS data can be searched to identify a match with the event. For example, a user not involved with the event may have driven by the event and have applicable dash camera video data, or the uninvolved user may have been present in the area and filmed the event, either on purpose or inadvertently, such as in the background of another video. An entity can contact the user to ask if they have knowledge or video of the event. This provides a technological improvement by providing a new interface that can temporally and geographically tag recorded data and query against it to identify both people who may have witnessed an event and helpful video data to resolve disputes involving accidents and other events.
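The witness-matching step described above can be illustrated with a short sketch. This is not code from the patent; the function names, the 500 m radius, and the 10-minute window are all illustrative assumptions, and a real system would query an indexed datastore rather than scan a list.

```python
import math
from dataclasses import dataclass

@dataclass
class TrackPoint:
    """One uploaded GPS sample from a user's device (hypothetical schema)."""
    user_id: str
    lat: float        # degrees
    lon: float        # degrees
    timestamp: float  # Unix seconds

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_potential_witnesses(points, event_lat, event_lon, event_time,
                             radius_km=0.5, window_s=600):
    """Return user IDs whose tracked position falls inside the geographic
    range of interest and the time duration of interest (assumed defaults)."""
    matches = set()
    for p in points:
        if abs(p.timestamp - event_time) <= window_s and \
           haversine_km(p.lat, p.lon, event_lat, event_lon) <= radius_km:
            matches.add(p.user_id)
    return sorted(matches)
```

Users returned by such a query are candidates only; per the disclosure, the entity would then contact them to ask whether they witnessed or recorded the event.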
“In other embodiments, an entity can identify the geographic range of interest and time duration of interest based on a location and time of an event. The entity can then search their own video database as well as others for a match. The entity can further search video websites, networks or aggregators (e.g., YouTube™, etc.), such as by reviewing meta-data associated with video data, to find a match. The entity can submit queries to other businesses that may acquire video data (e.g., fleet-based businesses, smart city infrastructure, etc.) to request video data based on the identified parameters. Furthermore, machine learning can be used to identify whether content in the video data corresponds to the event. This provides a technological improvement as video websites can have millions of posted videos, making it previously impossible for an individual to independently search for and review posted video data to locate a match with the event. However, the data aggregation system described herein, which can use video meta-data and machine learning to search these video websites for event data, introduces this new technical ability to locate video matching an event.
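The multi-source metadata query might be sketched as follows. Everything here is an assumption for illustration: the metadata field names, the bounding-box width (0.005 degrees is roughly 500 m of latitude), and the idea that a source can be iterated as a list of records rather than accessed through a site-specific API.

```python
def build_query_params(event_lat, event_lon, event_time,
                       deg=0.005, window_s=600):
    """Derive query parameters (a bounding box and a time window) from the
    event's location and time. Defaults are illustrative, not from the patent."""
    return {"lat_min": event_lat - deg, "lat_max": event_lat + deg,
            "lon_min": event_lon - deg, "lon_max": event_lon + deg,
            "t_min": event_time - window_s, "t_max": event_time + window_s}

def query_sources(sources, params, needed=3):
    """Query each video storage source (e.g., a dash-camera archive, then a
    public aggregator) for metadata matches, stopping once enough candidate
    files have been collected, per the claimed early termination."""
    hits = []
    for source in sources:
        for meta in source:
            if (params["t_min"] <= meta["time"] <= params["t_max"]
                    and params["lat_min"] <= meta["lat"] <= params["lat_max"]
                    and params["lon_min"] <= meta["lon"] <= params["lon_max"]):
                hits.append(meta["id"])
                if len(hits) >= needed:
                    return hits  # enough files to determine the circumstance
    return hits
```

A downstream machine-learning pass, as the summary notes, would then check whether the content of each candidate video actually depicts the event.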
“In yet other embodiments, the application on the personal device can upload video data and other information associated with the event to the entity. The user may or may not be involved in the event, and the user may or may not be a member of the entity organization. The user can be offered an incentive to upload the video data and other data, improving the possibility that the entity will receive data that contributes to the resolution of, or clarifies circumstances related to, the event. A technological improvement is realized as automated video and other data submissions matching event characteristics can be forwarded to the entity in a timely manner (even in real-time), providing information that can eliminate the need to access alternate data sources (e.g., police reports, witness accounts, etc.).
“Several implementations are discussed below in more detail in reference to the figures. FIG. 1 is a block diagram illustrating an overview of devices on which some implementations of the disclosed technology can operate. The devices can comprise hardware components of a device 100 that can receive video data, record video data, and/or provide information associated with an accident or event to another entity in various formats (e.g., text, audio, video, etc.). Device 100 can include one or more input devices 120 that provide input to the Processor(s) 110 (e.g. CPU(s), GPU(s), HPU(s), etc.), notifying it of actions. The actions can be mediated by a hardware controller that interprets the signals received from the input device 120 and communicates the information to the processors 110 using a communication protocol. Input devices 120 include, for example, a mouse, a keyboard, a touchscreen, an infrared sensor, a touchpad, a wearable input device, a camera- or image-based input device, a microphone, or other user input devices.”
The claims supplied by the inventors are:
“1. A method for requesting video data associated with an event, the method comprising: determining an event location and an event time associated with the event; identifying A) a geographic range of interest associated with the event location and B) a time duration of interest associated with the event time; determining video files to identify, wherein the video files comprise video data associated with the event, wherein the video files are located in at least two video storage sources; generating query parameters for querying meta-data associated with the video data stored in the at least two video storage sources; automatically querying, using the query parameters, the at least two video storage sources to obtain the video files of the video data that correspond to the geographic range of interest and the time duration of interest, wherein a first video storage source of the at least two video storage sources includes at least video data acquired by cameras associated with vehicles, wherein a second video storage source of the at least two video storage sources is a publicly available video aggregator that provides a point of access for publicly searching metadata of a plurality of videos posted by a plurality of entities; and in response to obtaining a quantity of video files for determining a circumstance of the event, terminating the querying.
“2. The method of claim 1, further comprising: receiving, from at least one electronic device, global positioning system (GPS) tracking data associated with at least one user; and storing the tracking data, wherein the tracking data includes at least one of GPS data and associated time data, wherein the at least one electronic device is one of: a personal electronic device or a dash camera installed in a vehicle.
“3. The method of claim 1, further comprising: determining that an initial query did not provide the video data depicting the event and in response: identifying the geographic range of interest by increasing a previously determined geographic range of interest or identifying the time duration of interest by increasing a previously determined time duration of interest.
“4. The method of claim 1, further comprising: analyzing the video files using machine learning to identify whether content in the video data corresponds to the event, wherein the circumstance of the event includes at least one of fault of accident, a rate of speed, weather, or presence of an obstacle.
“5. The method of claim 1, wherein the at least two video storage sources further include a storage storing video data recorded by a plurality of fleet vehicles that include GPS tracking and dash cameras.
“6. The method of claim 1, wherein the event location and the event time are determined based on telematics from a vehicle associated with the event.
“7. The method of claim 1, wherein the event location and the event time were specified, in a user interface, by at least one user.
“8. A non-transitory computer-readable medium storing instructions that, when executed by a computing system, cause the computing system to perform operations for requesting video data associated with an event, the operations comprising: determining an event location and an event time associated with the event; identifying A) a geographic range of interest associated with the event location and B) a time duration of interest associated with the event time; determining video files to identify, wherein the video files comprise video data associated with the event, wherein the video files are located in at least two video storage sources; generating query parameters for querying meta-data associated with the video data stored in the at least two video storage sources; automatically querying, using the query parameters, the at least two video storage sources to obtain the video files of the video data that correspond to the geographic range of interest and the time duration of interest, wherein a first video storage source of the at least two video storage sources includes at least video data acquired by cameras associated with vehicles, wherein a second video storage source of the at least two video storage sources is a publicly available video aggregator that provides a point of access for publicly searching metadata of a plurality of videos posted by a plurality of entities; and in response to obtaining a quantity of video files for determining a circumstance of the event, terminating the querying.
“9. The non-transitory computer-readable medium of claim 8, wherein the operations further comprise: receiving, from at least one electronic device, global positioning system (GPS) tracking data associated with at least one user; and storing the tracking data, wherein the tracking data includes at least one of GPS data and associated time data, wherein the at least one electronic device is one of: a personal electronic device or a dash camera installed in a vehicle.
“10. The non-transitory computer-readable medium of claim 8, wherein the operations further comprise: determining that an initial query did not provide the video data depicting the event and in response: identifying the geographic range of interest by increasing a previously determined geographic range of interest or identifying the time duration of interest by increasing a previously determined time duration of interest.
“11. The non-transitory computer-readable medium of claim 8, wherein the operations further comprise: analyzing the video files using machine learning to identify whether content in the video data corresponds to the event, wherein the circumstance of the event includes at least one of fault of accident, a rate of speed, weather, or presence of an obstacle.
“12. The non-transitory computer-readable medium of claim 8, wherein the at least two video storage sources further include a storage storing video data recorded by a plurality of fleet vehicles that include GPS tracking and dash cameras.
“13. The non-transitory computer-readable medium of claim 8, wherein the event location and the event time are determined based on telematics from a vehicle associated with the event.
“14. The non-transitory computer-readable medium of claim 8, wherein the event location and the event time were specified, in a user interface, by at least one user.
“15. A system comprising: one or more processors; and one or more memories storing instructions that, when executed by the one or more processors, cause the system to perform a process for requesting video data associated with an event, the process comprising: determining an event location and an event time associated with the event; identifying A) a geographic range of interest associated with the event location and B) a time duration of interest associated with the event time; determining video files to identify, wherein the video files comprise video data associated with the event, wherein the video files are located in at least two video storage sources; generating query parameters for querying meta-data associated with the video data stored in the at least two video storage sources; automatically querying, using the query parameters, the at least two video storage sources to obtain the video files of the video data that correspond to the geographic range of interest and the time duration of interest, wherein a first video storage source of the at least two video storage sources includes at least video data acquired by cameras associated with vehicles, wherein a second video storage source of the at least two video storage sources is a publicly available video aggregator that provides a point of access for publicly searching metadata of a plurality of videos posted by a plurality of entities; and in response to obtaining a quantity of video files for determining a circumstance of the event, terminating the querying.
“16. The system according to claim 15, wherein the process further comprises: receiving, from at least one electronic device, global positioning system (GPS) tracking data associated with at least one user; and storing the tracking data, wherein the tracking data includes at least one of GPS data and associated time data, wherein the at least one electronic device is one of: a personal electronic device or a dash camera installed in a vehicle.
“17. The system according to claim 15, wherein the process further comprises: determining that an initial query did not provide the video data depicting the event and in response: identifying the geographic range of interest by increasing a previously determined geographic range of interest or identifying the time duration of interest by increasing a previously determined time duration of interest.
“18. The system according to claim 15, wherein the process further comprises: analyzing the video files using machine learning to identify whether content in the video data corresponds to the event, wherein the circumstance of the event includes at least one of fault of accident, a rate of speed, weather, or presence of an obstacle.
“19. The system according to claim 15, wherein the at least two video storage sources further include a storage storing video data recorded by a plurality of fleet vehicles that include GPS tracking and dash cameras.
“20. The system according to claim 15, wherein the event location and the event time are determined based on telematics from a vehicle associated with the event.”
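Claims 3, 10, and 17 describe widening the geographic range and time duration of interest when an initial query returns nothing. One hedged sketch of that retry loop, with all defaults and names assumed for illustration:

```python
def search_with_backoff(query, radius_km=0.25, window_s=300,
                        max_attempts=4, grow=2.0):
    """Run query(radius_km, window_s); if it yields no video files, increase
    the previously determined geographic range and time duration and retry.
    The doubling factor and attempt cap are illustrative assumptions."""
    for _ in range(max_attempts):
        results = query(radius_km, window_s)
        if results:
            return results, radius_km, window_s
        radius_km *= grow
        window_s *= grow
    return [], radius_km, window_s
```

Here `query` stands in for whatever source-specific search the claimed method performs; the loop only captures the claimed range-expansion behavior.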
For more information, see this patent: Dahlman, Matthew C. Event information collection system.
(Our reports deliver fact-based news of research and discoveries from around the world.)