Patent Issued for Dynamic driving metric output generation using computer vision methods (USPTO 11430228): Allstate Insurance Company
2022 SEP 19 (NewsRx) -- The assignee for this patent, patent number 11430228, is Allstate Insurance Company (Northbrook, Illinois, United States).
Reporters obtained the following quote from the background information supplied by the inventors: “Aspects of the disclosure relate to enhanced processing systems for providing dynamic driving metric outputs using improved computer vision methods. In particular, one or more aspects of the disclosure relate to dynamic driving metric output platforms that utilize video footage to compute driving metrics.
“Many organizations and individuals rely on vehicle metrics such as speed and acceleration to perform driving and/or accident evaluations. In many instances, however, a vehicle may be equipped with an array of sensors, which include cameras and sensors such as LIDAR and radar, and thus all the speeds and distances are obtained from these sensors. This situation may present limitations to those without access to a fully equipped vehicle with cameras, LIDAR and radar that can capture the important distance, speed and acceleration metrics. There remains an ever-present need to develop alternative solutions to calculate vehicle metrics.”
In addition to obtaining background information on this patent, NewsRx editors also obtained the inventors’ summary information for this patent: “Aspects of the disclosure provide effective, efficient, scalable, and convenient technical solutions that address and overcome the technical problems associated with determining driving metrics from video captured by a vehicle camera by implementing advanced computer vision methods and dynamic driving metric output generation. In accordance with one or more arrangements discussed herein, a computing platform having at least one processor, a communication interface, and memory may receive video footage from a vehicle camera. The computing platform may determine that a reference marker in the video footage has reached a beginning of a road marking by determining that a first brightness transition in the video footage exceeds a predetermined threshold. In response to determining that the reference marker has reached the beginning of the road marking, the computing platform may insert, into the video footage, a first time stamp indicating a time at which the reference marker reached the beginning of the road marking. The computing platform may determine that the reference marker has reached an end of the road marking by determining that a second brightness transition in the video footage exceeds the predetermined threshold. In response to determining that the reference marker has reached the end of the road marking, the computing platform may insert, into the video footage, a second time stamp indicating a time at which the reference marker reached the end of the road marking. Based on the first time stamp and the second time stamp, the computing platform may determine an amount of time during which the reference marker covered the road marking. Based on a known length of the road marking and the amount of time during which the reference marker covered the road marking, the computing platform may determine a vehicle speed. Based on the vehicle speed, the computing platform may generate driving metric output information. The computing platform may generate one or more commands directing an accident analysis platform to generate and cause display of a driving metric interface based on the driving metric output information. The computing platform may establish a first wireless data connection with the accident analysis platform. While the first wireless data connection is established, the computing platform may send, to the accident analysis platform, the driving metric output information and the one or more commands directing the accident analysis platform to generate and cause display of the driving metric interface based on the driving metric output information. In some arrangements, the computing platform may establish a second wireless data connection with a vehicle camera, wherein the video footage is received while the second wireless data connection is established.
“In some arrangements, the computing platform may determine that the video footage contains a road marking associated with a standard length.
“In some arrangements, the computing platform may insert, into the video footage, a reference marker, wherein the reference marker corresponds to a fixed position in the video footage. In some arrangements, the computing platform may generate one or more commands directing a vehicle attribute database to provide vehicle parameters for a vehicle corresponding to the vehicle camera. The computing platform may establish a third wireless data connection with the vehicle attribute database. While the third wireless data connection is established, the computing platform may send the one or more commands directing the vehicle attribute database to provide the vehicle parameters.
“In some arrangements, the computing platform may receive a vehicle parameter output corresponding to the vehicle parameters. Based on the vehicle parameters and the distance between the vehicle camera and an object in the video footage, the computing platform may determine a distance between the vehicle and the object in the video footage.
“In some arrangements, the computing platform may update the driving metric output information based on the distance between the vehicle and the object in the video footage. The computing platform may generate one or more commands directing the accident analysis platform to generate and cause display of an updated driving metric interface based on the updated driving metric output information. While the first wireless data connection is established, the computing platform may send, to the accident analysis platform, the updated driving metric output information and the one or more commands directing the accident analysis platform to generate and cause display of the updated driving metric interface based on the updated driving metric output information.”
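The speed calculation described in the summary above amounts to timing how long a fixed reference point in the image stays over a road marking of known length and then dividing that length by the elapsed time. The Python sketch below illustrates the idea under stated assumptions: the OpenCV frame loop, the reference-pixel coordinates, the brightness-jump threshold, and the three-metre marking length are illustrative choices, not details taken from the patent.

```python
# Minimal sketch, not the patented implementation: time how long a fixed
# reference pixel stays over a road marking of known length.
import cv2

MARKING_LENGTH_M = 3.0        # assumed standard road-marking length, metres
BRIGHTNESS_THRESHOLD = 60.0   # assumed grey-level jump between asphalt and paint
REF_ROW, REF_COL = 600, 640   # hypothetical fixed reference-marker pixel

def estimate_speed(video_path: str) -> float | None:
    """Estimate vehicle speed (m/s) from how long a fixed reference pixel
    stays over a road marking of known length."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    prev_brightness = None
    t_begin = t_end = None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = float(gray[REF_ROW, REF_COL])
        if prev_brightness is not None:
            delta = brightness - prev_brightness
            # Dark-to-bright transition: the reference marker has reached
            # the beginning of the road marking; record a first time stamp.
            if delta > BRIGHTNESS_THRESHOLD and t_begin is None:
                t_begin = frame_idx / fps
            # Bright-to-dark transition: the reference marker has reached
            # the end of the road marking; record a second time stamp.
            elif delta < -BRIGHTNESS_THRESHOLD and t_begin is not None:
                t_end = frame_idx / fps
                break
        prev_brightness = brightness
        frame_idx += 1
    cap.release()
    if t_begin is None or t_end is None or t_end <= t_begin:
        return None
    # Speed = known marking length / time the reference marker covered the marking.
    return MARKING_LENGTH_M / (t_end - t_begin)
```

For instance, a 3 m marking covered for 0.3 s corresponds to 10 m/s, or 36 km/h.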
The claims supplied by the inventors are:
“1. A computing platform, comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: receive video footage from a vehicle camera, wherein the vehicle camera is located inside a vehicle, and wherein the video footage includes a point of interest that is a longitudinal distance (D) in front of the vehicle camera and a horizontal distance (L) to a side of the vehicle camera; determine, a height (H) of the vehicle camera corresponding to a height of the vehicle camera above a ground plane, a focal length (fl) for the vehicle camera corresponding to a distance between a center of projection and an image plane for the vehicle camera, and a vertical distance (d) between a middle point of the image plane and an intersection point of the image plane and a line connecting the center of projection and the point of interest on the ground plane; compute D based on H, fl, and d; determine a horizontal distance (w) between the middle point of the image plane and the intersection point of the image plane and the line connecting the center of projection and the point of interest on the ground plane; compute L based on w, D, and fl; compute, based on D and L, a distance between the vehicle and the point of interest; generate, based on the distance between the vehicle and the point of interest, driving metric output information and one or more commands directing an accident analysis platform to cause display of a driving metric output interface based on the driving metric output information; and send, to the accident analysis platform, the driving metric output information and the one or more commands directing the accident analysis platform to cause display of the driving metric output interface based on the driving metric output information, wherein sending the driving metric output information and the one or more commands directing the accident analysis platform to cause display of the driving metric output interface based on the driving metric output information causes the accident analysis platform to cause display of the driving metric output interface based on the driving metric output information.
“2. The computing platform of claim 1, wherein determining H comprises determining H based on information received from a vehicle attribute database.
“3. The computing platform of claim 1, wherein computing D comprises applying the following formula: D = (fl * H) / d.
“4. The computing platform of claim 1, wherein computing L comprises applying the following formula: L = (w * D) / fl.
“5. The computing platform of claim 1, wherein computing the distance between the vehicle and the point of interest comprises computing √(D² + L²).
“6. The computing platform of claim 1, wherein: the vehicle is in a first lane, and the point of interest is a second vehicle in a second lane.
“7. The computing platform of claim 1, wherein the point of interest is located a distance (Z) above the ground plane.
“8. The computing platform of claim 7, wherein the memory stores additional computer-readable instructions that, when executed by the at least one processor, further cause the computing platform to: compute Z using the formula: H − Z = (r * D) / fl, where r is a distance between a horizon line and an intersection between the image plane and a line connecting the center of projection to the point of interest.
“9. The computing platform of claim 8, wherein computing the distance between the vehicle and the point of interest comprises computing √(D² + L² + (H − Z)²).
“10. The computing platform of claim 1, wherein the memory stores additional computer-readable instructions that, when executed by the at least one processor, further cause the computing platform to: compute, using L, a lateral speed for the vehicle.
“11. A method comprising: at a computing platform comprising at least one processor, a communication interface, and memory: receiving video footage from a vehicle camera, wherein the vehicle camera is located inside a vehicle, and wherein the video footage includes a point of interest that is a longitudinal distance (D) in front of the vehicle camera and a horizontal distance (L) to a side of the vehicle camera; determining a height (H) of the vehicle camera corresponding to a height of the vehicle camera above a ground plane, a focal length (fl) for the vehicle camera corresponding to a distance between a center of projection and an image plane for the vehicle camera, and a vertical distance (d) between a middle point of the image plane and an intersection point of the image plane and a line connecting the center of projection and the point of interest on the ground plane; computing D using the formula: D = (fl * H) / d; determining a horizontal distance (w) between the middle point of the image plane and the intersection point of the image plane and the line connecting the center of projection and the point of interest on the ground plane; computing L using the formula: L = (w * D) / fl; computing a distance between the vehicle and the point of interest by computing √(D² + L²); generating, based on the distance between the vehicle and the point of interest, driving metric output information and one or more commands directing an accident analysis platform to cause display of a driving metric output interface based on the driving metric output information; and sending, to the accident analysis platform, the driving metric output information and the one or more commands directing the accident analysis platform to cause display of the driving metric output interface based on the driving metric output information, wherein sending the driving metric output information and the one or more commands directing the accident analysis platform to cause display of the driving metric output interface based on the driving metric output information causes the accident analysis platform to cause display of the driving metric output interface based on the driving metric output information.
“12. The method of claim 11, wherein determining H comprises determining H based on information received from a vehicle attribute database.
“13. The method of claim 11, wherein: the vehicle is in a first lane, and the point of interest is a second vehicle in a second lane.
“14. The method of claim 11, wherein the point of interest is located a distance (Z) above the ground plane.
“15. The method of claim 14, further comprising: computing Z using the formula: H − Z = (r * D) / fl, where r is a distance between a horizon line and an intersection between the image plane and a line connecting the center of projection to the point of interest.
“16. The method of claim 15, wherein computing the distance between the vehicle and the point of interest comprises computing √{square root over (D2+L2+(H-Z)2)}.
“17. The method of claim 11, further comprising: computing, using L, a lateral speed for the vehicle.”
There are additional claims. Please visit full patent to read further.
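For readers who want to see how the distance relations recited in claims 3 through 5 and 8 through 9 fit together, the sketch below restates them numerically in Python. The variable names mirror the claims (D, L, H, Z, fl, d, w, r); the unit choices, function names, and example values are illustrative assumptions rather than details from the patent.

```python
# Minimal sketch of the pinhole-camera relations recited in the claims.
# Assumed units: fl, d, w and r in pixels; H and Z (and hence D and L) in metres.
import math

def longitudinal_distance(fl: float, H: float, d: float) -> float:
    """Claim 3: D = (fl * H) / d."""
    return fl * H / d

def lateral_distance(w: float, D: float, fl: float) -> float:
    """Claim 4: L = (w * D) / fl."""
    return w * D / fl

def height_above_ground(H: float, r: float, D: float, fl: float) -> float:
    """Claim 8, rearranged for Z: Z = H - (r * D) / fl."""
    return H - (r * D) / fl

def distance_to_point(D: float, L: float, H: float | None = None,
                      Z: float | None = None) -> float:
    """Claim 5 (point on the ground plane) or claim 9 (elevated point)."""
    if H is None or Z is None:
        return math.hypot(D, L)
    return math.sqrt(D**2 + L**2 + (H - Z)**2)

# Hypothetical example: camera mounted 1.4 m above the road with a focal
# length of 1,000 px; the point of interest projects 70 px below and
# 120 px to the right of the image centre.
fl, H, d, w = 1000.0, 1.4, 70.0, 120.0
D = longitudinal_distance(fl, H, d)        # 20.0 m ahead
L = lateral_distance(w, D, fl)             # 2.4 m to the side
print(round(distance_to_point(D, L), 2))   # ~20.14 m
```

Claims 10 and 17 note that a lateral speed can then be derived from L, for example by differencing L across time-stamped frames.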
For more information, see this patent: Aragon, Juan Carlos. Dynamic driving metric output generation using computer vision methods. U.S. Patent Number 11430228.
(Our reports deliver fact-based news of research and discoveries from around the world.)