Patent Issued for Systems and methods for 3D image distification (USPTO 11670097): State Farm Mutual Automobile Insurance Company
2023 JUN 27 (NewsRx) -- The patent’s assignee for patent number 11670097 is State Farm Mutual Automobile Insurance Company.
News editors obtained the following quote from the background information supplied by the inventors: “Images and video taken from modern digital camera and video recording devices can be generated and stored in a variety of different formats and types. For example, digital cameras may capture two dimensional (2D) images and store them in a vast array of data formats, including, for example, JPEG (Joint Photographic Experts Group), among other formats.
“These 2D formats are typically based on rasterized image data captured by the camera or recording device where the rasterized image data is typically generated and stored to produce a rectangular grid of pixels, or points of color, viewable via a computer screen, paper, or other display medium. Other 2D formats may also be based on, for example, vector graphics. Vector graphics may use polygons, control points or nodes to produce images on a computer screen, for example, where the points and nodes can define a position on x and y axes of a display screen. The images may be produced by drawing curves or paths from the positions and assigning various attributes, including such values as stroke color, shape, curve, thickness, and fill.
“Other file formats can store 3D data. For example, the PLY (Polygon File Format) format can store data including a description of a 3D object as a list of nominally flat polygons, with related points or coordinates in 3D space, along with a variety of properties, including color and transparency, surface normal, texture coordinates and data confidence values. A PLY file can include a large number of points to describe a 3D object. A complex 3D object can require thousands or tens of thousands of 3D points in a PLY file to describe the object.
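To make the PLY structure described above concrete, the following Python sketch (not from the patent) reads the vertex list of a simple ASCII PLY file. The helper name and the assumption that the first three vertex properties are x, y and z are ours; real PLY files may also be binary and carry the color, normal, and confidence properties mentioned above.

    def read_ascii_ply_vertices(path):
        # Minimal ASCII PLY reader: pulls (x, y, z) from the vertex element only.
        with open(path) as f:
            lines = [line.strip() for line in f]
        header_end = lines.index("end_header")           # header terminates with "end_header"
        vertex_count = 0
        for line in lines[:header_end]:
            if line.startswith("element vertex"):         # e.g. "element vertex 24000"
                vertex_count = int(line.split()[-1])
        vertices = []
        for line in lines[header_end + 1 : header_end + 1 + vertex_count]:
            x, y, z = map(float, line.split()[:3])        # assumes x, y, z are the first properties
            vertices.append((x, y, z))
        return vertices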
“A problem exists with the number of different file formats and image types. Specifically, while the use, functionality, and underlying data structures of the various image and video formats are typically transparent to a common consumer, the differences in the compatibility of the various formats and types create a problem for computer systems or other electronic devices that need to analyze or otherwise coordinate the various differences among the competing formats and types for specific applications. This issue is exacerbated because different manufacturers of the camera and/or video devices use different types or formats of image and video files. This combination of available different file formats and types, together with various manufacturers’ decisions to use differing file formats and types, creates a vast set of disparate image and video files and data that are incompatible and difficult to interoperate for specific applications.”
As a supplement to the background information on this patent, NewsRx correspondents also obtained the inventors’ summary information for this patent: “Accordingly, there is a need for systems and methods to provide compatibility, uniformity, and interoperability among the various image file formats and types. For example, certain embodiments disclosed herein address issues that derive from the complexity and/or size of the data formats themselves. For example, a 3D file, such as a PLY file, can have tens of thousands of 3D points to describe a 3D image. Such a fine level of granularity may not be necessary to analyze the 3D image to determine, for example, items of interest within the 3D image, such as, for example, human features or behaviors identifiable in the 3D image.
“Moreover, certain embodiments herein further address that each 3D file, even files using the same format, e.g., a PLY file, can include sequences of 3D data points in different, unstructured orders, such that the sequencing of 3D points of one 3D file can be different from the sequencing of 3D points of another file. This unstructured nature can create an issue when analyzing 3D images, especially when analyzing a series of 3D images, for example, from frames of a 3D movie, because there is no uniform structure to comparatively analyze the 3D images against.
“For the foregoing reasons, systems and methods are disclosed herein for “Distification” of 3D imagery. As further described herein, Distification can provide an improvement in the accuracy of predictive models, such as the prediction models disclosed herein, over known normalization methods. For example, the use of Distification on 3D image data can improve the predictive accuracy, classification ability, and operation of a predictive model, even when used in known or existing predictive models, neural networks or other predictive systems and methods.
“As described herein, a computing device may provide 3D image Distification by first obtaining a three dimensional (3D) image that includes rules defining a 3D point cloud. The computing device may then generate a two dimensional (2D) image matrix based upon the 3D image. The 2D image matrix may include 2D matrix point(s) mapped to the 3D image. Each 2D matrix point can be associated with a horizontal coordinate and a vertical coordinate. The computing device can generate an output feature vector that includes, for at least one of the 2D matrix points, the horizontal coordinate and the vertical coordinate of the 2D matrix point, and a depth coordinate of a 3D point in the 3D point cloud of the 3D image. The 3D point can have a nearest horizontal and vertical coordinate pair that corresponds to the horizontal and vertical coordinates of the at least one 2D matrix point.
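The mapping described in that paragraph lends itself to a short sketch. The Python function below is one plausible reading; the function name, the use of a regular grid for the 2D image matrix, and the nearest-neighbor search are our assumptions, not language from the patent.

    import numpy as np

    def distify(point_cloud, grid_width, grid_height):
        # Sketch of the Distification mapping described above (names and grid choice are ours).
        # point_cloud: (N, 3) array of (horizontal, vertical, depth) values from the 3D point cloud.
        points = np.asarray(point_cloud, dtype=float)
        # Generate the 2D image matrix: a grid of horizontal/vertical coordinates spanning the cloud.
        xs = np.linspace(points[:, 0].min(), points[:, 0].max(), grid_width)
        ys = np.linspace(points[:, 1].min(), points[:, 1].max(), grid_height)
        features = []
        for v in ys:
            for h in xs:
                # Map the 2D matrix point to the 3D point with the nearest (horizontal, vertical) pair.
                d2 = (points[:, 0] - h) ** 2 + (points[:, 1] - v) ** 2
                nearest = points[np.argmin(d2)]
                # Feature vector entry: the matrix point's coordinates plus that 3D point's depth.
                features.extend([h, v, nearest[2]])
        return np.array(features)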
“In some embodiments, the output feature vector may indicate one or more image feature values associated with the 3D point. The feature values can define one or more items of interest in the 3D image. The items of interest in the 3D image can include, for example, a person’s head, a person’s facial features, a person’s hand, or a person’s leg. In some aspects, the output feature vector is input into a predictive model for making predictions with respect to the items of interest.”
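As a hedged illustration of that last step, the snippet below feeds Distified vectors (using the hypothetical distify function from the previous sketch) into an off-the-shelf scikit-learn classifier. The model choice, the random stand-in point clouds, and the labels are illustrative assumptions only, not part of the patent.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical stand-in data: random point clouds in place of real 3D images, and
    # made-up binary labels in place of an item of interest (e.g., a raised hand).
    rng = np.random.default_rng(0)
    clouds = [rng.random((500, 3)) for _ in range(20)]
    labels = np.array([0, 1] * 10)
    # Each Distified image becomes one fixed-length row, so any standard model can consume it.
    X = np.stack([distify(cloud, 16, 16) for cloud in clouds])
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    print(model.predict(X[:1]))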
The claims supplied by the inventors are:
“1. A computing device configured to Distify 3D imagery, the computing device comprising one or more processors configured to: obtain a three dimensional (3D) image, wherein the 3D image defines a 3D point cloud; generate a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generate an output feature vector as a data structure that includes at least one 2D matrix point of the 2D image matrix, and a 3D point in the 3D point cloud of the 3D image, wherein the 3D point in the 3D point cloud is mapped to a coordinate pair comprised of the horizontal coordinate and the vertical coordinate of the at least one 2D matrix point of the 2D image matrix, and wherein the output feature vector is input into a predictive model.
“2. The computing device of claim 1, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image.
“3. The computing device of claim 2, wherein the one or more items of interest in the 3D image include one or more of the following: a person’s head, a person’s facial features, a person’s hand, or a person’s leg.
“4. The computing device of claim 1, wherein the output feature vector is input into a predictive model for determining a user behavior.
“5. The computing device of claim 1, wherein the output feature vector further includes a distance value generated based on the distance from the at least one 2D matrix point to the 3D point.
“6. The computing device of claim 1, wherein the 3D image and rules defining the 3D point cloud are obtained from one or more respective PLY files or PCD files.
“7. The computing device of claim 1, wherein the 3D image is a frame from a 3D movie.
“8. The computing device of claim 1, wherein the 3D image is obtained from one or more of the following: a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device or a tablet computing device.
“9. The computing device of claim 1, wherein a total quantity of the one or more 2D matrix points mapped to the 3D image is less than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.
“10. The computing device of claim 1, wherein the computing device is further configured to Distify a second 3D image in parallel with the 3D image.
“11. A computer-implemented method for Distification of 3D imagery using one or more processors, the method comprising: obtaining a three dimensional (3D) image, wherein the 3D image defines a 3D point cloud; generating a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generating an output feature vector as a data structure that includes at least one 2D matrix point of the 2D image matrix, and a 3D point in the 3D point cloud of the 3D image, wherein the 3D point in the 3D point cloud is mapped to a coordinate pair comprised of the horizontal coordinate and the vertical coordinate of the at least one 2D matrix point of the 2D image matrix, and wherein the output feature vector is input into a predictive model.
“12. The computer-implemented method of claim 11, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image.
“13. The computer-implemented method of claim 12, wherein the one or more items of interest in the 3D image include one or more of the following: a person’s head, a person’s facial features, a person’s hand, or a person’s leg.
“14. The computer-implemented method of claim 11, wherein the output feature vector is input into a predictive model for determining a user behavior.
“15. The computer-implemented method of claim 11, wherein the output feature vector further includes a distance value generated based on the distance from the at least one 2D matrix point to the 3D point.
“16. The computer-implemented method of claim 11, wherein the 3D image and rules defining the 3D point cloud are obtained from one or more respective PLY files or PCD files.
“17. The computer-implemented method of claim 11, wherein the 3D image is a frame from a 3D movie.
“18. The computer-implemented method of claim 11, wherein the 3D image is obtained from one or more of the following: a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device or a tablet computing device.
“19. The computer-implemented method of claim 11, wherein a total quantity of the one or more 2D matrix points mapped to the 3D image is less than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.
“20. The computer-implemented method of claim 11 further comprising Distifying a second 3D image in parallel with the 3D image.”
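Claims 5 and 15 add a distance value to the output feature vector, and claims 9 and 19 recite that the 2D matrix can contain fewer points than the 3D point cloud. A hedged sketch of such a variant, assuming the same regular-grid and nearest-neighbor reading used in the earlier sketch (the function name and grid construction are ours, not the patent's):

    import numpy as np

    def distify_with_distance(point_cloud, grid_width, grid_height):
        # Variant of the earlier sketch that also appends, per claims 5 and 15, the planar
        # distance from each 2D matrix point to its nearest 3D point.
        points = np.asarray(point_cloud, dtype=float)
        xs = np.linspace(points[:, 0].min(), points[:, 0].max(), grid_width)
        ys = np.linspace(points[:, 1].min(), points[:, 1].max(), grid_height)
        features = []
        for v in ys:
            for h in xs:
                d2 = (points[:, 0] - h) ** 2 + (points[:, 1] - v) ** 2
                i = int(np.argmin(d2))
                features.extend([h, v, points[i, 2], float(np.sqrt(d2[i]))])
        return np.array(features)

With a 16-by-16 grid over a 500-point cloud, for example, the 256 matrix points are fewer than the cloud’s 500 horizontal and vertical coordinate pairs, which is the relationship claims 9 and 19 recite.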
For additional information on this patent, see: Balota, Eric. Systems and methods for 3D image distification. U.S. Patent Number 11670097.
(Our reports deliver fact-based news of research and discoveries from around the world.)