Patent Issued for Systems And Methods For 3D Image Distification (USPTO 10,504,003)

Insurance Daily News

2019 DEC 25 (NewsRx) -- By a News Reporter-Staff News Editor at Insurance Daily News -- A patent by the inventors Flowers, Elizabeth (Bloomington, IL); Dua, Puneit (Bloomington, IL); Balota, Eric (Bloomington, IL); Phillips, Shanna L. (Bloomington, IL), filed on May 16, 2017, was published online on December 23, 2019, according to news reporting originating from Alexandria, Virginia, by NewsRx correspondents.

Patent number 10,504,003 is assigned to State Farm Mutual Automobile Insurance Company (Bloomington, Illinois, United States).

The following quote was obtained by the news editors from the background information supplied by the inventors: “Images and video taken from modern digital camera and video recording devices can be generated and stored in a variety of different formats and types. For example, digital cameras may capture two-dimensional (2D) images and store them in a vast array of data formats, including, for example, JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), PNG (Portable Network Graphics), BMP (Windows Bitmap), or GIF (Graphics Interchange Format). Digital videos typically have their own formats and types, including, for example, FLV (Flash Video), AVI (Audio Video Interleave), MOV (QuickTime Format), WMV (Windows Media Video), and MPEG (Moving Picture Experts Group).

“These 2D formats are typically based on rasterized image data captured by the camera or recording device where the rasterized image data is typically generated and stored to produce a rectangular grid of pixels, or points of color, viewable via a computer screen, paper, or other display medium. Other 2D formats may also be based on, for example, vector graphics. Vector graphics may use polygons, control points or nodes to produce images on a computer screen, for example, where the points and nodes can define a position on x and y axes of a display screen. The images may be produced by drawing curves or paths from the positions and assigning various attributes, including such values as stroke color, shape, curve, thickness, and fill.

“Other file formats can store 3D data. For example, the PLY (Polygon File Format) format can store data including a description of a 3D object as a list of nominally flat polygons, with related points or coordinates in 3D space, along with a variety of properties, including color and transparency, surface normal, texture coordinates and data confidence values. A PLY file can include a large number of points to describe a 3D object. A complex 3D object can require thousands or tens of thousands of 3D points in a PLY file to describe the object.
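
For reference, a minimal ASCII PLY file consists of a short header declaring the per-vertex properties, followed by one line of values per point; the three colored vertices below are hypothetical:

```
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
0.00 0.00 1.20 255 0 0
0.10 0.00 1.30 0 255 0
0.00 0.10 1.10 0 0 255
```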

“A problem exists with the number of different file formats and image types. Specifically, while the use, functionality, and underlying data structures of the various image and video formats are typically transparent to a common consumer, the differences in the compatibility of the various formats and types create a problem for computer systems or other electronic devices that need to analyze or otherwise coordinate the various differences among the competing formats and types for specific applications. This issue is exacerbated because different manufacturers of camera and/or video devices use different types or formats of image and video files. This combination of available file formats and types, together with various manufacturers’ decisions to use differing file formats and types, creates a vast set of disparate image and video files and data that are incompatible and difficult to interoperate for specific applications.”

In addition to the background information obtained for this patent, NewsRx journalists also obtained the inventors’ summary information for this patent: “Accordingly, there is a need for systems and methods to provide compatibility, uniformity, and interoperability among the various image file formats and types. For example, certain embodiments disclosed herein address issues that derive from the complexity and/or size of the data formats themselves. For example, a 3D file, such as a PLY file, can have tens of thousands of 3D points to describe a 3D image. Such a fine level of granularity may not be necessary to analyze the 3D image to determine, for example, items of interest within the 3D image, such as, for example, human features or behaviors identifiable in the 3D image.

“Moreover, certain embodiments herein further address that each 3D file, even files using the same format, e.g., a PLY file, can include sequences of 3D data points in different, unstructured orders, such that the sequencing of 3D points of one 3D file can be different from the sequencing of 3D points of another file. This unstructured nature can create an issue when analyzing 3D images, especially when analyzing a series of 3D images, for example, from frames of a 3D movie, because there is no uniform structure to comparatively analyze the 3D images against.

“For the foregoing reasons, systems and methods are disclosed herein for ‘Distification’ of 3D imagery. As further described herein, Distification can provide an improvement in the accuracy of predictive models, such as the prediction models disclosed herein, over known normalization methods. For example, the use of Distification on 3D image data can improve the predictive accuracy, classification ability, and operation of a predictive model, even when used in known or existing predictive models, neural networks or other predictive systems and methods.

“As described herein, a computing device may provide 3D image Distification by first obtaining a three dimensional (3D) image that includes rules defining a 3D point cloud. The computing device may then generate a two dimensional (2D) image matrix based upon the 3D image. The 2D image matrix may include 2D matrix point(s) mapped to the 3D image. Each 2D matrix point can be associated with a horizontal coordinate and a vertical coordinate. The computing device can generate an output feature vector that includes, for at least one of the 2D matrix points, the horizontal coordinate and the vertical coordinate of the 2D matrix point, and a depth coordinate of a 3D point in the 3D point cloud of the 3D image. The 3D point can have a nearest horizontal and vertical coordinate pair that corresponds to the horizontal and vertical coordinates of the at least one 2D matrix point.
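
The summary stops short of pseudocode, but the mapping it describes can be sketched in a few lines of Python. The following is a hypothetical interpretation, not the patented algorithm itself: a coarse regular grid is laid over the cloud's horizontal and vertical extent, and each grid point pulls its depth from the nearest 3D point; the function name, grid sizes, and feature layout are all invented for illustration.

```python
import numpy as np

def distify(points: np.ndarray, grid_w: int = 32, grid_h: int = 32) -> np.ndarray:
    """Map an unstructured 3D point cloud (an N x 3 array of x, y, z values)
    onto a fixed 2D grid, one feature row per grid point. Hypothetical
    sketch of the mapping described in the patent summary."""
    xs, ys = points[:, 0], points[:, 1]
    # Lay a regular grid of 2D matrix points over the cloud's x/y extent.
    gx = np.linspace(xs.min(), xs.max(), grid_w)
    gy = np.linspace(ys.min(), ys.max(), grid_h)
    rows = []
    for u in gx:
        for v in gy:
            # Find the 3D point whose (x, y) pair is nearest this grid point.
            d2 = (xs - u) ** 2 + (ys - v) ** 2
            i = int(np.argmin(d2))
            # Feature row: the 2D matrix point's coordinates, the nearest 3D
            # point's horizontal, vertical, and depth coordinates, and the 2D
            # distance between them (the optional value mentioned below).
            rows.append([u, v, xs[i], ys[i], points[i, 2], float(np.sqrt(d2[i]))])
    return np.asarray(rows)  # shape: (grid_w * grid_h, 6)
```

Because the grid holds far fewer points than a dense cloud, the output is uniformly structured and deliberately coarser than the input, which is the property the summary relies on when comparing frames against one another.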

“In some embodiments, the output feature vector may indicate one or more image feature values associated with the 3D point. The feature values can define one or more items of interest in the 3D image. The items of interest in the 3D image can include, for example, a person’s head, a person’s facial features, a person’s hand, or a person’s leg. In some aspects, the output feature vector is input into a predictive model for making predictions with respect to the items of interest.

“In some embodiments, the output feature vector can further include a distance value generated based on the distance from the at least one 2D matrix point to the 3D point. In other embodiments, a total quantity of the 2D matrix points mapped to the 3D image can be less (i.e., to create a coarser granularity) than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.

“In other embodiments, the 3D imagery, and rules defining the 3D point cloud, are obtained from one or more respective PLY files or PCD files. The 3D imagery may be a frame from a 3D movie. The 3D images may be obtained from various computing devices, including, for example, any of a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device, or a tablet computing device.

“In other embodiments, Distification can be executed in parallel such that the computing device, or various networked computing devices, can Distify multiple 3D images at the same time.
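
Since each image's Distification is independent of the others, parallelizing it is straightforward. A minimal sketch using Python's standard process pool, assuming the hypothetical `distify` function above:

```python
from multiprocessing import Pool

def distify_batch(clouds):
    """Distify several point clouds at the same time, one worker process
    per cloud (illustrative; assumes the distify sketch above)."""
    with Pool() as pool:
        return pool.map(distify, clouds)
```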

“Distification can be performed, for example, as a preprocessing technique for a variety of applications, such as for use with 3D predictive models. For example, systems and methods are disclosed herein for generating an image-based prediction model. As described, a computing device may obtain a set of one or more 3D images from a 3D image data source, where each of the 3D images is associated with 3D point cloud data. In some embodiments, the 3D image data source is a remote computing device (but it can also be collocated). The Distification process can be applied to the 3D point cloud data of each 3D image to generate output feature vector(s) associated with the 3D images. A prediction model may then be generated by training a model with the output feature vectors. For example, in certain embodiments, the prediction model may be trained using a neural network, such as a convolutional neural network.
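
The summary names convolutional neural networks but no specific architecture, so the following PyTorch sketch is only one plausible arrangement; the channel layout, layer sizes, and the eight-class output (matching the eight driver behaviors listed below) are all assumptions.

```python
import torch
import torch.nn as nn

# Treat each Distified frame as a 6-channel 32x32 tensor: one channel per
# feature column from the distify sketch above (an assumed layout).
model = nn.Sequential(
    nn.Conv2d(6, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 8),  # eight driver-behavior classes (assumed)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """One gradient step on a batch of Distified feature tensors."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```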

“In some embodiments, training the prediction model can include using one or more batches of output feature vectors, where batches of the output feature vectors correspond to one or more subsets of 3D images from originally obtained 3D images.

“In certain embodiments, the 3D images used to generate the prediction model may depict driver behaviors. The driver behaviors can include, for example, driver gestures such as: left hand calling, right hand calling, left hand texting, right hand texting, eating, drinking, adjusting the radio, or reaching for the backseat. The prediction model may determine a driver behavior classification and corresponding probability value for a 3D image, where the probability value can indicate the probability that the 3D image is associated with a driver behavior classification, e.g., ‘eating.’ The 3D image may then be associated with the driver behavior classification, such that the 3D image is said to identify or otherwise indicate the driver behavior for the driver.

“In some embodiments, the driver behavior classification and the probability value can be transmitted to a different computing device, such as a remote computing device or a local, but separate computing device.

“Distification can also be used for interoperating 3D imagery with 2D imagery. For example, the differing file formats and types are especially problematic when comparing or attempting to interoperate 3D and 2D image types, which typically have vastly different file formats tailored to 3D and 2D imagery, respectively. For example, a 2D JPEG image uses a rasterized grid of pixels to form an image. 2D images are typically concerned with data compression (for file size purposes), color, and relative positioning (with respect to the other pixels) within the rasterized grid forming the image, and are typically not concerned with where the pixels or points of the 2D image lie within, for example, some larger space outside of the rasterized grid. 3D images, on the other hand, depend on 3D coordinates and positioning in 3D space in order to represent a 3D object built, for example, from numerous polygon shapes that each have their own vertices (e.g., x, y and z coordinate positions) that define the position of the polygons, and, ultimately, the object itself in 3D space. Other attributes of a 3D file format may be concerned with color, shape, texture, line size, etc., but such attributes are typically indicated in a 3D file in a completely different format from 2D file formats to accommodate the rendering of the images in 3D space versus 2D rasterization.

“For the foregoing reasons, systems and methods are disclosed herein for generating an enhanced prediction from a 2D and 3D image-based ensemble model. As described herein, a computing device may be configured to obtain one or more sets of 2D and 3D images. Each of the 2D and 3D images may be standardized to allow for comparison and interoperability between the images. In one embodiment, the 3D images are standardized using Distification. In addition, corresponding 2D and 3D image pairs (i.e., a ‘2D3D image pair’) may be determined from the standardized 2D and 3D images where, for example, the 2D and 3D images correspond based on a common attribute, such as a similar timestamp or time value. The enhanced prediction may utilize separate underlying 2D and 3D prediction models, where, for example, the corresponding 2D and 3D images of a 2D3D pair are each input to the respective 2D and 3D prediction models to generate respective 2D and 3D predictions.
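
Pairing by a “common attribute, such as a similar timestamp” could be realized many ways; one hedged sketch, in which the record layout and tolerance are both invented:

```python
def pair_by_timestamp(imgs_2d, imgs_3d, tol=0.05):
    """Form 2D3D image pairs by matching timestamps to within `tol` seconds.
    The {"t": ...} record layout and the tolerance are assumptions, not
    details taken from the patent."""
    pairs = []
    for im2 in imgs_2d:
        nearest = min(imgs_3d, key=lambda im3: abs(im3["t"] - im2["t"]))
        if abs(nearest["t"] - im2["t"]) <= tol:
            pairs.append((im2, nearest))
    return pairs
```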

“The predictions can include classifications and related probability values for those classifications for each of the 2D and 3D images. For example, the 2D prediction model may generate a 20% value for a ‘texting’ class for a given 2D image and the 3D prediction model may generate a 50% value for the same ‘texting’ class for a given 3D image, such as a 3D image paired with the 2D image in the 2D3D image pair. The ensemble model may then generate an enhanced prediction for the 2D3D image pair, where the enhanced prediction can determine an overall 2D3D image pair classification based upon the 2D and 3D predictions. Thus, for example, the 2D3D image pair may indicate that the driver was ‘texting.’ In some embodiments, the enhanced prediction determines the 2D3D image pair classification by summing, for each classification, the probability values associated with the 2D and 3D predictions, and selecting the classification with the maximum summed probability value. Thus, for the example above, the 20% probability value and the 50% probability value from the 2D and 3D models, respectively, could be summed to compute an overall 70% value. If the 70% summed value was the maximum value when compared to other classifications, e.g., ‘eating,’ then the classification (e.g., ‘texting’) associated with the maximum summed probability would be identified as the 2D3D image pair classification for the 2D3D image pair.
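
The summing scheme reduces to an argmax over per-class probability sums. A minimal sketch, using the exact figures from the ‘texting’ example in the text:

```python
def ensemble_classify(p2d, p3d):
    """Return the class whose summed 2D + 3D probability is largest,
    per the summing scheme described in the summary."""
    return max(p2d, key=lambda c: p2d[c] + p3d.get(c, 0.0))

# Figures from the example above: 0.20 + 0.50 = 0.70 for 'texting',
# which beats, say, 'eating' at 0.30 + 0.10 = 0.40.
p2d = {"texting": 0.20, "eating": 0.30}
p3d = {"texting": 0.50, "eating": 0.10}
assert ensemble_classify(p2d, p3d) == "texting"
```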

“In some embodiments, the 2D and 3D images input into the ensemble model are sets of images defining a ‘chunk’ of images sharing a common timeframe, such as 2D and 3D images taken at the same time for a movie. In some embodiments, a chunk classification can be determined for the common timeframe, where the chunk classification is based on one or more 2D3D image pair classifications of the 2D3D image pairs that make up the movie.
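
The summary does not say how the pair classifications are combined into a chunk classification; a majority vote over the pairs in the timeframe is one natural, assumed rule:

```python
from collections import Counter

def chunk_classification(pair_labels):
    """Aggregate 2D3D pair classifications over a common timeframe.
    Majority vote is an assumption; the patent summary does not
    specify the aggregation rule."""
    return Counter(pair_labels).most_common(1)[0][0]
```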

“In other embodiments, the ensemble model can generate a confusion matrix that includes one or more 2D3D image pair classifications. The confusion matrix can be used for further analysis or review of the ensemble model, for example, to compare the accuracy of the model with other prediction models.

“In some embodiments, the ensemble model may be used to generate a data structure series that can indicate one or more driver behaviors as determined from one or more 2D3D image pair classifications. The driver behaviors can be used to determine or develop a risk factor for a given driver. As mentioned herein, the driver behaviors can include any of left hand calling, right hand calling, left hand texting, right hand texting, eating, drinking, adjusting the radio, or reaching for the backseat.

“Advantages will become more apparent to those of ordinary skill in the art from the following description of the preferred embodiments which have been shown and described by way of illustration. As will be realized, the present embodiments may be capable of other and different embodiments, and their details are capable of modification in various respects. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.”

The claims supplied by the inventors are:

“What is claimed is:

“1. A computing device configured to Distify 3D imagery, the computing device comprising one or more processors configured to: obtain a three dimensional (3D) image, wherein the 3D image includes rules defining a 3D point cloud; generate a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points mapped to the 3D image, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generate an output feature vector as a data structure that includes (1) a first set of values comprising a first horizontal coordinate and a first vertical coordinate of at least one 2D matrix point of the 2D image matrix, and (2) a second set of values comprising a second vertical coordinate, a second horizontal coordinate, and a depth coordinate of a 3D point in the 3D point cloud of the 3D image, wherein the second horizontal coordinate and the second vertical coordinate of the 3D point comprise a nearest horizontal and vertical coordinate pair having a nearest distance with respect to a first horizontal and vertical coordinate pair comprised of the first horizontal coordinate and the first vertical coordinate of the at least one 2D matrix point of the 2D image matrix, and wherein the output feature vector is input into a predictive model.

“2. The computing device of claim 1, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image.

“3. The computing device of claim 2, wherein the one or more items of interest in the 3D image include one or more of the following: a person’s head, a person’s facial features, a person’s hand, or a person’s leg.

“4. The computing device of claim 1, wherein the output feature vector further includes a distance value generated based on the distance from the at least one 2D matrix point to the 3D point.

“5. The computing device of claim 1, wherein the 3D image and rules defining the 3D point cloud are obtained from one or more respective PLY files or PCD files.

“6. The computing device of claim 1, wherein the 3D image is a frame from a 3D movie.

“7. The computing device of claim 1, wherein the 3D image is obtained from one or more of the following: a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device or a tablet computing device.

“8. The computing device of claim 1, wherein a total quantity of the one or more 2D matrix points mapped to the 3D image is less than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.

“9. The computing device of claim 1, wherein the computing device is further configured to Distify a second 3D image in parallel with the 3D image.

“10. A computer-implemented method for Distification of 3D imagery using one or more processors, the method comprising: obtaining a three dimensional (3D) image, wherein the 3D image includes rules defining a 3D point cloud; generating a two dimensional (2D) image matrix based upon the 3D image, wherein the 2D image matrix includes one or more 2D matrix points mapped to the 3D image, and wherein each 2D matrix point has a horizontal coordinate and a vertical coordinate; and generating an output feature vector as a data structure that includes (1) a first set of values comprising a first horizontal coordinate and a first vertical coordinate of at least one 2D matrix point of the 2D image matrix, and (2) a second set of values comprising a second vertical coordinate, a second horizontal coordinate, and a depth coordinate of a 3D point in the 3D point cloud of the 3D image, wherein the second horizontal coordinate and the second vertical coordinate of the 3D point comprise a nearest horizontal and vertical coordinate pair having a nearest distance with respect to a first horizontal and vertical coordinate pair comprised of the first horizontal coordinate and the first vertical coordinate of the at least one 2D matrix point of the 2D image matrix, and wherein the output feature vector is input into a predictive model.

“11. The computer-implemented method of claim 10, wherein the output feature vector indicates one or more image feature values associated with the 3D point, wherein each image feature value defines one or more items of interest in the 3D image.

“12. The computer-implemented method of claim 11, wherein the one or more items of interest in the 3D image include one or more of the following: a person’s head, a person’s facial features, a person’s hand, or a person’s leg.

“13. The computer-implemented method of claim 10, wherein the output feature vector further includes a distance value generated based on the distance from the at least one 2D matrix point to the 3D point.

“14. The computer-implemented method of claim 10, wherein the 3D image and rules defining the 3D point cloud are obtained from one or more respective PLY files or PCD files.

“15. The computer-implemented method of claim 10, wherein the 3D image is a frame from a 3D movie.

“16. The computer-implemented method of claim 10, wherein the 3D image is obtained from one or more of the following: a camera computing device, a sensor computing device, a scanner computing device, a smart phone computing device or a tablet computing device.

“17. The computer-implemented method of claim 10, wherein a total quantity of the one or more 2D matrix points mapped to the 3D image is less than a total quantity of horizontal and vertical coordinate pairs for all 3D points in the 3D point cloud of the 3D image.

“18. The computer-implemented method of claim 10, wherein the computing device is further configured to Distify a second 3D image in parallel with the 3D image.”

For the URL and more information on this patent, see: Flowers, Elizabeth; Dua, Puneit; Balota, Eric; Phillips, Shanna L. Systems And Methods For 3D Image Distification. U.S. Patent Number 10,504,003, filed May 16, 2017, and published online on December 23, 2019. Patent URL: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=10,504,003.PN.&OS=PN/10,504,003RS=PN/10,504,003

(Our reports deliver fact-based news of research and discoveries from around the world.)
