Patent Issued for Computer vision systems and methods for automatically detecting, classifying, and pricing objects captured in images or videos (USPTO 11783384): Insurance Services Office Inc.
2023 OCT 27 (NewsRx) -- The patent's assignee for patent number 11783384 is Insurance Services Office Inc.
News editors obtained the following quote from the background information supplied by the inventors:
“Technical Field
“The present disclosure relates generally to the field of computer vision. More specifically, the present disclosure relates to computer vision systems and methods for automatically detecting, classifying, and pricing objects captured in images or videos.
“Related Art
“Accurate and rapid identification and depiction of objects from digital images (e.g., aerial images, smartphone images, etc.) and video data is increasingly important for a variety of applications. For example, information related to properties and structures thereon (e.g., buildings) is often used by insurance adjusters to determine the proper costs for insuring homes and apartments. Further, in the home remodeling industry, accurate information about personal property can be used to determine the costs associated with furnishing a dwelling.
“Various software systems have been developed for processing images to identify objects in the images. Computer vision systems, such as convolutional neural networks, can be trained to detect and identify different kinds of objects. For example, key point detectors may yield numerous key point candidates that must be matched against key point candidates from other images.
“Currently, professionals such as insurance adjusters need to manually determine or “guesstimate” the value of a person’s possessions. This is a time-consuming and error-prone process that can lead to inaccurate insurance estimates. As such, the ability to quickly detect objects in a location and determine their value is a powerful tool for insurance and other professionals. Accordingly, the computer vision systems and methods disclosed herein solve these and other needs by providing a robust object detection, classification, and identification system.”
As a supplement to the background information on this patent, NewsRx correspondents also obtained the inventors’ summary information for this patent: “The present disclosure relates to computer vision systems and methods for automatically detecting, classifying, and pricing objects captured in images or videos. The system first receives one or more images or video data. For example, the images or video data can be received from an insurance adjuster taking photos and/or videos using a smartphone. The system then detects and classifies the objects in the images and/or video data. The detecting and classifying steps can be performed by the system using a convolutional neural network. Next, the system extracts the objects from the images or video data. The system then classifies each of the detected objects. For example, the system compares the detected objects to images in a database in order to classify the objects. Next, the system determines the price of the detected object. Lastly, the system generates a pricing report. The pricing report can include the detected and classified objects, as well as a price for each object.”
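The patent text does not disclose implementation details for the report-generation step. As an illustration only (not part of the patent), the summarized pipeline output — detected objects, each with a make, model, and price, rolled into a pricing report — could be sketched as follows, using a hypothetical `DetectedObject` record and `build_pricing_report` helper:

```python
# Illustrative sketch of the pricing report described in the patent summary.
# The data structure and helper names are hypothetical, not from the patent.
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str          # high-level classification, e.g. "chair"
    make: str = ""      # specific classification results
    model: str = ""
    price: float = 0.0


def build_pricing_report(objects):
    """Produce one line per detected object (make, model, price)
    plus a total, as the summary describes."""
    lines = []
    total = 0.0
    for obj in objects:
        lines.append(f"{obj.label}: {obj.make} {obj.model} - ${obj.price:.2f}")
        total += obj.price
    lines.append(f"Total estimated value: ${total:.2f}")
    return "\n".join(lines)
```

A caller would populate `DetectedObject` instances from the detection and database-lookup stages, then pass the list to `build_pricing_report`.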
The claims supplied by the inventors are:
“1. A system for automatically detecting, classifying, and processing objects captured in an image, comprising: a processor in communication with an image source; and computer system code executed by the processor, the computer system code causing the processor to: receive an image from the image source; detect one or more objects in the image; perform a high-level classification of each of the one or more objects in the image by labeling each of the one or more objects in the image; extract each of the one or more objects from the image; perform a specific classification of each of the one or more objects by identifying at least one stored image from a database of stored images that includes at least one attribute in common with the one or more objects; determine at least one of a make, a model, or a price of each of the one or more objects by retrieving a stored make, model, or price associated with the at least one stored image identified from the database of stored images; and generate a report comprising the at least one make, model, or price of each of the one or more objects.
“2. The system of claim 1, wherein the image comprises a photograph or a video frame.
“3. The system of claim 1, wherein the processor performs the steps of detecting the at least one object in the image and performing the high-level classification of the at least one object in the image using a convolutional neural network (“CNN”).
“4. The system of claim 3, wherein the CNN generates one or more bounding boxes corresponding to the one or more objects detected in the image.
“5. The system of claim 4, wherein each of the one or more bounding boxes is assigned a confidence score.
“6. The system of claim 5, wherein the system retains each of the one or more bounding boxes with a confidence score higher than a predetermined threshold and discards each of the one or more bounding boxes with a confidence score lower than the predetermined threshold.
“7. The system of claim 6, wherein the system selects a single bounding box, when more than one bounding box is present, using a non-maximal suppression method.
“8. The system of claim 4, wherein the system transforms the one or more bounding boxes into an original image space using scaling parameters.
“9. The system of claim 4, wherein the processor performs the step of extracting the at least one object from the image by cropping out the one or more bounding boxes.
“10. The system of claim 1, wherein the system tracks each of the one or more objects in further images using a tracking algorithm.
“11. The system of claim 10, wherein the tracking algorithm comprises a Multiple Instance Learning algorithm or a Kernelized Correlation Filters algorithm.
“12. The system of claim 10, wherein when a new object is detected in the further images, the system performs a high-level classification of the new object.
“13. The system of claim 1, wherein the step of performing the specific classification comprises generating a score between the object and each of the stored images.
“14. The system of claim 13, wherein the system generates the score using a key point matching algorithm.
“15. The system of claim 14, wherein the key point matching algorithm generates key point descriptors at locations on the object and at locations on each of the stored images and compares the descriptors to identify matching points.
“16. The system of claim 15, wherein the descriptors comprise at least one of scale-invariant feature transform (“SIFT”) descriptors, histogram of oriented gradients (“HoG”) descriptors, or KAZE descriptors.
“17. The system of claim 1, wherein, prior to the step of detecting the at least one object in the image, the system preprocesses the image using a normalization process or a channel value centering process.
“18. A method for automatically detecting, classifying, and processing objects captured in an image, comprising steps of: receiving an image; detecting one or more objects in the image; performing a high-level classification of each of the one or more objects in the image by labeling each of the one or more objects in the image; extracting each of the one or more objects from the image; performing a specific classification of each of the one or more objects by identifying at least one stored image from a database of stored images that includes at least one attribute in common with the one or more objects; determining at least one of a make, a model, or a price of each of the one or more objects by retrieving a stored make, model, or price associated with the at least one stored image identified from the database of stored images; and generating a report comprising the at least one make, model, or price of each of the one or more objects.
“19. The method of claim 18, wherein the image comprises a photograph or a video frame.
“20. The method of claim 18, wherein the steps of detecting the at least one object in the image and performing the high-level classification of the at least one object in the image are performed using a convolutional neural network (“CNN”).
“21. The method of claim 20, wherein the CNN generates one or more bounding boxes corresponding to the one or more objects detected in the image.
“22. The method of claim 21, wherein each of the one or more bounding boxes is assigned a confidence score.
“23. The method of claim 22, further comprising retaining each of the one or more bounding boxes with a confidence score higher than a predetermined threshold and discarding each of the one or more bounding boxes with a confidence score lower than the predetermined threshold.
“24. The method of claim 23, further comprising selecting a single bounding box, when more than one bounding box is present, using a non-maximal suppression method.
“25. The method of claim 21, further comprising transforming the one or more bounding boxes into an original image space using scaling parameters.
“26. The method of claim 21, further comprising performing the step of extracting the at least one object from the image by cropping out the one or more bounding boxes.
“27. The method of claim 18, further comprising tracking each of the one or more objects in further images using a tracking algorithm.
“28. The method of claim 27, wherein the tracking algorithm comprises a Multiple Instance Learning algorithm or a Kernelized Correlation Filters algorithm.
“29. The method of claim 27, further comprising performing a high-level classification of a new object when the new object is detected in the further images.
“30. The method of claim 18, wherein the step of performing the specific classification comprises generating a score between the object and each of the stored images.
“31. The method of claim 30, further comprising generating the score with a key point matching algorithm.
“32. The method of claim 31, wherein the key point matching algorithm generates key point descriptors at locations on the object and at locations on each of the stored images and compares the descriptors to identify matching points.
“33. The method of claim 32, wherein the descriptors comprise at least one of scale-invariant feature transform (“SIFT”) descriptors, histogram of oriented gradients (“HoG”) descriptors, or KAZE descriptors.
“34. The method of claim 18, further comprising, prior to the step of detecting the at least one object in the image, preprocessing the image using a normalization process or a channel value centering process.”
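Claims 5 through 7 (and their method counterparts, claims 22 through 24) describe confidence thresholding followed by non-maximal suppression. The patent does not publish an implementation; a minimal, self-contained sketch of that standard technique, with boxes given as `(x1, y1, x2, y2)` tuples and all thresholds chosen for illustration, might look like this:

```python
# Illustrative confidence thresholding + greedy non-maximal suppression.
# Box format and threshold values are assumptions, not from the patent.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """Discard boxes below the confidence threshold (claims 5-6), then
    greedily keep the highest-scoring box and suppress overlaps (claim 7)."""
    candidates = [(s, b) for s, b in zip(scores, boxes) if s >= score_thresh]
    candidates.sort(key=lambda t: t[0], reverse=True)
    kept = []
    for score, box in candidates:
        if all(iou(box, k) < iou_thresh for _, k in kept):
            kept.append((score, box))
    return [b for _, b in kept]
```

Two heavily overlapping detections of the same object would collapse to the single higher-confidence box, while a distant detection survives.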
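Claims 13 through 16 (and claims 30 through 33) score a detected object against stored images by matching key point descriptors such as SIFT, HoG, or KAZE. As an illustration only, a score based on the standard nearest-neighbor ratio test over descriptor rows could be sketched as follows; the function name and ratio value are assumptions, not the patent's method:

```python
# Illustrative descriptor-matching score (Lowe-style ratio test).
# Each row of desc_a / desc_b is one key point descriptor,
# e.g. a 128-dimensional SIFT vector.
import numpy as np


def match_score(desc_a, desc_b, ratio=0.75):
    """Count descriptors in desc_a whose nearest neighbor in desc_b is
    clearly closer than the second-nearest (ratio test)."""
    matches = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        if len(dists) < 2:
            continue
        nearest, second = np.sort(dists)[:2]
        if nearest < ratio * second:
            matches += 1
    return matches
```

The stored image with the highest score would then supply the make, model, and price, as claim 1 describes.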
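Claims 17 and 34 mention preprocessing via normalization or channel value centering, without specifying constants. A hedged one-function sketch of those two operations, with the scale factor and optional per-channel means as illustrative assumptions:

```python
# Illustrative normalization and per-channel centering (claims 17 / 34).
# The 0-255 input range and the mean values a caller passes are assumptions.
import numpy as np


def preprocess(image, mean=None):
    """Scale 8-bit pixel values to [0, 1]; optionally subtract a
    per-channel mean to center channel values."""
    img = image.astype(np.float32) / 255.0
    if mean is not None:
        img = img - np.asarray(mean, dtype=np.float32)
    return img
```

The result would be fed to the detection step of claim 1.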
For additional information on this patent, see: Lebaron, Dean. Computer vision systems and methods for automatically detecting, classifying, and pricing objects captured in images or videos. U.S. Patent 11783384.
(Our reports deliver fact-based news of research and discoveries from around the world.)