Patent Issued for System and method for vegetation modeling using satellite imagery and/or aerial imagery (USPTO 11869192): General Electric Company
2024 JAN 30 (NewsRx) -- The assignee for this patent, patent number 11869192, is General Electric Company.
Reporters obtained the following quote from the background information supplied by the inventors: “Vegetation management is important to maintain reliable power distribution, as appropriate vegetation management may prevent forest fires and unexpected power shutdowns due to vegetation-related incidents. A utility company may provide the power via power lines. As part of vegetation management, the utility company may determine the proximity of the vegetation to the power lines. Conventionally, this determination may be based on an individual walking the grounds near the power lines to observe the status of the vegetation. However, the manual walking and observation process may be time consuming and infrequent. For example, the walking/observation may be on a fixed trimming schedule in combination with reactive trimming based on outages and utility customer complaints. An attempt has been made to automate the determination of the proximity of the vegetation to the power lines via LiDAR (light detection and ranging) sensors aimed out of helicopters and/or drones to generate a 3D representation of the vegetation. However, LiDAR sensing is very expensive, and the resulting data may take a long time to collect and process.
“It would be desirable to provide systems and methods to improve vegetation management.”
In addition to obtaining background information on this patent, NewsRx editors also obtained the inventors’ summary information for this patent: “According to some embodiments, a system is provided including a vegetation module to receive image data from an image source; a memory for storing program instructions; a vegetation processor, coupled to the memory, and in communication with the vegetation module, and operative to execute program instructions to: receive image data; estimate a vegetation segmentation mask; generate at least one of a 3D point cloud and a 2.5D Digital Surface Model based on the received image data; estimate a relief surface using a digital terrain model; generate a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and at least one of the 3D point cloud and the 2.5D DSM; generate a canopy height model based on the generated vegetation masked digital surface model; and generate at least one analysis with an analysis module, wherein the analysis module receives the generated canopy height model prior to execution of the analysis module, and wherein the analysis module uses the generated canopy height model in the generation of the at least one analysis.
“According to some embodiments, a method is provided including receiving image data; estimating a vegetation segmentation mask; generating at least one of a 3D point cloud and a 2.5D Digital Surface Model based on the received image data; estimating a relief surface using a digital terrain model; generating a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and at least one of the 3D point cloud and the 2.5D DSM; generating a canopy height model based on the generated vegetation masked digital surface model; and generating at least one analysis with an analysis module, wherein the analysis module receives the generated canopy height model prior to execution of the analysis module, and wherein the analysis module uses the generated canopy height model in the generation of the at least one analysis.
“According to some embodiments, a non-transitory computer-readable medium is provided, the medium storing program instructions that when executed by a computer processor cause the processor to perform a method including receiving image data; estimating a vegetation segmentation mask; generating at least one of a 3D point cloud and a 2.5D Digital Surface Model based on the received image data; estimating a relief surface using a digital terrain model; generating a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and at least one of the 3D point cloud and the 2.5D DSM; generating a canopy height model based on the generated vegetation masked digital surface model; and generating at least one analysis with an analysis module, wherein the analysis module receives the generated canopy height model prior to execution of the analysis module, and wherein the analysis module uses the generated canopy height model in the generation of the at least one analysis.
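In rough computational terms, the pipeline recited in these three embodiments amounts to masking a photogrammetric surface model with the vegetation segmentation and subtracting the estimated terrain to obtain canopy heights. The following minimal Python sketch illustrates that final step only; the array names, sizes, and values are invented for illustration and are not drawn from the patent.

    # Minimal sketch of the canopy-height step using synthetic rasters; all
    # names and values are illustrative assumptions, not the patented method.
    import numpy as np

    def canopy_height_model(dsm, dtm, vegetation_mask):
        """Keep only vegetation pixels of the surface model, then subtract the
        terrain model so the remaining values are heights above ground."""
        masked_dsm = np.where(vegetation_mask, dsm, np.nan)  # vegetation-masked DSM
        chm = masked_dsm - dtm                               # canopy height above ground
        return np.clip(chm, 0.0, None)                       # heights cannot be negative

    # Synthetic 2.5D grids standing in for the photogrammetric outputs.
    rng = np.random.default_rng(0)
    dtm = rng.uniform(100.0, 110.0, size=(4, 4))        # bare-earth elevation (m)
    dsm = dtm + rng.uniform(0.0, 15.0, size=(4, 4))     # surface elevation incl. canopy (m)
    veg = rng.random((4, 4)) > 0.5                      # boolean vegetation segmentation mask

    print(np.round(canopy_height_model(dsm, dtm, veg), 1))  # NaN where no vegetation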
“A technical effect of some embodiments of the invention is an improved and/or computerized technique and system for determining a location of vegetation in an area of interest (AOI) and estimating a height of this located vegetation. One or more embodiments may provide for the fusion of satellite data coming at different frequencies and resolutions for analysis of trim cycles for the vegetation, risk management, and other analysis. The fusion of data from multiple sources may be used to provide an accurate status of vegetation and assist an operator in making informed decisions. One or more embodiments may provide for satellite/aerial image analysis of vegetation by fusing information from medium-, high- and ultra-high-resolution data. Medium- and high-resolution data may be satellite and aerial imagery data that is freely available (e.g., Sentinel satellite data and National Agriculture Imagery Program (NAIP) imagery, respectively) and whose ground sampling distance (GSD) is 60 cm or greater. As used herein, ultra-high-resolution satellite data may refer to commercial satellite data with a GSD of less than 60 cm. It is noted that the categories of medium, high, and ultra-high may point to data coming from different sources at different resolutions. It is further noted that the classification of medium, high and ultra-high may or may not be strictly based on sampling distance. One or more embodiments may provide for the segmentation of trees in the satellite data to estimate the tree area cover and tree line length to trim. One or more embodiments may provide for the 3D reconstruction of trees and estimation of their height to provide an accurate volume of tree-trimming tasks. The satellite-based analysis may help in identifying various large assets for the utility company/user and may provide accurate localization of them. Currently, many of these assets are not accurately mapped. The satellite and/or aerial data used in one or more embodiments may avoid the need for expensive and slow LiDAR data collection, and may provide height information for the vegetation. One or more embodiments may provide for significant savings of resources and money with better vegetation-optimized trimming schedules, better planning of resources and better planning of expected work, by moving away from fixed vegetation trimming schedules to need-based/risk-based scheduling. The fusion of analytics on multi-modal data, such as LiDAR, aerial imagery and satellite data, in one or more embodiments, may improve the trim cycle of the transmission and distribution providers by providing accurate analytical data to plan trimmings in advance and distribute resources based on need. One or more embodiments may use the multi-modal data to compute different vegetation-related key performance indicators to provide a more accurate status of the vegetation for decisioning. One or more embodiments may provide for reduced outages and may transition the vegetation management system from reactive to preventive maintenance, which may result in economic savings. For example, risk modeling associated with vegetation may be provided by one or more embodiments, and may help in prioritizing resource allocation and planning.
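The resolution tiers described above reduce to a simple ground-sampling-distance rule of thumb, with the inventors' own caveat that the classification may not be strictly GSD-based. The hypothetical helper below is for illustration only; the function name and the example GSD values are assumptions.

    # Illustrative-only tiering of imagery by ground sampling distance (GSD),
    # following the 60 cm threshold mentioned in the summary; the patent notes
    # the categories may not be strictly GSD-based.
    def imagery_tier(gsd_cm: float) -> str:
        """Return a coarse resolution tier for a given GSD in centimetres."""
        if gsd_cm < 60:
            return "ultra-high"    # e.g., commercial satellite data
        return "medium/high"       # e.g., Sentinel (~10 m) or NAIP (~60 cm) imagery

    for gsd_cm in (30, 60, 1000):
        print(gsd_cm, "cm ->", imagery_tier(gsd_cm))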
“One or more embodiments provide for a vegetation module to generate a vegetation segmentation mask. A deep neural network-based vegetation segmentation process may be applied to the satellite/aerial data to obtain vegetation cover. A multi-view 3D reconstruction pipeline may be used, in one or more embodiments, with ultra-high-resolution satellite data to obtain height associated with vegetation. Using the information from the vegetation cover and the height, one or more embodiments may provide accurate volume of tree trimming required for an area of interest (AOI) for a user, including but not limited to utility companies. One or more embodiments may identify all the transmission/distribution/power lines where vegetation is encroaching within a given buffer zone. The vegetation module may be deployed on a cloud-based computer infrastructure, local machine or be part of a webservice to a larger vegetation management system. In one or more embodiments, the vegetation module may use the satellite/aerial data to model terrain/hilly regions in an AOI, which may be used to model the risk associated with vegetation falling in those areas; as well as to identify roads and pavement regions which may be used to measure accessibility and help plan better for vegetation trimming schedules.”
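The encroachment check described in this paragraph can be pictured as intersecting the canopy height model with a buffer around the line geometry. The sketch below uses a perfectly straight line, an invented buffer width, and an invented clearance threshold purely to illustrate the idea; none of these values come from the patent.

    # Hypothetical buffer-zone check: flag cells whose canopy height exceeds a
    # clearance threshold inside a fixed buffer around a straight line. Line
    # position, buffer width and clearance are assumptions, not patent values.
    import numpy as np

    rng = np.random.default_rng(2)
    chm = rng.uniform(0.0, 12.0, size=(6, 10))          # canopy height model (m)
    cell = 1.0                                          # raster cell size (m)
    x = (np.arange(chm.shape[1]) + 0.5) * cell          # cell-centre x coordinates (m)

    line_x, buffer_m, clearance_m = 5.0, 3.0, 8.0       # line at x = 5 m
    corridor = np.abs(x - line_x) <= buffer_m           # cells inside the buffer zone
    encroaching = corridor[None, :] & (chm >= clearance_m)

    print("encroaching cells:", int(encroaching.sum()))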
The claims supplied by the inventors are:
“1. A system comprising: a vegetation module to receive image data from an image source; a memory for storing program instructions; a vegetation processor, coupled to the memory, and in communication with the vegetation module, and operative to execute program instructions to: receive image data; estimate a vegetation segmentation mask; generate a 2.5D Digital Surface Model based on the received image data, wherein the 2.5D Digital Surface Model is a representation of a 3D point cloud as a 2.5D grid format and the representation is generated via a binning process that combines multiple depth images of a same area of interest to merge the multiple depth images into a single depth image; estimate a relief surface using a digital terrain model, wherein the digital terrain model is a model of shapes of a terrain over a region and across a range of terrain types; generate a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and the 2.5D Digital Surface Model; generate a canopy height model based on the generated vegetation masked digital surface model; and generate at least one analysis with an analysis module, wherein the analysis module receives the generated canopy height model prior to execution of the analysis module, and wherein the analysis module uses the generated canopy height model in the generation of the at least one analysis.
“2. The system of claim 1, further comprising program instructions to: compute at least one of: at least one Key Performance Indicator and one or more vegetation-related statistics.
“3. The system of claim 2, further comprising program instructions to: receive the computation by at least one of a vegetation management module, an allocation module for resource planning and a tree-trimming scheduling module.
“4. The system of claim 1, wherein estimation of the vegetation segmentation mask further comprises program instructions to: receive the image data at a segmentation process, wherein the image data includes a plurality of pixels representing an area of interest; identify a category for each pixel in the area of interest; mark the received image with the identified category for each pixel; and output the marked received image as the vegetation segmentation mask.
“5. The system of claim 4, wherein the mark identifies the vegetation pixels.
“6. The system of claim 1, wherein the received image is a pair of images for an area of interest, and a first image of the pair of images is an image of the area of interest taken at a different angle from a second image of the pair of images.
“7. The system of claim 6, wherein generation of the 2.5D Digital Surface Model based on the received image data further comprises program instructions to: match all of the pixels from the first image to corresponding pixels in the second image; and generate the 3D point cloud.
“8. The system of claim 7, wherein the 2.5D Digital Surface Model represents a surface of each element in the area of interest.
“9. A method comprising: receiving image data; estimating a vegetation segmentation mask; generating a 2.5D Digital Surface Model based on the received image data, wherein the 2.5D Digital Surface Model is a representation of a 3D point cloud as a 2.5D grid format and the representation is generated via a binning process that combines multiple depth images of a same area of interest to merge the multiple depth images into a single depth image; estimating a relief surface using a digital terrain model, wherein the digital terrain model is a model of shapes of a terrain over a region and across a range of terrain types; generating a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and the 2.5D Digital Surface Model; generating a canopy height model based on the generated vegetation masked digital surface model; and generating at least one analysis with an analysis module, wherein the analysis module receives the generated canopy height model prior to execution of the analysis module, and wherein the analysis module uses the generated canopy height model in the generation of the at least one analysis.
“10. The method of claim 9, further comprising: computing at least one of: at least one Key Performance Indicator and one or more vegetation-related statistics.
“11. The method of claim 10, further comprising: receiving the computation by at least one of a vegetation management module, an allocation module for resource planning and a tree-trimming scheduling module.
“12. The method of claim 9, wherein estimating the vegetation segmentation mask further comprises: receiving the image data at a segmentation process, wherein the image data includes a plurality of pixels representing an area of interest; identifying a category for each pixel in the area of interest; marking the received image with the identified category for each pixel; and outputting the marked received image as the vegetation segmentation mask.
“13. The method of claim 12, wherein the mark identifies the vegetation pixels.
“14. The method of claim 9, wherein the received image is a pair of images for an area of interest, and a first image of the pair of images is an image of the area of interest taken at a different angle from a second image of the pair of images.
“15. The method of claim 14, wherein generating the 2.5D Digital Surface Model based on the received image data further comprises: matching all of the pixels from the first image to corresponding pixels in the second image; and generating the 3D point cloud.
“16. A non-transitory, computer-readable medium storing instructions that, when executed by a computer processor, cause the computer processor to perform a method comprising: receiving image data; estimating a vegetation segmentation mask; generating a 2.5D Digital Surface Model based on the received image data, wherein the 2.5D Digital Surface Model is a representation of a 3D point cloud as a 2.5D grid format and the representation is generated via a binning process that combines multiple depth images of a same area of interest to merge the multiple depth images into a single depth image; estimating a relief surface using a digital terrain model, wherein the digital terrain model is a model of shapes of a terrain over a region and across a range of terrain types; generating a vegetation masked digital surface model based on the digital terrain model, the vegetation segmentation mask and the 2.5D Digital Surface Model; generating a canopy height model based on the generated vegetation masked digital surface model; and generating at least one analysis with an analysis module, wherein the analysis module receives the generated canopy height model prior to execution of the analysis module, and wherein the analysis module uses the generated canopy height model in the generation of the at least one analysis.
“17. The computer-readable medium of claim 16, wherein estimating the vegetation segmentation mask further comprises: receiving the image data at a segmentation process, wherein the image data includes a plurality of pixels representing an area of interest; identifying a category for each pixel in the area of interest; marking the received image with the identified category for each pixel; and outputting the marked received image as the vegetation segmentation mask.
“18. The medium of claim 17, wherein the mark identifies the vegetation pixels.”
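Claim 1's “binning process” — merging multiple depth images of the same area of interest into a single 2.5D grid — can be illustrated, in very reduced form, as rasterizing a point cloud by keeping the highest elevation per grid cell. The cell size, grid extent, and max-height rule in the sketch below are assumptions made for illustration, not claim language.

    # Toy illustration of binning a 3D point cloud into a 2.5D DSM grid by
    # keeping the highest point per cell; parameters are illustrative only.
    import numpy as np

    def bin_point_cloud_to_dsm(points, cell_size, grid_shape):
        """points: (N, 3) array of x, y, z; returns a grid of max z per cell."""
        dsm = np.full(grid_shape, np.nan)
        cols = np.clip((points[:, 0] // cell_size).astype(int), 0, grid_shape[1] - 1)
        rows = np.clip((points[:, 1] // cell_size).astype(int), 0, grid_shape[0] - 1)
        for r, c, z in zip(rows, cols, points[:, 2]):
            if np.isnan(dsm[r, c]) or z > dsm[r, c]:
                dsm[r, c] = z                     # keep the highest return per cell
        return dsm

    # Points as if triangulated from several depth images of the same area.
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(0, 20, 200),      # x (m)
                           rng.uniform(0, 20, 200),      # y (m)
                           rng.uniform(100, 115, 200)])  # z elevation (m)
    print(np.round(bin_point_cloud_to_dsm(pts, cell_size=5.0, grid_shape=(4, 4)), 1))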
For more information, see this patent: U.S. Patent No. 11,869,192, "System and method for vegetation modeling using satellite imagery and/or aerial imagery."
(Our reports deliver fact-based news of research and discoveries from around the world.)