Patent Issued for Error documentation assistance (USPTO 11693726): State Farm Mutual Automobile Insurance Company
2023 JUL 24 (NewsRx) -- Patent number 11693726, titled “Error documentation assistance,” is assigned to State Farm Mutual Automobile Insurance Company (Bloomington, Illinois).
The following quote was obtained by the news editors from the background information supplied by the inventors: “In many organizations, individual development teams are often tasked with developing various applications and components for use by other teams within the organization. When it comes to addressing errors or “bugs” within the developed applications, the team that developed the application is typically also tasked with fixing such bugs so that the application can continue to be used. Accordingly, within the organization, each development team may utilize one or more respective databases to track and manage the various bugs associated with their applications and the corresponding fixes that the team develops. Although different development teams for the same or similar applications may share best practices with each other, when it comes to error documentation and solution finding, the databases described above commonly remain siloed within individual teams. Such siloed databases can result in the duplication of efforts and can reduce the efficiency of the teams as a whole, particularly as applications grow more complex and interdependent. Accordingly, there is a need for an error documentation system that could assist in documenting the errors and solutions across multiple teams to promote solution sharing.”
In addition to the background information obtained for this patent, NewsRx journalists also obtained the inventors’ summary information for this patent: “This disclosure is directed to an error documentation system, including an analysis tool configured to assist with collecting application defect data triggered by error events and a query tool configured to share defect data and solutions. The error events may be triggered by computer errors (e.g., null pointers, code exceptions, etc.) or triggered by preconfigured rules for alerts. In some examples, the preconfigured rules may include rules generated by operators (e.g., software developers) to track specific events occurring on their application. In additional examples, the system may use a logging tool to assist the analysis tool with data collecting. In response to the error event, the logging tool may log metrics from the applications running on end-user devices and may push the metrics to a data repository (e.g., a cloud server) for analysis. In some examples, an end-user device may include any user device able to execute the application and may include a developer testing device during any stage of development cycle for the application.
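To make the logging flow described above concrete, here is a minimal Python sketch of an error handler that collects metrics when an error event fires and hands them off toward a central repository. The patent discloses no source code; the application name, the field names, and the print stand-in for the push to a cloud server are all assumptions.

```python
import json
import sys
import traceback
from datetime import datetime, timezone

def push_event_log(app_name: str) -> None:
    """Log metrics for the error event currently being handled and push them upstream."""
    exc_type, exc, _ = sys.exc_info()
    event_log = {
        "application": app_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "error_type": exc_type.__name__,
        "error_message": str(exc),
        "stack_trace": traceback.format_exc(),
    }
    # Stand-in for pushing to a data repository (e.g., an HTTP POST to a cloud server).
    print(json.dumps(event_log, indent=2))

try:
    1 / 0  # simulated computer error that triggers an error event
except ZeroDivisionError:
    push_event_log("claims-portal")
```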
“In various examples, the error documentation system may document individual error events as event logs and may generate log identifiers to associate with the event logs. An event log, which includes the data logged for the error event, may be tagged or otherwise associated with a respective log identifier (e.g., writing the log identifier to the metadata). The system may analyze the event log to determine if the error event is associated with a new unidentified defect or an existing identified defect. If the error event is associated with an unidentified defect, the system may generate a new defect ticket.
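The log-identifier bookkeeping described above might look like the following sketch, where an in-memory dictionary stands in for the defects database and the tuple match key is an invented simplification of the patent's (unspecified) analysis step:

```python
import uuid

# In-memory stand-in for the defects database described above.
defects_db: dict[tuple, dict] = {}

def document_event(event_log: dict) -> dict:
    """Tag an event log with a log identifier and route it to a defect ticket."""
    event_log["log_id"] = str(uuid.uuid4())  # log identifier written to metadata
    # Deliberately crude match key; the patent leaves the matching analysis open.
    signature = (event_log["application"], event_log["error_type"])
    ticket = defects_db.get(signature)
    if ticket is None:
        # Unidentified defect: generate a new defect ticket.
        ticket = {"defect_id": str(uuid.uuid4()), "log_ids": [], "resolved": False}
        defects_db[signature] = ticket
    ticket["log_ids"].append(event_log["log_id"])  # associate the log with the ticket
    return ticket

first = document_event({"application": "claims-portal", "error_type": "KeyError"})
second = document_event({"application": "claims-portal", "error_type": "KeyError"})
assert first is second and len(first["log_ids"]) == 2  # same defect, two event logs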
“In various examples, the system may automatically generate and/or populate information on a defect ticket. The system may populate a defect ticket with information gathered based on analyzing the event log and additional information inferred. The information may include but is not limited to an error type, an error message, time stamp, user identifier, response, a stack trace, an exposed endpoint, identifier for a line of code, application and/or application component that triggered the alert, developer identifier (e.g., name of a coder or a team lead), end-user device type, operating system, related and/or dependent applications, infrastructure defect, defect identifier, severity level, priority level, tasks, correlated defects, correlated solutions, and the like.
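As a rough data-structure illustration, a defect ticket carrying a subset of those fields could be modeled as below; the field names and defaults are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DefectTicket:
    """A small subset of the ticket fields listed above; names are illustrative only."""
    defect_id: str
    error_type: str
    error_message: str
    timestamp: str
    stack_trace: str
    application: str
    developer_id: str = "unassigned"   # e.g., name of a coder or team lead
    severity: str = "medium"
    priority: int = 3                  # lower number = higher priority
    correlated_defects: list[str] = field(default_factory=list)
    correlated_solutions: list[str] = field(default_factory=list)
    log_ids: list[str] = field(default_factory=list)
```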
“In some examples, the system may generate a task to review a ticket and may automatically publish notifications to any subscribers (e.g., project managers, developers, quality assurance members, operators, etc.). If the error event is associated with an identified defect, the system may append the input event log to the existing defect ticket by adding the log identifier to the ticket. In various examples, the system may determine whether the identified defect is resolved or unresolved based on whether a solution is found as indicated on the ticket. In some examples, if new event log information is added to an unresolved defect ticket, the system may automatically generate a notification to alert a subscriber to review the new event log. In various examples, the system may escalate a ticket by automatically increasing the priority level of the ticket based on a predetermined criterion. The criterion may include determining that the number of event logs added to the defect ticket has exceeded a threshold escalation count.
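The escalation criterion reads like a simple counting rule. A minimal sketch, assuming a hypothetical threshold of 10 event logs and a numeric priority scale in which a lower number means higher priority:

```python
ESCALATION_COUNT = 10  # hypothetical value; the patent calls this a predetermined criterion

def maybe_escalate(ticket: dict) -> None:
    """Raise the ticket's priority once its event-log count exceeds the threshold."""
    if len(ticket["log_ids"]) > ESCALATION_COUNT and ticket["priority"] > 1:
        ticket["priority"] -= 1  # lower number = higher priority
        notify(ticket, "priority escalated after repeated error events")

def notify(ticket: dict, message: str) -> None:
    """Stand-in for publishing a notification to the ticket's subscribers."""
    for subscriber in ticket.get("subscribers", []):
        print(f"[to {subscriber}] ticket {ticket['defect_id']}: {message}")

ticket = {"defect_id": "DEF-7", "priority": 3,
          "log_ids": [f"log-{i}" for i in range(11)],
          "subscribers": ["qa-team", "project-manager"]}
maybe_escalate(ticket)  # 11 logs > 10, so priority drops from 3 to 2
```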
“In various examples, the error documentation system may train one or more machine learning (ML) models using training data from stored event logs and defects databases to classify input data based on correlated defects. The ML models may use the training data to learn various error patterns and corresponding solutions to generate suggested solutions. In some examples, the ML models may provide a suggested solution for a new defect found in a first application based on a verified solution for an identified defect found in a second application. In some examples, the error documentation system may provide a query tool, including a chatbot, for operators to query the defects database for similar defects and solutions. In additional examples, the error documentation system may automatically generate a suggested solution entry, add it to the defect ticket, and publish a notification for a subscriber to review the suggested solution.”
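The patent does not specify a model architecture. As one plausible reading, a text classifier over past error messages can correlate a new event to a known defect and expose per-defect confidence scores; the sketch below uses scikit-learn with toy data, and the defect identifiers and messages are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: error messages from past tickets, each labeled with the
# identified defect it was resolved under.
error_messages = [
    "NullPointerException in PaymentService.charge",
    "NullPointerException in PaymentService.refund",
    "Timeout waiting for policy-lookup endpoint",
]
defect_labels = ["DEF-101", "DEF-101", "DEF-202"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(error_messages, defect_labels)

# A new event log is classified against known defects; the matched defect's
# verified solution could then be surfaced as a suggested solution.
new_message = ["NullPointerException in PaymentService.capture"]
print(model.predict(new_message))        # e.g. ['DEF-101']
print(model.predict_proba(new_message))  # confidence scores per known defect
```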
The claims supplied by the inventors are:
“1. A system comprising: one or more processors; and a non-transitory computer-readable media storing a plurality of software components that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving, from one or more computing devices, an event log associated with an error event, the event log including data logged in response to the error event triggered on an application; identifying, based at least in part on the event log, a defect and corresponding defect information; determining, by inputting the defect information into one or more correlation models, the defect correlates to a resolved defect identified in a defects database; identifying a correlated solution from the resolved defect indicated in the defects database; determining, by a defect analyzer component of the plurality of software components and based at least in part on identifying the correlated solution, to generate a defect ticket to associate the defect with the correlated solution; generating the defect ticket for the defect including the defect information and indicating the correlated solution; storing the defect ticket in the defects database; receiving confirmation that the correlated solution is a resolution for the defect; creating training data that includes the defect ticket, the correlated solution, and the confirmation; and retraining the one or more correlation models using the training data.
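Read as a data flow, claim 1 amounts to: match a new defect to a resolved one, surface the resolved defect's solution on a new ticket, and feed the operator's confirmation back into retraining. The toy Python below paraphrases that loop; the class, the lookup-table "model," and every name are invented for illustration, not the claimed implementation:

```python
class CorrelationModel:
    """Toy stand-in for the claimed correlation model (a lookup table, not real ML)."""
    def __init__(self):
        self.known = {"NullPointerException": "DEF-101"}  # error type -> resolved defect

    def correlate(self, error_type: str):
        return self.known.get(error_type)

    def retrain(self, examples):
        # A real system would refit the model; here we just absorb confirmed labels.
        for ticket, _solution, confirmed in examples:
            if confirmed:
                self.known[ticket["error_type"]] = ticket["resolved_defect"]

# Invented solutions table keyed by resolved-defect identifier.
solutions = {"DEF-101": "Guard against a null payment handle before charging."}

def handle_event_log(event_log: dict, model: CorrelationModel) -> dict:
    """Paraphrase of the claim-1 flow: correlate, ticket, confirm, retrain."""
    resolved = model.correlate(event_log["error_type"])  # defect correlates to a resolved defect
    ticket = {
        "error_type": event_log["error_type"],
        "resolved_defect": resolved,
        "suggested_solution": solutions.get(resolved),   # correlated solution on the ticket
    }
    confirmed = True  # stands in for confirmation that the solution resolves the defect
    model.retrain([(ticket, ticket["suggested_solution"], confirmed)])  # feedback loop
    return ticket

print(handle_event_log({"error_type": "NullPointerException"}, CorrelationModel()))
```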
“2. The system of claim 1, the defect information including one or more of: an error type, an error message, a sequence log, a response time of a request, a sequence code, a stack trace, an exposed endpoint, an application identifier, a stage of development cycle, and a severity level.
“3. The system of claim 1, the operations further comprising: generating a task to request review of the defect ticket; generating a notification for the task; publishing the notification to a subscriber of events associated with the application; and sending the notification to a device associated with the subscriber.
“4. The system of claim 3, wherein determining the defect correlates to the resolved defect includes: generating a confidence score associated with the defect correlating to the resolved defect; and determining the confidence score is above a threshold.
“5. The system of claim 1, the operations further comprising: receiving a query indicating one of an error type or an error message; and retrieving, from the defects database, one or more solutions associated with the query.
“6. The system of claim 1, the operations further comprising: receiving, from the one or more computing devices, an additional event log associated with an additional error event; determining an additional defect associated with the additional event log matches the defect; and adding a log identifier associated with the additional event log to the defect ticket.
“7. The system of claim 6, the operations further comprising: determining a count of log identifiers associated with the defect ticket exceeds a threshold; and increasing a priority level of the defect ticket based at least in part on the count of log identifiers exceeding the threshold.
“8. A method, comprising: training, by one or more processors, a correlation model with training data to correlate input data to identified defects and to output associated confidence scores; receiving, by the one or more processors, an event log associated with an error event, the event log including data logged in response to the error event, and the error event being detected on an application; identifying, by the one or more processors and based at least in part on the event log, a defect and corresponding defect information; determining, by the one or more processors and by inputting the defect information into the correlation model, the defect correlates to an identified defect from a defects database; generating, by the one or more processors, a confidence score associated with the defect correlating to the identified defect; determining, by the one or more processors, the confidence score is above a threshold; determining, by a defect analyzer component of a plurality of software components when executed by the one or more processors and based at least in part on the confidence score being above the threshold, to generate a defect ticket to associate the defect with the identified defect; and generating, by the one or more processors, the defect ticket for the defect and indicating the identified defect.
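Claim 8's addition over claim 1 is the explicit confidence gate: a ticket is generated only when the correlation score clears a threshold. A minimal sketch, assuming a hypothetical cutoff of 0.8 and invented defect identifiers:

```python
CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff; the claim only requires "a threshold"

def defect_to_ticket(scores: dict[str, float]) -> str | None:
    """Return the best-matching identified defect if its score clears the threshold."""
    best_defect, best_score = max(scores.items(), key=lambda item: item[1])
    return best_defect if best_score > CONFIDENCE_THRESHOLD else None

# Example confidence scores as a correlation model might emit them.
print(defect_to_ticket({"DEF-101": 0.93, "DEF-202": 0.07}))  # DEF-101: generate the ticket
print(defect_to_ticket({"DEF-101": 0.55, "DEF-202": 0.45}))  # None: no correlation found
```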
“9. The method of claim 8, further comprising: identifying, by the one or more processors, a solution of the identified defect indicated in the defects database; and indicating, by the one or more processors, the solution on the defect ticket.
“10. The method of claim 9, further comprising: generating, by the one or more processors, a task to request review for the solution on the defect ticket; receiving, by the one or more processors, a review result that indicates applying the solution failed to fix the defect; creating, by the one or more processors, new training data that includes the defect ticket, the solution, and the review result; and retraining, by the one or more processors, the correlation model using the new training data.
“11. The method of claim 9, further comprising: generating, by the one or more processors, a task to request review for the solution on the defect ticket; receiving, by the one or more processors, confirmation that the solution is a resolution for the defect; creating, by the one or more processors, new training data that includes the defect ticket, the solution, and the confirmation; and retraining, by the one or more processors, the correlation model using the new training data.
“12. The method of claim 11, further comprising: indicating, by the one or more processors, a resolve status on the defect ticket; storing, by the one or more processors, the defect ticket in the defects database; and generating, by the one or more processors, a user interface including a query tool for the defects database.
“13. The method of claim 8, the defect information indicating a high severity level and further comprising: generating, by the one or more processors, a high alert notification for the defect ticket based at least in part on the high severity level; and pushing, by the one or more processors, the high alert notification to at least one user account having a lead team role associated with the application.
“14. The method of claim 8, the event log associated with the error event being received in real-time or in near real-time, and further comprising: determining, by the one or more processors and based at least in part on the corresponding defect information, a developer identifier associated with the error event and a stage of development cycle is associated with a development stage; and pushing, by the one or more processors, a high alert notification to at least a user account associated with the developer identifier.
“15. A method, comprising: creating, by one or more processors, training data by identifying sample data from a defects database; training, by the one or more processors, a machine learning (ML) model with the training data to correlate input to identified defects; receiving, by the one or more processors, an event log; determining, by the one or more processors and using the ML model, a defect associated with the event log correlates to an identified defect from the defects database; determining, by a defect analyzer component of a plurality of software components when executed by the one or more processors and based at least in part on the defect correlating to the identified defect, to generate a defect ticket to associate the defect with the identified defect; generating, by the one or more processors, the defect ticket for the defect with information including a solution of the identified defect indicated in the defects database; receiving, by the one or more processors, review results for applying the solution as a fix for the defect; creating, by the one or more processors, new training data including the defect ticket labeled with the review results; and training, by the one or more processors, a second ML model with the new training data.
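Claim 15's distinctive step is labeling tickets with review results and training a second model on that feedback. The trivial sketch below, in which a lookup table stands in for the second ML model and all labels are invented, shows the idea, including the claim-16 case where a new defect fails to correlate:

```python
# Hypothetical review results ("fixed" / "not fixed") labeling each defect ticket.
new_training_data = [
    ({"error_type": "NullPointerException", "solution": "add a null guard"}, "fixed"),
    ({"error_type": "Timeout", "solution": "add a null guard"}, "not fixed"),
]

def train_second_model(examples):
    """Train a trivial lookup 'model': keep only solutions that reviews confirmed."""
    confirmed = {}
    for ticket, review_result in examples:
        if review_result == "fixed":
            confirmed[ticket["error_type"]] = ticket["solution"]
    return confirmed

second_model = train_second_model(new_training_data)
print(second_model.get("NullPointerException"))  # 'add a null guard'
print(second_model.get("Timeout"))               # None: fails to correlate (cf. claim 16)
```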
“16. The method of claim 15, further comprising: receiving, by the one or more processors, an additional event log; determining, by the one or more processors and using the second ML model, an additional defect associated with the additional event log fails to correlate to a second identified defect from the defects database; and generating, by the one or more processors, an additional defect ticket for the additional defect.
“17. The method of claim 15, further comprising: receiving, by the one or more processors, an additional event log; determining, by the one or more processors and using the second ML model, an additional defect associated with the additional event log is a match for a second identified defect from the defects database; retrieving, by the one or more processors, a second defect ticket for the second identified defect from the defects database; generating, by the one or more processors, a log identifier for the additional event log; and indicating, by the one or more processors, the log identifier on the second defect ticket.”
There are additional claims; please visit the full patent to read further.
For the URL and more information on this patent, see: Gonzalez, Carlos. Error documentation assistance. U.S. Patent Number 11693726.
(Our reports deliver fact-based news of research and discoveries from around the world.)