Patent Issued for Systems and methods for determining whether an individual is sick based on machine learning algorithms and individualized data (USPTO 11869641): Aetna Inc.
2024 JAN 26 (NewsRx) -- By a News Reporter-Staff News Editor -- Aetna Inc. has been issued U.S. Patent No. 11869641, titled “Systems and methods for determining whether an individual is sick based on machine learning algorithms and individualized data.”
The patent’s inventor is Bates, III, Robert E.
This patent was filed with the U.S. Patent and Trademark Office; the filing date appears in the full patent record.
From the background information supplied by the inventors, news correspondents obtained the following quote: “Impacts of viruses and other diseases are significant even during a typical flu season and the prevention of another global pandemic is a desire shared by many people as well as enterprise organizations. One of the most common ways diseases spread is through the workplace. For example, an individual may be feeling unwell, but may be unsure as to whether they are actually sick. For instance, they may attribute their unease to allergies, sleep deprivation, grogginess when waking up, and/or other factors rather than identifying that they are actually sick. As such, the individuals may go into work, and if they are sick, then they may spread the disease to others within the workplace. This may lead to an entire office space being infected, which may cause projects to be delayed and/or other severe drawbacks. Traditionally, temperature checks may be used to determine whether an individual is sick. However, these temperature checks are typically inaccurate on their own as a person’s body temperature may rise and fall depending on external conditions (e.g., the temperature outside), which may lead individuals into a false sense of security. Accordingly, there remains a technical need to alert individuals that they are sick such that they may stay home and not infect others.”
Supplementing the background information on this patent, NewsRx reporters also obtained the inventors’ summary information for this patent: “In some examples, the present application may use machine learning (e.g., artificial intelligence) algorithms, models, and/or datasets to determine whether an individual is sick and/or infectious. For example, a user device (e.g., a smartphone) may receive images and/or voice recordings associated with an individual. The images may be images of the individual’s face and the voice recordings may be audio recordings of the individual saying a phrase (e.g., “Mary had a little lamb”). The user device may train machine learning datasets based on the received images and/or voice recordings such that the machine learning datasets are individualized for the particular individual. By individualizing the machine learning datasets for the particular individual, the machine learning datasets may better predict and/or determine whether the individual is actually sick. After training the machine learning datasets, the user device may receive a new image and a new voice recording associated with the individual. The user device may input the new image and voice recording into the trained machine learning datasets to determine whether the individual is actually sick. Subsequently, the user device may display an alert of the determination or provide information indicating the determination to a second device.
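The summary leaves the model architecture, the feature extraction, and the rule for combining the two confidence values unspecified. The following is purely an illustrative sketch of the train-then-infer flow described above; the feature vectors, the logistic-regression models, and the averaging threshold are assumptions of this article, not the inventors’ disclosed method:

    # Illustrative sketch only: per-individual "machine learning datasets" are
    # modeled here as one classifier per modality, trained on feature vectors
    # assumed to have been extracted from that person's images and recordings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-ins for enrollment features from one individual (rows = samples).
    face_features = rng.normal(size=(40, 16))
    voice_features = rng.normal(size=(40, 12))
    labels = rng.integers(0, 2, size=40)  # 1 = "sick" sample, 0 = "well"

    # Individualized models: trained only on this person's data.
    face_model = LogisticRegression().fit(face_features, labels)
    voice_model = LogisticRegression().fit(voice_features, labels)

    # Inference on a new image and a new voice recording of the same person.
    new_face = rng.normal(size=(1, 16))
    new_voice = rng.normal(size=(1, 12))
    face_conf = face_model.predict_proba(new_face)[0, 1]
    voice_conf = voice_model.predict_proba(new_voice)[0, 1]

    # The patent leaves the combination rule open; averaging is one option.
    print("sick" if (face_conf + voice_conf) / 2 > 0.5 else "not sick")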
“In one aspect, a user device comprises one or more processors; and a non-transitory computer-readable medium having processor-executable instructions stored thereon. The processor-executable instructions, when executed, facilitate: obtaining a facial image of an individual; obtaining an audio file comprising a voice recording of the individual; determining a facial recognition confidence value associated with whether the individual is sick based on inputting the facial image into a facial recognition machine learning dataset that is individualized for the individual; determining a voice recognition confidence value associated with whether the individual is sick based on inputting the audio file into a voice recognition machine learning dataset that is individualized for the individual; determining whether the individual is sick based on the facial recognition confidence value and the voice recognition confidence value; and causing display of a prompt indicating whether the individual is sick.
“Examples may include one of the following features, or any combination thereof. For instance, in some examples, the user device further comprises an image capturing device. The processor-executable instructions, when executed, further facilitate: using the image capturing device to obtain training data comprising a plurality of facial images of the individual; and individualizing the facial recognition machine learning dataset for the individual based on training the facial recognition machine learning dataset using the plurality of facial images of the individual.
“In some instances, the user device further comprises a voice recording device. The processor-executable instructions, when executed, further facilitate: using the voice recording device to obtain training data comprising a plurality of voice recordings of the individual; and individualizing the voice recognition machine learning dataset for the individual based on training the voice recognition machine learning dataset using the plurality of voice recordings of the individual.
“In some variations, the processor-executable instructions, when executed, further facilitate: receiving, from a wearable device and at a first instance in time, first sensor information indicating first health characteristics associated with the individual; generating a baseline health model of the individual based on the first sensor information, and wherein determining whether the individual is sick is further based on the baseline health model.”
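The summary does not say what form the baseline health model takes. A minimal sketch, assuming the baseline is a per-metric mean and standard deviation computed from the first sensor readings, with deviation scored as a z-score (both assumptions, not the patent’s stated method):

    # Sketch of a baseline health model built from first-instance wearable
    # readings. The mean/std form and the z-score test are assumptions.
    import numpy as np

    metrics = ["oxygen", "temperature", "pulse", "humidity"]
    first_readings = np.array([   # first sensor information over time
        [98.0, 36.6, 62, 0.45],
        [97.5, 36.7, 65, 0.47],
        [98.2, 36.5, 60, 0.44],
    ])

    baseline_mean = first_readings.mean(axis=0)
    baseline_std = first_readings.std(axis=0, ddof=1)

    def deviation_scores(current):
        """Z-score of each current metric against this person's baseline."""
        return np.abs(current - baseline_mean) / baseline_std

    current = np.array([95.0, 38.1, 88, 0.52])  # later, possibly-sick reading
    for name, z in zip(metrics, deviation_scores(current)):
        print(f"{name}: {z:.1f} standard deviations from baseline")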
The claims supplied by the inventors are:
“1. A user device, comprising: one or more processors; and a non-transitory computer-readable medium having processor-executable instructions stored thereon, wherein the processor-executable instructions, when executed, facilitate: receiving, from a wearable device and at a first instance in time, first sensor information indicating first health characteristics associated with an individual; generating a baseline health model of the individual based on the first sensor information; obtaining a facial image of the individual; obtaining an audio file comprising a voice recording of the individual; determining a facial recognition confidence value associated with whether the individual is sick based on inputting the facial image into a facial recognition machine learning dataset that is individualized for the individual; determining a voice recognition confidence value associated with whether the individual is sick based on inputting the audio file into a voice recognition machine learning dataset that is individualized for the individual; determining whether the individual is sick based on the baseline health model, the facial recognition confidence value, and the voice recognition confidence value; and causing display of a prompt indicating whether the individual is sick.
“2. The user device of claim 1, further comprising: an image capturing device, and wherein the processor-executable instructions, when executed, further facilitate: using the image capturing device to obtain training data comprising a plurality of facial images of the individual; and individualizing the facial recognition machine learning dataset for the individual based on training the facial recognition machine learning dataset using the plurality of facial images of the individual.
“3. The user device of claim 1, further comprising: a voice recording device, and wherein the processor-executable instructions, when executed, further facilitate: using the voice recording device to obtain training data comprising a plurality of voice recordings of the individual; and individualizing the voice recognition machine learning dataset for the individual based on training the voice recognition machine learning dataset using the plurality of voice recordings of the individual.
“4. The user device of claim 1, wherein the processor-executable instructions, when executed, further facilitate: receiving, from the wearable device and at a second instance in time that is subsequent to the first instance in time, second sensor information indicating second health characteristics associated with the individual; and determining one or more health characteristic confidence values based on comparing the second sensor information with the generated baseline health model, wherein determining whether the individual is sick is further based on the one or more health characteristic confidence values.
“5. The user device of claim 4, wherein the first and second health characteristics comprise one or more of an oxygen level of the individual, a temperature reading of the individual, a pulse rate of the individual, and a humidity value associated with the individual.
“6. The user device of claim 1, wherein the processor-executable instructions, when executed, further facilitate: receiving, from the wearable device and at a third instance in time, third sensor information indicating third health characteristics associated with the individual; based on comparing the third health characteristics with the first health characteristics, causing display of a second prompt requesting user feedback associated with updating the baseline health model; and in response to the user feedback indicating for the baseline health model to be updated, updating the baseline health model using the third health characteristics.
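Claim 6 leaves the comparison threshold and the update rule open. A sketch under the assumption of a simple relative-drift check and an exponential blend, with a console prompt standing in for the claim’s second prompt (all three choices are assumptions):

    # Sketch of claim 6's feedback-gated baseline update. The 5% drift
    # threshold and the exponential blend are assumptions, not claim language.
    import numpy as np

    baseline = np.array([97.9, 36.6, 62.3, 0.453])  # from first sensor info
    third = np.array([97.0, 36.9, 70.0, 0.50])      # third sensor information

    drift = np.abs(third - baseline) / baseline
    if (drift > 0.05).any():  # hypothetical "worth prompting" threshold
        answer = input("Readings have shifted. Update your baseline? [y/n] ")
        if answer.strip().lower() == "y":
            baseline = 0.8 * baseline + 0.2 * third  # fold in new readings
    print("baseline:", baseline)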
“7. The user device of claim 1, wherein determining the facial recognition confidence value comprises: inputting the facial image into the facial recognition machine learning dataset to determine a preliminary facial recognition value; and calculating the facial recognition confidence value based on the preliminary facial recognition value and a facial recognition weighted value, and wherein determining the voice recognition confidence value comprises: inputting the audio file into the voice recognition machine learning dataset to determine a preliminary voice recognition value; and calculating the voice recognition confidence value based on the preliminary voice recognition value and a voice recognition weighted value.
“8. The user device of claim 7, wherein the processor-executable instructions, when executed, further facilitate: determining, based on second sensor information from the wearable device, a preliminary sensor information value, wherein the preliminary sensor information value is associated with an oxygen level of the individual, a temperature reading of the individual, a pulse rate of the individual, or a humidity value associated with the individual; calculating a health characteristic confidence value based on the preliminary sensor information value and a health characteristic weighted value, and wherein determining whether the individual is sick is further based on the health characteristic confidence value.
“9. The user device of claim 8, wherein the processor-executable instructions, when executed, further facilitate: providing, to an enterprise computing system, a request for a plurality of weighted values associated with a particular type of illness; and receiving, from the enterprise computing system, the voice recognition weighted value associated with the particular type of illness, the health characteristic weighted value associated with the particular type of illness, and the facial recognition weighted value associated with the particular type of illness.
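Claims 7 through 9 together describe scaling each modality’s preliminary value by an illness-specific weighted value supplied by an enterprise computing system. A sketch of that arithmetic, in which the weight lookup is a stub and the weight values and the sum-plus-threshold decision are illustrative assumptions:

    # Sketch of the weighted-confidence arithmetic in claims 7-9. The stub
    # below stands in for the enterprise computing system of claim 9; the
    # weights and the decision rule are assumptions for illustration.

    def fetch_weights(illness: str) -> dict:
        """Stub for the request for illness-specific weighted values."""
        return {"facial": 0.4, "voice": 0.35, "health": 0.25}  # hypothetical

    # Preliminary values from the individualized models and the wearable.
    preliminary = {"facial": 0.82, "voice": 0.74, "health": 0.61}

    weights = fetch_weights("influenza")
    confidences = {k: preliminary[k] * weights[k] for k in preliminary}

    overall = sum(confidences.values())
    print(confidences, "->", "sick" if overall > 0.5 else "not sick")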
“10. The user device of claim 1, further comprising: an image capturing device, and wherein the processor-executable instructions, when executed, further facilitate: using the image capturing device to obtain a second image of a portion of the individual’s body, wherein the portion of the individual’s body is any bodily portion of the individual other than the individual’s face, and wherein determining whether the individual is sick is further based on the second image of the portion of the individual’s body.
“11. The user device of claim 1, wherein the prompt requests user feedback indicating whether to provide information to an enterprise computing system, and wherein the processor-executable instructions, when executed, further facilitate: based on the user feedback, providing information indicating the individual is sick to the enterprise computing system, wherein the information comprises geographical coordinates associated with the user device.
“12. A system, comprising: a health characteristic device, comprising: one or more first processors; and a first non-transitory computer-readable medium having first processor-executable instructions stored thereon, wherein the first processor-executable instructions, when executed, facilitate: obtaining current sensor information indicating current health characteristics associated with an individual; and providing the current sensor information to a user device; and the user device, wherein the user device comprises: one or more second processors; and a second non-transitory computer-readable medium having second processor-executable instructions stored thereon, wherein the second processor-executable instructions, when executed, facilitate: obtaining a facial image of the individual; obtaining an audio file comprising a voice recording of the individual; determining a facial recognition confidence value associated with whether the individual is sick based on inputting the facial image into a facial recognition machine learning dataset that is individualized for the individual; determining a voice recognition confidence value associated with whether the individual is sick based on inputting the audio file into a voice recognition machine learning dataset that is individualized for the individual; determining whether the individual is sick based on the facial recognition confidence value, the voice recognition confidence value, and the current sensor information from the health characteristic device; and causing display of a prompt indicating whether the individual is sick.
“13. The system of claim 12, wherein the first processor-executable instructions, when executed, further facilitate: obtaining first sensor information indicating first health characteristics associated with the individual; and providing the first sensor information to the user device, and wherein the second processor-executable instructions, when executed, further facilitate: generating a baseline health model of the individual based on the first sensor information, wherein determining whether the individual is sick is further based on comparing the current sensor information with the baseline health model.
“14. The system of claim 13, wherein the first and current sensor information comprises one or more of an oxygen level of the individual, a temperature reading of the individual, a pulse rate of the individual, and a humidity value associated with the individual.
“15. The system of claim 13, wherein the first processor-executable instructions, when executed, further facilitate: obtaining third sensor information indicating third health characteristics associated with the individual; and providing the third sensor information to the user device, and wherein the second processor-executable instructions, when executed, further facilitate: updating the baseline health model of the individual based on the third sensor information, wherein determining whether the individual is sick is further based on comparing the current sensor information with the updated baseline health model.”
There are additional claims; please see the full patent to read further.
For the URL and additional information on this patent, see: Bates, III, Robert E. Systems and methods for determining whether an individual is sick based on machine learning algorithms and individualized data.
(Our reports deliver fact-based news of research and discoveries from around the world.)