Researchers Submit Patent Application, “Medical Platform”, for Approval (USPTO 20210228276)
2021 AUG 17 (NewsRx) --
No assignee has been named for this patent application.
News editors obtained the following quote from the background information supplied by the inventors: “Medical procedures are potentially life-changing events with enormous benefits and risks. Educating patients about the risks and benefits of medical procedures is an essential step in the patient intake process. Patient education is also an important part of the operative and post-operative phases of a procedure with patient knowledge and expectations standing as two cornerstones of patient safety and successful post operation recovery. In addition to keeping patients safe and helping them recover, enhanced platforms for patient education are needed to decrease the amount of unnecessary office visits and complications treatments caused by inaccurate self-diagnosis of procedure complications by patients that were not educated about the recovery process enough to form accurate expectations of how they should look, feel, and progress during recovery.
“Despite the importance of patient education and the abundant inefficiencies that result from undereducated patients, conversations between doctors and patients with limited visual aids represent the current state of the art of medical education. Although videos and images of most medical procedures exist somewhere on the Internet, this information is often unreliable and difficult to understand unless viewed in the presence of a medical expert. These visual aids provide an incremental improvement over purely oral methods but fail to deliver the comprehensive, interactive, and personalized experience patients need. Accordingly, there exists a well-established need for curating repositories of medical images, videos, simulations, graphical representations, lectures, descriptions, and other mixed media education materials and presenting the curated materials in an intuitive user interface that allows the patient to explore and interact with the material at his or her own pace.
“Augmented reality (AR) is a live direct view of a physical real-world environment whose elements are augmented by computer generated input. Unlike virtual reality (VR) which replaces a physical real world with a simulated one, AR platforms focus on enhancing user perception of real-world experiences by, for example, annotating the pages of a classic literary novel, simulating how a room in a building would collapse during an earthquake, classifying a plant or animal species in real time as it is found in the wild or simulating the results and events of a surgical or non-surgical procedure. Applications of AR are widespread and diverse, but each is based on the underlying concept of receiving real-world sensory input, for example, sound, video, haptics, or location data and adding further digital insights to that information.
“Methods of providing a realistic simulation of a real-world experience are especially suited to the medical field. Medical procedures are among the most costly, dangerous, and life changing events in a person’s life. Accordingly, it is extremely important for patients and physicians to comprehend the complexity, understand the risks, and predict the results of a medical procedure before it occurs.
“In light of the shortcomings of state-of-the-art visual aids, there exists a well-established need for realistic digital 3D simulations of medical procedures. To provide the tools patients need to make a truly informed decision about a potentially life changing procedure, such simulations should be personalized for the individual patient, procedure, doctor, and products used in the procedure. Additionally, the simulations should be interactive to show the changes that will occur to the patient’s body during and after the procedure. The simulations should also be interactive so that the patient can visualize physical changes to his or her body from every possible perspective and angle of view. Furthermore, the simulations should provide a comprehensive, step-by-step representation of each action during the procedure so that the patient develops a thorough understanding of the associated risks and potential complications.
“The process of obtaining informed patient consent is another essential medical process that needs to be improved. Due to the tremendous impact and expense associated with most medical procedures, obtaining informed patient consent before conducting a procedure is an integral component of regulatory compliance, medical ethics, insurance reimbursement, and limiting physician liability. Despite the fundamental role of patient consent in the medical field, the state-of-the-art process for obtaining patient consent is pen and paper. Most consent forms are long, full of complex legalese and medical jargon, and seldom read or understood by patients.
“In light of these shortcomings, there exists a well-established need for a patient consent process that is integrated with patient education so that the patient is actually informed about the procedure he or she is consenting to before providing consent. The consent process should be presented through a user interface within a software application to make the process of giving consent more efficient and flexible to fit patient preferences. Additionally, the patient consent software application should ensure the patient reviews all procedure education materials in an interactive way before consenting to the procedure. The patient consent software application should also save the patient’s manifestation of consent, whether it be a physical signature in ink, a digital signature, recording, or some other form, in digital format so it can be accessed at any point in the future by patients, doctors, insurance companies, or any other authorized third party.
“Post-operative patient monitoring and follow-up are essential components of successful patient recovery. Throughout the recovery process, it is important to report actual complications to physicians as soon as possible without burdening doctors with benign changes or misdiagnosed routine recovery developments. The vast majority of medical procedures are outpatient procedures meaning most of the recovery process is completed at home by the patient with only a few periodic check-ups. Accordingly, most of the responsibility for accurately diagnosing procedure complications falls on the patient who in most cases is not a medical expert and typically has little to no experience recovering from their particular procedure. To make matters worse, there are few technology-based tools for helping patients diagnose complications and monitor their recovery process. As a result, many harmful complications go undiagnosed and many routine recovery symptoms are falsely diagnosed. Both of these problems add significant cost to already expensive procedures while also reducing the efficiency of doctors and other healthcare providers.
“Accordingly, there exists a well-established need for automated diagnostic tools that can help patients diagnose complications during the recovery process. There also exists a need for a patient follow up and monitoring software application that can automatically track patient recovery progress and schedule an emergency appointment with a doctor if the patient reports symptoms that carry a high risk of being associated with complications or submits photos of the procedure area that suggest infection or another complication.
“Patient education, consent, and post operation monitoring and follow-up are three important but severely outdated medical processes that need to be improved in order to help doctors better care for their patients, to help patients recover faster, and to help medical insurance companies and healthcare providers reduce the cost of medical care. Regarding patient education, there exists a well-established need for more realistic procedure education materials that are presented in a more interactive way. For patients without medical education, the process of learning about a new medical procedure should be intuitive, highly visual, and specific to the patient. Such an education experience would allow the patient to develop a keen, personal understanding of the procedure their body is about to endure along with an accurate set of expectations for how recovery should go as well as a clear list of action items to pursue if complications arise. The patient consent process should be integrated into the patient education materials so that patient consent is obtained only after the patient has clearly understood and interacted with the education materials presented to them. Finally, the post operation monitoring and follow-up process should offer more support to patients in the form of automated diagnostic tools that can help diagnose complications and recovery process simulations and reports that provide an accurate idea of what the patient should expect at each stage of the recovery process.”
As a supplement to the background information on this patent application, NewsRx correspondents also obtained the inventors’ summary information for this patent application: “In one aspect, the invention provides a computer-implemented method of simulating the effect on a patient’s body of a procedure, comprising: receiving a selection of a procedure; creating a pre-procedure 3D model of at least a part of the patient’s body that would be affected by the procedure; simulating the effects of the procedure on the patient’s body and generating a plurality of post-procedure 3D models from the pre-procedure 3D model, each post-procedure 3D model representing the patient’s body at a different time following the procedure; and displaying any of the pre-procedure 3D model and the post-procedure 3D models over a still image or a video of the patient.
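The claimed simulation method follows a simple pipeline: select a procedure, build a pre-procedure 3D model, then generate one post-procedure model per recovery timepoint. A minimal sketch of that flow is below; all names (`Model3D`, `simulate_procedure`, the `days_after_procedure` convention) are illustrative stand-ins, not from the filing, and the "simulation" here merely copies geometry.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the claimed pipeline; a real system would deform
# the mesh according to the selected procedure rather than copy it.

@dataclass
class Model3D:
    body_part: str
    days_after_procedure: int  # -1 = pre-procedure, 0 = immediately after
    vertices: list = field(default_factory=list)

def simulate_procedure(pre: Model3D, timepoints: List[int]) -> List[Model3D]:
    """Generate one post-procedure model per requested recovery timepoint."""
    return [Model3D(pre.body_part, t, list(pre.vertices)) for t in timepoints]

pre = Model3D("nose", -1)
post = simulate_procedure(pre, [0, 30, 180])  # day 0, 1 month, 6 months
```

Each resulting model can then be rendered over a still image or video of the patient, as the claim describes.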
“In this way, embodiments of the present invention enable the creation of time-based representations of the outcomes of a medical procedure such as a cosmetic or reconstructive procedure, or of the changes over time to a patient’s body as a result of implementing a diet or a physical fitness plan. Among many other advantages, this enables education of a patient, for obtaining informed consent and for managing expectations. By simulating outcomes using the patient’s body as a base model, the patient can much more easily see and understand those outcomes in order to make an informed decision.
“In one embodiment, the method further comprises: receiving a selection of a potential complication of the procedure; and simulating the effects of the complication on the patient’s body and generating a plurality of post-complication 3D models from either the pre-procedure 3D model or a post-procedure 3D model, each post-complication 3D model representing the patient’s body at a different time following the complication.
“Understanding potential complications of a procedure is an important part of obtaining informed consent. Furthermore, by simulating complications using the patient’s body as a base model, the patient will understand better what symptoms to look out for and will be able to report potential complications to a physician in a timely manner.
“In one embodiment, the method further comprises training a machine learning system on a training dataset comprising a plurality of 3D models of at least parts of a plurality of patients’ bodies at different times following a procedure and using the machine learning system to simulate the effects of the procedure on the patient’s body.
“The use of machine learning or artificial intelligence for simulation purposes means that simulated outcomes, including complications, are based on real-world results and not just on designed templates or mathematical formulas. Simulations based on real results are better able to educate patients and physicians. Following completion of the procedure, 3D models of the actual outcomes can be created from the patient’s body and these models can be added to the training dataset of the machine learning system to further improve its simulations.
“In one embodiment, the post-procedure 3D models include a model representing the patient’s body immediately after the procedure is completed, and at least one model representing the patient’s body at a selected time during the procedure. As well as informing a patient, simulated models of instances during the procedure, particularly a surgical procedure, can educate a physician to enable them to perform better.
“In another aspect, the invention provides a computer-implemented method of simulating the effects of a medical procedure on a patient’s body comprising: training a machine learning system on training data comprising the effects of the medical procedure on a plurality of patients’ bodies, as performed by a plurality of different physicians, to generate a plurality of predictive models; creating a 3D model of at least a part of the patient’s body that would be affected by the procedure; using a first predictive model, generating a first modified 3D model of the at least part of the patient’s body following the procedure, simulating the effects of the procedure as performed by a first physician; and using a second predictive model, generating a second modified 3D model of the at least part of the patient’s body following the procedure, simulating the effects of the procedure as performed by a second physician.
“The use of artificial intelligence to gain real world data about the performance of multiple different physicians enables the creation of both general purpose models to create generic simulations of the effects of a medical procedure, but also more specific models for creating physician specific simulations. This can provide invaluable insight, enabling the provision of a virtual second opinion on the effects of a medical procedure.
“In another aspect, the invention provides a computer-implemented method of obtaining patient consent for a medical procedure comprising: receiving a selection of a medical procedure; receiving patient information including at least a patient location; automatically determining consent requirements of the patient based on the patient location and the medical procedure and retrieving at least one consent workflow meeting the consent requirements from a store of consent workflows; automatically identifying at least one education course needed to educate the patient about the medical procedure and retrieving the at least one education course from a store of education courses; using the or each retrieved consent workflow and the or each retrieved education course, automatically assembling an education and consent workflow for educating the patient about the medical procedure and for capturing patient consent to the medical procedure; displaying the education and consent workflow; receiving affirmation of consent from the patient; and storing the education and consent workflow and the affirmation of consent.
“The computer-automated assembly of an education and consent workflow ensures that necessary laws in the jurisdiction in question can be complied with. Furthermore, assembling the consent requirements together with education courses and providing these together over a computer platform ensures the patient has the best possible understanding of the procedure and what they are consenting to, while storing their affirmation of consent together with the education and consent workflow offers protection to the physician. On a digital platform, the affirmation of consent may even include video of the physician talking through the education course with the patient to demonstrate informed consent, providing further legal protection for the physician.
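The assembly step described above can be sketched as a lookup against two stores keyed by jurisdiction and procedure. The stores, keys, and step names below are invented placeholders for the databases the application describes; the only structural point taken from the claim is the ordering: education first, consent capture last.

```python
# Illustrative jurisdiction-aware workflow assembly (all entries invented).

CONSENT_WORKFLOWS = {
    ("US-CA", "rhinoplasty"): ["CA consent form", "CA-specific risk disclosure"],
    ("US-NY", "rhinoplasty"): ["NY consent form"],
}
EDUCATION_COURSES = {
    "rhinoplasty": ["procedure overview", "risks & complications", "recovery timeline"],
}

def assemble_workflow(location, procedure):
    consent_steps = CONSENT_WORKFLOWS.get((location, procedure))
    courses = EDUCATION_COURSES.get(procedure)
    if consent_steps is None or courses is None:
        raise LookupError("no workflow available for this location/procedure")
    # Education precedes consent capture, mirroring the claimed ordering.
    return courses + consent_steps + ["capture affirmation of consent"]

workflow = assemble_workflow("US-NY", "rhinoplasty")
```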
“In one embodiment, the patient information includes at least one image of the patient’s body; and assembling an education and consent workflow comprises automatically simulating at least one outcome of the medical procedure using the or each image of the patient’s body to create a simulated representation of the at least one outcome on the patient’s body, and including the simulated representation in the education and consent workflow. Personalising an education course using simulated representations of outcomes on the patient’s actual body increases the patient’s understanding and better informs consent.
“In another aspect, the invention provides a computer-implemented method of diagnosing patient complications during recovery from a medical procedure comprising: receiving patient recovery data via a patient device; extracting, by a data analytics service, patient recovery parameters from the patient recovery data; ingesting, by a diagnostic AI, the patient recovery parameters in order to identify procedure complications within the patient recovery data based on the extracted patient recovery parameters; producing, by the diagnostic AI, a complications diagnosis; assembling, by a complications application, a complication report including the complications diagnosis and a treatment plan; and delivering the complication report to the patient device.
“Using an artificial intelligence system to diagnose complications from a medical procedure enables early assessment of potential complications or can set the patient’s mind at ease if the diagnosis is clear. For more serious complications, the complication report can be passed on to a physician for human review, and the computer system can optionally automatically schedule an appointment for the patient.
“In another aspect, the invention provides a computer-implemented method of generating an augmented reality (AR) rendering of a medical procedure, comprising: receiving a selection of a medical procedure affecting a body part of a patient, patient measurements, and an image of the body part of the patient; generating, by a 3D modelling engine, a 3D model of the body part comprising a three-dimensional mesh structure covered in a texture material, the patient mesh structure dimensioned according to patient measurements, and the texture material extracted from the image of the body part; simulating, by a simulation engine, modifications to the 3D model according to an anticipated result of the medical procedure; and matching, by an AR engine, the position and orientation of the 3D model with the position and orientation of the body part of the patient in a video of the patient, and augmenting the video with a rendering of the modified 3D model over the body part of the patient.
“Advantageously, preparing a 3D model of the patient’s body, textured with an image of the patient’s body, and then modifying that 3D model based on the likely result of a medical procedure, enables the patient to easily understand how the procedure will affect them. Furthermore, by matching the modified 3D model to a video (including a live video) of the patient’s body using an augmented reality system, the patient is immediately and intuitively able to observe the overall effects of the procedure on their body.”
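The per-frame AR step above amounts to copying the tracked position and orientation of the body part onto the modified 3D model before rendering it into the frame. A minimal sketch, assuming pose tracking is supplied by some AR framework; the `Pose` type and field names are invented.

```python
from dataclasses import dataclass

# Illustrative pose alignment only; real AR engines use full 6-DoF transforms.

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw_deg: float

def match_pose(model_pose: Pose, tracked_body_pose: Pose) -> Pose:
    """Align the simulated model with the body part seen in the video frame."""
    return Pose(tracked_body_pose.x, tracked_body_pose.y,
                tracked_body_pose.z, tracked_body_pose.yaw_deg)

frame_pose = Pose(0.12, 0.40, 1.8, 15.0)   # from the (assumed) tracker
model_pose = match_pose(Pose(0, 0, 0, 0), frame_pose)
```

In a live video this runs every frame, so the modified model stays registered to the patient's body as they move.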
There is additional summary information. Please visit the full patent to read further.
The claims supplied by the inventors are:
“1. A computer-implemented method of simulating the effect on a patient’s body of a procedure, comprising: receiving a selection of a procedure; creating a pre-procedure 3D model of at least a part of the patient’s body that would be affected by the procedure; simulating the effects of the procedure on the patient’s body and generating a plurality of post-procedure 3D models from the pre-procedure 3D model, each post-procedure 3D model representing the patient’s body at a different time following the procedure; and displaying any of the pre-procedure 3D model and the post-procedure 3D models over a still image or a video of the patient.
“2. The method of claim 1 wherein the procedure comprises one of a cosmetic procedure, a reconstructive procedure, bariatric surgery, and implementation of a diet and/or a physical fitness plan.
“3. The method of claim 1 further comprising: receiving a selection of a potential complication of the procedure; and simulating the effects of the complication on the patient’s body and generating a plurality of post-complication 3D models from either the pre-procedure 3D model or a post-procedure 3D model, each post-complication 3D model representing the patient’s body at a different time following the complication.
“4. The method of claim 1 further comprising training a machine learning system on a training dataset comprising a plurality of 3D models of at least parts of a plurality of patients’ bodies at different times following a procedure and using the machine learning system to simulate the effects of the procedure on the patient’s body.
“5. The method of claim 4 further comprising, following completion of the procedure, creating at least one 3D model of at least a part of the patient’s body that has been affected by the procedure at at least one different time following the procedure and adding the at least one 3D model to the training dataset of the machine learning system.
“6. The method of claim 1 wherein the post-procedure 3D models include a model representing the patient’s body immediately after the procedure is completed, and at least one model representing the patient’s body at a selected time during the procedure.
“7. The method of claim 1 wherein the post-procedure 3D models include a model representing the patient’s body immediately after the procedure, a model representing the patient’s body after full recovery from the procedure, and at least one model representing the patient’s body at a selected intervening time.
“8. The method of claim 1 further comprising placing any of the pre-procedure 3D model and the post-procedure 3D models over a live video of the patient for displaying in an augmented reality environment.
“9-32. (canceled)
“33. The method of claim 1 further comprising training a machine learning system on training data comprising the effects of the procedure on a plurality of different patients’ bodies, as performed by a plurality of different physicians, to generate a plurality of predictive models; wherein simulating the effects of the procedure on the patient’s body further comprises: using a first predictive model of the plurality of predictive models, generating a first post-procedure 3D model of the at least part of the patient’s body following the procedure, the first post-procedure 3D model simulating the effects of the procedure as performed by a first physician; and using a second predictive model of the plurality of predictive models, generating a second post-procedure 3D model of the at least part of the patient’s body following the procedure, the second post-procedure 3D model simulating the effects of the procedure as performed by a second physician.
“34. A system for simulating the effect on a patient’s body of a procedure, comprising: a computer processor for receiving a selection of a procedure via a user interface; a 3D modelling engine for creating a pre-procedure 3D model of at least a part of the patient’s body that would be affected by the procedure; a simulation engine for simulating the effects of the procedure on the patient’s body and generating a plurality of post-procedure 3D models from the pre-procedure 3D model, each post-procedure 3D model representing the patient’s body at a different time following the procedure; and rendering logic for displaying, on a display device, any of the pre-procedure 3D model and the post-procedure 3D models over a still image or a video of the patient.
“35. The system of claim 34 wherein the procedure comprises one of a cosmetic procedure, a reconstructive procedure, bariatric surgery, and implementation of a diet and/or a physical fitness plan.
“36. The system of claim 34 wherein, responsive to the computer processor receiving a selection of a potential complication of the procedure via a user interface, the simulation engine simulates the effects of the complication on the patient’s body and generates a plurality of post-complication 3D models from either the pre-procedure 3D model or a post-procedure 3D model, each post-complication 3D model representing the patient’s body at a different time following the complication.
“37. The system of claim 34 further comprising an artificial intelligence (AI) system, the AI system comprising a machine learning system, wherein the machine learning system is trained on a training dataset comprising a plurality of 3D models of at least parts of a plurality of patients’ bodies at different times following a procedure and using the machine learning system to simulate the effects of the procedure on the patient’s body.
“38. The system of claim 37 wherein, following completion of the procedure, the 3D modelling engine creates at least one 3D model of at least a part of the patient’s body that has been affected by the procedure at at least one different time following the procedure, and the at least one 3D model is added to the training dataset of the machine learning system.
“39. The system of claim 34 wherein the post-procedure 3D models include a model representing the patient’s body immediately after the procedure is completed, and at least one model representing the patient’s body at a selected time during the procedure.
“40. The system of claim 34 wherein the post-procedure 3D models include a model representing the patient’s body immediately after the procedure, a model representing the patient’s body after full recovery from the procedure, and at least one model representing the patient’s body at a selected intervening time.
“41. The system of claim 34 further comprising an augmented reality (AR) engine for placing any of the pre-procedure 3D model and the post-procedure 3D models over a live video of the patient for display by the rendering logic on the display device.
“42. The system of claim 34 further comprising an artificial intelligence (AI) system, the AI system comprising a machine learning system, wherein the machine learning system is trained on training data comprising the effects of the procedure on a plurality of different patients’ bodies, as performed by a plurality of different physicians, to generate a plurality of predictive models; and wherein the simulation engine is further for simulating the effects of the procedure on the patient’s body by: (a) using a first predictive model of the plurality of predictive models, generating a first post-procedure 3D model of the at least part of the patient’s body following the procedure, the first post-procedure 3D model simulating the effects of the procedure as performed by a first physician; and (b) using a second predictive model of the plurality of predictive models, generating a second post-procedure 3D model of the at least part of the patient’s body following the procedure, the second post-procedure 3D model simulating the effects of the procedure as performed by a second physician.
“43. A computer-implemented method of simulating the effects of a medical procedure on a patient’s body comprising: training a machine learning system on training data comprising the effects of the medical procedure on a plurality of patients’ bodies, as performed by a plurality of different physicians, to generate a plurality of predictive models; creating a 3D model of at least a part of the patient’s body that would be affected by the procedure; using a first predictive model of the plurality of predictive models, generating a first modified 3D model of the at least part of the patient’s body following the procedure that simulates the effects of the procedure as performed by a first physician; and using a second predictive model of the plurality of predictive models, generating a second modified 3D model of the at least part of the patient’s body following the procedure that simulates the effects of the procedure as performed by a second physician to obtain a virtual second opinion.”
For additional information on this patent application, see: Giraldez,
(Our reports deliver fact-based news of research and discoveries from around the world.)