Patent Issued for Systems And Methods For Presenting And Modifying Interactive Content (USPTO 10,825,058)
2020 NOV 16 (NewsRx) -- The patent’s inventor is Knas, Michal.
From the background information supplied by the inventors, news correspondents obtained the following quote: “Interactions with potential clients are of utmost importance to sales efforts. As such, numerous efforts are constantly undertaken to improve the tools at the disposal of salespeople. Examples of such tools include mobile tools for presenting, remote access to presentation materials, videoconferencing tools, and the like.
“Though these and other efforts have attempted to provide better tools to salespeople, it is still left up to the experience of the individual salesperson to determine how to proceed with a sales presentation. Less experienced salespeople, then, may benefit less from the available sales tools than others who are more adept at using the available sales tools and gauging the potential client’s reactions. This in turn may lead to sub-optimal sales when a significant part of a sales force is not as experienced as a company would desire.
“When a salesperson is attempting to sell a product or service to a potential client in a remote location, the salesperson cannot see how the potential client is reacting to the sales pitch. Even if the customer were sitting directly in front of the salesperson, the salesperson likely could not determine whether the potential client is focusing on a particular portion of a display. As a result, it is difficult to revise the sales pitch materials or sales pitch approach based on a remote potential client’s reaction to the content. Accordingly, there is a continuing need to provide tools capable of improving salesperson and client interactions. For example, there is a need for a tool capable of communicating with a server and monitoring the salesperson, sales presentation, and the client in order to guide the salesperson towards a better sales presentation.”
Supplementing the background information on this patent, NewsRx reporters also obtained the inventor’s summary information for this patent: “The systems and methods disclosed herein attempt to address the problems associated with the conventional sales approaches by providing a computer implemented method and system to provide feedback to a salesperson about a remotely-located client such that the electronic content displayed to the client or the salesperson can be modified, updated, or dynamically generated in response to the feedback.
“Disclosed herein are systems and methods for presenting and modifying intelligent interactive content. The methods may include receiving user input on a first device from a first user to present content to a second user on a second device. The method may additionally include providing the second device with content data, presenting the content to the second user, monitoring the second user’s reaction to the content, and generating feedback data. The method may further include providing the feedback data to the first device and presenting feedback to the first user.
“In some embodiments, a first user interacting with a control device is presented with a series of options for presenting content via a user interface. The control device receives user input from the first user for displaying content on a content display device and provides content data to the content display device. The content display device receives the content data, processes the content data into content, and displays the content to a second user via a display. As the content is displayed, an eye-tracking sensor monitors the second user’s gaze and provides behavior data to the content display device. The content display device processes the behavior data to generate feedback data and provides the feedback data to the control device. The control device receives the feedback data, generates feedback based on the feedback data, and presents the feedback to the first user via the user interface. The first user can then use the feedback to alter user input provided to the control device.
“Disclosed herein are methods for presenting intelligent interactive content, including the steps of receiving user input from a first user, sending content data to a content display device, presenting content to a second user, generating feedback data based on behavior data received from an eye-tracking sensor, receiving feedback data and generating feedback based on the feedback data, and presenting feedback to the first user. In some embodiments, a control device receives user input from a first user interacting with the control device. The control device generates content data from the received user input and sends the content data to a content display device. The content display device produces content based on the content data, displays the content to the second user via a display, and generates feedback data based on behavior data associated with the second user that is received from an eye-tracking sensor. The control device also receives the feedback data from the content display device, generates feedback based on the feedback data, and presents the feedback to the first user. The control device then determines if the first user interacting with the control device wishes to continue presenting content on a content display device. If so, the method receives further user input from the first user and repeats one or more of the aforementioned steps.
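The control-device/display-device loop described in the quoted summary can be sketched in plain Python. All class, method, and variable names below are illustrative assumptions for this sketch, not identifiers from the patent, and the eye tracker is simulated as a simple callable.

```python
# Sketch of the patent's presentation-and-feedback loop: a control device
# sends content data to a content display device, which "displays" it,
# collects gaze behavior, and returns feedback data. All names are
# illustrative; the eye tracker is any callable yielding gaze samples.

class ContentDisplayDevice:
    """Receives content data, presents it, and reports feedback data."""

    def __init__(self, eye_tracker):
        self.eye_tracker = eye_tracker  # callable: content -> gaze samples
        self.current_content = None

    def present(self, content_data):
        self.current_content = content_data  # stand-in for rendering
        behavior = self.eye_tracker(content_data)
        return self._to_feedback(behavior)

    @staticmethod
    def _to_feedback(behavior):
        # Collapse raw gaze samples (segment ids) into per-segment counts.
        dwell = {}
        for segment in behavior:
            dwell[segment] = dwell.get(segment, 0) + 1
        return dwell


class ControlDevice:
    """Sends content data and turns feedback data into advice for user 1."""

    def present_content(self, display_device, content_data):
        feedback_data = display_device.present(content_data)
        return self.generate_feedback(feedback_data)

    @staticmethod
    def generate_feedback(feedback_data):
        # Surface the segment the second user dwells on most.
        if not feedback_data:
            return "no gaze data collected"
        top = max(feedback_data, key=feedback_data.get)
        return f"client is focusing on segment '{top}'"
```

In this sketch the loop runs once per call; the summary's "repeats one or more of the aforementioned steps" would correspond to calling `present_content` again with revised content data.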
“Disclosed herein are also methods of generating feedback data, including the steps of presenting content to a user, monitoring user behavior, generating behavior data, processing behavior data to generate feedback data, and providing feedback data to a control device. In some embodiments, a content display device presents content to a user. The content display device then monitors the behavior of the user being presented content and generates behavior data. The content display device proceeds to process behavior data to generate feedback data. The content display device then provides feedback data to the control device.
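Claims 3 and 13 below characterize a viewer's interest in each portion of content as interested, not interested, or indifferent. One way to derive such labels from behavior data is a dwell-time threshold; the threshold values and function names here are assumptions for the sketch, not from the patent.

```python
# Illustrative mapping from per-segment dwell time (seconds) to the three
# interest labels the patent mentions. The 1.0 s / 3.0 s cutoffs are
# assumed values chosen for the sketch.

def classify_interest(dwell_seconds, low=1.0, high=3.0):
    if dwell_seconds >= high:
        return "interested"
    if dwell_seconds <= low:
        return "not interested"
    return "indifferent"

def feedback_from_behavior(dwell_by_segment):
    # Turn raw behavior data (segment -> dwell seconds) into feedback data.
    return {seg: classify_interest(t) for seg, t in dwell_by_segment.items()}
```

A content display device in the described system would run something like `feedback_from_behavior` over the eye-tracking output before forwarding the result to the control device.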
“In one embodiment, a computer-implemented method comprises receiving, by a processing unit of a control device, input from a first user on a user interface of the control device; generating, by the processing unit of the control device, content data based on the input and configured for presentation on a content display device; transmitting, by the processing unit of the control device via a communications network, the content data to the content display device for display on the content display device; collecting, by the processing unit of the control device, behaviors of a second user sensed by a sensor of the content display device; generating, by the processing unit of the control device, feedback data based on the behavior of the second user sensed by the sensor of the content display device; and displaying, by the processing unit, the feedback data to the first user on the user interface of the control device.
“In another embodiment, a system comprises a content display device; a sensor communicatively coupled to the content display device; a communication network; and a control device comprising a processing unit, the control device configured to receive input from a first user on a user interface of the control device, generate content data based on the input and configured for presentation on a content display device, transmit via a communications network the content data to the content display device for display on the content display device, collect behaviors of a second user sensed by the sensor coupled to the content display device, generate feedback data based on the behavior of the second user sensed by the sensor coupled to the content display device, and display the feedback data to the first user on the user interface of the control device.
“In another embodiment, a computer-implemented method comprises collecting, by a processing unit of a control device, visual behavior of a second user sensed by a sensor of a content display device when the content display device is displaying first content data; generating, by the processing unit of the control device, feedback data based on the visual behavior of the second user sensed by the sensor of the content display device; displaying, by the processing unit, the feedback data to the first user on the user interface of the control device; receiving, by the processing unit of the control device, an input via the user interface of the control device based on the feedback data; generating, by the processing unit of the control device, second content data based on the input and configured for presentation on the content display device; and transmitting, by the processing unit of the control device via a communications network, the second content data to the content display device for display on the content display device.
“In another embodiment, a computer-implemented method comprises collecting, by a processing unit of a control device, visual behavior of a second user sensed by a sensor of a content display device when the content display device is displaying first content data; generating, by the processing unit of the control device, feedback data based on the behavior of the second user sensed by the sensor of the content display device; automatically generating, by the processing unit of the control device, second content data based on the feedback data and configured for presentation on the content display device; and transmitting, by the processing unit of the control device via a communications network, the second content data to the content display device for display on the content display device.
“In yet another embodiment, a computer-implemented method comprises collecting, by a processing unit of a control device, visual behavior of a second user sensed by a sensor of a content display device when the content display device is displaying first content data; generating, by the processing unit of the control device, feedback data based on the visual behavior of the second user sensed by the sensor of the content display device; displaying, by the processing unit, the feedback data to the first user on the user interface of the control device; receiving, by the processing unit of the control device, an input via the user interface of the control device based on the feedback data; generating, by the processing unit of the control device, second content data based on the input and configured for presentation on the content display device; and transmitting, by the processing unit of the control device via a communications network, the second content data to the content display device for display on the content display device.
“In still yet another embodiment, a computer-implemented method comprises collecting, by a processing unit of a control device, visual behavior of a second user sensed by a sensor of a content display device when the content display device is displaying first content data; generating, by the processing unit of the control device, feedback data based on the behavior of the second user sensed by the sensor of the content display device; automatically generating, by the processing unit of the control device, second content data based on the feedback data and configured for presentation on the content display device; and transmitting, by the processing unit of the control device via a communications network, the second content data to the content display device for display on the content display device.
“Numerous other aspects, features and benefits of the present disclosure may be made apparent from the following detailed description taken together with the drawing figures.”
The claims supplied by the inventors are:
“What is claimed is:
“1. A computer-implemented method comprising: generating and transmitting a first instruction, by a server to a sensor associated with a content display device, to collect visual behavior associated with a second user, wherein the content display device is displaying first content data to the second user, the first content data comprising a plurality of display segments, each display segment displaying at least a portion of the first content data, wherein the content display device comprises a graphical user interface; upon receiving the visual behavior of the second user, generating by the server, feedback data based on the visual behavior of the second user and further based on non-ocular interaction of the second user with the graphical user interface at the content display device displaying the first content data; determining, by the server, a relationship between the feedback data and the first content data displayed on the content display device, the relationship indicating which display segment within the plurality of display segments is being viewed by the second user and the non-ocular interaction of the second user with the graphical user interface, and a score for each display segment based on the relationship between the feedback data and the first content data; generating and transmitting, by the server to a control device, a second instruction to display a visual identifier to a first user on a user interface associated with a control device, wherein the visual identifier is based on the feedback data and wherein the control device is located remotely from the content display device; receiving, by the server, an input via the user interface associated with the control device based on the feedback data; generating and transmitting, by the server to the content display device, a third instruction to display second content data based on the input and configured for presentation on the content display device, wherein the second content data comprises content that is different from information displayed in one or more of the plurality of display segments having the score less than a predetermined threshold.
“2. The method according to claim 1, wherein the sensor associated with the content display device is an eye-tracking sensor communicatively coupled to the content display device.
“3. The method according to claim 1, wherein the feedback data comprises information characterizing the interest of the second user in one or more portions of the first content data displayed on the content display device as interested, not interested, or indifferent.
“4. The method according to claim 1, wherein the feedback data is a graphical representation of one or more portions of the feedback data.
“5. The method according to claim 1, wherein the first content, second content or both comprises instructions to display at least one of insurance product recommendations derived from big data analytics, sales illustrations for an insurance product, text associated with the demographics of a potential client for the insurance product, or images associated with the demographics of a potential client for the insurance product.
“6. The method according to claim 1, wherein the first content, second content, or both comprises instructions to display the content data including at least one of an instruction for position on the content display device, content data size or visual emphasis of content data.
“7. The method according to claim 1, wherein the relationship between the feedback data and the first content data includes a duration of the second user viewing each display segment.
“8. The method according to claim 2, wherein the eye-tracking sensor extracts information about the second user’s eye movement and duration of the second user’s gaze within a boundary associated with one or more portions of content.
“9. The method according to claim 1, wherein the interaction of the second user with the graphical user interface at the content display device displaying the first content data is selected from the group consisting of one or more clicks made by the second user, one or more touch gestures made by the second user, touching a touchscreen of the content display device displaying the first content data, and one or more inputs to an interactive form of the first content data.
“10. The method according to claim 1, wherein generating the feedback data is further based on voice recognition of the second user to analyze emotional state via one or more of tone analysis and natural language recognition.
“11. A system comprising: a content display device having a display; a control device having a display and being located remotely from the content display device; a sensor communicatively coupled to the content display device; a communication network; and an analytics server comprising a processing unit, the analytics server configured to: generate and transmit a first instruction to the sensor communicatively coupled to the content display device, to collect visual behavior associated with a second user, wherein the content display device is displaying first content data to the second user, wherein the first content data comprising a plurality of display segments, each display segment displaying at least a portion of the first content data, wherein the content display device comprises a graphical user interface; upon receiving the visual behavior of the second user, generate feedback data based on the visual behavior of the second user and further based on non-ocular interaction of the second user with the graphical user interface at the content display device displaying the first content data; determine a relationship between the feedback data and the first content data displayed on the content display device, the relationship indicating which display segment within the plurality of display segments is being viewed by the second user and the non-ocular interaction of the second user with the graphical user interface, and a score for each display segment based on the relationship between the feedback data and the first content data; generate and transmit to a control device, a second instruction to display a visual identifier to a first user on a user interface associated with a control device, wherein the visual identifier is based on the feedback data; receive an input via the user interface associated with the control device based on the feedback data; generate and transmit to the content display device, a third instruction to display second content data based on the input and configured for presentation on the content display device, wherein the second content data comprises content that is different from information displayed in one or more of the plurality of display segments having the score less than a predetermined threshold.
“12. The system according to claim 11, wherein the sensor communicatively coupled to the content display device is an eye-tracking sensor.
“13. The system according to claim 11, wherein the feedback data comprises information characterizing the interest of the second user in one or more portions of the first content data displayed on the content display device as interested, not interested, or indifferent.
“14. The system according to claim 11, wherein the feedback data is a graphical representation of one or more portions of the feedback data.
“15. The system according to claim 11, wherein the first content data, the second content data, or both comprises instructions to display at least one of insurance product recommendations derived from big data analytics, sales illustrations for an insurance product, text associated with the demographics of a potential client for the insurance product, or images associated with the demographics of a potential client for the insurance product.
“16. The system according to claim 11, wherein the first content data, the second content data, or both comprises instructions to display the content data including at least one of an instruction for position on the content display device, content data size or visual emphasis of content data.
“17. The system according to claim 11, wherein the relationship between the feedback data and the first content data includes a duration of the second user viewing each display segment.
“18. The system according to claim 12, wherein the eye-tracking sensor extracts information about the second user’s eye movement and duration of the second user’s gaze within a boundary associated with one or more portions of content.
“19. The system according to claim 11, wherein generating the feedback data is further based on voice recognition of the second user to analyze emotional state via one or more of tone analysis and natural language recognition.
“20. The system according to claim 11, wherein the interaction of the second user with the graphical user interface at the content display device displaying the first content data is selected from the group consisting of one or more clicks made by the second user, one or more touch gestures made by the second user, touching a touchscreen of the content display device displaying the first content data, and one or more inputs to an interactive form of the first content data.”
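Independent claims 1 and 11 both turn feedback data into a per-segment score and then generate second content that differs from the segments scoring below a predetermined threshold. The sketch below illustrates that two-step idea; the additive scoring formula, weights, and the alternatives table are assumptions made for the example, not terms of the claims.

```python
# Illustrative version of the claims' segment scoring and replacement:
# score each display segment from gaze dwell plus non-ocular interactions
# (clicks, touches, form inputs), then swap out segments whose score falls
# below a predetermined threshold. Weights and names are assumed.

def score_segments(dwell_by_segment, interactions_by_segment):
    # Simple additive score: seconds of gaze plus a fixed bonus per
    # non-ocular interaction with that segment.
    segments = set(dwell_by_segment) | set(interactions_by_segment)
    return {
        seg: dwell_by_segment.get(seg, 0.0)
        + 2.0 * interactions_by_segment.get(seg, 0)
        for seg in segments
    }

def second_content(scores, alternatives, threshold=1.0):
    # Segments at or above the threshold keep their content; segments
    # below it are replaced with different content, as the claims require.
    return {
        seg: alternatives.get(seg, "<new content>")
        if score < threshold else "<unchanged>"
        for seg, score in scores.items()
    }
```

For example, a segment viewed for 3 seconds with one click scores 5.0 and is kept, while a segment glanced at for 0.2 seconds scores below the assumed threshold and is replaced from the alternatives table.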
For the URL and additional information on this patent, see: Knas, Michal. Systems And Methods For Presenting And Modifying Interactive Content.
(Our reports deliver fact-based news of research and discoveries from around the world.)


