Patent Issued for Active presence detection with depth sensing (USPTO 11798283): Imprivata Inc.
2023 NOV 14 (NewsRx) -- The assignee for this patent, patent number 11798283, is Imprivata Inc.
Reporters obtained the following quote from the background information supplied by the inventors: “As computer systems become ubiquitous in both the home and industry, the ability for any one individual to access applications and data has increased dramatically. Although such ease of access has streamlined many tasks such as paying bills, ordering supplies, and searching for information, the risk of providing the wrong data or functionality to the wrong person can be fatal to an organization. Instances of data breaches at many consumer-product companies and the need to comply with certain statutory measures (e.g., Health Insurance Portability and Accountability Act (HIPAA), Child Online Protection Act (COPA), Sarbanes-Oxley (SOX), etc.) have forced many companies to implement much stricter system access policies.
“Historically, computer systems have relied on so-called “logical” authentication in which a user is presented a challenge screen and must provide one or more credentials such as a user ID, a password, and a secure token. In contrast, access to physical locations (e.g., server rooms, file rooms, supply rooms, etc.) is typically secured using physical authentication such as a proximity card or “smart card” that, when presented at a card reader, results in access to the room or area. More recently, these two authentication techniques have been incorporated into single-system access authentication platforms. When used in conjunction with other more complex identification modalities such as biometrics, it has become very difficult to gain unauthorized access to secure systems.
“Granting initial access is only half the story, however. Once a user has presented the necessary credentials to gain entry to a secure computer system, for example, he may circumvent the strict authentication requirements by allowing other users to “piggy-back” on his credentials. Users departing from an authenticated session may fail to terminate the session, leaving the session vulnerable to unauthorized access. As a result, sensitive data may be exposed to access by unauthorized individuals.
“Many currently available commercial solutions for detecting user presence and departure suffer from significant practical limitations. For example, when “timeouts” are used to terminate system access if keyboard or mouse activity is not detected during a pre-set period of time, the operator’s physical presence is insufficient to retain access, and erroneous termination may result in cases of extended passive interaction (e.g., when the user reads materials on the screen). Further, such systems cannot discriminate between different users, and a timeout period introduces the potential for unauthorized use during such period. Approaches that use radio-frequency (RF) or similar token objects to detect user departure based on an increase in distance between the token object and a base transceiver suffer from an inability to reliably resolve the distance between the token and receiver, which can result in a restricted or unstable detection zone. Furthermore, the token objects can be readily swapped or shared.
“Yet another solution involves detecting and tracking an operator visually. For example, operator detection and/or identification may be achieved using one or more video cameras mounted to the computer terminal in conjunction with object-recognition techniques (e.g., based on analysis of one or a sequence of images) to detect and locate a single operator, which generally involves differentiating the operator from non-operators and the background scene. Once an operator is identified, her movements within a predefined detection zone, such as a pyramidal volume extending radially outward from the secure computer terminal, are tracked to determine when and whether she interacts with the secure system. In certain implementations, this is done without having to continually re-identify the operator, instead relying on following the motion of the operator with the help of computer-vision motion analysis and other techniques. The position and size of the operator may be tracked to detect when she exits the detection zone, which is called a “walk-away event.” The reappearance of the operator after an absence from the detection zone may also be detected. For example, a stored exemplar of previously identified operators may be used to detect and authenticate the operator upon reappearance and within a pre-defined time window.
“One problem associated with currently available visual presence-detection systems is their reliance on relative face sizes to identify the operator among multiple people detected in the field of view of the camera. While, on average, the operator’s face (due to his proximity to the camera) appears largest in the image, variations in people’s head sizes as well as different hair styles and head covers that occlude the face to varying degrees can result in the misidentification of the operator. An even greater problem of conventional systems is the high rate of false alarms signaling walk-away events. This issue arises from the use of color, intensity, and/or gradient information (or similar two-dimensional cues) in the images to compare tracked foreground patches in previous image frames to query patches in the current frame. If background objects have cues similar to those of the tracked foreground object, which is generally true for faces, false matches are frequently generated; e.g., the face of a person in the background may be incorrectly matched to the face of the operator in a previous image. Thus, when the person in the background subsequently leaves the scene, a walk-away event is falsely declared, and, conversely, when the person in the background remains in the scene, the operator’s departure goes unnoticed by the system.
“A need exists, accordingly, for improved visual approaches to presence detection and, in particular, for systems and techniques that detect walk-away events more reliably.”
In addition to obtaining background information on this patent, NewsRx editors also obtained the inventors’ summary information for this patent: “Embodiments of the present invention relate to systems and methods that use depth information (alone or, e.g., in conjunction with color and/or intensity gradient information) to identify and track operators of secure systems more reliably and, thus, avert or reduce both false positives and false negatives in the detection of walk-away events (i.e., the false detection of walk-away events as well as the failure to detect actual walk-away events). Depth-sensing cameras based on various technologies (e.g., stereo cameras, time-of-flight cameras, interferometric cameras, or cameras equipped with laser rangefinders) are commercially available, and may readily be mounted at or near the computer terminal (or other secure system), replacing the traditional desk-top cameras used in existing visual presence detection systems. Using information about depth, which corresponds to distances of objects from the computer terminal, the face of an operator at the terminal can be more readily distinguished from faces of persons in the background.
“Various embodiments in accordance herewith employ face detection to find an operator within a three-dimensional detection zone, followed by head tracking to monitor the operator’s movements and detect his departure from and/or reentry into the detection zone. The detection zone may have a depth boundary (or “depth threshold”), i.e., it may be limited to a specified maximum distance from the terminal. The boundary may, for example, correspond to a distance from the terminal beyond which an operator would ordinarily not be expected, or a somewhat larger distance beyond which people would not be able to discern normally sized text or other screen content by eye. In some embodiments, face finding is limited at the outset to image portions whose associated depth values are below the depth threshold. Alternatively, faces may be detected in the images first, and then filtered based on the depth threshold. Among multiple faces within the detection zone, the face that is closest to the terminal may be deemed to be that of the operator. In addition to utilizing absolute distance from the terminal to distinguish between an operator and a person in the background, the system may also use relative depth information as a “spoof filter,” i.e., to discriminate between the three-dimensional surface of a real-life face and a two-dimensional, flat image of a face.
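The depth-based filtering and spoof check described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: it assumes a face detector has already produced a representative depth (in meters) for each detected face, and a depth patch over the face region for the spoof check; all names and thresholds are hypothetical.

```python
# Illustrative sketch of depth-threshold face filtering and a relative-depth
# "spoof filter". Assumes face depths come from a depth-sensing camera;
# all names, units, and thresholds here are hypothetical.

def select_operator(face_depths, depth_threshold=1.5):
    """Return the index of the face closest to the terminal within the
    detection zone, or None if no face lies inside the depth boundary."""
    candidates = [(d, i) for i, d in enumerate(face_depths) if d <= depth_threshold]
    if not candidates:
        return None
    return min(candidates)[1]  # index of the face with the smallest depth

def is_real_face(depth_patch, min_relief=0.01):
    """Crude spoof filter: a flat photograph of a face has almost no depth
    variation, whereas a real face exhibits measurable relief."""
    return (max(depth_patch) - min(depth_patch)) >= min_relief
```

For example, with three faces at 0.6 m, 2.4 m, and 1.1 m and a 1.5 m boundary, the face at 0.6 m would be associated with the operator and the 2.4 m face excluded.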
“During head tracking, depth information associated with a collection of tracked features may be used to increase tracking robustness and frame-to-frame depth consistency, and thus avoid tracking errors that involve jumps from the operator’s face to the face of another person located farther away, e.g., beyond the detection zone. For example, based on the assumption that the operator does not move away from the screen at a speed faster than a certain maximum speed (consistent with the speed range of human motion), the difference in the depth of the tracked face or head between successive image frames may be required to fall below a corresponding threshold, or else a tracking error is declared. In some embodiments, the collection of tracked features is from time to time re-initiated based on re-detection of the face or detection of a head-shoulder region. Depth consistency between the detected face or head-shoulder region and the tracked features may, in this approach, ensure that the re-initiation does not cause an erroneous jump to another person.
“In some implementations, following the detection of an operator’s face, a biometric signature of the face (e.g., a face template, or a list of features derived therefrom) is collected and stored in memory. This signature may later be used for re-authentication of a user who has left the terminal and subsequently returned. Further, face templates may be repeatedly captured and saved during head tracking, e.g., whenever the face is re-detected for purposes of re-initiating the tracked feature collection. The templates may be indexed on the basis of face posture, as computed, e.g., using the three-dimensional coordinate values on salient points or regions of the face, such as the eyes, nose, mouth, etc. This facilitates faster re-authentication and reduces the vulnerability of the procedure to false acceptances.
“Accordingly, in one aspect, the invention is directed to a computer-implemented method for monitoring an operator’s use of a secure system. The method includes acquiring images with a depth-sensing camera system co-located with an operator terminal of the secure system, analyzing one or more of the images to determine whether a face is (or faces are) present within a three-dimensional detection zone having a depth boundary relative to the terminal, and, if so, associating that face (or one of the faces) with an operator, and thereafter tracking the operator between successive images to detect when the operator leaves the detection zone. Tracking the operator is based, at least in part, on measured depth information associated with the operator, and may serve, e.g., to discriminate between the operator and background objects (such as persons in the background).”
The claims supplied by the inventors are:
“1. A computer-implemented method for reliably monitoring an operator’s use of a secure system to prevent unauthorized use of the secure system while preventing erroneous termination of the operator’s use of the secure system, the method comprising: (a) defining a detection zone extending from an operator terminal of the secure system to a depth boundary spaced apart from the operator terminal and corresponding to a distance from the operator terminal beyond which an operator cannot interact with the secure system; (b) using a depth-sensing camera system co-located with the operator terminal to acquire a series of images of one or more objects; (c) using the depth-sensing camera to measure distances from the operator terminal to the one or more objects in the series of images; (d) computationally determining whether any of the one or more objects is within the detection zone and is a face, and, when the one or more objects is determined to be within the detection zone and determined to be a face, electronically associating the detected face with an operator; and (e) following association of a detected face with the operator, using the depth-sensing camera to electronically track the distance from the operator terminal to the detected face and, when the distance extends beyond the depth boundary, signaling a walk-away event to prevent unauthorized use of the operator terminal, wherein (I) step (d) comprises using a face-finding algorithm to detect faces in the images and thereafter computationally determining, based on the distances measured using the depth-sensing camera, which, if any, of the detected faces are present within the detection zone while excluding detected faces disposed at distances, measured using the depth-sensing camera, which are beyond the depth boundary, and (II) steps (b)-(e) are only performed after the operator has logged on to the secure system at the operator terminal.
“2. The method of claim 1, wherein tracking the distance from the operator terminal to the detected face comprises using the distances measured by the depth-sensing camera to discriminate between the operator and background objects present in the series of images.
“3. The method of claim 1, wherein step (d) comprises identifying, among a plurality of faces present within the detection zone, the face closest to the secure system based on distances to the plurality of faces measured using the depth-sensing camera and computationally associating that face with the operator.
“4. The method of claim 1, wherein step (d) comprises analyzing relative depth information measured with the depth-sensing camera to discriminate between faces and two-dimensional images thereof.
“5. The method of claim 1, wherein step (e) comprises tracking a collection of trackable key features associated with the operator between the successive images based, at least in part, on distances associated therewith.
“6. The method of claim 5, wherein tracking the key features comprises matching the key features between the successive images based at least in part on the distances associated therewith.
“7. The method of claim 5, wherein tracking the key features comprises filtering identified matches of key features between the successive images based at least in part on the distances associated therewith.
“8. The method of claim 1, wherein step (e) further comprises periodically restarting the tracking based on at least one of re-detection of the face or detection of a head-shoulder portion associated with the operator, the at least one of re-detection of the face or detection of the head-shoulder portion being based at least in part on the distances measured using the depth-sensing camera.
“9. The method of claim 8, further comprising, repeatedly upon re-detection of the face, saving a face template for subsequent use during re-authentication.
“10. The method of claim 9, further comprising indexing the face templates based, at least in part, on face posture as determined from three-dimensional information contained therein.
“11. The method of claim 1, wherein signaling the walk-away event comprises issuing an alarm.
“12. The method of claim 1, wherein signaling the walk-away event comprises logging the operator out of the secure system.
“13. The method of claim 1, further comprising, after step (d), comparing the detected face to one or more previously acquired face exemplars of the operator.
“14. The method of claim 13, further comprising, when the detected face does not match a face exemplar of the operator, storing the detected face as a face exemplar of the operator.
“15. The method of claim 1, further comprising using a face-finding algorithm to detect, in images acquired by the depth-sensing camera, faces of one or more persons other than the operator and, based on distances measured using the depth-sensing camera, electronically warning the operator only when at least one said person is present within the detection zone.
“16. The method of claim 1, wherein step (d) comprises, when a plurality of objects are determined to be within the detection zone and to be faces, before electronically associating the detected face with the operator, electronically displaying a message on the operator terminal requesting all but one person to clear the detection zone.”
For more information, see this patent: Sengupta, Kuntal. Active presence detection with depth sensing.
(Our reports deliver fact-based news of research and discoveries from around the world.)