Patent Issued for Voice commands for the visually impaired to move a camera relative to a document (USPTO 11398215): United Services Automobile Association
2022 AUG 11 (NewsRx) -- The patent's assignee for patent number 11398215 is United Services Automobile Association.
News editors obtained the following quote from the background information supplied by the inventors: “Currently, a user may electronically deposit a negotiable instrument, such as a check, in a financial services institution using scanning and imaging techniques. Conventionally, the user uploads an image of the negotiable instrument to the financial services institution where it is stored in a storage device. An advisory is sent to the user from the financial services institution confirming that the image was uploaded successfully. The user responds to the advisory, which in turn activates an image processing servlet at the financial services institution which processes the image to deposit the negotiable instrument into an account specified by the user.”
As a supplement to the background information on this patent, NewsRx correspondents also obtained the inventors’ summary information for this patent: “A user may electronically deposit a negotiable instrument (or other types of documents, such as a contract) using a camera on a mobile device or apparatus. For example, the mobile device may include a screen that displays an alignment guide and the image of the negotiable instrument generated by the camera. In this regard, the screen provides the feedback for the user to determine whether the negotiable instrument is within the alignment guide. Visually impaired users may be unable to use the screen to determine whether the negotiable instrument is properly captured by the image. The discussion below focuses on capturing an image of a negotiable instrument. However, other types of documents, such as a contract or other legally binding document, are contemplated. In this regard, the below discussion relating to capturing the image of a negotiable instrument may be applied to the other types of documents. Further, one type of document comprises a financial document, which includes a negotiable instrument, a contract, or other documents of a financial nature.
“In one aspect, a method and apparatus for analyzing an image to detect one or more edges of a negotiable instrument is disclosed. An alignment guide may be integrated with, superimposed on, or used in combination with the image. For example, when displaying the image on a screen of the mobile device, the alignment guide may be superimposed thereon. The alignment guide may cover the entire field-of-view of the camera, or may cover less than the entire field-of-view of the camera, such as illustrated in FIG. 6A. The alignment guide may be used when detecting one or more edges of the negotiable instrument in the image. In one embodiment, different sections of the image, corresponding to different sections of the alignment guide, may be analyzed to determine whether there are edges of the negotiable instrument within the different sections of the image and/or a number of pixels that correspond to the edge. The different sections or portions of the image may be mutually exclusive of one another (e.g., no pixels from one section are included in another section) or may have some (but not entire) overlap (e.g., some pixels from one section may be shared with another section whereas other pixels from the one section are not included in any other section). For example, the alignment guide may comprise a rectangle. One division of the rectangular alignment guide comprises a top half and a bottom half. In one example, the area for the top half is equal to the area of the bottom half. In another example, the areas for the top half and the bottom half are not equal. In this division, the top half of the image (as defined by the top half of the rectangular alignment guide) is analyzed to determine whether edges are detected therein. Further, the bottom half of the image (as defined by the bottom half of the rectangular alignment guide) is analyzed to determine whether edges are detected therein.
“Another division of the rectangular alignment guide comprises a left half and a right half. In this division, the left half of the image (as defined by the left half of the rectangular alignment guide) is analyzed to determine whether edges are detected therein. Further, the right half of the image (as defined by the right half of the rectangular alignment guide) is analyzed to determine whether edges are detected therein.
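The divisions described above can be sketched in code. This is a minimal illustration, not the patent's implementation: the function name, the list-of-rows image representation, and the region coordinates are all assumptions made for the example.

```python
# Hypothetical sketch: split the alignment-guide region of an image into
# top/bottom and left/right halves so each half can be analyzed for edges
# separately, as the summary describes. The image is modeled as a 2-D
# list of pixel rows; this representation is an assumption.

def split_halves(image, top, left, bottom, right):
    """Return (top_half, bottom_half, left_half, right_half) of the
    alignment-guide region bounded by (top, left, bottom, right)."""
    mid_row = (top + bottom) // 2
    mid_col = (left + right) // 2
    region = [row[left:right] for row in image[top:bottom]]
    half_h = mid_row - top          # rows in the top half
    half_w = mid_col - left         # columns in the left half
    top_half = region[:half_h]
    bottom_half = region[half_h:]
    left_half = [row[:half_w] for row in region]
    right_half = [row[half_w:] for row in region]
    return top_half, bottom_half, left_half, right_half
```

Note that the summary also allows unequal halves; that case would simply use a split index other than the midpoint.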
“The image may be captured by the camera in one of several ways. In one way, the user may input a request to take a picture of a negotiable instrument (or other document). The request may comprise the opening or activating of an app to analyze the image for edges and then take the picture. Or, the request may comprise, after opening the app, a user input to take a picture. In response to the request, images may be captured as individual frames of a video (taken by the camera) for the purpose of determining location. In response to determining that the negotiable instrument is properly positioned relative to the camera, the camera may then take a still photo. Alternatively, the images may be captured as a still photo for the purpose of determining location.
“Multiple types of analysis may be performed on the sections of the image. In the example of a top half of the rectangular alignment guide, the top portion of the image (corresponding to the top half of the rectangular alignment guide) may be analyzed from the top down, and from the bottom up. In particular, the top portion of the image may comprise columns of pixels, such as columns 1 to N. In this example, the pixels from column 1 are analyzed from top to bottom, and then from bottom to top to determine whether an edge is detected in the column. In this regard, the analysis is bi-directional (e.g., from top to bottom, and from bottom to top). As discussed in more detail below, the analysis of the values of the pixels in the column may determine whether the value for a specific pixel corresponds to an edge of the negotiable instrument. Further, the analysis may count the number of pixels for one, some, or all of the edges detected, as discussed in more detail below. Alternatively, the pixels from column 1 are analyzed from bottom to top, and then from top to bottom to determine whether an edge is detected in the column. In this regard, the pixels are analyzed in two different directions (top to bottom, and bottom to top). Similarly, the pixels from column 2 are analyzed from top to bottom, and then from bottom to top to determine whether an edge is detected in the column, and so on until column N. A similar analysis may be performed for the bottom portion of the image, corresponding to the bottom half of the rectangular alignment guide.”
The claims supplied by the inventors are:
“1. A method for outputting aural guidance to a visually impaired user, the method comprising: receiving, at a mobile computing device, a user input indicative of requesting to image a document; in response to receipt of the user input, a processor of the mobile computing device automatically searching for the document in part or all of a field of view of a camera associated with the mobile computing device; determining, with the processor of the mobile computing device, that at least a portion of the document is a certain amount and direction outside of the field of view; and responsive to determining that at least a portion of the document is a certain amount and direction outside the field of view, generating and transmitting an aural command from a sound generating interface, the aural command comprising a single command instructing movement of the camera or document in multiple directions and a relative magnitude of the movement in at least one of the multiple directions so that the document is at least a certain percentage of the field of view; determining, with the processor, whether the document is at least the certain percentage of the field of view after transmission of the aural command; and responsive to determining that the document is not at least the certain percentage of the field of view: searching, with the processor, an image of the document in order to identify a number of edges for the document; and determining, with the processor, based on the number of edges identified, an offset comprising a direction to move one or both of the document or the camera so that the document is at least the certain percentage within the field of view; wherein determining the offset of the document relative to the camera comprises: determining a first offset of the document relative to the camera based on a first image generated by the camera; determining a second offset of the document relative to the camera based on a second image generated by the camera, 
wherein the second image is generated by the camera later in time than the first image generated by the camera; and determining a change between the first offset and the second offset.
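The offset-tracking steps at the end of claim 1 can be sketched numerically. The (dx, dy)-from-center representation of an offset and the bounding-box inputs are assumptions made for the example; the claim itself does not fix a representation.

```python
# Minimal sketch of the claimed offset tracking: the document's offset
# relative to the camera is computed for a first image and a later
# second image, then the change between the two offsets is determined.

def document_offset(frame_size, doc_bbox):
    """Offset (dx, dy) of the document's bounding-box center from the
    frame center. doc_bbox = (left, top, right, bottom)."""
    fw, fh = frame_size
    l, t, r, b = doc_bbox
    return ((l + r) / 2 - fw / 2, (t + b) / 2 - fh / 2)

def offset_change(frame_size, bbox_first, bbox_second):
    """Change between the first-image offset and the later second-image
    offset, per the claim language."""
    dx1, dy1 = document_offset(frame_size, bbox_first)
    dx2, dy2 = document_offset(frame_size, bbox_second)
    return (dx2 - dx1, dy2 - dy1)
```

The change between the two offsets indicates whether the user's movement between frames brought the document closer to, or further from, the desired position.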
“2. The method of claim 1, further comprising: determining, with the processor of the mobile computing device, that the document is at least a certain percentage of the field of view; responsive to determining that the document is at least a certain percentage of the field of view, generating and transmitting a second aural command from the sound generating interface comprising a command to hold the camera steady; and after generating and transmitting the second aural command, automatically capturing an image of the document with the camera.
“3. The method of claim 1, further comprising: determining, with the processor of the mobile computing device, that the document is at least a certain percentage of the field of view; responsive to determining that the document is at least a certain percentage of the field of view: generating and transmitting a second aural command from the sound generating interface comprising a command to hold the camera steady; and automatically capturing an image of the document with the camera concurrently with generating and transmitting the second aural command.
“4. The method of claim 1, wherein the document comprises a negotiable instrument.
“5. The method of claim 1, wherein the aural command comprises instructions to move the camera closer to or further away from the document.
“6. A mobile apparatus to output aural guidance to a visually impaired user, the apparatus comprising: an image capture device; an input device configured to receive a user input indicative of requesting to image a document; an aural output device configured to output aural commands; and a controller in communication with the image capture device, the input device, and the aural output device, the controller configured to: in response to receipt of the user input, automatically search for the document in part or all of a field of view of the image capture device; determine that at least a portion of the document is a certain amount and direction outside of the field of view; and responsive to the determining that at least a portion of the document is the certain amount and direction outside of the field of view, generate and transmit an aural command from the aural output device, the aural command comprising a single command instructing movement of the image capture device or document in multiple directions and a relative magnitude of the movement in at least one of the multiple directions so that the document is at least a certain percentage of the field of view; determine whether the document is at least the certain percentage of the field of view after transmission of the aural command; and responsive to a determination that the document is not at least the certain percentage of the field of view: search an image of the document in order to identify a number of edges for the document; and determine, based on the number of edges identified, an offset comprising a direction to move one or both of the document or the camera so that the document is at least the certain percentage within the field of view; wherein to determine the offset of the document relative to the camera, the controller is configured to: determine a first offset of the document relative to the camera based on a first image generated by the camera; determine a second offset of the document relative to the camera based on a second image generated by the camera, wherein the second image is generated by the camera later in time than the first image generated by the camera; and determine a change between the first offset and the second offset.
“7. The mobile apparatus of claim 6, wherein the controller is further configured to: determine that the document is at least a certain percentage of the field of view; responsive to determining that the document is at least a certain percentage of the field of view, generate and transmit a second aural command from the aural output device comprising instructions to hold the image capture device steady; and after generating and transmitting the second aural command, automatically capture an image of the document with the image capture device.
“8. The mobile apparatus of claim 6, wherein the controller is further configured to: determine that the document is at least a certain percentage of the field of view; responsive to determining that the document is at least a certain percentage of the field of view, generate and transmit a second aural command from the aural output device comprising instructions to hold the image capture device steady; and automatically capture an image of the document with the image capture device concurrently with generating and transmitting the second aural command.
“9. The mobile apparatus of claim 6, wherein the document comprises a negotiable instrument.
“10. The mobile apparatus of claim 6, wherein the aural command comprises instructions to move the image capture device closer to or further away from the document.”
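The claims' "single command instructing movement in multiple directions and a relative magnitude" could be composed as one utterance from a measured offset. The direction names, magnitude buckets, and thresholds below are assumptions for illustration, not language from the patent.

```python
# Hypothetical composition of the claimed single aural command: one
# spoken instruction naming movement in multiple directions, with a
# relative magnitude attached to directions where the offset is large.

def aural_command(dx, dy, small=20, large=80):
    """Build one spoken instruction from a pixel offset (dx, dy)."""
    def magnitude(v):
        a = abs(v)
        if a >= large:
            return "a lot"
        return "a little" if a >= small else ""

    parts = []
    if dx:
        m = magnitude(dx)
        parts.append(("right" if dx > 0 else "left") + ((" " + m) if m else ""))
    if dy:
        m = magnitude(dy)
        parts.append(("down" if dy > 0 else "up") + ((" " + m) if m else ""))
    if not parts:
        return "Hold the camera steady."
    return "Move the camera " + " and ".join(parts) + "."
```

The zero-offset branch mirrors claims 2 and 7, where a second aural command to hold the camera steady precedes (or accompanies) automatic capture.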
For additional information on this patent, see: Clauer Salyers,
(Our reports deliver fact-based news of research and discoveries from around the world.)