Nanomedicine, Volume I: Basic Capabilities
© 1999 Robert A. Freitas Jr. All Rights Reserved.
Robert A. Freitas Jr., Nanomedicine, Volume I: Basic Capabilities, Landes Bioscience, Georgetown, TX, 1999
220.127.116.11 Kinesthetic Inmessaging
With the support of an internal communications and navigational network, proprioceptive nanosensors resident in selected human tissues permit the direct detection of gross limb positions, limb velocities, and body orientation in space with tmeas ~ 10^-3 sec or better (Sections 4.9.2 and 8.3.3). Thus, for example, data may be rapidly inmessaged using a highly literal manual sign language3297 designed for minimum ambiguity and maximum precision, with a minimum number of discrete symbols to memorize. (Native American sign language has ~400 root signs and other gestural languages have ~2000 signs,731 though a manual alphabet is also used.) Fingers and limbs can be comfortably moved at 0.1-1 m/sec; if a patient can reliably control ~1-cm positional increments, each conveying one bit of information, then the maximum channel capacity of signing is 10-100 bits/sec per finger or 100-1000 bits/sec for all ten fingers. This estimate compares favorably with the ~100 bits/sec achieved by hearing-impaired signers translating speech at a conversational rate of ~150 words/min, and also with the record manual typing speed of 216 words/min (~173 bits/sec) achieved on an IBM Selectric typewriter.739 Nanorobots may also eavesdrop on proprioceptive information detected by natural mechanoreceptors such as Pacinian corpuscles.
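The channel-capacity estimate above can be checked with simple arithmetic. The sketch below uses only the figures quoted in the text (0.1-1 m/sec finger speed, ~1-cm increments, one bit per increment); the function names are illustrative, not from the source.

```python
# Back-of-envelope check of the signing channel capacity quoted in the text.
# Assumed inputs (from the text): finger speed 0.1-1 m/sec, ~1-cm positional
# increments, each controlled increment conveying ~1 bit of information.

def signing_capacity_bits_per_sec(finger_speed_m_per_sec,
                                  increment_m=0.01,
                                  bits_per_increment=1.0):
    """Bits/sec conveyed by one finger moving at the given speed."""
    return bits_per_increment * finger_speed_m_per_sec / increment_m

low = signing_capacity_bits_per_sec(0.1)   # slow, comfortable motion
high = signing_capacity_bits_per_sec(1.0)  # rapid motion
print(round(low), round(high))             # ~10 and ~100 bits/sec per finger
print(round(10 * low), round(10 * high))   # ~100-1000 bits/sec, ten fingers

# Comparison figure from the text: 216 words/min typing ~ 173 bits/sec,
# which works out to roughly 48 bits per word.
typing_bits_per_word = 173 / (216 / 60)
print(round(typing_bits_per_word, 1))      # ~48 bits per word
```

The ~48 bits/word figure is consistent with a word of roughly 8-10 keystrokes at several bits of information per character.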
Properly monitored voluntary large body motions such as break-dancing or karate routines, or lesser displacements such as head rolling, shrugging, leg jiggling, calisthenic exercises, rapid periodic lung inflations, anal sphincter contractions, blinking, or eyeball rotations, can similarly transfer useful information into the body at ~1 bit/sec. In combination with real-time retinal displays (Section 18.104.22.168), eyeball movement nanosensors can allow patients to use their eyes as an "ocular mouse," pointing to an icon or an array of alphanumeric characters superimposed on the visual field in order to initiate a programmed function or to spell out words and numbers. In 1997, an analogous system, the Ocular Vergence and Accommodation Sensor (OVAS) manufactured by Applied Modern Technologies Corp. of Garden Grove, California, used 12.5 W/m^2 IR laser beams reflected from each eye's retina to provide information about eye movement and other biometric data. The OVAS design included a 12-component optics system and a Pentium processor running algorithms that processed data on the accommodative state, movements, and vergence of the eyes, as well as 10 other ocular functions.1295 In 1998, Canon EOS cameras used eye-control focus sensors to detect where in the frame the photographer was looking and to autofocus there, instead of defaulting to the center of the frame. Superimposing retinal displays over normal vision need not prove confusing to the patient: experiments with head-mounted displays show identical ergonomic efficiency and accuracy for pointing tasks whether control characters are projected on a dark background screen or on a translucent background through which moderately distracting objects and moving traffic are visible.1307
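The "ocular mouse" idea, quantizing a measured gaze point onto a grid of superimposed icons and selecting by dwell time, can be sketched as follows. The grid dimensions, dwell threshold, and function names below are illustrative assumptions, not values from the text or from the OVAS design.

```python
# Hedged sketch of an "ocular mouse": map a measured gaze point (normalized
# coordinates over the visual field) onto a grid of icons or characters
# superimposed by a retinal display, and select a cell when the gaze dwells
# on it long enough. Grid size and dwell threshold are assumed values.

DWELL_SEC = 0.5  # hold the gaze this long on one cell to select it (assumed)

def gaze_to_cell(x, y, cols=6, rows=5):
    """Map normalized gaze coordinates (0..1) to a (col, row) grid cell."""
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return col, row

def select_by_dwell(samples, dt, cols=6, rows=5, dwell=DWELL_SEC):
    """Return the first cell the gaze dwells on for >= dwell seconds.

    samples: sequence of (x, y) gaze fixes taken every dt seconds.
    Returns None if no cell is held long enough.
    """
    current, held = None, 0.0
    for x, y in samples:
        cell = gaze_to_cell(x, y, cols, rows)
        held = held + dt if cell == current else dt  # reset on cell change
        current = cell
        if held >= dwell:
            return cell
    return None

# Example: the gaze glances elsewhere, then settles on the upper-left cell.
fixes = [(0.9, 0.9)] * 3 + [(0.05, 0.05)] * 10
print(select_by_dwell(fixes, dt=0.1))  # → (0, 0)
```

Dwell-time selection is one common design choice for gaze interfaces because it avoids the "Midas touch" problem of every glance triggering an action; a deliberate blink could serve the same role.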
Last updated on 19 February 2003