The 12, 24, or is it 26 cranial nerves?
- Correspondence to: Dr P D Welsby Regional Infectious Diseases Unit, Western General Hospital, Crewe Road, Edinburgh EH4 2XU, UK;
- Received 13 November 2003
- Accepted 16 March 2004
Many of our perceptions are gained through interpretative organs that we assume to be providing objective accounts. Notably, however, neither vision nor hearing provides an objective account of reality. This paper challenges the “conventional wisdoms” held regarding the optic, auditory, and hypoglossal nerves, and the nerves of eye movement.
The following paper deals with several less well known features of the human cranial nerves. It is apparent that there are several design faults, but I was not consulted at the time of design! The Canadian economist John Kenneth Galbraith, in his book The Affluent Society, coined the expression “the conventional wisdom” and went on to remark that conventional wisdoms lag behind developments. While writing Clinical History and Examination Taking,1 I realised that this comment also applied to much conventional clinical teaching.
Most doctors claim to have 12 cranial nerves, whereas humans in fact have 24 (12 pairs). However, on semantic grounds, we probably have 26. The name “olfactory nerve” presupposes smell, a perception of an airborne chemical stimulus that reaches consciousness. Pheromones are not smelt and thus, on semantic grounds, cannot be described as being detectable using olfactory mechanisms. Although humans are said not to have a specific “pheromone nerve”, some part of the olfactory apparatus could fulfil the function of the vomeronasal nerve (nerve 0) as present in other vertebrates,2 but not, strictly speaking, in humans.
Contrary to the conventional wisdom, the optic nerves are not purely sensory nerves. At least 20% of the fibres are efferent and serve to continuously programme the computer peripheral that is the retina. Indeed, certain visual patterns are analysed locally in the retinal nerve complexes before transmission back to the brain.3
The conventional wisdom is that we see a scene as if it were a photograph. In fact, we only see “completely” the portion that falls in the macular region, which is only a small proportion of a scene. The further away from the macula that parts of a scene are focused, the less information is derived. The proof of this? The area of a visual field decreases when the peripheries of a visual field are identified with a small non-moving object, compared with the field obtained when using a small moving object. The area becomes even smaller if a small number is used as the object and the number has to be identified (fig 1). Thus, we all have tunnel vision in terms of accuracy. Despite this, we have a strong subjective illusion of a seamless, detailed reality all around us, which philosophers call the binding problem.3
A cinema screen provides a realistic viewing experience because its size means that we can focus on only part of the scene (as we do when viewing real scenes), whereas on a smaller television screen the whole scene is in focus, which is unlike reality. Worse, when one eye is closed what is seen appears unchanged, even though the open eye’s blind spot (centred about 15 degrees lateral to the centre of vision) goes unappreciated. An eye cannot see anything in its blind spot, but the brain fills in the defect.4 We cannot trust our vision!
Binocular vision probably evolved to enable threats to be distanced more accurately (as well as offering survival if one eye were damaged). Binocular vision requires that the outputs from complementary areas of the two retinas are integrated. Threats in the right visual field were seen (and distanced) by the left side of the left retina and the left side of the right retina. To integrate these, an optic decussation was required and the right sided threat was appreciated in the left occipital cortex (fig 2), thus, some sensory impulses have to cross the midline. However, if a close threat is perceived in the right visual field, it is obviously more efficient to move the right hand, thus some motor impulses also have to cross the midline. The overwhelming survival advantages of binocular vision probably explain why in mammals nearly all sensory and motor nervous pathways cross the midline. Interestingly, one subtle advantage of binocular vision is that the area obscured by the blind spot of one eye is often the area perceived by the macular region of the other eye.
Incidentally, the conventional teaching of visual field testing is tedious. A quick screening test is to ask the patient to stare at your face at a distance of about 12 inches and (with both eyes open and with each eye shut in turn) report if any part of your face is missing.
The retina has its light receptors behind the nervous tissue. An intelligent designer would disapprove of this but this inbuilt error has an evolutionary explanation (fig 3).
Irlen’s syndrome is a form of dyslexia in which there is not so much difficulty with words, but rather difficulty with visual processing. Sensory information from rods (black and white detection) and cones (colour detection) has different transmission speeds, and brain processing of black and white is separate from that of colour. It is thus not surprising that some forms of dyslexia (about 15%) can be “cured” by wearing tinted spectacles or by putting coloured transparent sheets over black and white text.5 Interestingly, but equally unsurprisingly, when an audience is asked if they prefer black on white text, black on red, black on brown, green on brown, etc, it becomes apparent that there are distinct populations of preferences. Even if we have no formal colour blindness, it is likely that we do not see exactly the same colours as each other when looking at the same colour. Surprisingly, there is evidence for this.6
NERVES OF EYE MOVEMENT: III, IV, AND VI
All the eye muscles are inserted across the diameter as seen from their direction of pull (fig 4). The medial rectus and lateral rectus respectively adduct and abduct the eye; the superior rectus and inferior rectus respectively elevate and depress the eye when the eye is looking laterally; and the superior oblique and inferior oblique respectively depress and elevate the eye when it is looking medially. Thus, moving a finger in a “rugby post” configuration is appropriate for testing eye movements (fig 5).
If there is dysfunction of muscles or their supplying nerves, the resulting positions of the eye are determined by these dysfunctions, and, just as important, by the unopposed action of other muscles or their supplying nerve(s).7 With nerve IV dysfunction (superior oblique muscle palsy) the patient cannot look downwards when looking medially; with nerve VI dysfunction (lateral rectus palsy) the patient cannot look laterally; with nerve III dysfunction (“rest of the eye muscles palsy”) the unopposed action of the nerve VI muscle (lateral rectus) pulls the eye laterally and the unopposed action of the nerve IV muscle (superior oblique) pulls the eye downwards. Thus, in a nerve III palsy, the eye looks down and out. The pupil is also dilated owing to unopposed sympathetic action, because the parasympathetic impulses (which travel with nerve III) are absent.
So what explains the (often complete) ptosis with a nerve III lesion? The weakness of levator palpebrae superioris causing gravitational drooping of the eyelid? No; if one turns a patient upside down, ptosis persists (I have and it does). The ptosis is caused by the unopposed action of the fibres of the orbicularis oculi muscle (nerve VII), which arches over the eye within the upper eyelid (fig 6).
The eye is particularly poorly designed for piloting aircraft (fair enough, evolution has not had time to evolve a “flying eye” in humans). If two planes are on a collision course, then the bearing of each plane as seen from the other pilot’s eyes is constant (fig 7), and thus what pilots see is not something moving fast across their visual field, but a stationary dot that grows slowly at first and then very rapidly. It is precisely the planes with which pilots are most likely to collide that are the least likely to be noticed.
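The collision geometry above can be sketched numerically. This is an illustrative calculation, not data from the paper: the 10 m wingspan and 500 m/s combined closing speed are assumed values, chosen only to show how the subtended angle of the approaching plane stays tiny until the final seconds.

```python
import math

# Angular size (in degrees) of a plane of wingspan w metres seen head-on
# at distance d metres, from simple trigonometry.
def angular_size_deg(wingspan_m, distance_m):
    return math.degrees(2 * math.atan(wingspan_m / (2 * distance_m)))

WINGSPAN = 10.0   # metres (assumed)
CLOSING = 500.0   # metres per second (assumed combined closing speed)

# Angular size at 60 s, 10 s, and 1 s before collision: the dot stays
# far below a degree of arc until the last moments.
for seconds_out in (60, 10, 1):
    d = CLOSING * seconds_out
    print(f"{seconds_out:>2} s out: {angular_size_deg(WINGSPAN, d):.3f} degrees")
```

On these assumed numbers the plane subtends about 0.02 degrees a minute before impact and only exceeds one degree in the final second, which is the point of the passage: the dot barely changes until it is too late.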
The conventional teaching (which, contrary to the theme of this paper, is correct) is that with diplopia, the peripheral image is the false image (fig 8). Many unhappy hours have been wasted trying to explain to older patients (who may be deaf and/or dysphasic, and who by virtue of their diplopia have at least one nervous system dysfunction) that what you want them to report is “in which of the 16 positions is the double vision maximum, and which image disappears when I cover one of your eyes?”. No one in their right mind asks a patient to do all this! Ask one thing at a time. Get a pair of spectacles with one lens tinted and the other clear, ask the patient to report where the double vision is most marked, and then ask the colour of the peripheral image.
The conventional wisdom is that the auditory nerve is a purely sensory nerve, but at least 20% of its fibres are efferent (motor) to the outer hair cells (fig 9), which in part adjust the inner hair cells’ sensitivity to pitch.
The conventional wisdom is that, in the cochlea, high frequencies are detected at the base, and low frequencies at the apex, as if a wave were propagating along the basilar membrane with a decreasing frequency and amplitude as the apex is approached, with the inner hair cells reporting the frequency at their specific sites; in other words, a place code. There is a major problem with this: there are insufficient places. The cochlea is about 3.2 cm long, and can differentiate between about 1500 separate pitches, which would require a separate pitch for every 0.002 cm, which is unreasonable to attribute to a simple displacement of the basilar membrane. In fact, the inner hair cells are vibrating at varying rates, again determined by efferent auditory nerve fibres. They can respond (resonate) to incoming sounds and it is likely that it is the pattern rather than the place of resonance of the variously vibrating inner hair cells that identifies the pitch. This pattern suggestion fits in well with “absence” (difference) tones, in which two, usually high pitched, sounds can give rise to a third, much lower, perceived sound that has, literally, no base in reality.
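The arithmetic behind the place-code objection, and the difference-tone phenomenon, can both be checked in a few lines. The 2200 Hz and 2000 Hz tones below are illustrative values, not measurements from the paper.

```python
# Back-of-envelope check of the "place code" objection: a 3.2 cm
# basilar membrane resolving ~1500 separate pitches would need a
# distinct "place" roughly every 0.002 cm.
cochlea_length_cm = 3.2
distinct_pitches = 1500
cm_per_pitch = cochlea_length_cm / distinct_pitches
print(f"{cm_per_pitch:.4f} cm per pitch")  # about 0.0021 cm

# The "absence" (difference) tone: two high tones can give rise to a
# perceived third tone at the difference of their frequencies.
f1, f2 = 2200, 2000   # Hz (illustrative values)
print(f"perceived difference tone: {abs(f1 - f2)} Hz")
```

A spacing of roughly two hundredths of a millimetre per pitch is the implausibly fine resolution the text is objecting to.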
The ear can listen selectively (a phenomenon my wife has previously observed on many occasions). When given correct prior warning of the pitch, 42 volunteers each heard 8000 Hz and 250 Hz tones played at their auditory thresholds, but 26 did not hear 250 Hz played at their previously determined auditory threshold for that frequency when falsely advised that the tone would be of high pitch, and 18 did not hear a high tone played at their previously determined auditory threshold for that frequency when falsely advised that the tone would be of low pitch.8
Interestingly, hair bundles vibrate spontaneously and the efferently driven outer hair cells themselves can vibrate at a frequency of several kilohertz. This constitutes a critical (Hopf) oscillating system that can maintain itself on the verge of oscillating and is particularly sensitive to its own frequency; a response that boosts low amplitude inputs much more than strong ones, as well as tuning them to varying critical points. The system is non-linear, explaining how we can hear over a range of 120 dB (the decibel scale is logarithmic, with a loud rock band being about 100 billion (10^11) times more intense than the auditory threshold). The cochlea can be thought of as containing many “voices”, each of which is ready to sing along with any incoming sound that falls within its own range of pitch.9
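The logarithmic decibel arithmetic is easy to verify: every 10 dB is a tenfold increase in sound power, so the full 120 dB range of hearing spans a power ratio of 10^12, and a loud rock band at roughly 110 dB sits about 10^11 times above threshold.

```python
# Convert a decibel value into a power ratio relative to the
# auditory threshold (0 dB). Every 10 dB = a tenfold power increase.
def db_to_power_ratio(db):
    return 10 ** (db / 10)

print(db_to_power_ratio(120))  # the full 120 dB range of hearing
print(db_to_power_ratio(110))  # roughly a loud rock band
```
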
The vibrating hair cells can sometimes be heard by observers or by sensitive microphones, constituting “spontaneous otoacoustic emissions”; similar emissions can also be evoked by administered sound stimuli. Thus hearing can be tested in neonates by observing these evoked otoacoustic emissions.10 Perhaps spontaneous otoacoustic emissions explain the sound of the seashore when a shell (or indeed any small container) is held to the ear. They may also explain how I, with my tinnitus induced by playing a wind instrument and with my eyes shut, can detect (presumably by echolocation) a plate held about 30 cm from my ears, an effect that is abolished if a cloth is put over the plate.
Our eyes are not suited for flying (see above), but neither are our ears. The horizontal semicircular canals are not quite horizontal and, in the absence of contradicting vision, low grade vibrations (as there always are) that disturb the semicircular canal fluid inform the pilot’s brain that the plane is travelling upwards. Hence in fog, pilots who ignore their instruments will tend to tilt the plane nose downwards and crash into the ground.
The conventional wisdom is that contraction of the genioglossus causes the tongue to protrude. Obviously this is true to a minor extent (the rearmost fibres by contracting can push the front out a bit) but cannot possibly account for the tongue protrusion that is normally achievable (fig 10). So how is the tongue protruded when muscles can only contract and cannot actively elongate? The answer is that there are fibrous areas within the tongue (fig 11) such that contraction of the vertical and transverse muscle fibres causes the tongue to be squeezed forward. The longitudinal muscle in the front portion of the tongue can play no part in this protrusion.