Auditory brainstem response

The auditory brainstem response (ABR) is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex-positive waves, of which waves I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after the onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.[1][2][3]

The auditory structures that generate the auditory brainstem response are believed to be as follows:[2][4]

  • Waves I through III – generated by the auditory branch of cranial nerve VIII and lower
  • Waves IV and V – generated by the upper brainstem
  • More specifically – wave I originates from the dendrites of the auditory nerve fibers, wave II from the cochlear nucleus, wave III reflects activity in the superior olivary complex, and waves IV–V are associated with the lateral lemniscus.

History of research

In 1967, Sohmer and Feinmesser were the first to publish ABRs recorded with surface electrodes in humans, showing that cochlear potentials could be obtained non-invasively. In 1971, Jewett and Williston gave a clear description of the human ABR and correctly interpreted the later waves as arising from the brainstem. In 1974, Hecox and Galambos showed that the ABR could be used for threshold estimation in adults and infants. In 1975, Starr and Achor were the first to report the effects on the ABR of CNS pathology in the brainstem. In 1977, Selters and Brackmann published landmark findings on prolonged inter-peak latencies in tumor cases (tumors greater than 1 cm).[2]

Long and Allen were the first to report abnormal brainstem auditory evoked potentials (BAEPs) in an alcoholic woman who recovered from acquired central hypoventilation syndrome. These investigators hypothesized that their patient's brainstem had been poisoned, but not destroyed, by her chronic alcoholism (Long, K.J.; Allen, N. (October 1984). "Abnormal brain-stem auditory evoked potentials following Ondine's curse". Arch. Neurol. 41 (10): 1109–10. doi:10.1001/archneur.1984.04050210111028. PMID 6477223).

Measurement techniques

Recording parameters

  • Electrode montage: most often a vertical montage (high forehead [active or positive], earlobes or mastoids [reference, right and left, or negative], low forehead [ground])
  • Impedance: 5 kΩ or less, and balanced between electrodes
  • Filter settings: 30–1500 Hz bandwidth
  • Time window: 10 ms (minimum)
  • Sampling rate: usually high, circa 20 kHz
  • Intensity: usually starting at 70 dB nHL
  • Stimulus type: click (100 µs duration), chirp, or tone burst
  • Transducer type: insert earphones, bone vibrator, sound field, or headphones
  • Stimulation or repetition rate: 21.1 per second (for example)
  • Amplification: 100,000–150,000×
  • n (number of averages/sweeps): 1000 minimum (1500 recommended)
  • Polarity: rarefaction or alternating recommended
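The parameter list above can be captured as a simple configuration sketch. This is a minimal illustration, assuming hypothetical key names and a `validate` helper; it does not correspond to any vendor's acquisition API.

```python
# Hypothetical ABR acquisition settings mirroring the parameter list above.
# All names are illustrative, not taken from any specific recording system.
abr_protocol = {
    "montage": {
        "active": "high forehead",
        "reference": "earlobes or mastoids",
        "ground": "low forehead",
    },
    "max_impedance_kohm": 5.0,        # 5 kOhm or less, balanced
    "bandpass_hz": (30, 1500),        # filter settings
    "time_window_ms": 10,             # minimum analysis window
    "sampling_rate_hz": 20_000,       # circa 20 kHz
    "start_intensity_db_nhl": 70,
    "stimulus": {"type": "click", "duration_us": 100},
    "repetition_rate_per_s": 21.1,
    "amplification": 100_000,         # 100,000-150,000x
    "min_sweeps": 1000,               # 1500 recommended
    "polarity": "rarefaction",
}

def validate(protocol):
    """Sanity-check a protocol against the recommendations above."""
    assert protocol["max_impedance_kohm"] <= 5.0
    assert protocol["min_sweeps"] >= 1000
    low, high = protocol["bandpass_hz"]
    assert 0 < low < high
    return True
```

Encoding the protocol as data makes it easy to log alongside each recording and to check before acquisition begins.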

Interpretation of results

When interpreting the ABR, clinicians look at amplitude (the number of neurons firing), latency (the speed of transmission), interpeak latency (the time between peaks), and interaural latency (the difference in wave V latency between ears). The ABR represents initiated activity beginning at the base of the cochlea and moving toward the apex over a 4 ms period. The peaks largely reflect activity from the most basal regions of the cochlea because the disturbance reaches the basal end first, and by the time it reaches the apex a significant amount of phase cancellation has occurred.
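The latency measures above are simple differences between peak times. The following sketch computes them from hypothetical peak latencies; the example values and function names are illustrative only.

```python
# Minimal sketch of the basic ABR latency measures, using hypothetical
# peak latencies in milliseconds. Values are illustrative, not normative.
def interpeak_latency(peaks, first, second):
    """Time between two labeled peaks, e.g. the I-V interpeak latency."""
    return peaks[second] - peaks[first]

def interaural_latency_difference(right_peaks, left_peaks, wave="V"):
    """Difference in wave V latency between the two ears."""
    return abs(right_peaks[wave] - left_peaks[wave])

# Hypothetical absolute latencies (ms) for each evaluated wave, per ear.
right = {"I": 1.6, "III": 3.7, "V": 5.6}
left = {"I": 1.7, "III": 3.9, "V": 5.9}

ipl = interpeak_latency(right, "I", "V")          # I-V interpeak latency
ild = interaural_latency_difference(right, left)  # interaural wave V difference
```

In practice these differences are compared against age-appropriate normative data rather than interpreted in isolation.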

Use

The ABR is used for newborn hearing screening, auditory threshold estimation, intraoperative monitoring, determining the type and degree of hearing loss, detecting auditory nerve and brainstem lesions, and in the development of cochlear implants.

Advanced techniques

Stacked ABR

History

One use of the traditional ABR is site-of-lesion testing, and it has been shown to be sensitive to large acoustic tumors. However, it has poor sensitivity to tumors smaller than 1 centimeter in diameter. In the 1990s, several studies concluded that the use of ABRs to detect acoustic tumors should be abandoned, and as a result many practitioners now use only MRI for this purpose.[5]

The traditional ABR fails to identify small tumors because it relies on latency changes of peak V. Peak V is primarily influenced by high-frequency fibers, and tumors will be missed if those fibers are not affected. Although the click stimulates a wide frequency region of the cochlea, phase cancellation of the lower-frequency responses occurs as a result of time delays along the basilar membrane.[6] If a tumor is small, those fibers may not be sufficiently affected for the traditional ABR measure to detect it.

It is not practical simply to send every patient for an MRI because of the high cost of MRI, its impact on patient comfort, and its limited availability in rural areas and developing countries. In 1997, Dr. Manuel Don and colleagues published on the Stacked ABR as a way to enhance the sensitivity of the ABR in detecting smaller tumors. Their hypothesis was that the stacked derived-band ABR amplitude could detect small acoustic tumors missed by standard ABR measures.[7] In 2005, he stated that it would be clinically valuable to have an ABR test available to screen for small tumors.[5] In a 2005 interview in Audiology Online, Dr. Don of the House Ear Institute defined the Stacked ABR as "..an attempt to record the sum of the neural activity across the entire frequency region of the cochlea in response to a click stimuli."[4]

Stacked ABR defined

The stacked ABR is the sum of the synchronous neural activity generated from five frequency regions across the cochlea in response to click stimulation and high-pass pink noise masking.[5] The development of this technique was based on the 8th cranial nerve compound action potential work done by Teas, Eldredge, and Davis in 1962.[8]

Methodology

The stacked ABR is a composite of activity from all frequency regions of the cochlea, not just the high frequencies.[4]

  • Step 1: obtain click-evoked ABR responses using clicks with high-pass pink masking noise (ipsilateral masking)
  • Step 2: obtain derived-band ABRs (DBR)
  • Step 3: shift & align the wave V peaks of the DBR – thus, "stacking" the waveforms with wave V lined up
  • Step 4: add the waveforms together
  • Step 5: compare the amplitude of the Stacked ABR with the click-evoked ABR from the same ear
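Steps 3–5 above can be sketched numerically: shift each derived-band waveform so its wave V peak lines up with a reference, sum the aligned waveforms, and measure the result. The waveforms below are synthetic Gaussians standing in for derived-band responses; this is an illustration of the stacking arithmetic, not the clinical procedure.

```python
import numpy as np

def stack_abr(derived_bands, wave_v_indices):
    """Align the wave V peak of each derived-band waveform to the first
    band's peak, then sum the aligned waveforms into the Stacked ABR."""
    ref = wave_v_indices[0]
    stacked = np.zeros_like(derived_bands[0])
    for waveform, idx in zip(derived_bands, wave_v_indices):
        stacked += np.roll(waveform, ref - idx)  # line up wave V peaks
    return stacked

# Five synthetic derived-band responses: identical Gaussian peaks at
# progressively later latencies (apical bands lag basal ones).
t = np.arange(200)
peak_indices = [60 + 15 * k for k in range(5)]
bands = [np.exp(-0.5 * ((t - p) / 5.0) ** 2) for p in peak_indices]

stacked = stack_abr(bands, peak_indices)
# After alignment the five unit-amplitude peaks add coherently at the
# reference latency, so the stacked amplitude is about five times larger.
```

Without the alignment step, the latency-shifted peaks would partially cancel and the summed amplitude would understate the total neural activity.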

Because the derived waveforms represent activity from progressively more apical regions along the basilar membrane, their wave V latencies are increasingly prolonged owing to the nature of the traveling wave. To compensate for these latency shifts, the wave V components of the derived waveforms are aligned ("stacked"), the waveforms are added together, and the resulting amplitude is measured.[6] In 2005, Don explained that in a normal ear the Stacked ABR has the same amplitude as the click-evoked ABR, but the presence of even a small tumor reduces the amplitude of the Stacked ABR relative to the click-evoked ABR.

Application and effectiveness

With the intent of screening for and detecting small (less than or equal to 1 cm) acoustic tumors, the Stacked ABR achieves:[7]

  • 95% sensitivity
  • 83% specificity

(Note: 100% sensitivity was obtained at 50% specificity.)

In a 2007 comparative study of ABR abnormalities in acoustic tumor patients, Montaguti and colleagues mention the promise of and great scientific interest in the Stacked ABR. The article suggests that the Stacked ABR could make it possible to identify small acoustic neuromas missed by traditional ABRs.[9]

The Stacked ABR is a valuable screening tool for the detection of small acoustic tumors because it is sensitive, specific, widely available, comfortable, and cost-effective.

Tone-burst ABR

Tone-burst ABR is used to obtain thresholds for children who are too young to respond reliably to frequency-specific sound stimuli in behavioral testing. The most common frequencies tested are 500, 1000, 2000, and 4000 Hz, as these frequencies are generally considered necessary for hearing aid programming.

Auditory steady-state response (ASSR)

ASSR defined

The auditory steady-state response is an auditory evoked potential, elicited with modulated tones, that can be used to predict hearing sensitivity in patients of all ages. It is an electrophysiologic response to rapid auditory stimuli and creates a statistically valid estimated audiogram (an evoked potential used to predict hearing thresholds for normal-hearing individuals and those with hearing loss). The ASSR uses statistical measures to determine whether and when a threshold is present and serves as a "cross-check" for verification purposes prior to arriving at a differential diagnosis.

History

In 1981, Galambos and colleagues reported on the "40 Hz auditory potential", a continuous 400 Hz tone sinusoidally amplitude-modulated at 40 Hz and presented at 70 dB SPL. This produced a very frequency-specific response, but the response was highly susceptible to state of arousal. In 1991, Cohen and colleagues found that presenting at a stimulation rate higher than 40 Hz (>70 Hz) yielded a smaller response that was less affected by sleep. In 1994, Rickards and colleagues showed that it was possible to obtain responses in newborns. In 1995, Lins and Picton found that simultaneous stimuli presented at rates in the 80 to 100 Hz range made it possible to obtain auditory thresholds.[1]

Methodology

Recording montages for the ASSR are the same as, or similar to, those used for traditional ABR recordings. Two active electrodes are placed at or near the vertex and at the ipsilateral earlobe/mastoid, with the ground at the low forehead. When collecting from both ears simultaneously, a two-channel pre-amplifier is used. When a single-channel recording system is used to detect activity from a binaural presentation, a common reference electrode may be located at the nape of the neck. Transducers can be insert earphones, headphones, a bone oscillator, or sound field, and it is preferable for the patient to be asleep. Unlike ABR settings, the high-pass filter might be approximately 40 to 90 Hz and the low-pass filter between 320 and 720 Hz, with typical filter slopes of 6 dB per octave. Gain settings of 10,000 are common, artifact rejection is left "on", and it is thought to be advantageous to have a manual "override" that allows the clinician to make decisions during the test and apply course corrections as needed.[10]

ABR vs. ASSR

Similarities:

  • Both record bioelectric activity from electrodes arranged in similar recording arrays.
  • Both are auditory evoked potentials.
  • Both use acoustic stimuli delivered through inserts (preferably).
  • Both can be used to estimate threshold for patients who cannot or will not participate in traditional behavioral measures.

Differences:

  • ASSR looks at amplitude and phases in the spectral (frequency) domain rather than at amplitude and latency.
  • ASSR depends on peak detection across a spectrum rather than across a time vs. amplitude waveform.
  • ASSR is evoked using repeated sound stimuli presented at a high rep rate rather than an abrupt sound at a relatively low rep rate.
  • ABR typically uses click or tone-burst stimuli in one ear at a time, but ASSR can be used binaurally while evaluating broad bands or four frequencies (500, 1000, 2000, and 4000 Hz) simultaneously.
  • ABR estimates thresholds essentially from 1 to 4 kHz in typical mild-to-severe hearing losses. ASSR can estimate thresholds in the same range, but offers more frequency-specific information more quickly and can estimate hearing in the severe-to-profound range.
  • ABR depends highly upon a subjective analysis of the amplitude/latency function. The ASSR uses a statistical analysis of the probability of a response (usually at a 95% confidence interval).
  • ABR is measured in microvolts (millionths of a volt) and the ASSR is measured in nanovolts (billionths of a volt).[10]

Analysis is mathematically based and depends on the fact that related bioelectric events coincide with the stimulus repetition rate. The specific method of analysis is based on the manufacturer's statistical detection algorithm. Detection occurs in the spectral domain and is based on specific frequency components that are harmonics of the stimulus repetition rate. Early ASSR systems considered only the first harmonic, but newer systems also incorporate higher harmonics in their detection algorithms.[10] Most equipment provides correction tables for converting ASSR thresholds to estimated HL audiograms, which are found to be within 10 to 15 dB of audiometric thresholds, although there is variance across studies. Correction data depend on variables such as the equipment used, frequencies collected, collection time, subject age, subject sleep state, and stimulus parameters.[11]
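The spectral-domain detection described above can be sketched as follows: look for energy at the stimulus modulation rate and compare it statistically against neighboring "noise" bins. The signal below is synthetic, and the detection criterion and bin counts are illustrative, not any manufacturer's algorithm.

```python
import numpy as np

# Synthetic one-second "EEG" epoch: a small response locked to the
# 80 Hz modulation rate, buried in random noise. All parameters are
# illustrative assumptions.
fs = 1000            # sampling rate, Hz
mod_rate = 80        # stimulus modulation rate, Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = 0.05 * np.sin(2 * np.pi * mod_rate * t) + 0.01 * rng.standard_normal(fs)

# Power spectrum: with a 1 s epoch, each rfft bin is 1 Hz wide, so the
# response falls exactly in bin number `mod_rate`.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
sig_bin = mod_rate
noise_bins = list(range(sig_bin - 10, sig_bin)) + list(range(sig_bin + 1, sig_bin + 11))

# Ratio of signal-bin power to mean neighboring-bin power; a response is
# declared when this ratio exceeds an (illustrative) criterion.
f_ratio = spectrum[sig_bin] / np.mean(spectrum[noise_bins])
response_present = f_ratio > 4.0
```

Real systems accumulate many epochs and use formal statistics (e.g. an F-test at a fixed confidence level) rather than a fixed ratio, but the principle of comparing the stimulus-rate bin against its spectral neighborhood is the same.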

Hearing aid fittings

In certain cases where behavioral thresholds cannot be obtained, ABR thresholds can be used for hearing aid fittings. Newer fitting formulas, such as DSL v5.0, allow the user to base hearing aid settings on ABR thresholds. Correction factors for converting ABR thresholds to behavioral thresholds exist but vary greatly; for example, one set of correction factors involves lowering ABR thresholds from 1000–4000 Hz by 10 dB and lowering the ABR threshold at 500 Hz by 15 to 20 dB.[12] Previously, brainstem audiometry was used for hearing aid selection by using normal and pathological intensity-amplitude functions to determine appropriate amplification.[13] The principal idea of selecting and fitting the hearing instrument was based on the assumption that the amplitudes of brainstem potentials are directly related to loudness perception; under this assumption, the amplitudes of brainstem potentials stimulated by the hearing devices should exhibit close-to-normal values. ABR thresholds do not necessarily improve in the aided condition.[14] The ABR can be an inaccurate indicator of hearing aid benefit because hearing aids may not process with sufficient fidelity the transient stimuli used to evoke a response. Bone-conduction ABR thresholds can be used when other limitations are present, but they are not as accurate as ABR thresholds recorded through air conduction.[15]
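The example correction factors above amount to simple subtraction per frequency. The sketch below encodes that one example set (10 dB at 1000–4000 Hz, 15 dB at 500 Hz); as the text notes, real correction factors vary greatly across studies and equipment, so these numbers are only the text's illustration.

```python
# One example set of correction factors (dB) from the text: subtract
# 10 dB at 1000-4000 Hz and 15 dB (of the quoted 15-20 dB range) at
# 500 Hz. These values are illustrative, not a clinical standard.
CORRECTIONS_DB = {500: 15, 1000: 10, 2000: 10, 4000: 10}

def estimated_behavioral(abr_thresholds_db_nhl):
    """Convert ABR thresholds to estimated behavioral thresholds by
    subtracting the per-frequency correction."""
    return {freq: level - CORRECTIONS_DB[freq]
            for freq, level in abr_thresholds_db_nhl.items()}

# Hypothetical ABR thresholds for one ear (dB nHL).
abr = {500: 45, 1000: 40, 2000: 50, 4000: 55}
print(estimated_behavioral(abr))  # {500: 30, 1000: 30, 2000: 40, 4000: 45}
```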

Advantages of hearing aid selection by brainstem audiometry include the following applications:

  • evaluation of loudness perception in the dynamic range of hearing (recruitment)
  • determination of basic hearing aid properties (gain, compression factor, compression onset level)
  • cases with middle ear impairment (contrary to acoustic reflex methods)
  • non-cooperative subjects even in sleep
  • sedation or anesthesia without influence of age and vigilance (contrary to cortical evoked responses).

Disadvantages of hearing aid selection by brainstem audiometry include the following:

  • in cases of severe hearing impairment, little or no information about loudness perception
  • no control of compression settings
  • no frequency-specific compensation of hearing impairment

Cochlear implantation and central auditory development

There are about 188,000 people around the world who have received cochlear implants. In the United States alone, there are about 30,000 adult and over 30,000 child recipients.[16] This number continues to grow as cochlear implantation becomes more and more accepted. In 1961, Dr. William House began work on the predecessor of today's cochlear implant. William House was an otologist and the founder of the House Ear Institute in Los Angeles, California. This groundbreaking device, manufactured by the 3M Company, was approved by the FDA in 1984.[17] Although it was a single-channel device, it paved the way for future multi-channel cochlear implants. As of 2007, the three cochlear implant devices approved for use in the U.S. were manufactured by Cochlear, MED-EL, and Advanced Bionics.

A cochlear implant works as follows: sound is received by the implant's microphone, which picks up input that needs to be processed to determine how the electrodes will receive the signal. This is done by the external component of the cochlear implant called the sound processor. The transmitting coil, also an external component, transmits the information from the speech processor through the skin using frequency-modulated radio waves. The signal is never turned back into an acoustic stimulus, unlike in a hearing aid. The information is then received by the cochlear implant's internal components: the receiver-stimulator delivers the correct amount of electrical stimulation to the appropriate electrodes on the array to represent the sound signal that was detected, and the electrode array stimulates the remaining auditory nerve fibers in the cochlea, which carry the signal on to the brain, where it is processed.

One way to measure the developmental status and limits of plasticity of the auditory cortical pathways is to study the latency of cortical auditory evoked potentials (CAEP). In particular, the latency of the first positive peak (P1) of the CAEP is of interest to researchers. P1 in children is considered a marker for maturation of the auditory cortical areas (Eggermont & Ponton, 2003; Sharma & Dorman, 2006; Sharma, Gilley, Dorman, & Baldwin, 2007).[18][19][20] The P1 is a robust positive wave occurring at around 100 to 300 ms in children. P1 latency represents the synaptic delays throughout the peripheral and central auditory pathways (Eggermont, Ponton, Don, Waring, & Kwong, 1997).[21]

P1 latency changes as a function of age and is considered an index of cortical auditory maturation (Ceponiene, Cheour, & Naatanen, 1998).[22] P1 latency and age show a strong negative correlation: P1 latency decreases with increasing age, most likely because synaptic transmission becomes more efficient over time. The P1 waveform also becomes broader with age. The P1 neural generators are thought to originate from the thalamo-cortical portion of the auditory cortex, and researchers believe that P1 may reflect the first recurrent activity in the auditory cortex (Kral & Eggermont, 2007).[23] The negative component following P1 is called N1; N1 is not consistently seen in children until 12 years of age.

In 2006, Sharma and Dorman measured the P1 response in deaf children who received cochlear implants at different ages to examine the limits of plasticity in the central auditory system.[19] Those who received cochlear implant stimulation in early childhood (younger than 3.5 years) had normal P1 latencies, while children who received cochlear implant stimulation late in childhood (older than seven years) had abnormal cortical response latencies. Children who received cochlear implant stimulation between the ages of 3.5 and 7 years showed variable P1 latencies. Sharma also studied the waveform morphology of the P1 response in 2005[24] and 2007.[20] She found that in early-implanted children the P1 waveform morphology was normal, whereas in late-implanted children the P1 waveforms were abnormal, with lower amplitudes than normal waveform morphology. In 2008, Gilley and colleagues used source reconstruction and dipole source analysis derived from high-density EEG recordings to estimate the generators of P1 in three groups of children: normal-hearing children, children who received a cochlear implant before the age of four, and children who received a cochlear implant after the age of seven. They concluded that the waveform morphology of normal-hearing and early-implanted children was very similar.[25]

Sedation protocols

Common sedative used

To achieve the highest-quality recordings of any evoked potential, good patient relaxation is generally necessary, because recordings can otherwise be contaminated with myogenic and movement artifacts. Patient restlessness and movement contribute to threshold overestimation and inaccurate test results. In most cases, an adult is capable of providing a good extratympanic recording without sedation. In transtympanic recordings, a sedative can be used when time-consuming procedures need to take place, and most patients (especially infants) are given light anesthesia when tested transtympanically.

Chloral hydrate is a commonly prescribed sedative and the most common agent for inducing sleep in young children and infants for AEP recordings. Its active metabolite, an alcohol, depresses the central nervous system, specifically the cerebral cortex. Side effects of chloral hydrate include vomiting, nausea, gastric irritation, delirium, disorientation, allergic reactions, and occasionally excitement (a high level of activity rather than becoming tired and falling asleep). Chloral hydrate is readily available in three forms: syrup, capsule, and suppository. The syrup is most successful for those 4 months and older; the proper dosage is drawn into an oral syringe or cup, the syringe is used to squirt the syrup into the back of the mouth, and the child is then encouraged to swallow. To induce sleep, dosages range anywhere from 500 mg to 2 g; the recommended pediatric dose is 50 mg per kg of body weight. If the child does not fall asleep after the first dose, a second dose no greater than the first can be given, with the overall dose not exceeding 100 mg/kg of body weight. Sedation personnel should include a physician and a registered or practical nurse, documentation and monitoring of physiologic parameters are required throughout the entire process, and sedatives should be administered only in the presence of those who are knowledgeable and skilled in airway management and cardiopulmonary resuscitation (CPR).
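The dosing rules above reduce to simple arithmetic. The sketch below expresses them as a function purely to illustrate that arithmetic; it is not clinical guidance, and the function name is hypothetical.

```python
# Illustrative arithmetic only (not clinical guidance): the pediatric
# chloral hydrate dosing rules described in the text, as a function.
def chloral_hydrate_doses_mg(weight_kg):
    """First dose at 50 mg/kg; the optional second dose may be no larger
    than the first, and the total may not exceed 100 mg/kg."""
    first = 50 * weight_kg
    # Second dose is capped both by the first dose and by the 100 mg/kg
    # total limit (for a 50 mg/kg first dose these coincide).
    max_second = min(first, 100 * weight_kg - first)
    return first, max_second

first, max_second = chloral_hydrate_doses_mg(10)  # hypothetical 10 kg infant
# first dose 500 mg; second dose capped at 500 mg, keeping the total
# within the 100 mg/kg (here 1000 mg) limit
```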

Increasingly, propofol is administered intravenously via infusion pump for sedation.

Procedures

A consent form must be signed by the patient or guardian and received, indicating consent to the conscious sedation and the procedure being performed. A documented pre-sedation medical evaluation, including a focused airway examination, must be carried out either on the same day as the sedation or within recent days, and will include, but is not limited to:

  • Age and weight
  • A complete and thorough medical history including all current medications, drug allergies, relevant disease, adverse drug reactions (especially relevant if any previous reaction to sedatives) and all relevant family history
  • Verify any airway or respiratory problems
  • All medications taken (including dosage and history of specific drug use) on the day of the procedure
  • Food and fluid intake within the 8 hours prior to sedation – light breakfast or lunch 1–2 hours prior to testing reduces likelihood of gastric irritation (common with chloral hydrate).
  • All vital signs

All orders for conscious sedation must be written; prescriptions or orders received from areas outside the conscious sedation area are not acceptable. A single individual must be assigned to monitor the sedated patient's cardiorespiratory status before, during, and after sedation.

If the patient is deeply sedated, that individual's only job should be to verify and record vital signs no less often than every five minutes. All age- and size-appropriate equipment and medications used to sustain life should be verified before sedation and should be readily available at any time during and after sedation.

The medication should be administered by a physician or nurse and documented (dosage, name, time, etc.). Children should not receive the sedative without the supervision of skilled and knowledgeable medical personnel (for example, at home or from a technician alone). Emergency equipment, including a crash cart, must be readily available, and respiration should be monitored visually or with a stethoscope. A family member needs to remain in the room with the patient, especially if the tester steps out; in this scenario, respiration can be monitored acoustically with a talk-back system microphone placed near the patient's head, and medical personnel should be notified of any slowed respiration.

After the procedure is over, the patient must be continuously observed in a facility that is appropriately equipped and staffed, because patients are typically "floppy" and have poor motor control; they should not stand on their own for the first few hours. No other medications containing alcohol should be administered until the patient has returned to a normal state. Drinking fluids is encouraged to reduce stomach irritation. Each facility should create and use its own discharge criteria, and verbal and written instructions should be provided on limitations of activity and anticipated changes in behavior. All discharge criteria must be met and documented before the patient leaves the facility.

Some criteria prior to discharge should include:

  • Stable vital signs similar to those taken pre-procedure
  • Patient has returned to the pre-procedure level of consciousness
  • Patient has received post-procedure care instructions.[12]

References

  1. Eggermont, Jos J.; Burkard, Robert F.; Manuel Don (2007). Auditory evoked potentials: basic principles and clinical application. Hagerstwon, MD: Lippincott Williams & Wilkins. ISBN 978-0-7817-5756-0. OCLC 70051359.
  2. Hall, James W. (2007). New handbook of auditory evoked responses. Boston: Pearson. ISBN 978-0-205-36104-5. OCLC 71369649.
  3. Moore, Ernest J (1983). Bases of auditory brain stem evoked responses. New York: Grune & Stratton. ISBN 978-0-8089-1465-5. OCLC 8451561.
  4. DeBonis, David A.; Donohue, Constance L. (2007). Survey of Audiology: Fundamentals for Audiologists and Health Professionals (2nd Edition). Boston, Mass: Allyn & Bacon. ISBN 978-0-205-53195-0. OCLC 123962954.
  5. Don M, Kwong B, Tanaka C, Brackmann D, Nelson R (2005). "The stacked ABR: a sensitive and specific screening tool for detecting small acoustic tumors". Audiol. Neurootol. 10 (5): 274–90. doi:10.1159/000086001. PMID 15925862.
  6. Prout, T (2007). "Asymmetrical low frequency hearing loss and acoustic neuroma". Audiologyonline.
  7. Don M, Masuda A, Nelson R, Brackmann D (September 1997). "Successful detection of small acoustic tumors using the stacked derived-band auditory brain stem response amplitude". Am J Otol. 18 (5): 608–21, discussion 682–5. PMID 9303158.
  8. Teas, Donald C. (1962). "Cochlear Responses to Acoustic Transients: An Interpretation of Whole-Nerve Action Potentials". The Journal of the Acoustical Society of America. 34 (9B): 1438–1489. Bibcode:1962ASAJ...34.1438T. doi:10.1121/1.1918366. ISSN 0001-4966.
  9. Montaguti M, Bergonzoni C, Zanetti MA, Rinaldi Ceroni A (April 2007). "Comparative evaluation of ABR abnormalities in patients with and without neurinoma of VIII cranial nerve". Acta Otorhinolaryngol Ital. 27 (2): 68–72. PMC 2640003. PMID 17608133.
  10. Beck, DL; Speidel, DP; Petrak, M. (2007). "Auditory Steady-State Response (ASSR): A Beginner's Guide". The Hearing Review. 14 (12): 34–37.
  11. Picton TW, Dimitrijevic A, Perez-Abalo MC, Van Roon P (March 2005). "Estimating audiometric thresholds using auditory steady-state responses". Journal of the American Academy of Audiology. 16 (3): 140–56. doi:10.3766/jaaa.16.3.3. PMID 15844740.
  12. Hall JW, Swanepoel DW (2010). Objective Assessment of Hearing. San Diego: Plural Publishing Inc.
  13. Kiebling J (1982). "Hearing Aid Selection by Brainstem Audiometry". Scandinavian Audiology. 11: 269–275.
  14. Billings CJ, Tremblay K, Souza PE, Binns MA (2007). "Stimulus Intensity and Amplification Effects on Cortical Evoked Potentials". Audiol Neurotol. 12 (4): 234–246. doi:10.1159/000101331. PMID 17389790.
  15. Rahne T, Ehelebe T, Rasinski C, Gotze G (2010). "Auditory Brainstem and Cortical Potentials Following Bone-Anchored Hearing Aid Stimulation". Journal of Neuroscience Methods. 193 (2): 300–306. doi:10.1016/j.jneumeth.2010.09.013. PMID 20875458.
  16. Davis, Jennifer (2009-10-29). Peoria Journal Star. "According to the U.S. Food and Drug Administration, about 188,000 people worldwide have received implants as of April 2009."
  17. House, W.F. "Cochlear implants". Annals of Otology, Rhinology, and Laryngology. 85: 1–93.
  18. Eggermont, J. J.; Ponton, C. W. (2003). "Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: Correlations with changes in structure and speech perception". Acta Oto-Laryngologica. 123: 249–252.
  19. Sharma, A.; Dorman, M. F. (2006). "Central auditory development in children with cochlear implants: Clinical implications". Advances in Oto-Rhino-Laryngology.
  20. Sharma, A.; Gilley, P. M.; Dorman, M. F.; Baldwin, R. (2007). "Deprivation-induced cortical reorganization in children with cochlear implants". International Journal of Audiology. 46: 494–499.
  21. Eggermont, J. J.; Ponton, C. W.; Don, M.; Waring, M. D.; Kwong, B. (1997). Acta Oto-Laryngologica. 117: 161–163.
  22. Ceponiene, R.; Cheour, M.; Naatanen, R. (1998). "Interstimulus interval and auditory event-related potentials in children: Evidence for multiple generators". Electroencephalography and Clinical Neurophysiology. 108: 345–354.
  23. Kral, A.; Eggermont, J. J. (2007). "What's to lose and what's to learn: development under auditory deprivation, cochlear implants and limits of cortical plasticity". Brain Research Reviews. 56: 259–269.
  24. Sharma, A. (2005). "P1 latency as a biomarker for central auditory development in children with hearing impairment". Journal of the American Academy of Audiology. 16: 564–573. doi:10.3766/jaaa.16.8.5. PMID 16295243.
  25. Gilley, P. M.; Sharma, A.; Dorman, M. F. (2008). "Cortical reorganization in children with cochlear implants". Brain Research.
