WAHTS Response Areas

TabSINT includes many WAHTS-specific response areas that can be used within WAHTS protocols. Select one of the following response areas to see a protocol example and an image of each response area type. Much like the regular response areas, there are a number of shared WAHTS response area definitions that, when referenced, may be used within any WAHTS response area.

Accelerated Threshold Response Area

A response area for performing an Accelerated Threshold exam, which measures hearing thresholds using a learning algorithm to achieve rapid convergence. The current accelerated threshold algorithm is inspired by support-vector machines (SVM).

Protocol Example

{
  "id": "Accelerated Threshold",
  "title": "Accelerated Threshold",
  "questionMainText": "Accelerated Threshold Audiometry",
  "responseArea": {
    "type": "chaAcceleratedThreshold",
    "autoSubmit": true,            
    "examInstructions" : "Tap the button once for each sound you hear.",
    "examProperties": {
      "UseSoftwareButton": true
    }
  }
}

Options

  • Audiometry Page Properties may be defined on the PAGE, not within the responseArea.

  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • Audiometry Level Properties

      • Hughson Westlake Level Properties

      • SVM_C:

        • Type: integer
        • Description: C parameter (weighting for loss function) for the SVM algorithm. (Default = 100)
      • SVM_D:

        • Type: number
        • Description: d parameter (margin offset) for the SVM algorithm. (Default = 0.01)
      • SVM_M:

        • Type: integer
        • Description: m parameter (margin weighting) for the SVM algorithm. (Default = 10)
      • SVM_MaxJump:

        • Type: integer
        • Description: Maximum size of any change in level the algorithm may make. (Default = 20)
      • SVM_StagDist:

        • Type: integer
        • Description: Stagnation distance criterion. Changes within this distance lead to the satisfaction of the stagnation end condition. (Default = 5)
      • SVM_N_StagSteps:

        • Type: integer
        • Description: Number of presentations to consider to evaluate stagnation. (Default = 3)
      • SVM_MinStep:

        • Type: integer
        • Description: Minimum number of presentations before the exam can end. (Default = 5)
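
The SVM_StagDist, SVM_N_StagSteps, and SVM_MinStep parameters together define the stagnation end condition. The following is a rough Python sketch of how such a check could work, based only on the parameter descriptions above; the actual TabSINT/WAHTS algorithm may differ, and the function name is an assumption.

```python
def stagnation_reached(levels, stag_dist=5, n_stag_steps=3, min_steps=5):
    """Illustrative sketch (not the actual implementation): the exam may end
    once at least SVM_MinStep presentations have occurred and the last
    SVM_N_StagSteps level changes all fall within SVM_StagDist dB."""
    if len(levels) < max(min_steps, n_stag_steps + 1):
        return False
    recent_changes = [abs(b - a)
                      for a, b in zip(levels[-(n_stag_steps + 1):],
                                      levels[-n_stag_steps:])]
    return all(change <= stag_dist for change in recent_changes)

print(stagnation_reached([60, 40, 30, 28, 26, 25]))  # → True (recent changes are small)
print(stagnation_reached([60, 40, 30]))              # → False (too few presentations)
```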

Response

The result.response is a number corresponding to the threshold level in LevelUnits. The result object also contains the Common Audiometry Responses and:

result.L = [30,15, ...]  // Array of levels presented
result.RetSPL = 15  // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.FalsePositive = [0,0, ...] // Array of numbers indicating the number of responses to each presentation that occurred outside the polling time window (may be 0, 1, 2 or 3 where 3 indicates 3+)
result.NumCorrectResp = 0  // Number of presentations correctly answered (only used when Screener = true)
result.ResponseTime = [859,489, ...] // Array of numbers indicating the response time (ms) to each presentation (no response recorded as 0)

Schema

  • chaAcceleratedThreshold.json

  • acceleratedThresholdExamProperties.json

Audiometry List Response Area

This response area is deprecated as of TabSINT version 4.4.0.

This response area allows the user to run many audiometry exams from a single protocol page.

Protocol Example

{
  "id": "AudiometryList",
  "title": "Test List",
  "questionMainText": "Hughson-Westlake Audiometry",
  "helpText": "Follow instructions",
  "instructionText": "This test measures your hearing sensitivity.  You will hear sounds at different pitches one ear at a time.  Your task is to tap the button when you hear a sound, no matter how soft the sound may be.",
  "responseArea": {
    "type": "chaAudiometryList",
    "repeatGroup": true,
    "randomizeList": true,
    "notesOnGroupFailedTwice": true,
    "presentationList": [
      {"F": 500, "OutputChannel": "HPL0"},
      {"F": 1000, "OutputChannel": "HPL0"},
      {"F": 2000, "OutputChannel": "HPL0"},
      {"F": 500, "OutputChannel": "HPR0"},
      {"F": 1000, "OutputChannel": "HPR0"},
      {"F": 2000, "OutputChannel": "HPR0"}
    ],
    "commonExamProperties": {
      "Lstart": 30,
      "UseSoftwareButton": true,
      "LevelUnits": "dB HL"
    },
    "commonResponseAreaProperties": {
      "pause": true
      }
  }
}

Options

  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the CHA exam pages (each page after starting page).
  • audiometryType:

    • Type: string
    • Description: Type of audiometry exam to run; the only available option is HughsonWestlake. (Default = HughsonWestlake)
  • repeatGroup:

    • Type: boolean
    • Description: If true and any tests in a group fail to converge on the first pass, the listener is asked at the end of the group to focus, and only the failed tests are repeated.
  • notesOnGroupFailedTwice:

    • Type: boolean
    • Description: If true and any tests fail the second time, display "Please hand tablet to test administrator" at the end of the group and then prompt the administrator to enter notes.
  • randomizeList:

    • Type: boolean
    • Description: If true, the response area will shuffle the presentation list into a random order. (Default = false)
  • presentationList:

    • Type: array

    • Description: Array of audiometry exams to run. Any properties defined here will supersede commonExamProperties.

      • Audiometry Level Properties

      • id:

        • Type: string
        • Description: Custom presentationId to use for the result for each frequency. (Default = parent id _ Frequency)
  • commonExamProperties:

    • Type: object

    • Description: Object containing any of:

      • Hughson Westlake Exam Properties

      • Bekesy Like Exam Properties

  • commonResponseAreaProperties:

    • Type: object

    • Description: Object containing any of:

      • Audiometry Page Properties
  • measureBackground:

    • Type: string
    • Description: Method to use to measure background noise after an audiometry exam. Can be ThirdOctaveBands.
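
As an illustration of the default presentationId convention noted above (parent page id, underscore, frequency; the exact separator rendering is an assumption), the left-channel entries of the example presentationList would yield ids like:

```python
page_id = "AudiometryList"  # the protocol page id from the example above
presentation_list = [
    {"F": 500, "OutputChannel": "HPL0"},
    {"F": 1000, "OutputChannel": "HPL0"},
]

# Default id per the option description: parent id, underscore, frequency;
# an explicit "id" on a presentation entry would override this.
ids = [p.get("id", f"{page_id}_{p['F']}") for p in presentation_list]
print(ids)  # ['AudiometryList_500', 'AudiometryList_1000']
```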

Response

The chaAudiometryList creates a result object for each presentation in the presentationList, where the presentationId is the id if specified in the protocol.

For each presentation, the result.response is the threshold value (if the exam converged) or Failed to Converge (if the exam failed to converge). Each result object contains Common Audiometry Responses as well as:

result.L = [30,15, ...] // Array of numbers indicating the levels presented
result.RetSPL = 15 // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.FalsePositive = [0,0, ...] // Array (of length equal to length of L) indicating the number of responses to the corresponding presentation that occurred outside the polling time window (3 indicates 3 or more)
result.NumCorrectResponse = 0 // Number of presentations correctly answered (only used when Screener = true)
result.ResponseTime = [874,578, ...] // Array of numbers recording the response time in ms to each presentation (0 if no response)

Schema

  • chaAudiometryList.json

Audiometry Results Plot Response Area

Use this page after an audiometry exam page to display an audiogram of the results. The example given here is used in conjunction with the example for the Manual Audiometry Response Area.

Protocol Example

{
  "id": "ManualAudiometryPlot",
  "title": "Manual Audiometry",
  "questionSubText": "Manual Audiometry Results",
  "responseArea": {
    "type": "chaAudiometryResultsPlot",
    "displayIds": ["ManualAudiometry"]
  }
}

Options

  • displayIds:
    • Type: array
    • Description: An array of strings indicating the page ids whose results to plot, e.g. ["training"] or ["section1_left", "section1_right"]

Response

The result.response from this response area contains no meaningful data. The result object only contains the Common TabSINT Responses. Of interest to the user is the audiogram generated using the data collected in preceding audiometry pages.

Schema

  • chaAudiometryResultsPlot.json

Audiometry Results Table Response Area

Use this page after an audiometry exam page to display a table of the results. The example given here is used in conjunction with the example for the Manual Audiometry Response Area.

Protocol Example

{
  "id": "ManualAudiometryTable",
  "title": "Manual Audiometry",
  "questionSubText": "Manual Audiometry Results",
  "responseArea": {
    "type": "chaAudiometryResultsTable",
    "displayIds": ["ManualAudiometry"]
  }
}

Options

  • displayIds:

    • Type: array
    • Description: An array of strings indicating the page ids whose results to display, e.g. ["training"] or ["section1_left", "section1_right"]
  • showSLMNoise:

    • Type: boolean
    • Description: Display the background noise measured from the SLM probe in the table. (Default = false)
  • showSvantek:

    • Type: boolean
    • Description: Display the background noise measured from the dosimeter in the table. (Default = false)

Response

The result.response from this response area contains no meaningful data. The result object only contains the Common TabSINT Responses. Of interest to the user is the table generated using the data collected in preceding audiometry pages.

Schema

  • chaAudiometryResultsTable.json

Bekesy Like Response Area

Run a Bekesy-Like level threshold exam.

Protocol Example

{
  "id": "BekesyLike",
  "title": "Bekesy Level Exam",
  "questionMainText": "Bekesy Level Exam",
  "instructionText": "Press and hold the button only when you hear the tones.",
  "responseArea": {
      "type": "chaBekesyLike",
      "examInstructions": "Press and hold the button only when you hear the tones.",
      "examProperties": {
          "F": 4000,
          "Lstart": 50,
          "PresentationMax": 100,
          "UseSoftwareButton": true,
          "LevelUnits": "dB SPL",
          "OutputChannel": "HPL0"
    }
  }
}

Options

  • Audiometry Page Properties may be defined on the PAGE, not within the responseArea.

  • examProperties:

    • Type: object

    • Description: May contain any of the properties:

      • Bekesy Like Exam Properties
  • exportToCSV:

    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)

Response

For a chaBekesyLike response area, result.response is a number corresponding to the threshold level in LevelUnits. The result object also contains:

result.Threshold = 5 // Threshold (LevelUnits)
result.Units = "dB SPL" // Same as LevelUnits defined in the protocol
result.L = [50, 54, ...] // Array of numbers indicating the levels (in LevelUnits) presented during the exam
result.MaximumExcursion = 14 // Maximum difference (dB) between consecutive user responses that occurred during ReversalKeep period
result.RetSPL = 10  // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.Slope = -0.061290324 // Slope of L in dB per presentation over the ReversalKeep period

Note that the Common Audiometry Responses will also be provided.

Schema

  • chaBekesyLike.json

Bekesy MLD Response Area

Run a Bekesy Masking Level Difference (MLD) exam.

Protocol Example

{
  "id": "BekesyMLD",
  "title": "Bekesy MLD Exam",
  "questionMainText": "Bekesy MLD Exam",
  "instructionText": "Press and hold the button only when you hear the tones.",
  "responseArea": {
      "type": "chaBekesyMLD",
      "examInstructions": "Press and hold the button only when you hear the tones.",
      "examProperties": {
          "UseSoftwareButton": true,
          "F": 500,
          "Lstart": 70,
          "LowCutoff": 354,
          "HighCutoff": 707,
          "PresentationMax": 200,
          "IncrementStart": 1,
          "MaskerEar": 2,
          "MaskerPhase": 0,
          "TargetEar": 2,
          "TargetPhase": 0,
          "InitialSNR": 0
        }
    }
}

Options

  • Audiometry Page Properties may be defined on the PAGE, not within the responseArea.

  • examProperties:

    • Type: object

    • Description: May contain any of the properties:

      • Bekesy Like Exam Properties

      • MaskerEar:

        • Type: number
        • Description: Channel to be used for the masker noise, where 0 = Left, 1 = Right, 2 = Both. (Default = 2)
      • MaskerPhase:

        • Type: enum
        • Description: Phase of the masking material delivered to the right channel (used only if MaskerEar = 2). Can be 0 or 180. If 0, deliver the exact same noise to both channels. If 180, invert it at the right ear. (Default = 0)
      • LowCutoff:

        • Type: number
        • Description: Low cutoff frequency (Hz) to filter the masker noise. (Default = 500)
      • HighCutoff:

        • Type: number
        • Description: High cutoff frequency (Hz) to filter the masker noise. (Default = 2000)
      • TargetEar:

        • Type: number
        • Description: Channel to be used for the target, where 0 = Left, 1 = Right, 2 = Both. (Default = 2)
      • TargetPhase:

        • Type: enum
        • Description: Phase of the target material delivered to the right channel (used only if TargetEar = 2). Can be 0 or 180. If 0, deliver the exact same target to both channels. If 180, invert it at the right ear. (Default = 0)
      • InitialSNR:

        • Type: number
        • Description: Initial SNR (dB). The masker level is set as Lstart - InitialSNR. (Default = 5, Minimum = -15, Maximum = 10)
  • exportToCSV:

    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)
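
Two of the options above interact in ways worth illustrating: InitialSNR sets the masker level relative to Lstart, and MaskerPhase = 180 inverts the masker at the right ear. A minimal sketch, assuming a 180-degree phase shift of broadband noise is implemented as sample-wise negation (the variable names are illustrative, not TabSINT code):

```python
lstart, initial_snr = 70, 0
masker_level = lstart - initial_snr  # per the InitialSNR description: 70 dB here
print(masker_level)  # → 70

# MaskerPhase sketch (assumption: with MaskerEar = 2 the same noise goes to
# both channels, and MaskerPhase = 180 negates each sample at the right ear).
noise = [0.3, -0.7, 0.1, 0.5]  # stand-in noise samples
masker_phase = 180
left = noise
right = [-s for s in noise] if masker_phase == 180 else noise
print(right)  # → [-0.3, 0.7, -0.1, -0.5]
```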

Response

For a chaBekesyMLD response area, result.response is a number corresponding to the threshold level in LevelUnits. The result object also contains:

result.Threshold = 5 // Threshold (LevelUnits)
result.Units = "dB HL" // Same as LevelUnits defined in the protocol
result.L = [70, 68, ...] // Array of numbers indicating the levels (in LevelUnits) presented during the exam
result.MaximumExcursion = 14 // Maximum difference (dB) between consecutive user responses that occurred during ReversalKeep period
result.RetSPL = 10  // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.Slope = -0.061290324 // Slope of L in dB per presentation over the ReversalKeep period

Note that the Common Audiometry Responses will also be provided.

Schema

  • chaBekesyMLD.json

  • bekesyMLDExamProperties.json

BHAFT Response Area

This response area presents a Bekesy Highest Audible Frequency (BHAFT) exam.

Protocol Example

{
  "id": "BHAFT",
  "title": "Bekesy Highest Audible Frequency Exam",
  "questionMainText": "Bekesy Highest Audible Frequency",
  "instructionText": "Press and hold the button only when you hear the tones.",
  "responseArea": {
    "type": "chaBHAFT",
    "examProperties": {
      "OutputChannel": "HPR0",
      "UseSoftwareButton": true,
      "PresentationMax": 50,
      "Fstart": 8000
    }
  }
}

Options

  • Audiometry Page Properties may be defined on the PAGE, not within the responseArea.

  • examProperties:

    • Type: object

    • Description: May contain any of the properties:

      • Audiometry Frequency Properties

      • ToneRepetitionInterval:

        • Type: integer
        • Description: Interval at which tones are presented, in ms. Overrides the default inherited from audiometryProperties. (Default = 700, Maximum = 2000, Minimum = 450)
      • L0:

        • Type: number
        • Description: Nominal test level in dB SPL. The allowable range for the test level is from the minimum to the maximum of the levels defined in the calibration table. (Default = 80)
      • ReversalDiscard:

        • Type: integer
        • Description: Number of reversals to discard. (Default = 2, Maximum = 10, Minimum = 0)
      • ReversalKeep:

        • Type: integer
        • Description: Number of reversals to keep. (Default = 6, Maximum = 10, Minimum = 2, must be a multiple of 2)
      • IncrementStartMultiplierFrequency:

        • Type: number
        • Description: Frequency increment until ReversalDiscard: multiply this by IncrementNominalFrequency. (Default = 2, Maximum = 10, Minimum = 1)
      • IncrementNominalFrequency:

        • Type: number
        • Description: Frequency increment after first reversal, in octaves. (Default = 0.08333, Maximum = 1, Minimum = 0.01)
      • IncrementStartMultiplierLevel:

        • Type: number
        • Description: Level increment until ReversalDiscard: multiply this by IncrementNominalLevel. (Default = 2, Maximum = 10, Minimum = 1)
      • IncrementNominalLevel:

        • Type: number
        • Description: Level increment after first reversal, in dB. (Default = 4, Maximum = 10, Minimum = 0.5)
      • MinimumOutputLevel:

        • Type: number
        • Description: Minimum level that could be presented during exam, in dB SPL (allowable test levels are bounded by the minimum and maximum output levels defined in the calibration).
  • exportToCSV:

    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)

Response

For a chaBHAFT response area, result.response is a number corresponding to the threshold level in LevelUnits. The result object also contains:

result.Threshold = 65 // Threshold level (dB SPL)
result.ThresholdFrequency = 10275.318 // Threshold frequency (Hz)
result.F = [8000, 8979.696, ...] // Array of numbers indicating the frequencies presented
result.L = [65, 65, ...] // Array of numbers indicating the levels presented

Note that the Common Audiometry Responses will also be provided.
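
As a sanity check, assuming the 0.08333 default for IncrementNominalFrequency represents 1/12 octave, the default increments reproduce the frequency step seen in the example result.F above:

```python
# Check the frequency step implied by the BHAFT defaults against result.F above.
increment_nominal_frequency = 1 / 12   # 0.08333 octaves (default)
increment_start_multiplier = 2         # default multiplier before the first reversal
step_octaves = increment_start_multiplier * increment_nominal_frequency  # 1/6 octave

f_start = 8000
f_next = f_start * 2 ** step_octaves
print(round(f_next, 3))  # → 8979.696, matching the second entry of result.F
```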

Schema

  • chaBHAFT.json

CRM Response Area

This response area is deprecated as of TabSINT version 4.4.0.

Run a WAHTS CRM test. This test assesses the subject's ability to hear a specific talker in a two-talker recording.

Note that the use of the CRM exam is restricted. Please contact tabsint@creare.com if you are interested in using this exam.

Protocol Example

{
  "id": "CRM_exam",
  "title": "CRM Response Area",
  "questionMainText": "CRM Exam",
  "questionSubText": "Your call sign is Baron",
  "responseArea": {
    "type": "chaCRM",
    "autoBegin": true,
    "verticalSpacing": 25,
    "examProperties": {
      "Level": 75,
      "ConditionPresentations": [1,3,2,3,3,3]
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allows user to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into exam, without having to press the 'Begin' button. (Default = false)
  • feedBack:

    • Type: boolean
    • Description: If true, show the correct result after the subject responds or after the maximum time to wait for the user response is reached. (Default = true)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • verticalSpacing:

    • Type: integer
    • Description: Vertical spacing between buttons, given in [px]. (Default = 30)
  • horizontalSpacing:

    • Type: integer
    • Description: Horizontal spacing between buttons, given in [px]. (Default = 10)
  • measureBackground:

    • Type: string
    • Description: Method to use to measure background noise after an audiometry exam. Can be ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Object containing the following options.

      • ConditionPresentations:

        • Type: array

        • Description: An interleaved array of [condition, N presentations] pairs, one pair for each condition to present (e.g. [1, 5] presents condition 1 five times). (Default = [1, 5])

          • Condition 1: loud and soft speakers
          • Condition 2: male and female speakers
          • Condition 3: spatial separation between speakers
      • Level:

        • Type: number
        • Description: Level at which to play the presentations (dB SPL). (Default = 80, Maximum = 115, Minimum = 0)
      • UseMcl:

        • Type: boolean
        • Description: If true, the subject may modify the sound level using the joystick. The Level argument is ignored unless the MCL has not yet been set, in which case it is used as the initial level. (Default = false)
      • MaxResponseTime:

        • Type: number
        • Description: Maximum time, in seconds, to wait for a subject response. (Default = 8)
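
Reading ConditionPresentations as interleaved [condition, count] pairs, the value from the protocol example above expands as follows (an illustrative Python sketch, not TabSINT code):

```python
condition_presentations = [1, 3, 2, 3, 3, 3]  # from the protocol example above

# Interleaved [condition, N presentations] pairs per the option description.
pairs = list(zip(condition_presentations[0::2], condition_presentations[1::2]))
print(pairs)  # → [(1, 3), (2, 3), (3, 3)]: three presentations each of conditions 1, 2, 3
```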

Response

The chaCRM creates a result object for each presentation in ConditionPresentations.

For each presentation, the result.response is the string id of the selected button (e.g. "Blue 1"). Each result object contains Common Audiometry Responses as well as:

result.conditionCode = 2 // Condition of the presentation
result.correctColor = "Red" // String indicating the correct color for this presentation
result.correctNumber = 1 // Integer indicating the correct number for this presentation
result.wavFileName = "0023111821524600211_53.wav"  // String name of the wav file for this presentation
result.responseTime = 5.935177 // Time (sec) between when the presentation ended and when the subject responded
result.correct = true  // True if the subject responded correctly

There is a final result object that contains the percent correct over all of the presentations.

result.correctPercent = [1,1,2,1,3,1,1] // An array of size 2N + 1, where N is the number of conditions presented.  The array contains condition codes and fraction correct for that condition, interleaved.  The last value is the overall fraction correct.
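
The interleaved layout of result.correctPercent can be unpacked as in this illustrative sketch (the variable names are assumptions, not part of the TabSINT API):

```python
correct_percent = [1, 1, 2, 1, 3, 1, 1]  # example value from above (N = 3 conditions)

# Entries 0..2N-1 are (condition code, fraction correct) pairs; the final
# entry is the overall fraction correct.
per_condition = dict(zip(correct_percent[0:-1:2], correct_percent[1:-1:2]))
overall = correct_percent[-1]
print(per_condition, overall)  # → {1: 1, 2: 1, 3: 1} 1
```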

Schema

  • chaCRM.json

Dichotic Digits Response Area

This response area prepares a Dichotic Digits Test. The exam presents two digits to each ear and the subject repeats the digits they heard. This test measures a person's ability to identify digit pairs presented dichotically.

Note that the use of the Dichotic Digits exam is restricted. Please contact tabsint@creare.com if you are interested in using this exam.

Protocol Example

{
   "id": "dichoticDigits",
   "title": "Dichotic Digits Test",
   "questionMainText": "Dichotic Digits Test Example",
   "helpText": "Follow instructions",
   "instructionText": "",
   "responseArea": {
     "type": "chaDichoticDigits",
     "examInstructions" : "Select the digits heard",
     "examProperties": {
       "NumberOfPresentations": 5,
       "Language": "english",
       "Level": 85
    }
  } 
}

Options

  • autoBegin:

    • Type: boolean
    • Description: Go straight into exam, without having to press the 'Begin' button. (Default = false)
  • keypadDelay:

    • Type: number
    • Description: Number of milliseconds to wait before activating the keypad. (Default = 10)
  • feedback:

    • Type: boolean
    • Description: Shows the user which digits were correct after each set of digits is entered. (Default = true)
  • feedbackDelay:

    • Type: number
    • Description: Number of milliseconds to show the digits after the presentation before clearing the keypad. This field can be used even when feedback is set to false. (Default = 1000)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • examProperties:

    • Type: object

    • Description: Object containing the following options.

      • NumberOfPresentations:

        • Type: number
        • Description: Number of presentations in an exam. (Default = 20)
      • Level:

        • Type: number
        • Description: Level of presentations, in dB SPL. (Default = 50)
      • Language:

        • Type: string
        • Description: Language to use for the test media. Can be spanish or english. (Default = spanish)

Response

A Dichotic Digits exam generates a result object for each Dichotic Digits presentation. Within each object, result.response is an array containing the selected numbers. Each result object also contains:

result.presentationId = "dichoticDigits" // Same as the protocol page id
result.PresentationCount = 2 // Number of Dichotic Digits presentations
result.PresentedDigits = [9, 5, 8, 3]  // Array of digits presented (length of 4)
result.PresentedFile = "C:DD/DD9583.WAV"  // Path to wav file presentation
result.PresentationScore = 75 // Percentage of digits correctly identified in the current presentation
result.response = [5, 9, 1, 3] // Array of digits selected (length of 4)

There is a final result object summarizing the test results:

result.response = "Exam Results" // String indicating that this is the summary result for the Dichotic Digits exam
result.presentationId = "dichoticDigits" // Protocol page id 
result.ScoreTotal = 80 // Calculated score percentage for all digits presented. ScoreTotal = (total number of digits identified /(NumberOfPresentations x 4)) x 100. Only valid in state DONE.
result.ScoreLeft = 70 // Calculated score percentage for digits presented to the LEFT ear only. ScoreLeft = (total number of digits that were presented to the LEFT ear and were correctly identified / (NumberOfPresentations x 2)) x 100. Only valid in state DONE.
result.ScoreRight = 90 // Calculated score percentage for digits presented to the RIGHT ear only. ScoreRight = (total number of digits that were presented to the RIGHT ear and were correctly identified / (NumberOfPresentations x 2)) x 100. Only valid in state DONE.
result.resultsFromCha.State = 2 // "Done" state
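
The per-presentation example above (PresentationScore = 75, with three of the four presented digits appearing in the response) suggests digits are scored irrespective of their order in the response. A hedged Python sketch of that inferred rule, not the documented TabSINT algorithm:

```python
from collections import Counter

presented = [9, 5, 8, 3]  # PresentedDigits from the example response above
response = [5, 9, 1, 3]   # digits the subject selected

# Inferred scoring rule (an assumption): count digits identified regardless
# of position via multiset intersection, then express as a percentage.
n_correct = sum((Counter(presented) & Counter(response)).values())
presentation_score = n_correct / len(presented) * 100
print(presentation_score)  # → 75.0
```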

Schema

  • chaDichoticDigits.json

DPOAE Response Area

This response area prepares for a Distortion Product Otoacoustic Emissions (DPOAE) exam. The WAHTS emits tones at two frequencies and listens for a response.

Protocol Example

{
  "id": "DPOAE_LEFT",
  "title": "DPOAE EXAM",
  "questionMainText": "Otoacoustic Emission Measurement",
  "instructionText":"Insert the OAE probe into the LEFT ear, press begin, then wait quietly for measurement to complete.",
  "responseArea": {
    "type":"chaDPOAE",
    "skip": true,
    "examProperties": {
      "BlockSize": 8192,
      "F1": 4000,
      "F2": 6000,
      "L1": 65,
      "L2": 55,
      "InputChannel": 3,
      "DisableSpectrum": true,
      "NoiseRejection": true
    }
  } 
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allows user to skip the exam. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • feedBack:

    • Type: boolean
    • Description: If true, show the correct result after the subject responds or after the maximum time to wait for the user response is reached. (Default = true)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • measureBackground:

    • Type: string
    • Description: Method to use to measure background noise after an audiometry exam. The only available option is ThirdOctaveBands.
  • plotProperties:

    • Type: object

    • Description: Object containing the following option.

      • displayDPOAE:
        • Type: array
        • Description: An array of strings indicating results to display. Option is DPOAE.
  • examProperties:

    • Type: object

    • Description: Object containing the following options.

      • BlockSize:

        • Type: integer
        • Description: The number of samples in a block used for the FFT (must be 1024 for use with the OAE Screener). (Default = 8192)
      • F1:

        • Type: number
        • Description: Frequency of F1 (Hz). (Default = 833.33)
      • F2:

        • Type: number
        • Description: Frequency of F2 (Hz). (Default = 1000)
      • L1:

        • Type: number
        • Description: Level of F1 (dB SPL). (Default = 65)
      • L2:

        • Type: number
        • Description: Level of F2 (dB SPL). (Default = 55)
      • MinTestAverages:

        • Type: number
        • Description: Minimum number of blocks that are averaged into the result before the test ends via the MinDpNoiseFloorThresh criterion. (Default = 60)
      • MaxTestAverages:

        • Type: number
        • Description: Maximum number of blocks to consider for averaging. (Default = 120, Minimum = 0, Maximum = 18750)
      • InputChannel:

        • Type: integer
        • Description: Input channel specifier. Defaults to 100, but 3 should be used for OAES devices.
      • DisableSpectrum:

        • Type: boolean
        • Description: If true, disable FFT spectrum results. Defaults to true (FFT disabled).
      • MinDpNoiseFloorThresh:

        • Type: number
        • Description: When the low DP exceeds the noise floor in the surrounding +/- NoiseHalfBandwidth Hz bins by this amount, the test will conclude (provided the MinTestAverages have been met). (Default = 10)
      • NoiseHalfBandwidth:

        • Type: number
        • Description: Bandwidth over which to calculate the noise floor in Hz. (Default = 30)
      • NoiseRejection:

        • Type: boolean
        • Description: If true, the noise rejection algorithm is applied to discard noisy data blocks. If false, all data is accepted. (Default = false)
      • TransientDiscard:

        • Type: number
        • Description: Initial period of data discarded at the start of each tone, in ms. (Default = 21.3)

Response

The result.response from a chaDPOAE response area contains:

DpLow.Frequency = # // The actual frequency (Hz) of the measurement.
DpLow.Amplitude = # // The amplitude (dB SPL) measured at the microphone at the Frequency.
DpLow.Phase = # // The phase (rad) measured at the microphone at the Frequency.
DpLow.NoiseFloor = # // The amplitude (dB SPL) of the noise floor in the ±3 FFT frequencies [2] around the measurement frequency.  Only calculated for Distortion Products; others shall be zero.
DpHigh.Frequency = # // The actual frequency (Hz) of the measurement.
DpHigh.Amplitude = # // The amplitude (dB SPL) measured at the microphone at the Frequency.
DpHigh.Phase = # // The phase (rad) measured at the microphone at the Frequency.
DpHigh.NoiseFloor = # // The amplitude (dB SPL) of the noise floor in the ±3 FFT frequencies [2] around the measurement frequency.  Only calculated for Distortion Products; others shall be zero.
F1.Frequency = # // The actual frequency (Hz) of the measurement.
F1.Amplitude = # // The amplitude (dB SPL) measured at the microphone at the Frequency.
F1.Phase = # // The phase (rad) measured at the microphone at the Frequency.
F2.Frequency = # // The actual frequency (Hz) of the measurement.
F2.Amplitude = # // The amplitude (dB SPL) measured at the microphone at the Frequency.
F2.Phase = # // The phase (rad) measured at the microphone at the Frequency.
TestAverages = # // The actual number of blocks averaged into the result (important data for noise rejection).
examProperties.BlockSize = # // The number of samples used for the FFT.
examProperties.F1 = # // The actual frequency (Hz) used as F1.
examProperties.F2 = # // The actual frequency (Hz) used as F2.
examProperties.L1 = # // The actual amplitude (dB SPL) used as L1.
examProperties.L2 = # // The actual amplitude (dB SPL) used as L2.
channel = "string" // The input channel specifier.

Schema

  • chaDPOAE.json

Frequency Pattern Detection Response Area

Use this response area to administer a Frequency Pattern Detection Test, which measures a person's ability to identify the frequency pattern of each presentation.

Protocol Example

{
  "id": "FrequencyPattern",
  "title": "Frequency Pattern Detection Test",
  "questionMainText": "Frequency Pattern Detection Test",
  "instructionText":"Enter the pattern as heard",
  "responseArea":{
    "type": "chaFrequencyPattern"
  }
}

Options

  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • feedback:

    • Type: boolean
    • Description: If true, show the user whether they answered each frequency correctly. (Default = true)
  • feedbackDelay:

    • Type: number
    • Description: Number of milliseconds to show the feedback after the presentation before starting the next presentation. (Default = 1000)
  • examProperties:

    • Type: object

    • Description: Object containing the following options.

      • Channel:

        • Type: number
        • Description: Channel to be used, where 0 = Left, 1 = Right, 2 = Both. (Default = 0)
      • NumberOfPresentations:

        • Type: number
        • Description: Number of presentations. (Default = 30, Minimum = 0, Maximum = 40)
      • Level:

        • Type: number
        • Description: Level of tones, set by calibration, in dB SPL (Default = 75, Minimum = 0, Maximum = 100)

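As a sketch of how these options fit together, the following hypothetical protocol page (the id and all parameter values are illustrative, not recommendations) runs a shorter right-ear test with feedback disabled:

{
  "id": "FrequencyPatternShort",
  "title": "Frequency Pattern Detection Test",
  "responseArea": {
    "type": "chaFrequencyPattern",
    "autoBegin": true,
    "feedback": false,
    "examProperties": {
      "Channel": 1,
      "NumberOfPresentations": 20,
      "Level": 70
    }
  }
}
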
Response

A Frequency Pattern Detection exam generates a result object for each Frequency Pattern presentation. Within each object, result.response is a number representing the frequencies identified, where a high frequency is denoted by 2, a low frequency by 1, and unsure by 0. For example, a high-low-low (HLL) pattern is recorded as 211. Each result object also contains:

result.presentationId = "FrequencyPattern" // Same as the protocol page id
result.state = 1 // 1 = In Progress, 2 = Done
result.presentationCount = 15  // Number of presentations
result.presentedPattern = 212  // Frequency pattern that was presented
result.correct = 0 // True (1) if the response matches the presented pattern, false (0) otherwise
result.response = 121 // Frequency pattern selected/entered

There is a final result object summarizing the test results:

result.response = "Exam Results" // String indicating that this is the summary result for the Frequency Pattern Detection exam
result.presentationId = "FrequencyPattern_Results" // Protocol page id with "_Results" appended
result.presentationCount = 30 // Number of presentations completed
result.numberOfReversals = 3 // Total number of reversals (where the pattern selected was the exact inverse of what was presented, for example LHH instead of HLL). Only valid in state DONE
result.score = 83.3 // Calculated score percentage. Only valid in state DONE
result.resultsFromCha.State = 2 // "Done" state

Schema

  • chaFrequencyPattern.json

GAP Response Area

Use this response area to administer a Gap Detection Test, a test that measures the subject's ability to detect silent gaps in white noise.

Protocol Example

{
  "id": "Demonstration",
  "title": "Gap Detection Test",
  "responseArea": {
    "type":"chaGAP",
    "training": true,
    "examProperties": {
      "TimePres": 3000,
      "LNoise": 50,
      "NPresMax": 40
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allow the user to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • feedBack:

    • Type: boolean
    • Description: If true, show the correct result after the subject responds or after the maximum time to wait for the user response is reached. (Default = false)
  • feedbackDelay:

    • Type: number
    • Description: Number of milliseconds to show the feedback after each presentation before starting the next presentation. This field can be used even when feedBack is set to false. (Default = 1000)
  • training:

    • Type: boolean
    • Description: If true, present the training exam. (Default = false)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • measureBackground:

    • Type: string
    • Description: Method to use to measure background noise after an audiometry exam. Can be ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Object containing the following options.

      • Channel:

        • Type: number
        • Description: Channel to be used, where 0 = Left, 1 = Right, 2 = Both. (Default = 0)
      • TimePres:

        • Type: number
        • Description: Total length of each noise presentation in ms. (Default = 4000, Minimum = 0, Maximum = 40000)
      • LNoise:

        • Type: number
        • Description: Presentation level in dBA. (Default = 65, Minimum = 0, Maximum = 85)
      • AllowableGapLengths:

        • Type: array
        • Description: An array of numbers representing a list of allowable gap lengths, in ms, up to 30 items long. The minimum number of items in this array is 1 and the values must be between 0 and 100 (inclusive). (Default = [70,60,50,45,40,35,30,25,20,16,13,10,7,4,2,1])
      • TimeLead:

        • Type: number
        • Description: Length of leading delay before gap can be inserted, in ms. (Default = 1000, Minimum = 0, Maximum = 2000)
      • TimeTrail:

        • Type: number
        • Description: Length of trailing delay needed after gap, in ms. (Default = 1000, Minimum = 0, Maximum = 2000)
      • TimeWindow:

        • Type: number
        • Description: Length of window during which response can be accepted, in ms. (Default = 850, Minimum = 0, Maximum = 2000)
      • TimeNoResp:

        • Type: number
        • Description: Delay between beginning of gap and beginning of response window, in ms. (Default = 100, Minimum = 0, Maximum = 200)
      • TimePause:

        • Type: number
        • Description: Elapsed time between presentations, in ms. (Default = 1000, Minimum = 0, Maximum = 5000)
      • GapLengthStartIndex:

        • Type: number
        • Description: Index into AllowableGapLengths for initial gap length value. (Default = 8, Minimum = 0, Maximum = 29)
      • NReversalsCalc:

        • Type: number
        • Description: Number of reversals to use in computation of threshold. (Default = 8, Minimum = 1, Maximum = 10)
      • NReversals:

        • Type: number
        • Description: Number of reversals before test ends. (Default = 10, Minimum = 1, Maximum = 20)
      • NLowestReversals:

        • Type: number
        • Description: Number of lowest pairwise reversal averages to track for second threshold computation. (Default = 3, Minimum = 0, Maximum = 10)
      • NPresMax:

        • Type: number
        • Description: Maximum number of presentations to use before aborting the exam. (Default = 120, Minimum = 1, Maximum = 200)
      • NHits:

        • Type: number
        • Description: Number of consecutive hits (correct answers) necessary before reducing gap length. (Default = 2, Minimum = 1, Maximum = 3)
      • NMiss:

        • Type: number
        • Description: Number of consecutive misses (incorrect or no answers) necessary before increasing gap length. (Default = 2, Minimum = 1, Maximum = 3)
      • NPresCheck:

        • Type: number
        • Description: Number of consecutive presentations with same gap length allowed. (Default = 5, Minimum = 1, Maximum = 8)
      • MaxFreq:

        • Type: number
        • Description: Maximum frequency used to generate the white noise, in Hz. When the calibration is loaded to create the filter coefficients, the table is truncated to include only those rows whose frequencies are less than or equal to MaxFreq. (Default = 16000, Minimum = 4000, Maximum = 16000)
      • UseSoftwareButton:

        • Type: boolean
        • Description: Uses a software submission instead of the mechanical Button. (Default = false)
      • SendFullResults:

        • Type: number
        • Description: When to transmit thresholds and array results. Available choices are: 0 = Always, 1 = Sometimes, 2 = Never. (Default = 0)
      • SemiAutomaticMode:

        • Type: boolean
        • Description: If true, pause after each pulse train to wait for a response. If false, proceed in a fully automated fashion. (Default = false)

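The adaptive staircase parameters above can be combined to tune exam length. For example, this hypothetical page (the id and all values are illustrative, not recommendations) shortens the exam by relaxing the stopping criteria:

{
  "id": "GapShort",
  "title": "Gap Detection Test",
  "responseArea": {
    "type": "chaGAP",
    "autoBegin": true,
    "feedBack": true,
    "examProperties": {
      "Channel": 2,
      "NReversals": 6,
      "NReversalsCalc": 4,
      "NPresMax": 60,
      "UseSoftwareButton": true
    }
  }
}
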
Response

The contents of the result object depend on the value of the SendFullResults parameter and the State of the exam. The following result items are always included:

result.State = "SUCCESS" // Either "IN PROGRESS," "SUCCESS" (if the exam ended and thresholds were calculated), or "FAILED" (if the exam terminated abnormally)
result.PresentationCount = 32 // Number of completed presentations
result.HitOrMiss = 1 // Integer of 1 (True) if the gap was successfully detected or 0 (False) if either (a) no response or (b) response outside the response window (for current presentation)
result.CurrentGapStartTime = 1907.3019 // Location of the gap within the noise for current presentation, from the start of the noise (ms)
result.CurrentGapLength = 6.9999995 // Gap length for current presentation (ms) (same as last element in result.GapLengthArray).
result.CurrentTimeResp = 360.22913 // Response time after end of gap (ms) for current presentation (-1 if no response) (same as last element in result.TimeRespArray). 
result.PlayPosition = 4000.5413 // Time elapsed since the beginning of the noise presentation (ms)
result.ActualMaxFreq = 48000 // Actual maximum frequency (Hz) used by the calibration routine in generating the FIR filter

If SendFullResults is 0 (or 1 AND the State is not IN PROGRESS), the following items will also be included in the result object:

result.GapThreshold = 5.4999995 // Average gap length (ms) calculated based on last NReversalsCalc reversals (NaN if threshold can't be computed)
result.GapLowestThreshold = 5.4999995 // Average of the shortest NLowestReversals pairwise averages of the NReversals reversals (exclude first reversal if NReversals is odd) (NaN if threshold can't be computed or NLowestReversals is zero)
result.GapLengthArray = [19.999998,19.999998, ...] // Array of gap lengths for each presentation (ms)
result.TimeRespArray = [357.73956,324.5833, ...] // Array of response times (ms) for each presentation (-1 if no response)
result.HitOrMissArray = [true,true, ...] // Boolean array indicating true if the gap was successfully detected, or false if either (a) no response or (b) response outside the response window for each presentation.
result.ReversalUsedForThresholdArray = [false,false, ...] // Boolean array indicating true if the presentation was a reversal used to calculate the threshold or false otherwise

Schema

  • chaGAP.json

HINT Response Area

Run a Hearing in Noise Test (HINT) exam.

Note that the use of the HINT exam is restricted. Please contact tabsint@creare.com if you are interested in using this exam.

Protocol Example

{
  "id": "chaHINT",
  "title": "Hearing in Noise Test",
  "questionMainText":"HINT Exam",
  "questionSubText":"Listen carefully and tell the administrator all of the words that you hear",
  "responseArea": {
    "type": "chaHINT",
    "examProperties": {
      "Language": "english",
      "ListNumber": 1,
      "NumberOfPresentations": 10
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allow the user to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • measureBackground:

    • Type: string
    • Description: Method to use to measure background noise after an audiometry exam. Can be ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Object containing the following options:

      • Language:

        • Type: string
        • Description: Language to use for HINT test (can be english, mandarin, military, swahili, laspanish (Latin American Spanish) or portuguese). (Default = english)
      • IsPractice:

        • Type: boolean
        • Description: If true, run as a practice exam using the practice lists. (Default = false)
      • Direction:

        • Type: string
        • Description: Noise direction, can be front, left, right or quiet. (Default = front)
      • NoiseLevel:

        • Type: number
        • Description: Absolute level at which noise is played, in dBA SPL. (Default = 65, Minimum = 0, Maximum = 85)
      • InitialStepSize:

        • Type: number
        • Description: Change in SNR for first 4 presentations, in dB. (Default = 4, Minimum = 0, Maximum = 20)
      • StepSize:

        • Type: number
        • Description: Change in SNR after each response (after the first 4 presentations), in dB. (Default = 2, Minimum = 0, Maximum = 20)
      • InitialSNR:

        • Type: number
        • Description: SNR for the first presentation, in dB. See below for handling of NaN. (Default = NaN, Minimum = -20, Maximum = 20)
          • If NaN and:
            • Direction = front, then InitialSNR = 0
            • Direction = left, then InitialSNR = -5
            • Direction = right, then InitialSNR = -5
            • Direction = quiet, then InitialSNR = 20 - NoiseLevel
      • ListNumber:

        • Type: number
        • Description: One-based index of the list to use, where 0 selects the list randomly. (Default = 0, Minimum = 0, Maximum = 12)
      • NumberOfPresentations:

        • Type: number
        • Description: Number of presentations. (Default = 20, Minimum = 10, Maximum = 150)
      • DisableRepeatFirstUntilCorrect:

        • Type: boolean
        • Description: By default, the exam repeats the first presentation until the subject answers it correctly. If this field is true, the first presentation is not repeated. (Default = false)

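As an illustration of the InitialSNR defaults above, this hypothetical practice page (the id and all values are illustrative) runs the left-noise condition and lets InitialSNR fall back to -5 dB by leaving it unset:

{
  "id": "chaHINTPractice",
  "title": "Hearing in Noise Test",
  "responseArea": {
    "type": "chaHINT",
    "examProperties": {
      "Language": "english",
      "IsPractice": true,
      "Direction": "left",
      "NumberOfPresentations": 10
    }
  }
}
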
Response

A HINT exam generates a result object for each HINT presentation. Within each object, result.response is an array containing the zero-based indices of the selected words. Each result object also contains:

result.presentationId = "chaHINT" // Same as the protocol page id
result.State = 0 // 0 = Playing, 1 = Loading, 2 = Waiting for Result, 3 = Done
result.SentencePath = "C:HINT/LIST1/TIS019.WAV" // String indicating the file name of the current presentation
result.CurrentSentence = "(A/The) mailman brought (a/the) letter." // String representation of the current presentation
result.ListLength = 10  // Number of presentations
result.CurrentSentenceIndex = 4  // Zero-based index of the current presentation.
result.sSRT = -3.2 // Calculated sentence speech reception threshold (valid only when State = DONE)
result.sSRTstd = 0 // Standard deviation of the SNRs used to calculate sSRT (valid only when State = DONE)
result.CurrentSNR = -3.2 // SNR (dB) of the current sentence
result.selectedWords = ["(A/The)","mailman"]  // Array of strings indicating the words selected from the presentation
result.numberCorrect = 2 // Number of words identified correctly
result.wordCount = 5 // Total number of words in the presentation
result.responseToCha = 3 // Bit field representation of the words which are correct in the sentence.  If the word is correct, the bit is 1; otherwise it is zero.  The least significant bit corresponds to the first word.

There is a final result object summarizing the test results:

result.response = "Exam Results" // String indicating that this is the summary result for the HINT exam
result.presentationId = "chaHINT_Results" // Protocol page id with "_Results" appended
result.presentationCount = 10 // Number of presentations completed
result.correctPresentationCount = 7 // Number of presentations for which the user correctly identified all of the words
result.resultsFromCha.State = 2 // "Done" state
result.resultsFromCha.SentencePath = "C:HINT/LIST1/TIS010.WAV" // String indicating the file name of the last presentation
result.resultsFromCha.CurrentSentence = "(A/The) car (is/was) going too fast." // String representation of the last presentation
result.resultsFromCha.ListLength = 10  // Number of presentations
result.resultsFromCha.CurrentSentenceIndex = 10  // One greater than the zero-based index of the last presentation (equal to ListLength)
result.resultsFromCha.sSRT = -8.971428 // Average of the SNRs of presentations 5 through (NumberOfPresentations + 1), where the SNR of presentation (NumberOfPresentations + 1) is what the SNR would have been had it been presented (valid only when State = DONE)
result.resultsFromCha.sSRTstd = 1.3997027 // Standard deviation of the SNRs used to calculate sSRT (valid only when State = DONE)
result.resultsFromCha.CurrentSNR = -10.4 // SNR (dB) of the next presentation, if it were to be presented

Schema

  • chaHINT.json

Hughson Westlake Response Area

A response area for performing a Hughson Westlake level threshold exam.

Protocol Example

{
  "id": "Hughson Westlake",
  "title": "HW Audiometry",
  "responseArea": {
    "type": "chaHughsonWestlake",
    "autoSubmit": true,
    "examProperties": {
      "F": 500,
      "Lstart": 30,
      "TonePulseNumber": 3,
      "UseSoftwareButton": true,
      "LevelUnits": "dB HL",
      "OutputChannel": "HPR0"
    }
  }
}

Options

  • Audiometry Page Properties may be defined on the PAGE, not within the responseArea.

  • examProperties:

    • Type: object
    • Description: May contain any of the properties from Hughson Westlake Exam Properties
  • exportToCSV:

    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)

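A minimal sketch combining these options (the id and all values are illustrative): this hypothetical page runs a single 1000 Hz threshold and exports the result to CSV on submission:

{
  "id": "HW1000",
  "title": "HW Audiometry",
  "responseArea": {
    "type": "chaHughsonWestlake",
    "autoSubmit": true,
    "exportToCSV": true,
    "examProperties": {
      "F": 1000,
      "Lstart": 40,
      "OutputChannel": "HPR0"
    }
  }
}
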
Response

The result.response is a number corresponding to the threshold level in LevelUnits. The result object also contains the Common Audiometry Responses and:

result.L = [30,15, ...]  // Array of levels presented
result.RetSPL = 15  // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.FalsePositive = [0,0, ...] // Array of numbers indicating the number of responses to each presentation that occurred outside the polling time window (may be 0, 1, 2 or 3 where 3 indicates 3+)
result.NumCorrectResp = 0  // Number of presentations correctly answered (only used when Screener = true)
result.ResponseTime = [859,489, ...] // Array of numbers indicating the response time (ms) to each presentation (no response recorded as 0)

Schema

  • chaHughsonWestlake.json

Manual Audiometry Response Area

Use this response area to run a manual audiometry exam.

Protocol Example

{
  "id": "ManualAudiometry",
  "title": "Manual Audiometry",
  "questionMainText": "Manual Audiometry",
  "submitText": "Finish",
  "helpText": "[Manual Audiometry Instructions]",
  "responseArea": {
    "type": "chaManualAudiometry",
    "minLevel": -20,
    "maxLevel": 80,
    "presentationList": [
      {
        "F": 500
      },
      {
        "F": 1000
      },
      {
        "F": 2000
      },
      {
        "F": 4000
      },
      {
        "F": 6000
      },
      {
        "F": 8000
      }
    ],
    "examProperties": {
      "LevelUnits": "dB SPL",
      "Lstart": 30,
      "TonePulseNumber": 5,
      "OutputChannel": "HPL0",
      "UseSoftwareButton": true
    }
  }
}

Options

  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • boneConduction:

    • Type: boolean
    • Description: If true (and using a compatible WAHTS), enable output to the bone conductor. (Default = false)
  • showPresentedTones:

    • Type: boolean
    • Description: If true, show the tone presented at each frequency on a separate chart. (Default = false)
  • audiometryType:

    • Type: string
    • Description: Which type of audiometry exam to run, can be HughsonWestlake. (Default = HughsonWestlake)
  • minLevel:

    • Type: number
    • Description: Minimum level the user can select for playing tones (hardware limited). (Default = -80)
  • maxLevel:

    • Type: number
    • Description: Maximum level the user can select for playing tones (hardware limited). (Default = 100)
  • minMaskingLevel:

    • Type: number
    • Description: Minimum level the user can select for masking noise (hardware limited). (Default = -80)
  • maxMaskingLevel:

    • Type: number
    • Description: Maximum level the user can select for masking noise (hardware limited). (Default = 80)
  • onlySubmitFrequenciesTested:

    • Type: boolean
    • Description: If true, only generate results for frequencies tested. (Default = false)
  • exportToCSV:

    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)
  • presentationList:

    • Type: array

    • Description: An array of objects defining the frequencies to run. Each object may contain:

      • Audiometry Level Properties

      • id:

        • Type: string
        • Description: Custom presentationId to use for the result object for that frequency.
  • examProperties:

    • Type: object

    • Description: Object which may contain any of the following:

      • Hughson Westlake Exam Properties

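The per-frequency id option can be used to control the presentationId of each result object. A hypothetical sketch (the ids and values are illustrative), combined with onlySubmitFrequenciesTested so only tested frequencies produce results:

{
  "id": "ManualAudiometryCustomIds",
  "title": "Manual Audiometry",
  "responseArea": {
    "type": "chaManualAudiometry",
    "onlySubmitFrequenciesTested": true,
    "presentationList": [
      { "F": 1000, "id": "audiometry-1k" },
      { "F": 4000, "id": "audiometry-4k" }
    ],
    "examProperties": {
      "LevelUnits": "dB SPL"
    }
  }
}
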
Response

The chaManualAudiometry response area generates a result object for each frequency and output channel combination. If onlySubmitFrequenciesTested is true, result objects will only be recorded for the frequency/output channel combinations tested.

Within each object, result.response is a number corresponding to the threshold level, in LevelUnits (or NaN if the frequency/ear combination wasn't tested and onlySubmitFrequenciesTested is false). Each result object also contains the Common Audiometry Responses and:

result.ResponseType = "threshold" // String indicating the type of response
result.presentationIndex = 5  // 0-based index of presentation within the `presentationList`
result.RetSPL = 0 // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.L = [30, 35, 40] // Array of levels presented

Schema

  • chaManualAudiometry.json

Manual Screener Response Area

Use this response area to run a manual screener exam.

Protocol Example

{
  "id": "ManualScreener",
  "responseArea": {
    "type": "chaManualScreener",    
    "levels": [60, 40, 20],
    "presentationList": [
      {
        "F": 500
      },
      {
        "F": 1000
      },
      {
        "F": 2000
      }
    ],
    "examProperties": {
      "LevelUnits": "dB HL",
      "TonePulseNumber": 5,
      "UseSoftwareButton": true,
      "PollingOffset": 1000,
      "MinISI":1000,
      "MaxISI":3000
    }
  }
}

Options

  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • audiometryType:

    • Type: string
    • Description: Which type of audiometry exam to run, can be HughsonWestlake. (Default = HughsonWestlake)
  • onlySubmitFrequenciesTested:

    • Type: boolean
    • Description: If true, only generate results for frequencies tested. (Default = false)
  • levels:

    • Type: array
    • Description: Array of levels to run. Does not support DynamicStartLevel. Maximum length of array is 3. (Default = [60, 40, 25])
  • presentationList:

    • Type: array

    • Description: An array of objects defining the frequencies to run. Each object may contain:

      • id:

        • Type: string
        • Description: Custom presentationId to use for the result object for that frequency.
  • examProperties:

    • Type: object

    • Description: Object which may contain any of the following:

      • Hughson Westlake Exam Properties

Response

The chaManualScreener response area generates a result object for each frequency, level and output channel combination. If onlySubmitFrequenciesTested is true, result objects will only be recorded for the frequency/output channel combinations tested.

Within each object, result.response is P for pass, R for refer or - if the response was not recorded. Each result object also contains:

result.Units = "dB HL"           // String giving the units of the Threshold
result.ResponseType = "pass-fail" // String indicating the type of response
result.presentationIndex = 5  // 0-based index of presentation within the `presentationList`
result.RetSPL = 0 // Reference Equivalent Threshold Sound Pressure Level (RetSPL) at the test frequency
result.L = 30 // Screening level (in result.Units).
result.F = 500 // Screening frequency in Hz.

Schema

  • chaManualScreener.json

Manual Tone Generation Response Area

Use this response area to manually present different tones.

Protocol Example

{
  "id": "ManualTones",
  "title": "Manual Tone Generation",
  "responseArea": {
    "type": "chaManualToneGeneration",
    "presentationList": [
      {
        "F": 2500,
        "ToneDuration": 250,
        "Level": 50
      },
      {
        "F": 5000,
        "ToneDuration": 250,
        "Level": 50
      }
    ]
  }
}

Options

  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • minLevel:

    • Type: number
    • Description: Minimum level the user can select for playing tones (hardware limited).
  • maxLevel:

    • Type: number
    • Description: Maximum level the user can select for playing tones (hardware limited).
  • presentationList:

    • Type: array

    • Description: Array of available frequencies. Each array object may contain any of the Tone Generation Long Level Properties.

      • F:
        • Type: integer
        • Description: Frequency of tone/center frequency of noise. (Maximum = 32000, Minimum = 1)
  • commonPresentationProperties:

    • Type: object
    • Description: This object may contain any of the properties from Tone Generation Long Level Properties

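commonPresentationProperties can factor out values shared by every entry in presentationList. This hypothetical sketch (the id and values are illustrative) is equivalent to the example above, with ToneDuration and Level specified once:

{
  "id": "ManualTonesShared",
  "title": "Manual Tone Generation",
  "responseArea": {
    "type": "chaManualToneGeneration",
    "commonPresentationProperties": {
      "ToneDuration": 250,
      "Level": 50
    },
    "presentationList": [
      { "F": 2500 },
      { "F": 5000 }
    ]
  }
}
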
Response

The result object from a chaManualToneGeneration response area contains only the common TabSINT responses. There are no results from the WAHTS.

Schema

  • chaManualToneGeneration.json

Masked Threshold Response Area

Use this response area to measure the threshold in one ear while masking the other ear with narrow band noise around a test frequency.

Protocol Example

{
  "id": "MaskedThreshold",
  "title": "Masked Threshold",
  "questionMainText": "Masked Threshold",
  "instructionText": "This test measures your hearing threshold with one ear masked.",
  "responseArea": {
    "type": "chaMaskedThreshold",
    "examProperties": {
      "F": 1000,
      "TestEar": "Left",
      "OutputChannel": "HPL0",
      "OE": 40
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allows the user to skip the exam. (Default = false)
  • pause:

    • Type: boolean
    • Description: If true, allows the user to pause the current WAHTS exam. When paused, the user is returned to the 'start' page.
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • hideExamProperties:

    • Type: string
    • Description: Hide the parameters of the audiometry test (i.e. Frequency, Level, Ear) before and/or during a test. Default is to show the parameters before and during a test. Options are before, during, always or never.
  • measureBackground:

    • Type: string
    • Description: Method with which to measure the background noise after an audiometry exam. The option is ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • TestEar:

        • Type: string
        • Description: Ear under test. The test ear cannot be determined from OutputChannel, since the bone oscillator output channel is used for both ears.
      • ThresholdLE:

        • Type: number
        • Description: Left ear unmasked air conduction threshold output, in dB HL. (Default = 20)
      • ThresholdRE:

        • Type: number
        • Description: Right ear unmasked air conduction threshold output, in dB HL. (Default = 20)
      • ThresholdBC:

        • Type: number
        • Description: Unmasked bone conduction threshold output, in dB HL. (Default = null)
      • F:

        • Type: number
        • Description: Frequency of the test signal (Hz). (Default = 1000, Minimum = 500, Maximum = 8000)
      • MaskingType:

        • Type: string
        • Description: Masking method to apply where the options are Auto, Optimized, and Plateau. (Default = Auto)
      • StepSize:

        • Type: number
        • Description: Increment the signal level by this amount, in dB. (Default = 5, Minimum = 1, Maximum = 10)
      • MaskingStepSize:

        • Type: number
        • Description: Increment the masking level by this amount, in dB. (Default = 5, Minimum = 1, Maximum = 10)
      • TonePulseNumber:

        • Type: integer
        • Description: Total number of tones played for each pulse train. (Default = 3, Minimum = 1, Maximum = 5)
      • PollingOffset:

        • Type: integer
        • Description: Period beyond last pulse where subject response still accepted, in ms. Enforced on the CHA: PollingOffset <= MinISI <= MaxISI. (Default = 600, Minimum = 0, Maximum = 2000)
      • OutputChannel:

        • Type: string
        • Description: Channel on which to output the test signal. Options are HPL0, HPR0, HPL1 or HPR1. Note that the LINE channel must select the DAC opposing the MaskingChannel. (Default = HPL0)
      • OE:

        • Type: number
        • Description: Occlusion effect to account for when the OutputChannel is set to the bone oscillator (LINEL0). (Default = 0, Minimum = 0, Maximum = 80)

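As a sketch of a right-ear test using the Plateau masking method and previously measured unmasked thresholds (the id and all values are illustrative, not recommendations):

{
  "id": "MaskedThresholdPlateau",
  "title": "Masked Threshold",
  "responseArea": {
    "type": "chaMaskedThreshold",
    "hideExamProperties": "during",
    "examProperties": {
      "F": 2000,
      "TestEar": "Right",
      "OutputChannel": "HPR0",
      "MaskingType": "Plateau",
      "ThresholdLE": 15,
      "ThresholdRE": 25
    }
  }
}
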
Response

The chaMaskedThreshold response area generates an array of presentation levels presented during the test, an array of masking levels presented during the test, and an array indicating a response (1) or no response (0) received for each presentation.

Schema

  • chaMaskedThreshold.json

MLD Response Area

Use this response area to run a Masking Level Difference (MLD) exam.

Protocol Example

{
  "id": "chaMLD",
  "title": "Masking Level Difference Response Area",
  "responseArea": {
    "type": "chaMLD",
    "examProperties": {    
      "UseSoftwareButton": true,
      "RequireResponse": false,
      "StopOnResponse": true
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allows the user to skip the exam. (Default = false)
  • pause:

    • Type: boolean
    • Description: If true, allows the user to pause the current WAHTS exam. When paused, the user is returned to the 'start' page.
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • hideExamProperties:

    • Type: string
    • Description: Hide the parameters of the audiometry test (i.e. Frequency, Level, Ear) before and/or during a test. Default is to show the parameters before and during a test. Options are before, during, always or never.
  • measureBackground:

    • Type: string
    • Description: Method with which to measure the background noise after an audiometry exam. The option is ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • Frequency:

        • Type: number
        • Description: Frequency of the target tone (Hz), range is set by calibration. (Default = 500)
      • ToneDuration:

        • Type: number
        • Description: Duration of each burst (ms) of the target tone in the pulse train. (Default = 300, Minimum = 100, Maximum = 500)
      • ToneRamp:

        • Type: number
        • Description: Duration (ms) of the ramp up and down of the target tone and masker noise. (Default = 20, Minimum = 20, Maximum = 50)
      • TonePulseNumber:

        • Type: number
        • Description: Number of tones in a pulse train. (Default = 5, Minimum = 1, Maximum = 5)
      • InterToneDuration:

        • Type: number
        • Description: Duration of the periods of silence (ms) in the signal portion before the first tone burst, between subsequent tone bursts, and after the last tone burst. (Default = 300, Minimum = 100, Maximum = 500)
      • FixedSignal:

        • Type: boolean
        • Description: If true, the level of the signal is fixed and the desired SNR is achieved by adjusting the level of the masker. If false, the masker is fixed instead. (Default = false)
      • FixedLevel:

        • Type: number
        • Description: Sound pressure level (dB SPL) of the fixed material. The SPL of the other material is set by the SNR. (Default = 70, Minimum = 20, Maximum = 85)
      • Adaptive:

        • Type: boolean
        • Description: If true, use the adaptive algorithm. (Default = false)
      • UseSoftwareButton:

        • Type: boolean
        • Description: If true, use a host-generated submission in response to a presentation. (Default = true)
      • RequireResponse:

        • Type: boolean
        • Description: If true, wait for a user response. If false, assume a negative response if no response is received. (Default = true)
      • StopOnResponse:

        • Type: boolean
        • Description: If true, the presentation playback ceases once a positive response is obtained (similar to the Hughson-Westlake implementation). (Default = false)
      • TimePause:

        • Type: number
        • Description: Length of time (ms) before the next presentation after a response. (Default = 1000, Minimum = 0, Maximum = 1000)
      • ResponseWindow:

        • Type: number
        • Description: Length of time (ms) to wait for a response when RequireResponse is false before moving on. (Default = 1000, Minimum = 0, Maximum = 1000)
      • UseNoTone:

        • Type: boolean
        • Description: If true, randomly insert presentations with no target tone to catch false positives. (Default = true)
      • NMaxFalsePositives:

        • Type: number
        • Description: Number of false positives that will be tolerated before aborting the exam. (Default = 1, Minimum = 1, Maximum = 40)
      • MaskerBandpass:

        • Type: array
        • Description: Array indicating the lower and upper cut-off frequencies for the masker bandpass filter. Range set by calibration. (Default = [200,800])
      • ReferenceSignalEar:

        • Type: number
        • Description: Channel to use for the target tone during the reference condition, where 0 = left, 1 = right, and 2 = both. (Default = 2)
      • ReferenceSignalPhase:

        • Type: number
        • Description: Phase (in degrees) of the target tone delivered to the right channel. This parameter is only used if ReferenceSignalEar = 2 for the reference condition. (Default = 0, Minimum = 0, Maximum = 359)
      • ReferenceMaskerEar:

        • Type: number
        • Description: Channel to use for the masker noise during the reference condition, where 0 = left, 1 = right, and 2 = both. (Default = 2)
      • ReferenceMaskerPhase:

        • Type: number
        • Description: Phase (in degrees) of the masker delivered to the right channel. This parameter is only used if ReferenceMaskerEar = 2 for the reference condition. The valid options are (Default = 0):
          • 0: deliver the exact same noise to both ears
          • 180: invert the masker at the right ear
          • -1: generate new random noise for the right ear (all other values are invalid)
      • ReferenceInitialSNR:

        • Type: number
        • Description: SNR of the first presentation at the reference condition. (Default = 1, Minimum = -15, Maximum = 10)
      • ReferenceNPresentations:

        • Type: number
        • Description: Number of presentations for the reference condition. (Default = 10, Minimum = 5, Maximum = 50)
      • ReferenceStepSize:

        • Type: number
        • Description: The increment or decrement (in SPL) between presentations of the reference condition. (Default = 2, Minimum = 0, Maximum = 5)
      • TargetSignalEar:

        • Type: array
        • Description: Channel(s) to use for the target tone during the target condition(s), where 0 = left, 1 = right, and 2 = both. (Default = [2])
      • TargetSignalPhase:

        • Type: array
        • Description: Phase(s) of the target tone delivered to the right channel. This parameter is only used if TargetSignalEar = 2 for the target condition(s). (Default = [0], Minimum = 0, Maximum = 359)
      • TargetMaskerEar:

        • Type: array
        • Description: Channel(s) to use for the masker noise during the target condition(s), where 0 = left, 1 = right, and 2 = both. (Default = [2])
      • TargetMaskerPhase:

        • Type: array
        • Description: Phase(s) of the masker delivered to the right channel. This parameter is only used if TargetMaskerEar = 2 for the target condition(s). The valid options are (Default = [0]):
          • 0: deliver the exact same noise to both ears
          • 180: invert the masker at the right ear
          • -1: generate new random noise for the right ear (all other values are invalid)
      • TargetInitialSNR:

        • Type: array
        • Description: SNR(s) of the first presentation(s) during the target condition(s). (Default = [-7], Minimum = -15, Maximum = 10)
      • TargetNPresentations:

        • Type: array
        • Description: Number(s) of presentations for the target condition(s) (Default = [11], Minimum = 5, Maximum = 50)
      • TargetStepSize:

        • Type: array
        • Description: For each target condition, the increment or decrement (in SPL) to use between presentations of that condition. (Default = [2], Minimum = 0, Maximum = 5)
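The Reference* and Target* properties above can be combined to define several target conditions in a single exam. As a sketch only (the values below are illustrative, not recommended test parameters), an adaptive exam comparing a diotic reference against an antiphasic-signal condition and an antiphasic-masker condition might look like:

```json
{
  "id": "chaMLDTwoTargets",
  "title": "MLD with Two Target Conditions",
  "responseArea": {
    "type": "chaMLD",
    "examProperties": {
      "Frequency": 500,
      "Adaptive": true,
      "ReferenceSignalEar": 2,
      "ReferenceSignalPhase": 0,
      "ReferenceMaskerEar": 2,
      "ReferenceMaskerPhase": 0,
      "TargetSignalEar": [2, 2],
      "TargetSignalPhase": [180, 0],
      "TargetMaskerEar": [2, 2],
      "TargetMaskerPhase": [0, 180],
      "TargetInitialSNR": [-7, -7],
      "TargetNPresentations": [11, 11],
      "TargetStepSize": [2, 2]
    }
  }
}
```

Because each Target* property is an array, the nth element of every array describes the nth target condition, and the result object reports one TargetThreshold and one MLD per condition.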

Response

The chaMLD response area generates a result object for each presentation. Each result object contains:

result.State = "DONE" // String representing the exam state (PLAYING, WAITING_FOR_RESULT, BETWEEN or DONE)
result.ResultType = "SUCCESS"  // String reporting "FAIL" if no positive responses or too many false positives, otherwise "SUCCESS"
result.FailureType = "" // String indicating reason for a "FAIL" (only provided if ResultType is "FAIL")
result.Condition = "REFERENCE" // Condition of the current presentation ("REFERENCE", "TARGET" or "NO_TONE")
result.CurrentConditionCount = 10 // Counter for the current condition or sub-condition
result.FalsePositiveCounter = 0 // Current number of false positives
result.TargetIndex = -999 // Index of sub-condition (during TARGET condition, otherwise it is -999)
result.ActualFrequency = 500 // Actual frequency (Hz) of the target tone
result.CurrentSNR = -17 // SNR of the current condition or sub-condition
result.TargetThreshold = [-28,0,0,0]  // Calculated SNR threshold(s) for the Target condition(s) (only valid in state DONE)
result.MLD = [10,0,0,0] // Calculated MLD(s) for each of the Target condition(s) (only valid in state DONE)
result.ReferenceSNRArray = [1,-1, ...]  // Array of SNRs of each presentation during the Reference condition
result.TargetSNRArray = [-7,-9, ...]  // 2-D Array of SNRs of each presentation during the Target condition(s)
result.ReferenceHitOrMiss = [true,true, ...] // Array of subject responses to each presentation during the Reference condition
result.TargetHitOrMiss = [false,true, ...] // 2-D Array of subject responses to each presentation during the Target condition(s)
result.ReferenceThreshold = -18 // Calculated SNR threshold for the Reference condition (only valid in state DONE)
userResponses.Condition = ["REFERENCE","TARGET", ...] // Array of conditions presented (REFERENCE,TARGET or NO_TONE)
userResponses.Response = [true,true, ...] // Array of user responses to the presentations

Schema

  • chaMLD.json

Sound Recognition Response Area

This response area is deprecated as of TabSINT version 4.4.0.

Use this response area to run a Sound Detection test.

Protocol Example

{
  "id": "SoundRecognition",
  "title": "Sound Recognition",
  "questionMainText": "In Noise",
  "helpText": "This test measures your ability to detect sounds in background noise.",
  "hideProgressBar":true,
  "responseArea": {
    "type": "chaSoundRecognition",
    "categories": [
      {
        "name": "AIRCRAFT",
        "soundClasses": [
          {
            "name": "FIXED-WING",
            "imgPath": "fixed-wing.jpg",
            "wavfiles": [
              {"path":"C:USER/SRIN/Aircraft/Jet/A-J-0001.wav"},
              {"path":"C:USER/SRIN/Aircraft/Jet/A-J-0002.wav"},
              {"path":"C:USER/SRIN/Aircraft/Jet/A-J-0003.wav"}
            ]
          },
          {
            "name": "ROTARY-WING",
            "imgPath": "rotary-wing.gif",
            "wavfiles": [
              {"path":"C:USER/SRIN/Aircraft/Rotor/A-R-0001.wav"},
              {"path":"C:USER/SRIN/Aircraft/Rotor/A-R-0002.wav"},
              {"path":"C:USER/SRIN/Aircraft/Rotor/A-R-0003.wav"}
            ]
          }
        ]
      },
      {
        "name": "CROWD",
        "soundClasses": [
          {
            "name": "POSITIVE",
            "imgPath": "thumbs up.png",
            "wavfiles": [
              {"path":"C:USER/SRIN/Crowd/Positive/C-P-0001.wav"},
              {"path":"C:USER/SRIN/Crowd/Positive/C-P-0002.wav"}
            ]
          },
          {
            "name": "NEGATIVE",
            "imgPath": "thumbs down.png",
            "wavfiles": [
              {"path":"C:USER/SRIN/Crowd/Negative/C-N-0001.wav"},
              {"path":"C:USER/SRIN/Crowd/Negative/C-N-0002.wav"}
            ]
          }
        ]
      },
      {
        "name": "FOOTSTEPS",
        "soundClasses": [
          {
            "name": "RUNNING",
            "imgPath": "running.PNG",
            "wavfiles": [
              {"path":"C:USER/SRIN/Footsteps/Running/F-R-0001.wav"},
              {"path":"C:USER/SRIN/Footsteps/Running/F-R-0002.wav"}
            ]
          },
          {
            "name": "WALKING",
            "imgPath": "walking.jpg",
            "wavfiles": [
              {"path":"C:USER/SRIN/Footsteps/Walking/F-W-0001.wav"},
              {"path":"C:USER/SRIN/Footsteps/Walking/F-W-0002.wav"}
            ]
          }
        ]
      }
    ],
    "startSNR": -10,
    "stepSizeSNR": 3,
    "pointsGoal": 5,
    "pointsAwardedForMaxedOutTrial": 0,
    "pointsAwardedForWrongCategory": 0,
    "backgroundNoiseLevel": 50,
    "pause": true,
    "nTrialsWithoutResponsePause": 1,
    "noResponseMessage": "<div>It looks like you have not pressed any buttons in a while.</div><br><br><div>If you are letting it time out because you cannot recognize the sounds, press 'RESUME' to continue.</div><br><br><div>If the test does not seem to be working, see the test administrator for help.</div>",
    "incorrectMessageInitial": "It looks like you are choosing some incorrect answers.  Remember, only choose an answer if you are sure.",
    "incorrectMessageRepeat": "It looks like you are still choosing some incorrect answers.  See the test administrator for help."
  }
}

Options

  • categories:

    • Type: array

    • Description: An array of objects defining the categories for the exam (must contain at least 1 object). Each object contains:

      • name:

        • Type: string
        • Description: Name of category.
      • soundClasses:

        • Type: array

        • Description: A 2-element array of objects defining the sound classes within the category. Each object contains:

          • name:

            • Type: string
            • Description: Name of sound class.
          • imgPath:

            • Type: string
            • Description: Relative path of the image to display for the sound class.
          • wavfiles:

            • Type: array

            • Description: An array of objects defining the wav files for the sound class. Each object can contain:

              • path:

                • Type: string
                • Description: Path to the wav file on the CHA, for example "C:USER/SRIN/Aircraft/Jet/A-J-0001.wav" (required).
              • playbackLevelAdjustment:

                • Type: number
                • Description: Allows fine tuning of wav file playback levels, where playback level = level + playbackLevelAdjustment (optional).
  • startSNR:

    • Type: integer
    • Description: Starting SNR. Starting level = backgroundNoiseLevel + startSNR. (Default = -15, Minimum = -30, Maximum = 30)
  • stepSizeSNR:

    • Type: integer
    • Description: Increase playback level by this amount (dB) each time the subject does not hear the sound. (Default = 1, Minimum = 0, Maximum = 10)
  • maxSNR:

    • Type: integer
    • Description: Maximum SNR to be presented. Maximum level is backgroundNoiseLevel + maxSNR. If subject does not hear the sound at this level, the exam fails for this sound and moves on to the next sound. (Default = 20, Minimum = 0, Maximum = 50)
  • pointsGoal:

    • Type: integer
    • Description: The exam will be complete when the subject reaches this many points. (Default = 20, Minimum = 1, Maximum = 50)
  • pointsAwardedForCorrectAnswer:

    • Type: number
    • Description: Number of points awarded when the subject selects the correct sound category AND sound class. (Default = 1)
  • pointsAwardedForRightCategoryWrongSubcategory:

    • Type: number
    • Description: Number of points awarded when the subject selects the correct sound category but the wrong sound class (subcategory). (Default = 0)
  • pointsAwardedForWrongCategory:

    • Type: number
    • Description: Number of points awarded when the subject selects the wrong category. (Default = -1)
  • pointsAwardedForMaxedOutTrial:

    • Type: number
    • Description: Number of points awarded when the trial reaches the maxSNR with no response. (Default = 1)
  • presentationMax:

    • Type: integer
    • Description: The maximum number of sound-recognition trials to present to the subject. Each sound played, regardless of whether it is a repeat, counts toward this total. (Default = 50)
  • incorrectPresentationMax:

    • Type: integer
    • Description: The exam will end if the subject answers this many presentations incorrectly. (Default = 50)
  • backgroundNoiseType:

    • Type: string
    • Description: Type of background noise (white, pink or brown). (Default = pink)
  • backgroundNoiseLevel:

    • Type: integer
    • Description: Level of background noise during presentations (dB SPL). The level of the noise is constant at this level during the presentations and drops to backgroundNoiseIdleLevel during feedback. (Default = 55)
  • backgroundNoiseIdleLevel:

    • Type: integer
    • Description: Level of background noise during feedback between trials (dB SPL). (Default = 40)
  • hidePointsTotalAndGoal:

    • Type: boolean
    • Description: If false, display the message "Number of points: nPoints out of nPointsGoal" at the bottom of the exam. If true, hide the message (for training/practice). (Default = false)
  • hideButtonPressTimer:

    • Type: boolean
    • Description: If false, display the message "Seconds to button press: nSeconds" at the bottom of the exam. If true, hide the message (for training/practice). (Default = false)
  • trainingMode:

    • Type: boolean
    • Description: If true, run the exam in training mode (with no background noise and using the training logic for success). (Default = false)
  • trainingLevel:

    • Type: integer
    • Description: If trainingMode is true, sets the playback level (dB SPL) for target sounds. (Default = 70)
  • trainingGoal:

    • Type: integer
    • Description: If trainingMode is true, sets the number of required correct identifications on the FIRST button press for each sound class. (Default = 2)
  • trainingMaxExemplarRepeats:

    • Type: integer
    • Description: Maximum number of unsuccessful attempts at a particular exemplar before moving on to the next exemplar during training mode (training mode does not increment the level). (Default = 10)
  • responseDelay:

    • Type: integer
    • Description: Delay (ms) after each loop, giving the subject time to press the button after the sound completes. Set to 0 to have no delay. If 0, correct/incorrect is displayed as soon as the sound finishes. (Default = 1000)
  • pause:

    • Type: boolean
    • Description: If true, allow the user to pause the exam in the middle. (Default = false)
  • pauseIfNoResponse:

    • Type: boolean
    • Description: If true and no response has been received after nTrialsWithoutResponsePause trials, pause the exam, show the noResponseMessage, and offer the ability to resume or restart. (Default = true)
  • nTrialsWithoutResponsePause:

    • Type: integer
    • Description: If no response from user after this number of trials, pause if pauseIfNoResponse = true. (Default = 3)
  • noResponseMessage:

    • Type: string
    • Description: Message to show the user if the exam automatically pauses after nTrialsWithoutResponsePause. (Default = It looks like you have not selected any sounds in a while. Please see an administrator if you have any questions.)
  • pauseIfIncorrect:

    • Type: boolean
    • Description: If true, pause and show incorrectMessageInitial (or incorrectMessageRepeat) if the subject answers nTrialsIncorrectPause number of presentations incorrectly. (Default = true)
  • nTrialsIncorrectPause:

    • Type: integer
    • Description: If the subject answers this many presentations incorrectly, pause and show the first incorrect message (incorrectMessageInitial). (Default = 2)
  • incorrectMessageInitial:

    • Type: string
    • Description: Message to show if the subject answers nTrialsIncorrectPause number of presentations incorrectly.
  • incorrectMessageRepeat:

    • Type: string
    • Description: Message to show during subsequent pauses if the subject answers more presentations incorrectly.
  • presentAllTokens:

    • Type: boolean
    • Description: If true, present all sound tokens once to each subject, in randomized order. (Default = false)
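For a practice session, the training options above can be combined. The following is a minimal sketch using one category with illustrative values; a real protocol would typically define more categories and wav files:

```json
{
  "id": "SoundRecognitionTraining",
  "title": "Sound Recognition Training",
  "responseArea": {
    "type": "chaSoundRecognition",
    "categories": [
      {
        "name": "AIRCRAFT",
        "soundClasses": [
          {
            "name": "FIXED-WING",
            "imgPath": "fixed-wing.jpg",
            "wavfiles": [{"path": "C:USER/SRIN/Aircraft/Jet/A-J-0001.wav"}]
          },
          {
            "name": "ROTARY-WING",
            "imgPath": "rotary-wing.gif",
            "wavfiles": [{"path": "C:USER/SRIN/Aircraft/Rotor/A-R-0001.wav"}]
          }
        ]
      }
    ],
    "trainingMode": true,
    "trainingLevel": 70,
    "trainingGoal": 2,
    "trainingMaxExemplarRepeats": 10,
    "hidePointsTotalAndGoal": true,
    "hideButtonPressTimer": true
  }
}
```

In training mode no background noise is played, so the SNR and points options are not needed; hiding the points and timer messages keeps the practice screen uncluttered.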

Response

The chaSoundRecognition response area creates a result object for each presentation. Each result object contains:

result.chosenCategory = "CROWD" // String indicating category selected by the subject
result.chosenSoundClass = "NEGATIVE" // String indicating sound class selected by the subject
result.response.Category = "CROWD" // String indicating category selected by the subject
result.response.SoundClass = "NEGATIVE" // String indicating sound class selected by the subject
result.correct = true // Boolean indicating whether the subject correctly identified the category and sound class
result.points = 1  // Total number of points the subject has accumulated so far
result.presentedCategory = "CROWD" // String indicating category presented
result.presentedSoundClass = "NEGATIVE" // String indicating sound class presented
result.presentedWavfile = "C:USER/SRIN/Crowd/Negative/C-N-0001.wav" // String path to the wav file on the WAHTS that was presented
result.presentedLevel = 43 // A-weighted level (dBA) of the last presentation of the wavfile before the subject selected their response, calculated as backgroundNoiseLevel+startSNR+stepSizeSNR*(exemplarPlayCount-1)
result.presentedLevelOffset = 0 // Level adjustment (dB) to get specific wav file to level desired by test (set for each wavfile as exam input parameter playbackLevelAdjustment)
result.presentedSNR = -7 // SNR (dB) of the last presentation of the wav file before the subject selected a response (note: this compares A-weighted targets and Z-weighted noise)  
result.levelChangedB = 3 // Level increase (dB) of the wav file from the first presentation to when the subject selected a response (only when trainingMode == false, if trainingMode == true, levelChangedB is 0)
result.timeToButtonPress = 6.9 // Elapsed time (s) before the subject selected a sound class
result.soundDetectionTime = 5.8 // Elapsed time (s) before the subject selected a category
result.trainingAttempts = [] // If trainingMode == true, array of answers (category and sound class) before the subject answered correctly.  If trainingMode == false, returns an empty array.
result.trainingAttemptCount = 0 // If trainingMode == true, number of answers before the subject answered correctly.  If trainingMode == false, returns 0.
result.exemplarPlayCount = 2 // Number of times that the wav file was presented before the subject selected a response (when trainingMode == false), or before the subject answered correctly (when trainingMode == true).
result.trainingMode = false // Boolean showing if trainingMode was used

In addition, there is a final result object that includes the aggregated results from all presentations.

result.presentedExemplars = [{}] // This array of objects is given in the last response array element.  It contains all of the objects called out above.  An array element is included for each presentation.

Schema

  • chaSoundRecognition.json

TAT Response Area

This response area is deprecated as of TabSINT version 4.4.0.

Use this response area to run a Tones at Threshold (TAT) exam.

Protocol Example

{
  "id": "TAT",
  "title": "Tones at Threshold Exam",
  "questionMainText": "Tones at Threshold",
  "instructionText": "Select the pattern that represents the sound blocks presented",
  "responseArea": {
    "type": "chaTAT",
    "examProperties": {
      "ToneLevel": 30,
      "NPresentations": 5,
      "Frequency": 5000
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allows the subject to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = true)
  • feedback:

    • Type: boolean
    • Description: If true, show the subject which blocks contain the signal during the presentation. (Default = false)
  • feedbackDelay:

    • Type: number
    • Description: Length of time (ms) to show the digits after the presentation before clearing the keypad. This delay will still be used even when feedback is set to false. (Default = 1000)
  • training:

    • Type: boolean
    • Description: If true, run a training exam. (Default = false)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • measureBackground:

    • Type: string
    • Description: Method with which to measure the background noise after an audiometry exam. The option is ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • NPresentations:

        • Type: number
        • Description: Number of presentations. (Default = 10, Minimum = 1, Maximum = 100)
      • NBlocks:

        • Type: number
        • Description: Number of blocks per presentation. (Default = 4, Minimum = 2, Maximum = 10)
      • NExclude:

        • Type: number
        • Description: Number of first/last blocks that cannot contain the tone. (Default = 1)
      • NoiseBandCenterFreq:

        • Type: number
        • Description: Center frequency of noise bandpass filter (Hz). Maximum and minimum set by calibration. (Default = 1000)
      • NoiseBandSize:

        • Type: number
        • Description: The denominator N of the 1/N-octave bandpass filter width. (Default = 1, Minimum = 1, Maximum = 12)
      • NoiseLevel:

        • Type: number
        • Description: Noise level (dB SPL). Maximum and minimum set by calibration. (Default = 25)
      • Frequency:

        • Type: number
        • Description: Frequency of the target tone (Hz). Maximum and minimum set by calibration. (Default = 1000)
      • ToneLevel:

        • Type: number
        • Description: Level of the target tone (dB SPL). Maximum set by calibration. (Default = 25)
      • ToneDuration:

        • Type: number
        • Description: Duration of each tone pulse in the signal pulse train (ms), including the ramp up and down. (Default = 300, Minimum = 100, Maximum = 500)
      • ToneRamp:

        • Type: number
        • Description: Duration of tone ramp up and ramp down (ms) within each tone pulse in the signal pulse train. (Default = 20, Minimum = 20, Maximum = 50)
      • TonePulseNumber:

        • Type: number
        • Description: Number of pulses in the signal pulse train. (Default = 5, Minimum = 1, Maximum = 5)
      • InterToneDuration:

        • Type: number
        • Description: Time between each signal tone pulse within a pulse train (ms). Duration is applied before the first signal pulse and after the last signal pulse in the train as well. (Default = 300, Minimum = 100, Maximum = 500)
      • TimeGap:

        • Type: number
        • Description: Time between sound blocks in a presentation (ms). (Default = 1000, Minimum = 0, Maximum = 1000)
      • Ear:

        • Type: string
        • Description: Ear to use for the output. Can be Left, Right or Both. (Default = Left)
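The noise-band properties above can be used to place the target tone in a narrowband masker. As a sketch (values are illustrative; property names should be checked against the chaTAT.json schema):

```json
{
  "id": "TATNarrowband",
  "title": "Tones at Threshold in Narrowband Noise",
  "responseArea": {
    "type": "chaTAT",
    "examProperties": {
      "NPresentations": 10,
      "NBlocks": 4,
      "NExclude": 1,
      "NoiseBandCenterFreq": 1000,
      "NoiseBandSize": 3,
      "NoiseLevel": 25,
      "Frequency": 1000,
      "ToneLevel": 25,
      "Ear": "Both"
    }
  }
}
```

Here NoiseBandSize = 3 selects a 1/3-octave masker centered on the 1000 Hz target tone, and NExclude = 1 keeps the tone out of the first and last of the four blocks.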

Response

The chaTAT response area returns a result object for each presentation. Each result object contains the following:

result.presentation = 1  // 0-based index of the current presentation
result.userResponse = 2  // Response given by the user
result.correctResponse = 2 // Correct response
result.correct = true // Boolean whether the subject was correct

Additionally, it returns a final result object with the following summary information:

result.response = "Exam Results"  // String indicating it is the summary
result.score = 100  // Number indicating final score (%)

Schema

  • chaTAT.json

Third Octave Bands Response Area

Use this response area to measure the background noise level in each one-third octave band.

Protocol Example

{
  "id": "Third Octave Band",
  "title": "Third Octave Band Response Area",
  "questionMainText": "Background Noise Measurement",
  "questionSubText": "Please sit quietly while the test completes",
  "responseArea": {
    "type": "chaThirdOctaveBands"
  }
}

Options

  • measureBothEars:

    • Type: boolean
    • Description: If true, measure both ears, using channels SMICR0 (left) and SMICR1 (right). Default channel is SMICR0. (Default = false)
  • skip:

    • Type: boolean
    • Description: If true, allow the subject to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • delay:

    • Type: integer
    • Description: Delay (ms) between autoBegin and start of exam. (Minimum = 0)
  • standard:

    • Type: object

    • Description: An object with the following properties defining the reference standard:

      • name:

        • Type: string
        • Description: The standard name. Can be ANSI MPANL.
      • data:

        • Type: array

        • Description: Array of frequency and level pairs. Two objects are defined in each array element:

          • F:

            • Type: integer
            • Description: Frequency (Hz).
          • L:

            • Type: integer
            • Description: Allowable sound level (dB SPL).
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • BufferLength:

        • Type: integer
        • Description: Minimum number of samples to consider for third octave result (note that the WAHTS may use more samples than specified). (Default = 98304, Minimum = 1, Maximum = 4294959104)
      • InputChannel:

        • Type: string
        • Description: Input channel to use. Can be SMICR0, SMICR1, SMICL0 or SMICL1. (Default = SMICR0)
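The standard and examProperties options above can be combined as in the sketch below. The level values in data are placeholders for illustration only, not values taken from the ANSI standard:

```json
{
  "id": "Third Octave Band Both Ears",
  "title": "Third Octave Band Response Area",
  "questionMainText": "Background Noise Measurement",
  "responseArea": {
    "type": "chaThirdOctaveBands",
    "measureBothEars": true,
    "standard": {
      "name": "ANSI MPANL",
      "data": [
        {"F": 500, "L": 21},
        {"F": 1000, "L": 26},
        {"F": 2000, "L": 34}
      ]
    },
    "examProperties": {
      "BufferLength": 98304,
      "InputChannel": "SMICR0"
    }
  }
}
```

Each measured band level in result.Leq can then be compared against the allowable level L for the nearest frequency F in the standard.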

Response

The result object returned from a chaThirdOctaveBands response area contains:

result.response = "continue"  // String response returned when the measurement completes
result.Frequencies = [12.5, ... ] // Array of frequencies for which band levels were computed
result.Leq = [25, ...] // Array of sound levels in each frequency band (same length as Frequencies)

Schema

  • chaThirdOctaveBands.json

Three Digit Response Area

Use this response area to run a Triple Digit Task exam.

Protocol Example

{
  "id": "Three Digit",
  "title": "Three Digit Test Response Area Example",
  "questionMainText": "Three Digit Exam",
  "questionSubText": "Enter the 3 Digits You Hear",
  "responseArea": {
    "type": "chaThreeDigit",
    "examProperties":{
      "nPresentations": 10,
      "warmupN": 5
    }
  }
}
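The adaptive track can also be shaped with the step-size properties documented under examProperties. A sketch with asymmetric steps (illustrative values only):

```json
{
  "id": "Three Digit Steep",
  "title": "Three Digit with Asymmetric Steps",
  "responseArea": {
    "type": "chaThreeDigit",
    "examProperties": {
      "nPresentations": 30,
      "warmupN": 5,
      "initialSNR": 0,
      "correctStep": -1,
      "incorrectStep": 3,
      "ear": "both"
    }
  }
}
```

Because the steps are applied per digit, a fully correct response here lowers the SNR by 3 dB while a fully incorrect response raises it by 9 dB, driving the track quickly toward threshold.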

Options

  • skip:

    • Type: boolean
    • Description: If true, allows the subject to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • keypadDelay:

    • Type: number
    • Description: Time (ms) to wait before activating the keypad. (Default = 10)
  • feedback:

    • Type: boolean
    • Description: If true, show the subject which digits were correct after each set of digits is entered. (Default = true)
  • feedbackDelay:

    • Type: number
    • Description: Time (ms) to show the digits after the presentation before clearing the keypad. This field can be used even when feedback is set to false. (Default = 1000)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • measureBackground:

    • Type: string
    • Description: Method with which to measure the background noise after an audiometry exam. The option is ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • nPresentations:

        • Type: number
        • Description: Number of presentations. (Default = 50, Minimum = 1, Maximum = 100)
      • warmupN:

        • Type: number
        • Description: Number of presentations during the warm-up period. (Default = 10, Minimum = 0, Maximum = 100)
      • targetType:

        • Type: string
        • Description: Type of target material. The options are filtered, timeCompressed, H3CamFiltered or TFS. (Default = filtered)
      • warmupMasker:

        • Type: string
        • Description: Type of masker material used during the warm-up period (only valid when maskerType = Schroeder). The options are none, negativePhase or positivePhase. (Default = positivePhase)
      • initialSNR:

        • Type: number
        • Description: Signal to Noise Ratio of the first presentation. (Default = 0, Minimum = -25, Maximum = 25)
      • fixedLevel:

        • Type: number
        • Description: Level of either the target or the masker, whichever is the fixedMaterial, in dB SPL. (Default = 75, Minimum = 0, Maximum = 100)
      • fixedMaterial:

        • Type: string
        • Description: Defines whether the target or the masker is presented at a fixed level equal to the fixedLevel. The level of the other is adjusted to get the desired SNR. (Default = target)
      • correctStep:

        • Type: number
        • Description: SNR step size for each correct digit in the previous response (dB). (Default = -0.5, Minimum = -25, Maximum = 25)
      • incorrectStep:

        • Type: number
        • Description: SNR step size for each incorrect digit in the previous response (dB). (Default = 2, Minimum = -25, Maximum = 25)
      • warmupCorrectStep:

        • Type: number
        • Description: SNR step size for each correct digit in the previous response during the warm-up period (dB). (Default = -0.5, Minimum = -25, Maximum = 25)
      • warmupIncorrectStep:

        • Type: number
        • Description: SNR step size for each incorrect digit in the previous response during the warm-up period (dB). (Default = 2, Minimum = -25, Maximum = 25)
      • maxSNR: (NOTE: This is Deprecated as of TabSINT v.4.3.0)

        • Type: number
        • Description: Max SNR during all presentations. (Default = 25, Minimum = 0, Maximum = 30)
      • ear:

        • Type: string
        • Description: Which ear to use for the output. The options are left, right or both. (Default = both)
      • maxLevel: (NOTE: This is Deprecated as of TabSINT v.4.3.0)

        • Type: number
        • Description: Max output level (dB SPL) during presentations. (Default = 90, Minimum = 0, Maximum = 100)
  • exportToCSV:

    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)
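The per-digit SNR adaptation described by correctStep and incorrectStep can be sketched as follows. This is a minimal illustration, not the TabSINT implementation; the function and variable names are hypothetical, and the defaults mirror the option defaults above.

```python
def next_snr(current_snr, each_correct,
             correct_step=-0.5, incorrect_step=2.0,
             min_snr=-25.0, max_snr=25.0):
    """Step the SNR for the next presentation from the previous response.

    Each correct digit moves the SNR by correct_step (normally negative,
    so the task gets harder); each incorrect digit moves it by
    incorrect_step. The result is clamped to the allowed SNR range.
    """
    delta = sum(correct_step if ok else incorrect_step for ok in each_correct)
    return max(min_snr, min(max_snr, current_snr + delta))

# Subject got 2 of 3 digits right: 2 * (-0.5) + 1 * 2.0 = +1 dB
print(next_snr(0.0, [True, True, False]))  # 1.0
```

During the warm-up period the same logic applies with warmupCorrectStep and warmupIncorrectStep in place of the regular steps.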

Response

The chaThreeDigit response area returns a result object for each presentation. Each result object contains the following:

result.currentPresentation = "c:USER/3D/FILTERED/828.WAV" // String indicating filename of the current presentation
result.response = ["8","2","8"]  // String array of the selected digits
result.currentDigits = ["8","2","8"] // String array of correct response for this presentation
result.State = 0 // Exam state, where 0, 1 and 2 correspond to PLAYING, WAITING_FOR_RESULT, and DONE, respectively
result.presentationCount = 0 // 0-based index of current presentation
result.currentMasker = "positivePhase" // String indicating the masker type used for the current presentation
result.targetType = "filtered"  // targetType input parameter
result.digitScore = 0 // Percentage of digits correctly identified
result.presentationScore = 0 // Percentage of presentations where all 3 digits were correctly identified
result.currentSNR = 25 // SNR of the current presentation (dB)
result.maskerLevel = 75 // Level of the masker for the current presentation (dB SPL)
result.targetLevel = 75 // Level of the target for the current presentation (dB SPL) 
result.warmupDigitScore = 0 // Percentage of digits correctly identified when the masker was the warmupMasker
result.warmupPresentationScore = 0 // Percentage of presentations where all 3 digits were correctly identified when the masker was the warmupMasker
result.ear = "both"  // ear input parameter
result.warmupSRT = 0 // Average SNR (dB) of the first "warmupN" presentations
result.SRT = # // Average SNR (dB) after the "warmupN" presentations
result.numberCorrect = 3  // Number of the digits correctly identified in the current presentation
result.numberIncorrect = 0  // Number of digits incorrectly identified in the current presentation
result.eachCorrect = [true,true,true]  // Array of booleans indicating which digits were correctly identified
result.correct = true // Boolean reports true if all digits were correctly identified in the current presentation

Additionally, it returns a final result object with the following summary information:

result.response = "Exam Results" // String indicating summary results
result.digitScore = 93.333336 // Percentage of total number of digits that were correctly identified
result.presentationScore = 80 // Percentage of presentations where all 3 digits were correctly identified
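The summary scores above follow directly from the per-presentation results. The sketch below, with hypothetical function names, shows one way to compute digitScore, presentationScore, and the SRT (average SNR after the warm-up presentations) from the eachCorrect and currentSNR values:

```python
def summary_scores(each_correct_per_presentation):
    """digitScore and presentationScore, as percentages."""
    digits = [ok for pres in each_correct_per_presentation for ok in pres]
    digit_score = 100.0 * sum(digits) / len(digits)
    all_correct = sum(all(pres) for pres in each_correct_per_presentation)
    presentation_score = 100.0 * all_correct / len(each_correct_per_presentation)
    return digit_score, presentation_score

def srt(presentation_snrs, warmup_n):
    """Average SNR (dB) over the presentations after the warm-up period."""
    post_warmup = presentation_snrs[warmup_n:]
    return sum(post_warmup) / len(post_warmup)
```

For example, two presentations with eachCorrect of [True, True, True] and [True, False, True] give a digitScore of about 83.3 and a presentationScore of 50.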

Schema

  • chaThreeDigit.json

Tone Generation Response Area

Use this response area to present a single tone.

Protocol Example

{
  "id": "Tones",
  "title": "Tone Generation",
  "questionMainText": "Generate Specified Tone",
  "responseArea": {
    "type": "chaToneGeneration",
    "examProperties":
    {
      "F": 2500,
      "ToneDuration": 1000,
      "Level": 50,
      "OutputChannel": "HPR0"
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allow the subject to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • Tone Generation Long Level Properties

      • F:

        • Type: integer
        • Description: Frequency of tone/center frequency of noise (Hz). (Minimum = 1, Maximum = 32000)

Response

The result object from a chaToneGeneration response area contains only the common TabSINT results.

Schema

  • chaToneGeneration.json

TRT Response Area

Use the TRT response area to present a Threshold Response Time (TRT) exam.

Protocol Example

{
  "id": "chaTRT",
  "title": "Threshold Response Time Example",
  "questionMainText": "Threshold Response Time Exam",
  "questionSubText": "Press the button for the ear in which you hear the tone",
  "responseArea": {
    "type": "chaTRT",
    "examProperties": {
      "NPresentations": 10,
      "Thresholds": [
        {
          "ThresholdLevel": 50,
          "Frequency": 5000,
          "Ear": "Left"
        },
        {
          "ThresholdLevel": 55,
          "Frequency": 6000,
          "Ear": "Right"
        }        
      ]
    }
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allow the subject to skip the response area. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default=true)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • measureBackground:

    • Type: string
    • Description: Method with which to measure the background noise after an audiometry exam. The option is ThirdOctaveBands.
  • examProperties:

    • Type: object

    • Description: Properties defining the exam, including:

      • NPresentations:

        • Type: number
        • Description: Number of presentations. (Default = 20, Minimum = 1, Maximum = 96)
      • LevelUnits:

        • Type: string
        • Description: Units for both specifying and returning sound levels. The options are dB HL or dB SPL. (Default = dB HL)
      • Thresholds:

        • Type: array

        • Description: Array of objects where each object defines a threshold to validate. Each object contains:

          • ThresholdLevel:

            • Type: number
            • Description: Threshold sound level, in units specified by LevelUnits. (Default = 55)
          • Frequency:

            • Type: number
            • Description: Threshold frequency (Hz). (Default = 6000)
          • Ear:

            • Type: string
            • Description: Threshold ear. Options are Left or Right. (Default = Left)
      • ToneDuration:

        • Type: number
        • Description: Duration of each tone pulse in the signal pulse train (ms), including the ramp up and down. (Default = 300, Minimum = 100, Maximum = 500)
      • ToneRamp:

        • Type: number
        • Description: Duration of the tone ramp up and down (ms). (Default = 20, Minimum = 20, Maximum = 50)
      • TonePulseNumber:

        • Type: number
        • Description: Total number of tones played for each pulse train. (Default = 1, Minimum = 1, Maximum = 5)
      • ToneRepetitionInterval:

        • Type: number
        • Description: Rate tones are presented, in ms. (Default = 450, Minimum = 450, Maximum = 2000)
      • PollingOffset:

        • Type: number
        • Description: Period beyond last pulse where subject response still accepted, in ms. (Default = 600, Minimum = 0, Maximum = 1000)
      • MinISI:

        • Type: number
        • Description: Minimum value for inter-stimulus interval (ISI) in ms. (Default = 600, Minimum = 0, Maximum = 2000)
      • MaxISI:

        • Type: number
        • Description: Maximum value for the inter-stimulus interval (ISI) in ms. (Default = 1000, Minimum = 1000, Maximum = 5000)

Response

The result.response from a chaTRT response area is a string array reporting the button pushed by the user for each presentation (left, right, or null if no response is given). In addition, the result object contains:

result.correct = [true, true, ...] // Array indicating whether the subject answered each presentation correctly
result.ActualLevels = [55, 50, ...] // Array of sound levels presented during the test for each presentation (in LevelUnits)
result.ActualFrequencies = [6000,5000,  ...] // Array of frequencies (Hz) presented
result.ActualEars = [1,0, ...] // Array of numbers indicating the ear used for each presentation (where 0 = left and 1 = right) 
result.ResponseTime = [1202,1434, ...] // Array of response times (ms) for each presentation, measured from the start of the pulse train to when the response is registered; 0 indicates no response was given. Note that ResponseTime includes a variable latency that depends on the device OS, hardware, and Bluetooth radio (approximately +/- 100 ms for the Samsung Tab-E tablet), so ResponseTime should be referenced only when the tablet hardware is characterized and controlled throughout data collection.
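Scoring a TRT presentation is a matter of matching the pressed button to the stimulated ear. A minimal sketch, with hypothetical names, using the 0 = left / 1 = right convention of result.ActualEars:

```python
def score_trt(responses, actual_ears):
    """Mark each presentation correct when the pressed button matches the
    stimulated ear. responses holds "left", "right", or None (no response);
    actual_ears follows result.ActualEars, where 0 = left and 1 = right.
    """
    ear_name = {0: "left", 1: "right"}
    return [resp is not None and resp == ear_name[ear]
            for resp, ear in zip(responses, actual_ears)]

print(score_trt(["right", "left", None], [1, 0, 0]))  # [True, True, False]
```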

Schema

  • chaTRT.json

WAHTS Calibration Check Response Area

Use this response area to perform the WAHTS daily calibration check.

Protocol Example

{
  "id": "calibration_check",
  "title": "WAHTS Calibration Check",
  "questionMainText": "WAHTS Calibration Check",
  "instructionText": "Place WAHTS on the calibration check fixture. Press the button below when ready to begin.",
  "image": {
    "path": "wahts-on-fixture.gif"
  },
  "responseArea": {
    "type": "chaCalibrationCheck",
    "exportToCSV": true
  }
}

Options

  • exportToCSV:
    • Type: boolean
    • Description: If true, export the result to CSV upon submitting exam results. (Default = false)

Response

The result object from a chaCalibrationCheck response area contains:

result.calSpectrum = [8.0, 10.2, ...] // Full spectrum data array of length 400
result.calibrationData.left.xlabel = "Frequency (Hz)" // xlabel on results plot
result.calibrationData.left.ylabel = "Deviation from baseline (dB)" // ylabel on results plot
result.calibrationData.left.title = "Left Ear Calibration Results" // title on results plot
result.calibrationData.left.calibration // object of calibration data for the left channel stored in the freqCalTable on the WAHTS
result.calibrationData.left.measured = [5.9, 1.8, ...] // array of measured-baseline values of length 17
result.calibrationData.left.frequencies = [125, 250, ...] // array of frequencies corresponding to the measured array
result.calibrationData.right.xlabel = "Frequency (Hz)" // xlabel on results plot
result.calibrationData.right.ylabel = "Deviation from baseline (dB)" // ylabel on results plot
result.calibrationData.right.title = "Right Ear Calibration Results" // title on results plot
result.calibrationData.right.calibration // object of calibration data for the right channel stored in the freqCalTable on the WAHTS
result.calibrationData.right.measured = [5.9, 1.8, ...] // array of measured-baseline values of length 17
result.calibrationData.right.frequencies = [125, 250, ...] // array of frequencies corresponding to the measured array
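The measured arrays hold deviation-from-baseline values (dB) at each check frequency, so a simple downstream analysis is to flag frequencies whose deviation exceeds a tolerance. The sketch below is hypothetical; the 3 dB tolerance is an assumed pass/fail criterion, not one specified by TabSINT.

```python
def deviations_over_tolerance(measured, frequencies, tol_db=3.0):
    """Pair each frequency with its measured-minus-baseline deviation (dB)
    and return those whose magnitude exceeds tol_db (assumed tolerance)."""
    return [(freq, dev) for freq, dev in zip(frequencies, measured)
            if abs(dev) > tol_db]

# 5.9 dB at 125 Hz exceeds a 3 dB tolerance; 1.8 dB at 250 Hz does not.
print(deviations_over_tolerance([5.9, 1.8], [125, 250]))  # [(125, 5.9)]
```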

Schema

  • chaCalibrationCheck.json

Audiometry

Audiometry Properties

These are common exam-level audiometry properties supported across the WAHTS audiometry exams. These properties are included in Audiometry Frequency Properties and Audiometry Level Properties.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-area",
     "LevelUnits": "dB HL",
     "PresentationMax": 10
  }
}

Options

  • LevelUnits:

    • Type: string
    • Description: Units for both specifying and returning sound levels. The options are dB HL or dB SPL. (Default = dB HL)
  • ToneRepetitionInterval:

    • Type: integer
    • Description: Rate tones are presented, in ms. (Default = 450, Maximum = 2000, Minimum = 450)
  • PresentationMax:

    • Type: integer
    • Description: Maximum number of presentations. (Default = 20, Maximum = 200, Minimum = 3)
  • UnresponsiveMax:

    • Type: integer
    • Description: Number of repeated presentations at either the MinimumOutputLevel or MaximumOutputLevel (or the min/max frequencies for the frequency exams) that will halt an exam and return a Threshold of NaN. (Default = 5, Maximum = 200, Minimum = 1)
  • UseSoftwareButton:

    • Type: boolean
    • Description: If true, the exam will be controlled with a software button. (Default = false)
  • BypassCalibrationLimit:

    • Type: boolean
    • Description: If true, the WAHTS ignores calibration-specified maximum output level (note this may introduce distortion). (Default = false)

Schema

  • audiometryProperties.json

Audiometry Frequency Properties

These are common exam-level properties supported across the WAHTS frequency exams. These properties are included in BHAFT Response Area.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-area",
     "Fstart": 2000
  }
}

Options

  • Audiometry Properties

  • Tone Generation Level Properties

  • Fstart:

    • Type: number
    • Description: Start frequency (constrained to nearest octave) in Hz. (Default = 1000)
  • MaximumOutputFrequency:

    • Type: number
    • Description: Maximum output frequency, in Hz. The default value is set by calibration.
  • MinimumOutputFrequency:

    • Type: number
    • Description: Minimum output frequency, in Hz. The default value is set by calibration.

Schema

  • audiometryFrequencyProperties.json

Audiometry Level Properties

These are common exam-level properties supported across the WAHTS level exams. These properties are included in Bekesy Like Exam Properties, Hughson Westlake Exam Properties, Audiometry List Response Area and Manual Audiometry Response Area.

Protocol Example

{
  "responseArea": {
     "type": "any-wahts-response-area",
     "Lstart": 20
  }
}

Options

  • Audiometry Properties

  • Tone Generation Properties

  • F:

    • Type: number
    • Description: Test frequency, in Hz. (Default = 1000)
  • Lstart:

    • Type: number
    • Description: Starting level of test, in LevelUnits. (Default = 40)
  • DynamicStartLevel:

    • Type: object

    • Description: Dynamically calculate starting level to shorten exams, where newLstart = Max(examProperties.Lstart, baseLevel + offset). The object contains:

      • offset:

        • Type: number
        • Description: Offset (addition) for calculation of new Lstart. (Default = 15)
      • baseIdList:

        • Type: array
        • Description: A string array of potential presentations to use for the base number. (Defaults to the latest 1kHz presentation, i.e. ["training"] or ["left_HW1000_first", "right_HW1000_first"]).
  • MaximumOutputLevel:

    • Type: number
    • Description: Maximum output level, in LevelUnits. The default value is set by calibration.
  • MinimumOutputLevel:

    • Type: number
    • Description: Minimum output level, in LevelUnits. The default value is set by calibration.
  • RelativeF:

    • Type: array
    • Description: 4-element array indicating how to calculate an output frequency relative to input frequency, where the syntax is ['above' or 'below', numerator, denominator, calculation method]. For example, ['below', 1, 6, 'lut']. Calculation method is optional and uses default method of actual math.
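The DynamicStartLevel formula above, newLstart = Max(examProperties.Lstart, baseLevel + offset), can be worked through with a short sketch (hypothetical function name; baseLevel stands for the threshold found by the presentation selected via baseIdList):

```python
def dynamic_start_level(lstart, base_level, offset=15):
    """newLstart = max(examProperties.Lstart, baseLevel + offset)."""
    return max(lstart, base_level + offset)

print(dynamic_start_level(40, 10))  # 40: baseLevel + offset = 25, below Lstart
print(dynamic_start_level(40, 45))  # 60: baseLevel + offset raises the start
```

Because the offset is only ever added on top of a measured base level, the exam starts closer to the expected threshold and converges in fewer presentations.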

Schema

  • audiometryLevelProperties.json

Audiometry Page Properties

These are common page-level properties for WAHTS audiometry exams. These properties are used by Bekesy Like Response Area, BHAFT Response Area, and Hughson Westlake Response Area.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-areas",
     "pause": true
  }
}

Options

  • skip:

    • Type: boolean
    • Description: If true, allow the subject to skip the response area. (Default = false)
  • pause:

    • Type: boolean
    • Description: If true, allow the subject to pause the current WAHTS exam. When paused, the subject is returned to the 'start' page. (Default = false)
  • autoSubmit:

    • Type: boolean
    • Description: If true, go straight to the next page once this page is complete. (Default = false)
  • autoBegin:

    • Type: boolean
    • Description: If true, go straight into the exam, without having to press the 'Begin' button. (Default = false)
  • repeatIfFailedOnce:

    • Type: boolean
    • Description: If true, repeat the frequency if the test fails to converge on the first attempt. (Default = false)
  • getNotesIfFailedTwice:

    • Type: boolean
    • Description: If true, ask for researcher notes if the repeat fails to converge. (Default = false)
  • showMessageIfNoResponse:

    • Type: boolean
    • Description: If true, show the noResponseCustomMessage message when the subject did not press the software button ONCE during an audiometry exam. (Default = false)
  • noResponseCustomMessage:

    • Type: string
    • Description: The message to show the subject if they did not press the software button ONCE during an audiometry exam. (Default = It looks like you did not press the button at all during the last test. Please make sure to press the button if you hear any sound)
  • examInstructions:

    • Type: string
    • Description: Replaces the top-level instruction text on the WAHTS exam pages (each page after starting page).
  • hideExamProperties:

    • Type: string
    • Description: Hide the parameters of the audiometry test (i.e. Frequency, Level, Ear) before and/or during a test. The options are before, during, always, never. (Default is to always show the exam properties)
  • resultMainText:

    • Type: string
    • Description: Replaces the questionMainText text while presenting results.
  • resultSubText:

    • Type: string
    • Description: Replaces the questionSubText text while presenting results.
  • plotProperties:

    • Type: object
    • Description: An object with Audiometry Plot Properties.
  • measureBackground:

    • Type: string
    • Description: Method with which to measure the background noise after an audiometry exam. The option is ThirdOctaveBands.
  • maskingNoise:

    • Type: object
    • Description: An object with Masking Noise Properties defining the masking noise to present with the exam.

Schema

  • audiometryPageProperties.json

Audiometry Plot Properties

Properties defining how to present the results of a WAHTS audiometry exam. These options are referenced by Audiometry Page Properties.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-area",
     "displayLevelProgression": true
  }
}

Options

  • displayAudiogram:

    • Type: array
    • Description: An array of strings, to be used in matching page ids, to select which results are plotted. For example, ["training"] or ["section1_left", "section1_right"].
  • displayLevelProgression:

    • Type: boolean
    • Description: If true, turn on plotting of the level progression for an individual exam. (Default = false)

Schema

  • audiometryPlotProperties.json

Bekesy

Bekesy Like Exam Properties

Exam properties for a Bekesy Like Exam.

  • Audiometry Level Properties

  • ReversalDiscard:

    • Type: integer
    • Description: Number of reversals to discard. (Default = 2, Minimum = 0, Maximum = 10)
  • ReversalKeep:

    • Type: integer
    • Description: Number of reversals to keep (must be even). (Default = 6, Minimum = 2, Maximum = 10)
  • IncrementStart:

    • Type: number
    • Description: Increment between presentations until the first reversal, in dB. (Default = 4, Minimum = 1, Maximum = 20)
  • IncrementNominal:

    • Type: number
    • Description: Increment after the first reversal, in dB. (Default = 2, Minimum = 0.01, Maximum = 20)

Schema

  • bekesyLikeExamProperties.json

Hughson Westlake

Hughson Westlake Exam Properties

These are common exam-level properties supported across the WAHTS Hughson-Westlake level exams. These properties are used by Hughson Westlake Response Area, Accelerated Threshold Response Area and Manual Audiometry Response Area.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-areas",
     "Screener": true,
     "NumCorrectReq": 3
  }
}

Options

  • Audiometry Level Properties

  • StepSize:

    • Type: integer
    • Description: Smallest level increment (ignored when Screener is true). (Default = 5, Maximum = 10, Minimum = 2)
  • TonePulseNumber:

    • Type: integer
    • Description: Total number of tones played for each pulse train. (Default = 3, Maximum = 5, Minimum = 1)
  • PollingOffset:

    • Type: integer
    • Description: Period beyond last pulse where subject response still accepted, in ms. The WAHTS enforces that PollingOffset <= MinISI <= MaxISI. (Default = 600, Maximum = 1000, Minimum = 0)
  • MinISI:

    • Type: integer
    • Description: Minimum value for inter-stimulus interval (ISI), in ms. The WAHTS enforces that PollingOffset <= MinISI <= MaxISI. (Default = 600, Maximum = 2000, Minimum = 0)
  • MaxISI:

    • Type: integer
    • Description: Maximum value for inter-stimulus interval (ISI), in ms. The WAHTS enforces that PollingOffset <= MinISI <= MaxISI. (Default = 1000, Maximum = 5000, Minimum = 1000)
  • Screener:

    • Type: boolean
    • Description: If true, use the screener version of Hughson-Westlake level exam. (Default = false)
  • NumCorrectReq:

    • Type: integer
    • Description: Number of correct responses required to pass, and (if applicable) end the exam early. Only used when Screener is true. (Default = 2, Minimum = 0)
  • SemiAutomaticMode:

    • Type: boolean
    • Description: If true, pause after each pulse train to wait for a response. If false, proceed in a fully automated fashion. (Default = false)
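The WAHTS enforces PollingOffset <= MinISI <= MaxISI across the timing options above. A minimal validation sketch (hypothetical function name, not part of TabSINT) shows the constraint:

```python
def check_isi_timing(polling_offset, min_isi, max_isi):
    """Reject timing parameters (ms) that violate the WAHTS constraint
    PollingOffset <= MinISI <= MaxISI."""
    if not (polling_offset <= min_isi <= max_isi):
        raise ValueError("require PollingOffset <= MinISI <= MaxISI")
    return polling_offset, min_isi, max_isi

check_isi_timing(600, 600, 1000)   # OK: matches the option defaults
```

Attempting, for example, a PollingOffset of 700 ms with the default MinISI of 600 ms would be rejected.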

Schema

  • hughsonWestlakeExamProperties.json

Masking Noise Properties

Use these properties to define the masking noise. These properties are referenced by Audiometry Page Properties.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-areas",
     "Type": "pink"
  }
}

Options

  • Type:

    • Type: string
    • Description: Base shape of noise spectrum. Options are white, pink or brown. (Default = white)
  • BandpassCenterFrequency:

    • Type: number
    • Description: Center frequency for the noise bandpass filter (Hz). If 0, no filtering is applied. (Default = 0)
  • BandpassOctaveWidth:

    • Type: number
    • Description: Width of the pass-band, in octaves. (Default = 1, Maximum = 6, Minimum = 0.04166)
  • Ear:

    • Type: number
    • Description: Channel to be used for the noise, where 0=Left, 1=Right, 2=Both. (Default = 2)
  • Level:

    • Type: array
    • Description: Level (dB SPL) of the noise. The integer array must have 2 elements (one for each ear, i.e. [Left_Ear,Right_Ear]). It is ignored for the non-specified ear. (Default = [30,30])

Schema

  • maskingNoiseProperties.json

Tone Generation

Tone Generation Base Properties

Base properties for all tone generation across the WAHTS audiometry exams. The following properties are used in Tone Generation Properties and Tone Generation Long Level Properties.

Protocol Example

{
   "responseArea": {
      "type": "some-wahts-response-area",
      "responseRequired": true,
      "OutputChannel": "HPR0"
   }
}

Options

  • OutputChannel:

    • Type: enum
    • Description: Output channel, where the options are HPL0, HPR0, HPL1, HPR1, LINEL0, NONE LINEL0, LINEL0 NONE, or HPL0 HPR0. (Default = HPL0)
  • UseWavFile:

    • Type: boolean
    • Description: If true, determine if a wav file exists for the requested OutputChannel and other parameters. If the wav file does not exist, return CHA_ERR_BAD_MEDIA. If false, generate stimulus on the fly.
  • ToneRamp:

    • Type: integer
    • Description: Length of the tone ramp, in ms. (Default = 25, Maximum = 50, Minimum = 20)
  • UseNthOctave:

    • Type: boolean
    • Description: If false, test with pure/warble tones. If true, test with octave band noise. (Default = false)
  • OctaveBandSize:

    • Type: integer
    • Description: Width of noise to generate if UseNthOctave is true (this is the denominator). (Default = 8, Maximum = 12, Minimum = 1)
  • FDev:

    • Type: number
    • Description: Frequency modulation deviation about the nominal frequency. (Default = 5.7, Maximum = 60, Minimum = 1.5)
  • FDevForm:

    • Type: string
    • Description: Frequency modulation functional form, where the options are None, Triangle or Sine. (Default = None)
  • FDevRate:

    • Type: number
    • Description: Frequency modulation rate, in Hz. (Default = 20, Maximum = 20, Minimum = 4)

Schema

  • toneGenerationBaseProperties.json

Tone Generation Properties

Tone generation properties across the WAHTS audiometry exams. The following properties are used in Audiometry Level Properties and Tone Generation Level Properties.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-areas",
     "ToneDuration": 250
  }
}

Options

  • Tone Generation Base Properties

  • ToneDuration:

    • Type: integer
    • Description: Length of tone, in ms. (Default = 225, Maximum = 680, Minimum = 0)

Schema

  • toneGenerationProperties.json

Tone Generation Level Properties

Tone generation properties used across the WAHTS audiometry exams. The following properties are used in Audiometry Frequency Properties.

Protocol Example

{
  "responseArea": {
     "type": "some-wahts-response-area",
     "Level": 60
  }
}

Options

  • Tone Generation Properties

  • Level:

    • Type: number
    • Description: Level of tone, in dB SPL. (Default = 65)

Schema

  • toneGenerationLevelProperties.json

Tone Generation Long Level Properties

Tone generation properties used across the WAHTS audiometry exams. The following properties are used in Manual Tone Generation Response Area and Tone Generation Response Area.

Protocol Example

{
  "responseArea": {
     "type": "any-wahts-response-areas",
     "ToneDuration": 250
  }
}

Options

  • Tone Generation Base Properties

  • ToneDuration:

    • Type: integer
    • Description: Length of tone, in ms. (Default = 225, Minimum = 0)
  • Level:

    • Type: number
    • Description: Level of tone, in dB SPL. (Default = 65)

Schema

  • toneGenerationLongLevelProperties.json

Additional Results

Common Audiometry Responses

The result objects for all WAHTS audiometry exams include the following:

result.Threshold = -5            // Number indicating threshold (frequency or level)
result.Units = "dB HL"           // String giving the units of the Threshold
result.ResultType = "Threshold"  // String indicating if threshold is reached, or if the exam fails

Common TabSINT Responses

The result objects for all TabSINT response areas can include any or all of the following:

result.examType = "HughsonWestlake"   // String indicating exam type for audiometry exams
result.examProperties = object               // Object containing the exam input parameters
result.presentationId = "Hughson Westlake"   // Page Id from the protocol (summary results may append "_Results")
result.responseStartTime = "2020-02-25T19:47:57.559Z"  // String with date and time the response area was started

result.isSkipped = false                     // Boolean indicating if the presentation was skipped
result.responseArea = "chaHughsonWestlake"   // String giving the response area type

result.page.responseArea = object   // Object contains all of the properties given in the protocol page


result.chaInfo.serialNumber = "e0010046"               // String indicating the serial number of the connected WAHTS
result.chaInfo.buildDateTime = "Jun  5 2019 16:41:30"  // String indicating build date and time for the WAHTS firmware
result.chaInfo.probeId.serialNumber = 128      // Probe serial number (used for probes connected to hand held CHAs)
result.chaInfo.probeId.description = "Screener SN#E0010046"   // String containing description and serial number of connected probe (used for probes connected to hand held CHAs)
result.chaInfo.vBattery = 3.88    // CHA battery voltage

result.ResultTypeCode = 0 // Adds information to the result, particularly in cases where a threshold could not be found. (0:Threshold, 1:Hearing Potentially Outside Measurable Range and  2:Failed to Converge)
result.buttonPressTimes = [660,2859,...] // Array of numbers recording the elapsed time (ms) between each button press and the start of the test.
responseElapTimeMS = 26815             // Number giving the total elapsed time (ms) for the protocol.
Last updated on 10/18/2022
Copyright © 2023 Creare