Sonde Health API Platform Documentation
Android User Voice Verification
🛡️ Module 3: User Voice Verification
Overview
The Verification Module enables your app to verify a user’s voice against the previously enrolled sample and to generate a Mental Fitness score using the Passive SDK.
Before proceeding:
Complete Module 1: Initialization and Module 2: Enrollment.
📦 Step 1: Import Required Classes
Add the following imports in your Activity or Fragment:
import sonde_passive.client.SondeSdk
import sonde_passive.client.VoiceAnalysisEngine
import sonde_passive.client.model.VoiceAnalysisMode
import sonde_passive.client.model.VoiceConfiguration
import sonde_passive.client.model.VoiceScore
import sonde_passive.data.model.Gender
import sonde_passive.data.model.MetaData
import com.sondeservices.passive.continuous.analysis.VoiceAnalysisCallback
import com.sondeservices.passive.continuous.analysis.VoiceSegmentData
⚙️ Step 2: Initialize the SDK Engine
Declare the VoiceAnalysisEngine in your Activity or Fragment:
private lateinit var voiceAnalysisEngine: VoiceAnalysisEngine
Initialize it in onCreate() (Activity) or onViewCreated() (Fragment):
voiceAnalysisEngine = SondeSdk.voiceAnalysisEngine
🎙️ Step 3: Handle Microphone Permissions
Declare the permission request code:
private val REQUEST_CODE_RECORD_AUDIO = 11111
Check and request microphone permission:
if (ContextCompat.checkSelfPermission(
        requireContext(),
        android.Manifest.permission.RECORD_AUDIO
    ) != PackageManager.PERMISSION_GRANTED
) {
    val permissions = arrayOf(android.Manifest.permission.RECORD_AUDIO)
    ActivityCompat.requestPermissions(
        requireActivity(),
        permissions,
        REQUEST_CODE_RECORD_AUDIO
    )
}
Override onRequestPermissionsResult to handle the user's response. If permission is denied, inform the user rather than crashing, since verification cannot start without microphone access:
override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == REQUEST_CODE_RECORD_AUDIO &&
        grantResults.isNotEmpty() &&
        grantResults[0] != PackageManager.PERMISSION_GRANTED
    ) {
        // Inform the user gracefully; verification cannot proceed without the microphone.
        Toast.makeText(
            requireContext(),
            "Record Audio permission is required to start voice verification.",
            Toast.LENGTH_SHORT
        ).show()
    }
}
🧩 Step 4: Start Voice Verification
Invoke startVoiceAnalysis() to collect the user’s voice sample for verification and scoring.
val metadata = MetaData(
    gender = Gender.MALE,         // The user's gender
    birthYear = userYear.toInt(), // The user's birth year (18+ recommended)
    partnerId = userName          // User name from your app; pass an empty string ("") if unavailable
)
voiceAnalysisEngine.startVoiceAnalysis(
    context = requireContext(),
    metadata = metadata, // MetaData object including age, gender, etc.
    voiceConfiguration = VoiceConfiguration(
        voiceAnalysisMode = VoiceAnalysisMode.PASSIVE // Use CUED or NON_CONTINUOUS_PASSIVE as your use case requires.
    ),
    voiceAnalysisCallback = object : VoiceAnalysisCallback {
        override fun onError(throwable: Throwable) {
            Toast.makeText(requireContext(), throwable.message, Toast.LENGTH_SHORT).show()
            Log.e(TAG, "Verification Error", throwable)
        }

        override fun onSegmentAnalysed(voiceSegmentData: VoiceSegmentData) {
            // Called each time a 3-second segment is analyzed
            Log.d(TAG, "Segment: ${voiceSegmentData.segmentNumber}, Verified: ${voiceSegmentData.isUserVerified}")
            // voiceSegmentData includes:
            // - isUserVerified: Boolean
            // - noOfSecond: Int
            // - voiceAnalysisDataType: Enum (NO_VOICE, ACTIVE_VOICE, INSUFFICIENT_VOICE, RECORDING)
            // - segmentNumber: Int (1-10)
        }

        override fun onSessionScoreReady(
            mfScore: VoiceScore,
            cfScore: VoiceScore?
        ) {
            Log.d(TAG, "Mental Fitness Score: ${mfScore.finalScore}")
            displayScore(mfScore, cfScore) // Show the score according to your app's design.
        }
    }
)
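Beyond logging, the per-segment results can be aggregated, for example to report what fraction of analyzed segments verified the user. The data class below is a stand-in mirroring the documented `VoiceSegmentData` fields, not the SDK type itself:

```kotlin
// Stand-in mirroring the documented VoiceSegmentData fields (illustrative only,
// not the SDK class).
data class SegmentResult(
    val segmentNumber: Int,      // 1-10
    val isUserVerified: Boolean,
    val noOfSecond: Int
)

// Fraction of collected segments in which the user's voice was verified.
fun verifiedRatio(segments: List<SegmentResult>): Double =
    if (segments.isEmpty()) 0.0
    else segments.count { it.isUserVerified }.toDouble() / segments.size
```

You could collect each `voiceSegmentData` from `onSegmentAnalysed` into a list and use a ratio like this to decide whether to show a "voice not recognized" hint to the user.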
📈 Step 5: Handling VoiceScore for Display
The VoiceScore object contains:
finalScore: The main score (0-100) to display.
subScores: List of subscores with:
code: Subscore code (e.g., jitter, pitch_range).
name: Subscore name (e.g., Smoothness, Energy Range).
score: Subscore value.
Example of displaying scores:
fun displayScore(mfScore: VoiceScore, cfScore: VoiceScore?) {
    val mainScore = mfScore.finalScore
    val subScores = mfScore.subScores

    // Display the main score
    scoreTextView.text = "Your Mental Fitness Score: $mainScore"

    // Optionally display subscores
    subScores.forEach { subScore ->
        Log.d(TAG, "${subScore.name} (${subScore.code}): ${subScore.score}")
    }
}
🎯 Best Practices
✅ Always inform the user to:
Record in a quiet environment.
Complete the recording in one attempt for best accuracy.
✅ Handle errors gracefully and inform the user.
✅ Scores may be:
Displayed immediately to the user with contextual interpretation (refer to Vocal Biomarkers - Health Checks).
Uploaded to your backend for analysis and dashboard reporting.
🖼️ Example UI
Display:
Main Score with interpretation (e.g., “Your Mental Fitness Score is 82: Excellent Mental Fitness”).
Subscores (optional) in a collapsible section for advanced users.
The score generated in the step above ranges from 0 to 100. Once you receive a score, display it to the user and always include the interpretation message on the score screen. Some applications upload scores instead of showing them to users; scores can be uploaded for an individual user or in bulk for further analysis through API function calls.
The main score is divided into three sub-ranges:
0-69: “Pay attention”
70-79: “Good”
80-100: “Excellent”
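The sub-range mapping above can be expressed as a small helper; the function name is illustrative, while the labels come directly from the ranges listed:

```kotlin
// Maps a Mental Fitness score (0-100) to its interpretation label,
// following the sub-ranges documented above.
fun interpretScore(score: Int): String = when (score) {
    in 0..69 -> "Pay attention"
    in 70..79 -> "Good"
    in 80..100 -> "Excellent"
    else -> throw IllegalArgumentException("Score must be between 0 and 100, got $score")
}
```

For example, a score of 82 maps to "Excellent", matching the sample screen text below.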
Refer to Vocal Biomarkers - Health Checks for the interpretation message.
Below is the screenshot for your reference.
✅ Summary
You have now integrated voice verification and Mental Fitness scoring into your app using the Passive SDK, completing:
SDK initialization.
User enrollment.
Verification and score generation.
For more implementation details of Passive Mode, refer to the developer guide below:
Developer guide - Background Handling, Notifications and Reminder Scheduling.