Sonde Health API Platform Documentation


Overview

The high-level sequence diagram below shows how you can integrate your app server and app with the Sonde platform.

Server Side

Creating Token

Using the Java SDK, an access token can be generated as follows:

  1. Replace <clientId> and <clientSecret> in the snippets below with the actual values you received

SondeCredentialsService cred = SondeHealthClientProvider.getClientCredentialsAuthProvider(<clientId>, <clientSecret>);
  2. Make a list of all the required scopes, for example:

List<Scopes> scopeList = Arrays.asList(Scopes.STORAGE_WRITE, Scopes.SCORES_WRITE, Scopes.MEASURES_READ, Scopes.MEASURES_LIST);
  3. Generate an access token using the SondeCredentialsService

AccessToken token = null;
try {
    token = cred.generateAccessToken(scopeList);
    String accessToken = token.getAccessToken(); // Returns the access token with the requested scopes
} catch (SondeServiceException | SDKClientException | SDKUnauthorizedException ex) {
    // Exception handling code
}
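Access tokens are short-lived, so rather than calling generateAccessToken for every request, a common pattern is to cache the token and refresh it shortly before expiry. A minimal, SDK-independent sketch of that pattern in Python (the TTL and refresh margin are assumptions; substitute your real token call for `fetch`):

```python
import time

class TokenCache:
    """Caches a token string and refreshes it shortly before expiry."""

    def __init__(self, fetch, ttl_seconds=3600, margin_seconds=60, clock=time.monotonic):
        self._fetch = fetch          # callable returning a fresh token string
        self._ttl = ttl_seconds      # assumed token lifetime; check your actual token response
        self._margin = margin_seconds
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = self._clock()
        # Refresh when no token is cached or we are inside the expiry margin.
        if self._token is None or now >= self._expires_at - self._margin:
            self._token = self._fetch()
            self._expires_at = now + self._ttl
        return self._token
```

A server would construct one cache per client credential pair and call `get()` wherever a token is needed.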

Creating User

  1. Instantiate the SondeCredentialsService using your clientId and clientSecret

SondeCredentialsService cred = SondeHealthClientProvider.getClientCredentialsAuthProvider(<clientId>, <clientSecret>);
  2. Create a subject client

UserClient userClient = SondeHealthClientProvider.getSubjectClient(cred);
  3. Build the UserCreationRequest using the user attributes

UserCreationRequest request = new UserCreationRequest.UserBuilder(Gender.MALE,"1991").withLanguage("ENGLISH").build(); // language is optional
  4. Create the user using a user client

UserCreationResponse response;
try {
    userClient = SondeHealthClientProvider.getUserClient(cred);
    response = userClient.createUser(request);
} catch (SondeServiceException | SDKClientException | SDKUnauthorizedException ex) {
    // Your exception handling code
}
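The builder above takes a required gender and birth year plus an optional language. For illustration, the same builder shape sketched in Python (the field names here are assumptions based on the builder calls, not the SDK's actual wire format):

```python
class UserCreationRequest:
    """Mirrors the Java builder: required gender and year of birth, optional language."""

    def __init__(self, gender, year_of_birth, language=None):
        self.gender = gender
        self.year_of_birth = year_of_birth
        self.language = language  # optional, as in withLanguage(...)

    def to_payload(self):
        # Only include the optional attribute when it was supplied.
        payload = {"gender": self.gender, "yearOfBirth": self.year_of_birth}
        if self.language is not None:
            payload["language"] = self.language
        return payload
```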

Request For Score

  1. Instantiate the SondeCredentialsService using your clientId and clientSecret

SondeCredentialsService cred = SondeHealthClientProvider.getClientCredentialsAuthProvider(<clientId>, <clientSecret>);
  2. Upload the file using a storage client

try {
    StorageClient client = SondeHealthClientProvider.getStorageClient(cred);
    // <user Identifier> is the identifier generated when the subject was created, e.g. "3cf7e4d31"
    FileUploadRequest fileUploadRequest = new FileUploadRequest(Country.INDIA, <user Identifier>, "C:\\test.wav", FileType.WAV);
    FileUploadResponse response = client.uploadFile(fileUploadRequest);
} catch (SondeServiceException | SDKClientException | SDKUnauthorizedException ex) {
    // Your exception handling code
}
  3. Create a scoring client and get the score for a measure

try {
    ScoringClient scoringClient = SondeHealthClientProvider.getScoringClient(cred);
    ScoringRequest request = new ScoringRequest.Builder(response.getFilePath(), "emotional-resilience").withUserIdentifier("3cf7e4d31").build();
    ScoringResponse scoringResponse = scoringClient.getScore(request);
} catch (SondeServiceException | SDKClientException | SDKUnauthorizedException ex) {
    // Your exception handling code
}

App Side

Web

Recording Wav File

Prerequisites:

  • The snippets below use the following libraries

    <script src="https://unpkg.com/wavesurfer.js@3.3.1/dist/wavesurfer.min.js"></script>
    <script src="https://unpkg.com/wavesurfer.js@3.3.1/dist/plugin/wavesurfer.microphone.min.js"></script>
    <script   src="https://code.jquery.com/jquery-3.4.1.min.js"></script>

  • The WAV file must be recorded at a sampling rate of 44100 Hz

  1. Create a WaveSurfer object to show the waveform and initiate audio recording. Define a <div> element with id = 'waveform'

  • The WaveSurfer object should be created inside an event handler triggered by a user gesture, such as a button click

    // AudioContext is initialized according to the browser (standard or webkit-prefixed)
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audio_context = new AudioContext();
    var audio_processor = audio_context.createScriptProcessor(4096, 1, 1);
    let wavesurfer = WaveSurfer.create({
            container: '#waveform',   // id of the div element where the waveform is shown
            waveColor: '#979DA3',
            interact: false,
            height:'70',
            cursorWidth: 0,
            audioContext:  audio_context,
            audioScriptProcessor:  audio_processor,
            plugins: [
                WaveSurfer.microphone.create({
                    bufferSize: 4096,
                    numberOfInputChannels: 1,
                    numberOfOutputChannels: 1,
                    constraints: {
                        video: false,
                        audio: true
                    }
                }
                )
            ]
    });
  2. Add a “deviceReady“ event listener on the WaveSurfer object and convert the recorded audio to WAV inside it

  • This event fires when the user grants recording permission in the browser

  • Create a MediaRecorder object to record audio from the stream provided by the ‘deviceReady’ event

  • Start the MediaRecorder to begin capturing audio from the stream

  • Add ‘dataavailable' and 'stop’ event listeners to the MediaRecorder; these fire when the recorder is stopped via mediaRecorder.stop()

  • In the 'dataavailable' event, collect the audio data into audioChunks

  • In the 'stop' event, process the data into a WAV file:

  • Build a Blob from the audioChunks array

  • Initialize an AudioContext with the sample rate to decode the audio buffer

  • Decode the ArrayBuffer with the AudioContext to obtain PCM data

  • The encodePCMtoWAV function processes the PCM data and returns a WAV file

  • NOTE: the sample rate must be specified in both the AudioContext and encodePCMtoWAV

  • Build a Blob from the returned DataView

  • Create a File object from that Blob

  • Send this File object to the server

     wavesurfer.microphone.on('deviceReady', function(stream){
                    console.info('Device ready!')
                    //get stream from wavesurfer and process it
                    //record it using mediaRecorder of Javascript
                    this.mediaRecorder = new MediaRecorder(stream);
                    this.mediaRecorder.start();
                    const audioChunks = [];
                    
                    //fires when the mediaRecorder stops (after mediaRecorder.stop() is called)
                    this.mediaRecorder.addEventListener('dataavailable', event=>{
                        audioChunks.push(event.data);          
                    })
    
    
                    //fires once we stop the mediaRecorder object via mediaRecorder.stop()
                    this.mediaRecorder.addEventListener("stop", () => {
                        this.loader_show()
                        let self = this
                        let audioChunkBlob = new Blob(audioChunks)
                        //get an ArrayBuffer from audioChunkBlob
                        audioChunkBlob.arrayBuffer().then((obj)=>{
                            var audioCtx = new (window.AudioContext || window.webkitAudioContext)(
                                {
                                    sampleRate: 44100       //set the sample rate here; NOTE: use the same value in encodePCMtoWAV()
                                }
                            );
                            //extract the PCM data contained in obj from the AudioBuffer so the WAV format can be built independently
                            audioCtx.decodeAudioData(obj, function(buffer) {
                                    //we got PCM data in buffer
                                    //encodePCMtoWAV function is used to change format to WAV and add responsible header in it.
                                    const audioBlob = new Blob([self.encodePCMtoWAV(buffer.getChannelData(0))]); //to verify, play this blob by creating an Audio object from it
                                    var file = new File([audioBlob], 'sample', {type: "audio/wav"})   //this File object contains everything needed; send it to the server
                                    //send file object to server                                
                                },
                            function(e){ 
                                console.log("Error with decoding audio data" + e.err); 
                            });
                        })
                    });
                    
                }.bind(this));
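The encodePCMtoWAV helper used above is referenced but not shown. Conceptually it scales the float PCM samples (in [-1.0, 1.0]) to 16-bit integers and prepends a 44-byte RIFF/WAVE header. A Python sketch of the same transformation (names and layout are illustrative, not the sample project's exact code):

```python
import struct

def encode_pcm_to_wav(samples, sample_rate=44100):
    """Convert float PCM samples in [-1.0, 1.0] to mono 16-bit WAV bytes."""
    # Scale each float sample to a signed 16-bit integer, little-endian.
    pcm = b"".join(
        struct.pack("<h", max(-32768, min(32767, int(s * 32767)))) for s in samples
    )
    num_channels, bits_per_sample = 1, 16
    byte_rate = sample_rate * num_channels * bits_per_sample // 8
    block_align = num_channels * bits_per_sample // 8
    # Standard 44-byte RIFF/WAVE header for PCM audio.
    header = struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + len(pcm), b"WAVE",
        b"fmt ", 16, 1, num_channels, sample_rate,
        byte_rate, block_align, bits_per_sample,
        b"data", len(pcm),
    )
    return header + pcm
```

Note how the sample rate is baked into the header, which is why the AudioContext and the encoder must agree on it.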
  3. Add “deviceError“ and “error“ listeners for the microphone plugin and the WaveSurfer object

  • Destroy the WaveSurfer waveform on a microphone error

         wavesurfer.microphone.on('deviceError', function(code) {
                    console.warn('Device error: ' + code);
                    wavesurfer.destroy()
                });
                
          wavesurfer.on('error', function(e) {
              console.warn(e);
          });
  4. Start WaveSurfer

wavesurfer.microphone.start()
  5. Once the user grants permission to record audio in the browser, the “deviceReady” event fires

  6. Stop recording after a threshold time

                        //the threshold time (6 seconds) is defined at the top of the working example
                        let sec = 0
                        let interval = setInterval(function(){
                            sec++;
                            document.getElementById('progress_bar').style.width = ((sec/this.threshold_time)*100).toString()+'%'
                            document.getElementById('progress_bar').innerText = `${sec} Second`
                            if(sec == this.threshold_time){
                                wavesurfer.microphone.stop()
                                this.mediaRecorder.stop()
                                clearInterval(interval)
                            }
                        }.bind(this), 1000)

Uploading Wav File

  1. Get a pre-signed URL from the server for the upload, providing the countryCode, userIdentifier, and fileType

  2. Upload the File object created in the mediaRecorder 'stop' event handler to the pre-signed URL

//These are nested AJAX calls (HTTP requests) to the API:
//1. Get a file location (pre-signed URL) for a specific user
//2. Upload the file to that location using the AWS pre-signed URL

$.ajax({
    type: 'POST',
    url: serverURL+"storage/files/",
    headers:{
        'Authorization':access_token,
        'Content-Type':'application/json',
    },
    dataType: 'json',
    data:JSON.stringify({
        "fileType": "wav",
        "countryCode": countryCode,   //like IN, US, DE
        "userIdentifier": user_identifier
      }),
    success: function(response){
        //save the response to get file_location which we use to calculate score
        //send file to pre-sign url
        $.ajax({
            type: 'PUT',
            url: response.signedURL,
            data:file,   //send object of file which we created in stop event of mediaRecorder
            processData: false,                     //these parameters are required
            contentType: false,                     //so jQuery sends the raw Blob unmodified
            success: function(obj){
               console.log("successfully uploaded on presigned url")
            },
            error: function(obj){
                    console.error(obj.responseJSON)
            }
        })   
    },
    error: function(obj){
        self.errorBlock()
        console.error(obj.responseJSON)
    }
})  
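The inner AJAX call above is a plain HTTP PUT of the raw file bytes to the pre-signed URL. The same request sketched with Python's standard library (the Content-Type header is an assumption; the request is only constructed here, not sent):

```python
import urllib.request

def make_upload_request(signed_url, wav_bytes):
    """Build a PUT request that uploads raw WAV bytes to a pre-signed URL."""
    # Pre-signed URLs carry authentication in their query string,
    # so no Authorization header is needed for the upload itself.
    return urllib.request.Request(
        signed_url,
        data=wav_bytes,
        method="PUT",
        headers={"Content-Type": "application/octet-stream"},
    )
```

`urllib.request.urlopen(make_upload_request(url, wav_bytes))` would then perform the actual upload.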

Request For Score

  1. Provide the userIdentifier, fileLocation (where the file was uploaded), and measure name to get the score.

//This is an Ajax call/HTTP Request to API

$.ajax({
    type: 'POST',
    url: serverURL+"inference/scores",
    headers:{
        'Authorization':access_token,
        'Content-Type':'application/json',
    },
    dataType: 'json',
    data:JSON.stringify({
        "userIdentifier": user_identifier,
        "fileLocation": file_location_where_the_file_is_uploaded, //returned by the storage/files call in the Uploading Wav File snippet
        "measureName": measure_name_to_calculate
      }),
    success: function(final_score){
        console.log(final_score)
    },
    error: function(obj){
        console.error(obj.responseJSON)
    }
}) 
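The request body above is plain JSON; a small helper that builds it, with field names taken from the snippet above:

```python
import json

def build_score_request(user_identifier, file_location, measure_name):
    """Serialize the body for POST <serverURL>/inference/scores."""
    return json.dumps({
        "userIdentifier": user_identifier,
        "fileLocation": file_location,
        "measureName": measure_name,
    })
```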

iOS

Recording Wav File

The WAV file must be recorded at a sampling rate of 44100 Hz.

  • Add the audio recording permission description to the app's Info.plist file

<key>NSMicrophoneUsageDescription</key>
<string>Your message to show in permission dialog</string>
  • Ask the user for recording permission

func askForAudioRecordingPermission(){
        do{
            try recordingSession.setCategory(.record)
            recordingSession.requestRecordPermission({allowed in
                if !allowed{
                    self.showPermissionError()
                }
            })
        }catch{
            // handle errors from setCategory here
        }
    }
  • To record the audio file, follow the code snippet below

func startRecording() {
    let recordingSession = AVAudioSession.sharedInstance()
    var audioRecorder: AVAudioRecorder!
        let settings: [String:Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMBitDepthKey:16,
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 1 as NSNumber,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
            AVAudioFileTypeKey: kAudioFileWAVEType
        ]
       
        do{
            audioRecorder = try AVAudioRecorder(url: audioFilePath!, settings: settings)
            audioRecorder.prepareToRecord()
            audioRecorder.record()
            
        }catch{
            // handle recorder creation errors here
        }
    }
  • Stop Audio Recording

audioRecorder.stop()

Uploading Wav File

– Parameters

– countryCode: ISO country code (e.g. "IN", "US").

– subjectIdentifier: The identifier returned when the subject was created.

  • To upload the audio file, first get the pre-signed URL to which the file must be uploaded.

  • Then upload the file from the path where your audio file was created.

  func getSignedURL(countryCode: String, fileType: String, subjectIdentifier: String, completion:@escaping (_ storageResponse:[String:AnyObject])->Void, errorCompletion:@escaping (_ error:String)->Void){
        guard let url = URL(string: baseURL + "storage/files") else {
            return
        }
        if accessToken.isEmpty{
            return
        }
        
        let body:[String: String] = ["countryCode": countryCode, "fileType": fileType, "subjectIdentifier": subjectIdentifier]
        guard let bodyData = try? JSONSerialization.data(withJSONObject: body, options: []) else{
            return
        }
        
        
        let session = URLSession.shared
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue(accessToken, forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = bodyData

        session.dataTask(with: request,completionHandler: {(data, response, error) in
            if error != nil{
                errorCompletion(error!.localizedDescription)
            }else{
                if let data = data, let storageResponse = try? JSONSerialization.jsonObject(with: data, options: []) as? [String: AnyObject]{
                   completion(storageResponse)
                }else{
                    errorCompletion("Invalid Response")
                }
            }
        }).resume()
    
        
    }
  • Upload the audio file

– Parameters:

– signedURL: You get this value from the response of getSignedURL.

– audioFileURL: URL path where you recorded your audio file.

 func uploadFile(signedURL:String, audioFileURL:URL, completion:@escaping ()->Void, errorCompletion:@escaping (_ error:Error)->Void){
        guard let  uploadURL = URL(string: signedURL) else{
            return
        }
        if accessToken.isEmpty{
            return
        }
        
        let session = URLSession(configuration: .default)
        var request = URLRequest(url: uploadURL)
        request.httpMethod = "PUT"
        
        session.uploadTask(with: request, fromFile: audioFileURL, completionHandler: {(data, response, error) in
            if (response as? HTTPURLResponse)?.statusCode == 200{
                completion()
            }else{
                // error may be nil on a non-200 response; avoid force-unwrapping
                errorCompletion(error ?? NSError(domain: "upload", code: (response as? HTTPURLResponse)?.statusCode ?? -1, userInfo: nil))
            }
        }).resume()
    }

Request For Score

– Parameters:

– fileLocation: You can get the file location from the response of getSignedURL.

– measureName: The value of a measure for which you want to calculate the score.

  func getScore(fileLocation:String, measureName:String, completion:@escaping (_ scoreResponse: [String:AnyObject])->Void, errorCompletion:@escaping (_ error:String)->Void){
        guard let url = URL(string: baseURL + "inference/scores") else {
            return
        }
        
        let body: [String: AnyObject] = ["fileLocation": fileLocation as AnyObject, "measureName" : measureName as AnyObject]
        guard let bodyData = try? JSONSerialization.data(withJSONObject: body, options: []) else{
            return
        }
        if accessToken.isEmpty{
            return
        }
        
        let session = URLSession(configuration: .default)
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.httpBody = bodyData
        request.setValue(accessToken, forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        
        session.dataTask(with: request,completionHandler: {(data, response, error) in
            if error != nil{
               errorCompletion(error!.localizedDescription)
            }else{
                if let data = data, let responseJSON = try? JSONSerialization.jsonObject(with: data, options: []) as? [String: AnyObject]{
                    completion(responseJSON)
                }else{
                    
                    errorCompletion("Invalid Response")
                }
            }
        }).resume()
        
    }

Android

Recording Wav File

The WAV file must be recorded at a sampling rate of 44100 Hz.

You can record an audio file in the following steps:

i). Request microphone permission:

To record audio, you must request microphone permission on Android 6 and above.

Add the permission to the manifest:

<uses-permission android:name="android.permission.RECORD_AUDIO"/>

Refer to the code snippet below to request microphone permission:

  private void requestAudioPermissions() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {

            //When permission is not granted by user, show them message why this permission is needed.
            if (ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
                Toast.makeText(this, "Please grant permissions to record audio", Toast.LENGTH_LONG).show();
                ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.RECORD_AUDIO}, PERMISSIONS_RECORD_AUDIO);

            } else {
                // Show user dialog to grant permission to record audio
                ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.RECORD_AUDIO}, PERMISSIONS_RECORD_AUDIO);
            }
        }
        //If permission is granted, then go ahead for recording audio
        else if (ContextCompat.checkSelfPermission(this,
                Manifest.permission.RECORD_AUDIO)
                == PackageManager.PERMISSION_GRANTED) {
            // start Recording audio;
        }
    }

ii). Start recording audio:

Refer to the snippet below to start recording audio and writing it to a file.

The expected output is a .WAV file; convert your PCM file to .WAV using the WaveUtils.pcmToWave() method (you can find the implementation in the GitHub sample project here: https://github.com/sondehealth-samples/score/blob/master/client/android/app/src/main/java/com/sonde/sample/utils/WaveUtils.java). This method adds the required headers to the file.

// starts recording a file at internal memory
    private void startRecording() {
        mAudioRecord = new AudioRecord(AUDIO_SOURCE, SAMPLE_RATE_HZ, AUDIO_CHANNEL_CONFIG, AUDIO_FORMAT, AUDIO_BUFFER_SIZE_BYTES);
        mAudioRecord.startRecording();
        mFilePath = getFilesDir().getAbsolutePath() + "/" + System.currentTimeMillis() + ".wav";
        recording = true;
        new Thread(new Runnable() {
            @Override
            public void run() {
                writeAudioDataToFile(mFilePath);
            }
        }).start();
    }
    
    private void writeAudioDataToFile(String filename) {
        String audioFilename = filename + ".pcm";
        OutputStream outputStream = null;
        try {
            outputStream = new FileOutputStream(audioFilename);
            int bufferSizeBytes = AUDIO_BUFFER_SIZE_BYTES;
            short[] audioBuffer = new short[bufferSizeBytes / 2]; // assumes 16-bit encoding
            byte[] outputBuffer = new byte[bufferSizeBytes];
            int totalBytesRead = 0;
            while (recording) {
                int numShortsRead = mAudioRecord.read(audioBuffer, 0, audioBuffer.length);

                for (int i = 0; i < numShortsRead; i++) {
                    outputBuffer[i * 2] = (byte) (audioBuffer[i] & 0x00FF);
                    outputBuffer[i * 2 + 1] = (byte) (audioBuffer[i] >> 8);
                    audioBuffer[i] = 0;
                }

                int numBytesRead = numShortsRead * 2;
                totalBytesRead += numBytesRead;
                outputStream.write(outputBuffer, 0, numBytesRead);
            }

            OutputStream waveOutputStream = new FileOutputStream(mFilePath);
            InputStream dataInputStream = new FileInputStream(audioFilename);
            short numChannels = 1;
            short sampleSizeBytes =  2;
            //convert pcm to wav file
            WaveUtils.pcmToWave(waveOutputStream, dataInputStream, totalBytesRead, numChannels, SAMPLE_RATE_HZ, sampleSizeBytes);
        } catch (Exception e) {
            Log.e(TAG, "Error : " + e);
        }finally {
            try {
                if (outputStream != null) {
                    outputStream.close();
                }
            } catch (IOException e) {
            Log.e(TAG, "Error : " + e);
            }
            try {
                if (mAudioRecord != null) {
                    mAudioRecord.stop();
                    mAudioRecord.release();
                    mAudioRecord = null;
                }
            } catch (IllegalStateException e) {
            Log.e(TAG, "Error : " + e);
            }
            // delete PCM file
            File file = new File(audioFilename);
            file.delete();
        }
    }
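The inner loop in writeAudioDataToFile splits each 16-bit sample into a low byte and a high byte (little-endian), exactly what a WAV data chunk expects. The same transformation shown in Python for clarity:

```python
def shorts_to_le_bytes(samples):
    """Pack signed 16-bit samples as little-endian bytes, as the Java loop above does."""
    out = bytearray()
    for s in samples:
        out.append(s & 0x00FF)        # low byte  (audioBuffer[i] & 0x00FF)
        out.append((s >> 8) & 0xFF)   # high byte (audioBuffer[i] >> 8)
    return bytes(out)
```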

Uploading Wav File

  1. Get a pre-signed URL from the server for the upload, providing the countryCode, userIdentifier, and fileType

private void getS3SignedUrl() {
        BackendApi backendApi = RetrofitClientInstance.getRetrofitInstance().create(BackendApi.class);
        String countryCode = "IN"; //Use your country code
        Call<S3PathResponse> call = backendApi.getS3FilePath(accessToken, new S3FilePathRequest("wav", countryCode, userIdentifier));
        call.enqueue(new Callback<S3PathResponse>() {
            @Override
            public void onResponse(Call<S3PathResponse> call, Response<S3PathResponse> response) {
                S3PathResponse s3PathResponse = response.body();
                if (s3PathResponse != null) {
                String s3FilePath = s3PathResponse.getSignedURL();
                    //use the received signed url to upload wav file
                } else {
                    Log.e(TAG, "error: code " + response.code());
                }
            }

            @Override
            public void onFailure(Call<S3PathResponse> call, Throwable t) {
                Log.e(TAG, "error: " + t);
            }
        });
    }
  2. Once you have the S3 signed URL, the next step is to upload the WAV file to the S3 bucket.

Refer to the code snippet below to upload the file to the S3 bucket:

private void uploadFileToS3(final S3PathResponse s3PathResponse, final File filePath) {
        BackendApi backendApi = RetrofitClientInstance.getRetrofitInstance().create(BackendApi.class);
        MediaType MEDIA_TYPE_OCTET_STREAM = MediaType.parse("application/octet-stream");
        String uploadUrl = s3PathResponse.getSignedURL();
        Call<ResponseBody> call = backendApi.uploadFileToS3(uploadUrl, RequestBody.create(MEDIA_TYPE_OCTET_STREAM, filePath));
        call.enqueue(new Callback<ResponseBody>() {
            @Override
            public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
                if (response.isSuccessful()) {
                    Log.i(TAG, " : File Uploaded successfully " + filePath.getName());
                    // request for measure score
                } else {
                    Log.e(TAG, " : Failed to upload file " + filePath.getName() + " Error code : " + response.code());
                }
            }

            @Override
            public void onFailure(Call<ResponseBody> call, Throwable t) {
                Log.e(TAG, " : Failed to upload file " + filePath.getName() + " Error: " + t);
            }
        });
    }

Request For Score

After successfully uploading the WAV file, the last step is to request the measure score.

Refer to the code snippet below to request the measure score:

 private void requestForMeasureScore(String fileLocation, final String measureName, String userIdentifier) {
        Log.i(TAG, " : fileLocation : " + fileLocation);
        BackendApi backendApi = RetrofitClientInstance.getRetrofitInstance().create(BackendApi.class);
        Call<InferenceScoreResponse> call = backendApi.getInferenceScore(accessToken, new InferenceScoreRequest(fileLocation, measureName, userIdentifier));
        call.enqueue(new Callback<InferenceScoreResponse>() {
            @Override
            public void onResponse(Call<InferenceScoreResponse> call, Response<InferenceScoreResponse> response) {
                InferenceScoreResponse scoreResponse = response.body();
                if (scoreResponse != null) {
                    //show calculated score for measure
                } else {
                    Log.e(TAG, " : Failed to get score , Error: " + response.code());
                    Toast.makeText(MainActivity.this, "Could not calculate the score, please try again", Toast.LENGTH_LONG).show();
                }
            }

            @Override
            public void onFailure(Call<InferenceScoreResponse> call, Throwable t) {
                Log.e(TAG, " : Failed to get score , Error: " + t);
                Toast.makeText(MainActivity.this, "Could not calculate the score, please try again", Toast.LENGTH_LONG).show();
            }
        });
    }

For more information, please contact Sonde at support@sondehealth.com.
