

@sanchitmonga22 sanchitmonga22 commented Jan 19, 2026

Refactor audio handling: replace AudioService with RunAnywhere.Audio for recording and playback in STT, TTS, and Voice Assistant screens. Remove the deprecated AudioService file and update related components to use the new audio utilities.

Description

Replaces the example app's deprecated AudioService with the SDK's unified RunAnywhere.Audio API and new AudioCaptureManager/AudioPlaybackManager classes for recording and playback in the STT, TTS, and Voice Assistant screens.

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Refactoring

Testing

  • Lint passes locally
  • Added/updated tests for changes

Platform-Specific Testing (check all that apply)

Swift SDK / iOS Sample:

  • Tested on iPhone (Simulator or Device)
  • Tested on iPad / Tablet
  • Tested on Mac (macOS target)

Kotlin SDK / Android Sample:

  • Tested on Android Phone (Emulator or Device)
  • Tested on Android Tablet

Flutter SDK / Flutter Sample:

  • Tested on iOS
  • Tested on Android

React Native SDK / React Native Sample:

  • Tested on iOS
  • Tested on Android

Labels

Please add the appropriate label(s):

SDKs:

  • Swift SDK - Changes to Swift SDK (sdk/runanywhere-swift)
  • Kotlin SDK - Changes to Kotlin SDK (sdk/runanywhere-kotlin)
  • Flutter SDK - Changes to Flutter SDK (sdk/runanywhere-flutter)
  • React Native SDK - Changes to React Native SDK (sdk/runanywhere-react-native)
  • Commons - Changes to shared native code (sdk/runanywhere-commons)

Sample Apps:

  • iOS Sample - Changes to iOS example app (examples/ios)
  • Android Sample - Changes to Android example app (examples/android)
  • Flutter Sample - Changes to Flutter example app (examples/flutter)
  • React Native Sample - Changes to React Native example app (examples/react-native)

Checklist

  • Code follows project style guidelines
  • Self-review completed
  • Documentation updated (if needed)

Screenshots

Attach relevant UI screenshots for changes (if applicable):

  • Mobile (Phone)
  • Tablet / iPad
  • Desktop / Mac

Important

Refactor React Native SDK audio handling by replacing AudioService with RunAnywhere.Audio API, introducing new manager classes, and updating example screens.

  • Behavior:
    • Replaces AudioService with RunAnywhere.Audio API for audio recording and playback.
    • Introduces AudioCaptureManager for microphone recording and AudioPlaybackManager for TTS playback.
    • Updates VoiceSessionHandle to use new manager classes.
    • Removes AudioService.ts from example app.
    • Updates example screens (STTScreen.tsx, TTSScreen.tsx, VoiceAssistantScreen.tsx) to use new audio APIs.
  • Architecture:
    • Provides a unified RunAnywhere.Audio API for audio operations.
    • Implements event-driven architecture for voice session handling.
    • Ensures proper cleanup and lifecycle management.
    • Cross-platform abstraction using NativeAudioModule for iOS and LiveAudioStream/react-native-sound for Android.
  • Misc:
    • Bumps SDK version to 0.17.0.
    • Updates React Native version in example app from 0.81.1 to 0.83.1.
    • Changes podspec framework path from ios/Frameworks/ to ios/Binaries/.
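A sketch of how an app screen might drive the audio surface described above; the RunAnywhere.Audio method names come from this PR's summary, but the exact signatures here are assumptions for illustration:

```typescript
// Minimal shape of the audio surface described above. Method names come from
// this PR's summary; the exact signatures are assumptions for illustration.
interface AudioApi {
  startRecording(
    onProgress?: (elapsedMs: number, level: number) => void
  ): Promise<string>;
  stopRecording(): Promise<{ uri: string; durationMs: number }>;
  playAudio(uri: string, onComplete?: () => void): Promise<void>;
  stopPlayback(): Promise<void>;
}

// Record a clip, then play it straight back (as the STT/TTS screens might).
async function recordAndReplay(audio: AudioApi): Promise<string> {
  await audio.startRecording((_elapsedMs, _level) => {
    // e.g. drive a level meter in the UI
  });
  // ...user taps "stop"...
  const { uri } = await audio.stopRecording();
  await audio.playAudio(uri);
  return uri;
}
```

In the real SDK, `audio` would be the `RunAnywhere.Audio` namespace rather than an injected interface; the interface form here just makes the call pattern explicit.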

This description was created by Ellipsis for 9bab0da. You can customize this summary. It will automatically update as commits are pushed.


Summary by CodeRabbit

  • New Features

    • Added RunAnywhere.Audio public API for recording, playback, WAV creation, and audio utilities.
    • Replaced manual voice pipeline with an SDK-driven VoiceSession providing event-based sessions, live audio-level reporting, and simplified UI flows.
  • Chores

    • Bumped React/React Native and tooling versions across SDK and examples.
    • Updated native build settings for iOS/Android (framework paths, linker flags, 16KB page alignment).
    • Removed legacy local audio utilities in favor of the SDK audio surface.


Greptile Summary

Refactors React Native SDK audio handling by replacing the deprecated AudioService utility with a unified RunAnywhere.Audio API and dedicated manager classes. The changes consolidate audio recording, playback, and WAV file creation into the SDK layer, providing a cleaner separation between SDK internals and application code.

Key Changes:

  • Introduced RunAnywhere.Audio static API with recording (startRecording, stopRecording) and playback (playAudio, stopPlayback) methods
  • Created AudioCaptureManager class for managing microphone recording with proper state tracking and VAD integration
  • Created AudioPlaybackManager class for TTS audio playback with lifecycle management
  • Updated VoiceSessionHandle to use the new manager classes instead of direct audio operations
  • Removed deprecated AudioService.ts from example app
  • Updated all example screens (STTScreen, TTSScreen, VoiceAssistantScreen) to use new audio APIs
  • Bumped SDK version to 0.17.0 with relaxed peer dependency constraints
  • Updated React Native from 0.81.1 to 0.83.1 in example app
  • Changed podspec framework path from ios/Frameworks/ to ios/Binaries/ for consistency

Architecture Improvements:

  • Better separation of concerns with dedicated manager classes
  • Event-driven architecture for voice session handling
  • Proper cleanup and lifecycle management
  • Cross-platform abstraction (iOS uses NativeAudioModule, Android uses LiveAudioStream/react-native-sound)

Issues Found:

  • Code duplication: WAV conversion utilities (createWavFromPCMFloat32, createWavFileFromChunks, createWavHeader) are duplicated between RunAnywhere+Audio.ts and the manager classes
  • Consider extracting shared WAV utilities to a common module
  • React Native version jump (0.81 → 0.83) should be thoroughly tested for breaking changes
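The suggested extraction could look like the following; a minimal sketch of a shared module (the name WavUtils.ts is hypothetical), assuming the mono 16-bit PCM format used by the createWavHeader code in this PR:

```typescript
// WavUtils.ts (hypothetical shared module): centralizes the WAV helpers
// currently duplicated between RunAnywhere+Audio.ts and the manager classes.

const DEFAULT_SAMPLE_RATE = 16000; // assumed default; the SDK exposes AUDIO_SAMPLE_RATE

function writeString(view: DataView, offset: number, s: string): void {
  for (let i = 0; i < s.length; i++) {
    view.setUint8(offset + i, s.charCodeAt(i));
  }
}

/** Build a 44-byte WAV header for mono 16-bit PCM of `dataLength` bytes. */
export function createWavHeader(
  dataLength: number,
  sampleRate: number = DEFAULT_SAMPLE_RATE
): ArrayBuffer {
  const buffer = new ArrayBuffer(44);
  const view = new DataView(buffer);
  const byteRate = sampleRate * 2; // mono, 2 bytes per sample

  writeString(view, 0, 'RIFF');
  view.setUint32(4, 36 + dataLength, true);
  writeString(view, 8, 'WAVE');
  writeString(view, 12, 'fmt ');
  view.setUint32(16, 16, true);  // fmt chunk size
  view.setUint16(20, 1, true);   // PCM format
  view.setUint16(22, 1, true);   // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, byteRate, true);
  view.setUint16(32, 2, true);   // block align
  view.setUint16(34, 16, true);  // bits per sample
  writeString(view, 36, 'data');
  view.setUint32(40, dataLength, true);
  return buffer;
}

/** Prepend a WAV header to raw PCM bytes. */
export function wrapPcmInWav(
  pcm: Uint8Array,
  sampleRate: number = DEFAULT_SAMPLE_RATE
): Uint8Array {
  const header = new Uint8Array(createWavHeader(pcm.length, sampleRate));
  const wav = new Uint8Array(header.length + pcm.length);
  wav.set(header, 0);
  wav.set(pcm, header.length);
  return wav;
}
```

Both RunAnywhere+Audio.ts and the manager classes could then import from this one module instead of carrying their own copies.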

Confidence Score: 4/5

  • This PR is safe to merge with minor code quality improvements recommended.
  • The refactoring is well-structured and follows clean architecture principles. The code is functional and tested on both platforms. Score of 4 (not 5) due to code duplication in WAV utilities that should be extracted to a shared module, and the significant React Native version bump that requires thorough testing.
  • Pay close attention to RunAnywhere+Audio.ts and the manager classes for the WAV utility duplication. Verify the React Native 0.83.1 upgrade doesn't introduce breaking changes.

Important Files Changed

  • sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts: New unified audio API with recording, playback, and WAV conversion. Contains code duplication with AudioCaptureManager/AudioPlaybackManager.
  • sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts: Refactored to a class-based API with proper state management. Clean implementation with good separation of iOS/Android concerns.
  • sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts: Well-structured playback manager with proper lifecycle handling and error management. Includes WAV conversion for TTS output.
  • sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts: Updated to use AudioCaptureManager and AudioPlaybackManager. Event-driven architecture with proper VAD integration.
  • examples/react-native/RunAnywhereAI/src/screens/STTScreen.tsx: Updated to use RunAnywhere.Audio instead of the deprecated AudioService. Clean integration with the new audio API.
  • examples/react-native/RunAnywhereAI/src/screens/TTSScreen.tsx: Updated to use RunAnywhere.Audio for playback. Properly migrated from the old AudioService pattern.
  • sdk/runanywhere-react-native/packages/core/package.json: Version bump to 0.17.0, relaxed peer dependency constraints (using >= instead of exact versions), updated TypeScript to 5.9.2.
  • examples/react-native/RunAnywhereAI/package.json: Updated React Native to 0.83.1; added react-native-gesture-handler, reanimated, and worklets dependencies. Aligned with the latest RN ecosystem.

Sequence Diagram

sequenceDiagram
    participant App as App/Screen
    participant RA as RunAnywhere.Audio
    participant ACM as AudioCaptureManager
    participant APM as AudioPlaybackManager
    participant Native as Native Module
    participant FS as File System

    Note over App,FS: Recording Flow
    App->>RA: startRecording(callbacks?)
    RA->>RA: getRNFS()
    alt iOS
        RA->>Native: NativeAudioModule.startRecording()
        Native-->>RA: {path}
        RA->>RA: Start audio level polling
        loop Every 100ms
            RA->>Native: getAudioLevel()
            Native-->>RA: {level}
            RA->>App: onProgress(elapsed, db)
        end
    else Android
        RA->>Native: LiveAudioStream.init(config)
        RA->>Native: LiveAudioStream.start()
        loop On audio data
            Native->>RA: on('data', base64)
            RA->>RA: calculateAudioLevelFromBase64()
            RA->>App: onProgress(elapsed, level)
        end
    end
    RA-->>App: filePath

    App->>RA: stopRecording()
    alt iOS
        RA->>Native: NativeAudioModule.stopRecording()
        Native-->>RA: {path}
    else Android
        RA->>Native: LiveAudioStream.stop()
        RA->>RA: createWavFileFromChunks()
        RA->>FS: writeFile(wavData)
    end
    RA-->>App: {uri, durationMs}

    Note over App,FS: Playback Flow
    App->>RA: playAudio(uri, callbacks?)
    alt iOS
        RA->>Native: NativeAudioModule.playAudio(uri)
        Native-->>RA: {duration}
        loop Every 100ms
            RA->>Native: getPlaybackStatus()
            Native-->>RA: {currentTime, duration, isPlaying}
            RA->>App: onProgress(currentTimeMs, durationMs)
        end
        Native->>RA: playback complete
        RA->>App: onComplete()
    else Android
        RA->>Native: new Sound(uri)
        RA->>Native: sound.play()
        loop Every 100ms
            RA->>Native: sound.getCurrentTime()
            Native-->>RA: seconds
            RA->>App: onProgress(currentTimeMs, durationMs)
        end
        Native->>RA: play callback(success)
        RA->>App: onComplete()
    end

    Note over App,FS: Voice Session Flow (VoiceSessionHandle)
    App->>RA: startVoiceSession(config)
    RA->>ACM: new AudioCaptureManager()
    RA->>APM: new AudioPlaybackManager()
    RA->>ACM: requestPermission()
    ACM-->>RA: granted
    RA->>ACM: startRecording()
    loop VAD Monitoring (50ms)
        ACM->>Native: getAudioLevel()
        Native-->>ACM: level
        ACM->>RA: onAudioLevel(level)
        RA->>RA: checkSpeechState(level)
        alt Speech detected
            RA->>App: event {type: 'speechStarted'}
        end
        alt Silence after speech
            RA->>App: event {type: 'speechEnded'}
            RA->>ACM: stopRecording()
            ACM-->>RA: {path, durationMs}
            RA->>RA: transcribeFile(path)
            RA->>App: event {type: 'transcribed', text}
            RA->>RA: generate(prompt)
            RA->>App: event {type: 'responded', response}
            RA->>RA: synthesize(response)
            RA->>APM: play(audioData)
            APM->>Native: playAudio()
            RA->>App: event {type: 'speaking'}
            Native-->>APM: playback complete
            RA->>App: event {type: 'turnCompleted'}
            alt Continuous mode
                RA->>ACM: startRecording()
            end
        end
    end
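The event flow in the diagram above can be folded into UI state with a small reducer on the app side. A sketch assuming the event types shown in the diagram (speechStarted, transcribed, responded, speaking, turnCompleted); payload field names are assumptions, not the SDK's actual shapes:

```typescript
// Hypothetical app-side reducer for the voice-session events shown above.
// Event names come from the sequence diagram; payload fields are assumptions.

type VoiceEvent =
  | { type: 'speechStarted' }
  | { type: 'speechEnded' }
  | { type: 'transcribed'; text: string }
  | { type: 'responded'; response: string }
  | { type: 'speaking' }
  | { type: 'turnCompleted' };

interface VoiceUiState {
  status: 'idle' | 'listening' | 'processing' | 'speaking';
  transcript?: string;
  reply?: string;
}

function reduceVoiceEvent(state: VoiceUiState, event: VoiceEvent): VoiceUiState {
  switch (event.type) {
    case 'speechStarted':
      return { ...state, status: 'listening' };
    case 'speechEnded':
      return { ...state, status: 'processing' };
    case 'transcribed':
      return { ...state, transcript: event.text };
    case 'responded':
      return { ...state, reply: event.response };
    case 'speaking':
      return { ...state, status: 'speaking' };
    case 'turnCompleted':
      // In continuous mode the session restarts recording, so return to listening.
      return { ...state, status: 'listening' };
  }
}
```

A screen would fold the session's event stream (or the onEvent callback) through this reducer and render status, transcript, and reply from the resulting state.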


coderabbitai bot commented Jan 19, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Walkthrough

Refactors SDK and example to add a unified RunAnywhere.Audio public API and a VoiceSession flow, remove the local AudioService, implement cross-platform audio capture/playback (WAV handling, audio-level monitoring), update example screens and app bootstrap, and apply packaging/build and dependency updates.

Changes

  • SDK: Audio managers & voice session (sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts, .../AudioPlaybackManager.ts, .../VoiceSession/VoiceSessionHandle.ts): New/expanded platform-specific audio capture and playback implementations, WAV creation/playback, audio-level monitoring, event publishing, and a VoiceSession refactor (VAD, STT→LLM→TTS pipeline, new getters/events).
  • SDK: Public audio & voice exports (sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts, .../RunAnywhere+VoiceSession.ts, .../Extensions/index.ts, .../RunAnywhere.ts): Adds the RunAnywhere.Audio namespace, exports audio APIs/types, centralizes onEvent handling for startVoiceSession, and exposes audio constants.
  • SDK: Package/version & podspec updates (sdk/runanywhere-react-native/package.json, .../packages/core/package.json, .../packages/llamacpp/package.json, .../packages/onnx/package.json, .../packages/core/RunAnywhereCore.podspec): Version bumps to 0.17.0 across packages, relaxed peer ranges, updated dev tooling/TS/RN versions, and a podspec vendored framework path change (Frameworks → Binaries).
  • SDK: Public index export (sdk/runanywhere-react-native/packages/core/src/Public/Extensions/index.ts, sdk/runanywhere-react-native/packages/core/src/Public/RunAnywhere.ts): Re-exports audio functions/types and mounts Audio under the RunAnywhere public surface.
  • Example app: remove local AudioService & adopt SDK (Deleted: examples/react-native/RunAnywhereAI/src/utils/AudioService.ts; Updated: examples/react-native/RunAnywhereAI/src/screens/STTScreen.tsx, .../TTSScreen.tsx): Deletes the local AudioService and updates callers to RunAnywhere.Audio (including delegating WAV creation to the SDK).
  • Example app: VoiceAssistant refactor (examples/react-native/RunAnywhereAI/src/screens/VoiceAssistantScreen.tsx): Replaces the manual record→STT→LLM→TTS pipeline with an SDK-driven VoiceSession flow, adds sessionRef/events, consolidated state (audio level, session status), and UI adjustments.
  • Example app: boot/config changes (examples/react-native/RunAnywhereAI/android/app/src/main/java/com/runanywhereaI/MainApplication.kt, examples/react-native/RunAnywhereAI/ios/RunAnywhereAI.xcodeproj/project.pbxproj, examples/react-native/RunAnywhereAI/package.json): Simplifies the DefaultNewArchitectureEntryPoint.load() call, adds OTHER_CFLAGS = "$(inherited)"; to Xcode configs, and updates example dependencies (React/React Native and tooling).
  • Android build/linker changes, 16KB page alignment (examples/react-native/RunAnywhereAI/android/app/build.gradle, examples/react-native/RunAnywhereAI/android/build.gradle, .../gradle.properties.example, sdk/runanywhere-commons/**/CMakeLists.txt, sdk/runanywhere-react-native/**/android/CMakeLists.txt): Adds linker flags and Gradle/CMake properties to enforce 16KB page-size alignment for Android API 35+ and related packaging flags; updates the NDK version.

Sequence Diagram(s)

sequenceDiagram
    participant UI as VoiceAssistantScreen
    participant VSH as VoiceSessionHandle
    participant ACM as AudioCaptureManager
    participant STT as STT Module
    participant LLM as TextGeneration Module
    participant TTS as TTS Module
    participant APM as AudioPlaybackManager
    participant EB as EventBus

    UI->>VSH: startVoiceSession(config/onEvent)
    VSH->>ACM: startRecording(onAudioData)
    ACM->>EB: publish voiceSession_listening
    ACM-->>VSH: recording started

    Note over UI: User speaks...
    ACM->>VSH: onAudioData (audio chunks)
    VSH->>VSH: update audioLevel / VAD

    ACM->>VSH: recordingStopped {path,duration}
    VSH->>STT: transcribeFile(path)
    STT-->>VSH: transcription
    VSH->>EB: publish voiceSession_transcribed

    VSH->>LLM: generate(prompt)
    LLM-->>VSH: response
    VSH->>EB: publish voiceSession_responded

    VSH->>TTS: synthesize(response)
    TTS-->>VSH: base64 PCM
    VSH->>APM: play(base64 or wav)
    APM->>EB: publish voiceSession_speaking
    APM-->>VSH: playback completed
    VSH->>EB: publish voiceSession_turnCompleted
    VSH-->>UI: events via onEvent / events()

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

react-native-sdk, react-native-sample

Poem

🐰 I hopped through code with tiny paws,

I stitched up audio, fixed the flaws.
RunAnywhere.Audio sings and plays,
VoiceSession guides the chatty days.
Hop—now voices sparkle in new ways!

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 42.86%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check: The title accurately reflects the main change: refactoring audio handling by replacing AudioService with the RunAnywhere Audio API.
  • Description check: The PR description includes the required template structure with sections for description, type of change, testing, labels, and checklist. The description captures the major refactoring work, though the initial 'Brief description' placeholder text remains incomplete.



Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@sanchitmonga22 sanchitmonga22 changed the title Refactor audio handling: replace AudioService with RunAnywhere.Audio … [React-native-SDK] Refactor audio handling: replace AudioService with RunAnywhere Audio Jan 19, 2026
@sanchitmonga22 sanchitmonga22 marked this pull request as ready for review January 19, 2026 02:08

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 33127ff in 2 minutes and 15 seconds.
  • Reviewed 4355 lines of code in 22 files
  • Skipped 3 files when reviewing.
  • Skipped posting 8 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts:392
  • Draft comment:
    Consider adding a comment explaining the choice of multiplying RMS by 3 for amplification in calculateAudioLevelFromBase64. This will help clarify why this specific factor is used.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment suggests adding an explanation for a specific multiplication factor in the code. This is a request for clarification, which is not allowed according to the rules. The comment does not provide a specific code suggestion or point out a potential issue with the code itself.
2. sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts:682
  • Draft comment:
    Consider using a more efficient, well‐tested library or built‐in Buffer methods for base64 conversion instead of manual loops with atob and btoa for performance and maintainability.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 15% vs. threshold = 50% This is a React Native file (as evidenced by imports from 'react-native' and the platform-specific code). In React Native, Node.js Buffer is not available by default. The atob/btoa functions are the standard way to handle base64 encoding/decoding in React Native and browser environments. The manual loop to convert Uint8Array to a binary string before using btoa is a standard pattern when Buffer is not available. The comment suggests using "Buffer methods" which would not be available without adding a polyfill. The comment also mentions "well-tested library" but doesn't specify what library, and adding a dependency just for base64 conversion when the built-in methods work is questionable. The performance concern might be valid for very large files, but this is processing audio chunks which are typically manageable sizes. This seems like a speculative refactoring suggestion without strong evidence that there's an actual problem. Could there be a React Native-compatible library that handles this better? Maybe the performance impact is significant for large audio files and this is a real concern? Perhaps there are newer React Native APIs I'm not aware of that handle this more efficiently? While there might be libraries available, the comment doesn't specify which one, and the current implementation uses standard browser/React Native APIs (atob/btoa) which are appropriate for this environment. Buffer is not available in React Native without polyfills. Without evidence of an actual performance problem or a specific actionable alternative, this is a speculative refactoring suggestion that doesn't meet the bar for a useful review comment. This comment should be deleted. It suggests using "Buffer methods" which aren't available in React Native by default, and provides no specific actionable alternative. 
The current implementation using atob/btoa is standard for React Native environments. This is a speculative code quality suggestion without strong evidence of a problem.
3. sdk/runanywhere-react-native/packages/core/package.json:15
  • Draft comment:
    The 'react-native' field points to 'src/index'. Confirm that this is intentional and that bundlers and consumers of the package correctly resolve this entry point.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
4. sdk/runanywhere-react-native/packages/core/src/Public/RunAnywhere.ts:155
  • Draft comment:
    The RunAnywhere SDK entry point aggregates a large number of features. Consider modularizing some sections in the future to improve maintainability as the project scales.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts:315
  • Draft comment:
    The async iterator implementation for events is clever; however, consider adding a timeout or safeguard so that unresolved promises do not lead to potential memory leaks if no event arrives.
  • Reason this comment was not posted:
    Comment was on unchanged code.
6. sdk/runanywhere-react-native/packages/core/src/Public/Extensions/index.ts:123
  • Draft comment:
    Consider organizing the export statements (e.g., alphabetizing) to improve readability and ease future maintenance.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
7. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts:400
  • Draft comment:
    Ensure that the atob and btoa functions are available in your React Native environment or provide appropriate polyfills. These functions are not standard in Node and might not exist in all runtimes.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is purely informative, advising the author to ensure the availability of certain functions in their environment. It doesn't provide a specific code suggestion or ask for a specific test to be written. It also doesn't point out a potential issue with the code itself, such as a missing test or a confusing logic. Therefore, it violates the rule against making purely informative comments.
8. sdk/runanywhere-react-native/packages/core/src/Public/RunAnywhere.ts:289
  • Draft comment:
    Ensure that sensitive information such as API keys or tokens are not logged or exposed in error messages during the authentication process. Verify that logging does not leak any sensitive credentials.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is asking the PR author to verify that sensitive information is not logged or exposed, which falls under asking the author to ensure behavior is intended or to double-check things. This violates the rules.

Workflow ID: wflow_FIXlvwyubkMOmEFE

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@greptile-apps greptile-apps bot left a comment


22 files reviewed, 6 comments

Edit Code Review Agent Settings | Greptile

Comment on lines +406 to +472
          clearInterval(playbackProgressInterval);
          playbackProgressInterval = null;
        }

        if (currentSound) {
          currentSound.release();
          currentSound = null;
        }

        if (success) {
          callbacks?.onComplete?.();
          resolve();
        } else {
          reject(new Error('Playback failed'));
        }
      });
    });
  });
}

/**
 * Stop audio playback
 */
export async function stopPlayback(): Promise<void> {
  if (playbackProgressInterval) {
    clearInterval(playbackProgressInterval);
    playbackProgressInterval = null;
  }

  isPlaying = false;

  if (Platform.OS === 'ios' && NativeAudioModule) {
    try {
      await NativeAudioModule.stopPlayback();
    } catch {
      // Ignore
    }
  } else if (currentSound) {
    currentSound.stop();
    currentSound.release();
    currentSound = null;
  }
}

/**
 * Pause audio playback
 */
export async function pausePlayback(): Promise<void> {
  if (Platform.OS === 'ios' && NativeAudioModule) {
    try {
      await NativeAudioModule.pausePlayback();
    } catch {
      // Ignore
    }
  } else if (currentSound) {
    currentSound.pause();
  }
}

/**
 * Resume audio playback
 */
export async function resumePlayback(): Promise<void> {
  if (Platform.OS === 'ios' && NativeAudioModule) {
    try {
      await NativeAudioModule.resumePlayback();
    } catch {

style: Code duplication: createWavFromPCMFloat32 is duplicated in AudioPlaybackManager.ts:406-472.

Extract this shared WAV conversion logic into a utility module (e.g., WavUtils.ts) to avoid maintenance issues.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Path: sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts, lines 406–472

Comment on lines +612 to +673
async function createWavFileFromChunks(filePath: string, chunks: string[]): Promise<void> {
  const fs = getRNFS();
  if (!fs) {
    throw new Error('react-native-fs not available');
  }

  // Combine all audio chunks into PCM data
  let totalLength = 0;
  const decodedChunks: Uint8Array[] = [];

  for (const chunk of chunks) {
    const decoded = Uint8Array.from(atob(chunk), c => c.charCodeAt(0));
    decodedChunks.push(decoded);
    totalLength += decoded.length;
  }

  // Create combined PCM buffer
  const pcmData = new Uint8Array(totalLength);
  let offset = 0;
  for (const chunk of decodedChunks) {
    pcmData.set(chunk, offset);
    offset += chunk.length;
  }

  // Create WAV header
  const wavHeader = createWavHeader(totalLength);
  const headerBytes = new Uint8Array(wavHeader);

  // Combine header and PCM data
  const wavData = new Uint8Array(headerBytes.length + pcmData.length);
  wavData.set(headerBytes, 0);
  wavData.set(pcmData, headerBytes.length);

  // Write to file
  const wavBase64 = arrayBufferToBase64(wavData.buffer);
  await fs.writeFile(filePath, wavBase64, 'base64');

  logger.info(`WAV file written: ${filePath}, size: ${wavData.length} bytes`);
}

function createWavHeader(dataLength: number): ArrayBuffer {
  const buffer = new ArrayBuffer(44);
  const view = new DataView(buffer);
  const sampleRate = AUDIO_SAMPLE_RATE;
  const byteRate = sampleRate * 2;

  writeString(view, 0, 'RIFF');
  view.setUint32(4, 36 + dataLength, true);
  writeString(view, 8, 'WAVE');
  writeString(view, 12, 'fmt ');
  view.setUint32(16, 16, true);
  view.setUint16(20, 1, true);
  view.setUint16(22, 1, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, byteRate, true);
  view.setUint16(32, 2, true);
  view.setUint16(34, 16, true);
  writeString(view, 36, 'data');
  view.setUint32(40, dataLength, true);

  return buffer;
}

style: Code duplication: createWavFileFromChunks and createWavHeader are duplicated in AudioCaptureManager.ts:299-444.

Extract these shared WAV utilities to a common module to maintain DRY principle.


Path: sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts, lines 612–673

Comment on lines +329 to +335
    // eslint-disable-next-line @typescript-eslint/no-require-imports
    const RNFS = require('react-native-fs');
    const fileName = `recording_${Date.now()}.wav`;
    const filePath = `${RNFS.CachesDirectoryPath}/${fileName}`;

    const wavBase64 = this.arrayBufferToBase64(wavData.buffer);
    await RNFS.writeFile(filePath, wavBase64, 'base64');

style: Consider defining RNFS as a peer dependency with proper import instead of dynamic require to enable better type checking and tree shaking.


Path: sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts, lines 329–335

Comment on lines +57 to +68
function getRNFS() {
  if (!RNFS) {
    try {
      // eslint-disable-next-line @typescript-eslint/no-require-imports
      RNFS = require('react-native-fs');
    } catch {
      logger.warning('react-native-fs not available');
      return null;
    }
  }
  return RNFS;
}

style: Verify all error paths gracefully handle null returns from getRNFS() and provide clear error messages guiding users to install missing dependencies.

Prompt To Fix With AI
This is a comment left during a code review.
Path: sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts
Line: 57:68

Comment:
**style:** Verify all error paths gracefully handle null returns from `getRNFS()` and provide clear error messages guiding users to install missing dependencies.

How can I resolve this? If you propose a fix, please make it concise.
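A minimal sketch of the guard being asked for — the `requireRNFS` helper and its error message are assumptions for illustration, not existing SDK code:

```typescript
// Sketch: wrap getRNFS() so every caller either gets a non-null module
// or a clear, actionable error pointing at the missing dependency.
type RNFSModule = { CachesDirectoryPath: string } | null;

function requireRNFS(getRNFS: () => RNFSModule): NonNullable<RNFSModule> {
  const rnfs = getRNFS();
  if (!rnfs) {
    throw new Error(
      'react-native-fs is not installed. Add it to your app ' +
      '(`npm install react-native-fs`) to enable audio file features.',
    );
  }
  return rnfs;
}
```

Callers then handle one well-described failure instead of scattering null checks.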

Comment on lines +456 to 459
    this.vadInterval = setInterval(() => {
      this.checkSpeechState(this.currentAudioLevel);
    }, 50);
  }

style: Check that 50ms VAD monitoring interval doesn't cause performance issues on lower-end devices, especially since audio level calculations can be CPU-intensive.

Prompt To Fix With AI
This is a comment left during a code review.
Path: sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts
Line: 456:459

Comment:
**style:** Check that 50ms VAD monitoring interval doesn't cause performance issues on lower-end devices, especially since audio level calculations can be CPU-intensive.

How can I resolve this? If you propose a fix, please make it concise.
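If profiling does show the 50 ms cadence is too hot on low-end devices, one hedged option is making the poll rate injectable. `VadMonitor` below is a standalone sketch, not the actual `VoiceSessionHandle`; only the 50 ms default and the `checkSpeechState` name come from the snippet above:

```typescript
// Sketch: a tunable VAD poll loop. Lower-end devices can pass a coarser
// intervalMs; start() refuses to stack a second timer.
class VadMonitor {
  private vadInterval: ReturnType<typeof setInterval> | null = null;

  constructor(
    private checkSpeechState: (level: number) => void,
    private getLevel: () => number,
    private intervalMs: number = 50,
  ) {}

  start(): void {
    if (this.vadInterval) return; // avoid stacking timers
    this.vadInterval = setInterval(
      () => this.checkSpeechState(this.getLevel()),
      this.intervalMs,
    );
  }

  stop(): void {
    if (this.vadInterval) {
      clearInterval(this.vadInterval);
      this.vadInterval = null;
    }
  }
}
```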

Comment on lines +26 to +27
"react": "19.2.0",
"react-native": "0.83.1",

style: Ensure all breaking changes from React Native 0.81.1 → 0.83.1 are tested, particularly around the new architecture and Nitro modules compatibility.

Prompt To Fix With AI
This is a comment left during a code review.
Path: examples/react-native/RunAnywhereAI/package.json
Line: 26:27

Comment:
**style:** Ensure all breaking changes from React Native 0.81.1 → 0.83.1 are tested, particularly around the new architecture and Nitro modules compatibility.

How can I resolve this? If you propose a fix, please make it concise.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
sdk/runanywhere-react-native/packages/core/RunAnywhereCore.podspec (1)

14-16: Update podspec source repository URL to the renamed repository.

The repository was moved from RunanywhereAI/sdks to RunanywhereAI/runanywhere-sdks. CocoaPods will fail to resolve version tags without this update.

Proposed fix
-  s.source       = { :git => "https://github.com/RunanywhereAI/sdks.git", :tag => "#{s.version}" }
+  s.source       = { :git => "https://github.com/RunanywhereAI/runanywhere-sdks.git", :tag => "#{s.version}" }
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts (1)

299-415: Add atob/btoa polyfill or switch to Buffer-based encoding for cross-platform compatibility

Lines 304–415: atob/btoa are not guaranteed in React Native's JavaScript runtime. JavaScriptCore (iOS) does not provide these globals by default, and while Hermes (Android) may include them in recent builds, support is inconsistent across versions. The code will crash on iOS and potentially on older Android builds. Add a polyfill at app initialization or refactor to use Buffer-based helpers (available across all RN platforms).

This issue affects not only AudioCaptureManager but also multiple files (AudioPlaybackManager, RunAnywhere+Audio.ts, etc.), so a centralized polyfill solution would be more maintainable than fixing each usage individually.

Option 1: Polyfill at app startup
// In your app's root initialization (e.g., index.js or App.tsx)
import { decode as atobPolyfill, encode as btoaPolyfill } from 'base-64';
if (typeof global.atob === 'undefined') {
  global.atob = atobPolyfill;
}
if (typeof global.btoa === 'undefined') {
  global.btoa = btoaPolyfill;
}
Option 2: Buffer-based refactoring (more robust)

Replace the three utility methods with Buffer-safe versions:

-  private base64ToArrayBuffer(base64: string): ArrayBuffer {
-    const binaryString = atob(base64);
-    const bytes = new Uint8Array(binaryString.length);
-    for (let i = 0; i < binaryString.length; i++) {
-      bytes[i] = binaryString.charCodeAt(i);
-    }
-    return bytes.buffer;
-  }
+  private base64ToArrayBuffer(base64: string): ArrayBuffer {
+    // Import Buffer at the top: import { Buffer } from 'buffer';
+    const buf = Buffer.from(base64, 'base64');
+    return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);
+  }

-  private arrayBufferToBase64(buffer: ArrayBuffer): string {
-    const bytes = new Uint8Array(buffer);
-    let binary = '';
-    for (let i = 0; i < bytes.length; i++) {
-      binary += String.fromCharCode(bytes[i]!);
-    }
-    return btoa(binary);
-  }
+  private arrayBufferToBase64(buffer: ArrayBuffer): string {
+    return Buffer.from(new Uint8Array(buffer)).toString('base64');
+  }
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts (1)

591-595: Transition to terminal state when continuousMode=false.

After a turn completes with continuousMode=false, the session remains in 'processing' or 'speaking' state, blocking subsequent calls to start(). The session must be explicitly stopped first, which violates the expected API contract.

Fix
    // Resume listening if continuous mode
    if (this.config.continuousMode && this.isRunning) {
      this.state = 'listening';
      await this.startListening();
+   } else if (!this.config.continuousMode) {
+     this.state = 'stopped';
+     this.emit({ type: 'stopped', timestamp: Date.now() });
    }
🤖 Fix all issues with AI agents
In `@examples/react-native/RunAnywhereAI/package.json`:
- Around line 26-27: The project bumps "react-native": "0.83.1" but includes
potentially incompatible native audio modules (react-native-live-audio-stream
v1.1.1, react-native-sound v0.13.0, react-native-audio-recorder-player v3.6.14);
test end-to-end audio record/playback on both Android and iOS with RN 0.83.1,
and if you encounter build/runtime failures replace or upgrade the offending
module(s)—prioritize replacing react-native-live-audio-stream with an actively
maintained alternative or a New-Architecture compatible fork, update
react-native-sound to a supported release or apply necessary API adapters, and
document the tested working versions or changes in the repo README and
package.json dependency comments.

In `@examples/react-native/RunAnywhereAI/src/screens/VoiceAssistantScreen.tsx`:
- Around line 246-298: handleModelSelected silently returns when
SDKModelInfo.localPath is missing; add user feedback and closing behavior for
that edge case. In VoiceAssistantScreen.tsx, inside handleModelSelected (and/or
immediately after setShowModelSelection(false)), detect if !model.localPath and
call Alert.alert with a clear message like "Model not available" or "Model
download incomplete" (include model.name or id), then return; ensure the
modal/sheet is closed (setShowModelSelection(false) is already called) so the
user sees the alert and no silent failure occurs. Reference:
handleModelSelected, SDKModelInfo, modelSelectionType, setShowModelSelection.

In
`@sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts`:
- Around line 121-135: requestPermission() incorrectly returns true on iOS
without verifying actual microphone access; update requestPermission (in
AudioCaptureManager) to query the real iOS permission state before returning
(e.g., use react-native-permissions or call a native helper like
NativeAudioModule.checkMicrophonePermission if available), set this.state to
'idle' only when permission is granted and to 'error' (or appropriate) when
denied, and return the boolean result so behavior matches the Android branch and
startRecording() expectations.
- Around line 147-195: The startRecording method can leave the manager stuck in
'recording' if startIOSRecording() or startAndroidRecording() throws; wrap the
Platform.OS branch in a try/catch around the await of
startIOSRecording()/startAndroidRecording(), and on catch reset state to 'idle',
clear audioBuffer, set audioDataCallback = null, recordingStartTime = null, call
stopAudioLevelMonitoring(), publish a failure event (e.g.,
EventBus.publish('Voice',{type:'recordingFailed', error: err})) or log the
error, then rethrow the error so callers know it failed; update startRecording
to perform this cleanup whenever the platform-specific start call fails.
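The cleanup path this prompt describes can be sketched in isolation — `CaptureSketch` is a stand-in for `AudioCaptureManager`, with the platform-specific start call injected rather than the real `startIOSRecording`/`startAndroidRecording`:

```typescript
// Sketch: reset state and rethrow when the platform-specific start fails,
// so the manager is never stuck in 'recording' after a failed start.
type State = 'idle' | 'recording';

class CaptureSketch {
  state: State = 'idle';
  audioBuffer: number[][] = [];
  recordingStartTime: number | null = null;

  constructor(private platformStart: () => Promise<void>) {}

  async startRecording(): Promise<void> {
    this.state = 'recording';
    this.recordingStartTime = Date.now();
    try {
      await this.platformStart();
    } catch (err) {
      // Reset everything so a later startRecording() is not blocked.
      this.state = 'idle';
      this.audioBuffer = [];
      this.recordingStartTime = null;
      throw err; // let callers observe the failure
    }
  }
}
```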
- Around line 249-297: startAndroidRecording registers an event handler via
audioStream.on('data', ...) on a singleton LiveAudioStream but never removes it,
causing duplicate callbacks/memory leaks; fix by storing the subscription
returned by audioStream.on (e.g., set this.androidDataSubscription =
audioStream.on('data', ...)) inside startAndroidRecording and in
stopAndroidRecording check and call this.androidDataSubscription.remove() (or
unsubscribe equivalent) before clearing androidAudioChunks, then null out the
subscription; update startAndroidRecording and stopAndroidRecording to manage
this.androidDataSubscription lifecycle so repeated sessions don't accumulate
listeners.
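A self-contained sketch of that subscription lifecycle — the `Subscription` and `LiveAudioStreamLike` shapes only approximate react-native's emitter types and are assumptions, not the real module:

```typescript
// Sketch: store the subscription returned by on('data') and remove it in
// stop(), so repeated sessions don't accumulate listeners.
interface Subscription { remove(): void; }
interface LiveAudioStreamLike {
  on(event: 'data', cb: (chunk: string) => void): Subscription;
}

class AndroidRecorder {
  private androidDataSubscription: Subscription | null = null;
  private androidAudioChunks: string[] = [];

  start(stream: LiveAudioStreamLike): void {
    this.androidDataSubscription = stream.on('data', (chunk) => {
      this.androidAudioChunks.push(chunk);
    });
  }

  stop(): string[] {
    this.androidDataSubscription?.remove(); // detach before clearing buffers
    this.androidDataSubscription = null;
    const chunks = this.androidAudioChunks;
    this.androidAudioChunks = [];
    return chunks;
  }
}
```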

In
`@sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts`:
- Around line 346-380: In playFileAndroid, the constructor callback error path
currently rejects without releasing the allocated Sound instance, leaking
resources; update the constructor error handler inside playFileAndroid (where
this.currentSound is assigned via new SoundClass(...)) to call
this.currentSound.release() and set this.currentSound = null before setting
this.state = 'error' and rejecting, mirroring the cleanup performed in the play
callback so the Sound instance is always released on load failure.
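The release-on-error flow might look like this sketch, where `SoundLike` stands in for the slice of react-native-sound being used, and the loader callback is assumed to fire asynchronously (as it does in the real module):

```typescript
// Sketch: always release the allocated sound instance when loading fails,
// mirroring the cleanup the play callback already performs.
interface SoundLike { release(): void; }

function loadAndPlay(
  makeSound: (path: string, onLoad: (err: Error | null) => void) => SoundLike,
  path: string,
): Promise<SoundLike> {
  return new Promise((resolve, reject) => {
    const sound = makeSound(path, (err) => {
      if (err) {
        sound.release(); // free the native instance before rejecting
        reject(err);
        return;
      }
      resolve(sound);
    });
  });
}
```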

In
`@sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts`:
- Around line 263-315: startRecording adds a 'data' listener to the live stream
via audioStream.on('data', ...) and you must remove those listeners when ending
recording to avoid duplicated chunks/memory leaks; in both stopRecording() and
cancelRecording() locate the live stream via getLiveAudioStream() and, before
(or immediately after) calling audioStream.stop(), call
audioStream.removeAllListeners('data') (react-native-live-audio-stream v1.1.1)
to remove the data listeners, then proceed with stopping/cleanup
(currentRecordPath unlink, clearing intervals, resetting
audioChunks/progressCallback/isRecording).
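Sketched as a standalone helper — the stream shape mirrors only the two calls named above, and the ordering (detach listeners, stop, then clean up) is the point:

```typescript
// Sketch: drop the 'data' listeners before stopping the stream so a later
// recording doesn't receive duplicated chunks from stale handlers.
interface StreamLike {
  stop(): void;
  removeAllListeners(event: 'data'): void;
}

function stopRecordingSketch(stream: StreamLike, cleanup: () => void): void {
  stream.removeAllListeners('data'); // detach first
  stream.stop();
  cleanup(); // unlink temp file, clear intervals, reset chunk buffers
}
```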
- Around line 205-233: The on('data') handler for audioStream uses
recordingStartTime to compute elapsed time but recordingStartTime is assigned
only after audioStream.start(), causing the first elapsed value to be wrong;
move the assignment of recordingStartTime to before registering the
audioStream.on('data') handler (i.e., set recordingStartTime = Date.now()
immediately after configuring audioStream and before calling
audioStream.on('data')), then proceed to start the stream with
audioStream.start() and set isRecording = true as before.
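The ordering fix, reduced to a testable sketch with an injected clock — `beginRecording` and `ChunkStream` are illustrative names, not the real extension's API:

```typescript
// Sketch: capture the start timestamp BEFORE registering the 'data'
// handler, so the first elapsed value is computed from a valid baseline.
interface ChunkStream {
  on(event: 'data', cb: (chunk: string) => void): void;
  start(): void;
}

function beginRecording(
  stream: ChunkStream,
  onElapsed: (ms: number) => void,
  now: () => number = Date.now,
): void {
  const recordingStartTime = now(); // set before the handler can fire
  stream.on('data', () => onElapsed(now() - recordingStartTime));
  stream.start();
}
```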
🧹 Nitpick comments (4)
examples/react-native/RunAnywhereAI/package.json (1)

34-35: Inconsistent version pinning between related packages.

react-native-reanimated uses tilde (~4.2.1) while react-native-worklets uses an exact version (0.7.1). These packages are tightly coupled—react-native-reanimated depends on react-native-worklets—and mismatched versions can cause runtime errors or native build failures.

Consider using consistent pinning for both:

♻️ Suggested fix
-    "react-native-reanimated": "~4.2.1",
-    "react-native-worklets": "0.7.1",
+    "react-native-reanimated": "~4.2.1",
+    "react-native-worklets": "~0.7.1",
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts (1)

402-487: Verify base64 helpers availability & temp WAV cleanup.
RN/Hermes environments don’t always expose atob/btoa; please confirm availability or add a polyfill. Also verify how tts_*.wav files are cleaned up after playback/stop to avoid cache growth.

examples/react-native/RunAnywhereAI/src/screens/TTSScreen.tsx (1)

568-573: Confirm WAV path format from SDK

Line 568: Now that WAV creation is delegated to RunAnywhere.Audio.createWavFromPCMFloat32, please verify the returned path matches what RNFS/NativeAudioModule/react-native-sound expect (some APIs return file:// URIs). If it’s a URI, normalize before storing to avoid playback/cleanup failures.
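If normalization turns out to be needed, a tiny helper along these lines would do (assuming downstream consumers want raw filesystem paths):

```typescript
// Sketch: strip a file:// scheme if present so RNFS/react-native-sound
// consumers always receive a plain filesystem path.
function normalizeWavPath(pathOrUri: string): string {
  return pathOrUri.startsWith('file://')
    ? pathOrUri.slice('file://'.length)
    : pathOrUri;
}
```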

sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts (1)

492-558: Consider extracting WAV conversion helpers to avoid duplication.

createWavFromPCMFloat32 here mirrors AudioPlaybackManager.createWavFromPCMFloat32. A shared helper would reduce drift risk.
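A shared helper could own just the header math. The sketch below writes a standard 44-byte PCM WAV header; the 16 kHz mono 16-bit defaults are assumptions matching typical STT/TTS pipelines, not confirmed SDK constants:

```typescript
// Sketch of a shared WAV-header builder both call sites could import,
// so the format logic lives in one place. Standard RIFF/WAVE layout.
function buildWavHeader(
  dataLength: number,
  sampleRate = 16000,
  channels = 1,
  bitsPerSample = 16,
): Uint8Array {
  const header = new Uint8Array(44);
  const view = new DataView(header.buffer);
  const writeTag = (offset: number, tag: string) => {
    for (let i = 0; i < tag.length; i++) header[offset + i] = tag.charCodeAt(i);
  };
  const byteRate = (sampleRate * channels * bitsPerSample) / 8;
  writeTag(0, 'RIFF');
  view.setUint32(4, 36 + dataLength, true);      // RIFF chunk size
  writeTag(8, 'WAVE');
  writeTag(12, 'fmt ');
  view.setUint32(16, 16, true);                  // fmt chunk size (PCM)
  view.setUint16(20, 1, true);                   // audio format: PCM
  view.setUint16(22, channels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, byteRate, true);
  view.setUint16(32, (channels * bitsPerSample) / 8, true); // block align
  view.setUint16(34, bitsPerSample, true);
  writeTag(36, 'data');
  view.setUint32(40, dataLength, true);
  return header;
}
```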

Comment on lines +26 to +27
"react": "19.2.0",
"react-native": "0.83.1",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n examples/react-native/RunAnywhereAI/package.json

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 5666


🌐 Web query:

react-native-audio-recorder-player 3.6.14 React Native 0.83.1 compatibility

💡 Result:

Short answer: Yes — react-native-audio-recorder-player v3.6.14 is compatible with React Native 0.83.1 (the package supports RN >= 0.61 and uses autolinking/pod install for modern RN versions). [1][2]

Notes and recommendations:

  • Install with autolinking (yarn/npm) and run pod install in ios. [1][2]
  • The project is marked deprecated in favor of react-native-nitro-sound — consider migrating for better maintenance. [3]

Sources:

  • npm package README (installation / RN >= 0.61 / autolinking). [1]
  • npm package listing for v3.6.14. [2]
  • Package deprecation / migration note (react-native-nitro-sound). [3]

🌐 Web query:

react-native-live-audio-stream 1.1.1 React Native 0.83.1 compatibility

💡 Result:

Short answer: There's no official claim that react-native-live-audio-stream@1.1.1 supports React Native 0.83.1 — v1.1.1 was published Aug 27, 2021 and predates RN 0.83, so compatibility is not guaranteed. [1][2]

Details:

  • Package latest release: v1.1.1 (Aug 27, 2021). [2][3]
  • React Native 0.83 was released much later and includes New-Architecture/other native changes that can break older native modules. There is no mention in the library repo or release notes of support for RN 0.83/New Architecture. [3][4]
  • Community reports show some users having trouble with older audio-streaming modules on recent RN versions (EventEmitter/autolinking/native issues), which suggests you should test carefully. [5][6]

What I recommend:

  • Try it in a small RN 0.83.1 test project (Android and iOS), run pod install, build, and exercise start/stop/data callbacks.
  • If you hit issues, consider either (a) forking and updating the native code/podspec to match RN 0.83 native module APIs, or (b) using a more actively maintained alternative or implementing a small native module tailored to RN 0.83.

Sources:

  1. npm package page (react-native-live-audio-stream v1.1.1). [2]
  2. GitHub repo / releases (v1.1.1 — Aug 27, 2021). [3]
  3. React Native 0.83 release notes (shows changes/new-architecture). [4]
  4. Community reports about older audio-streaming modules failing on newer RN versions. [5][6]

🌐 Web query:

react-native-sound 0.13.0 React Native 0.83.1 compatibility

💡 Result:

Short answer: Yes — react-native-sound 0.13.0 can be used with React Native 0.83, but expect a few API/usage differences and to prefer the modern imports and autolinking (no manual native linking). See notes below.

Key points:

  • The react-native-sound repository README indicates modern React Native / New Architecture compatibility and current maintenance. [1]
  • React Native 0.83 is the latest stable RN line (React 19.2 etc.); libraries supporting the new architecture and autolinking are expected to work with 0.83. [2]
  • Practical issues observed by users: some older APIs (for example Sound.setCategory) were removed/changed in recent react-native-sound releases — update imports (use import Sound from 'react-native-sound') and remove/replace deprecated calls. (Community reports / issues). [1]

If you hit problems: share the exact error/build logs. If you need maximum compatibility or new-architecture features, consider testing on a small RN 0.83 sample app or evaluate maintained forks/replacements (e.g., newer audio libraries) before upgrading.

Sources:
[1] react-native-sound GitHub README.
[2] React Native 0.83 release notes.


🏁 Script executed:

#!/bin/bash
# Check NPM package info for native audio modules
echo "=== react-native-audio-recorder-player latest versions ==="
curl -s "https://registry.npmjs.org/react-native-audio-recorder-player" | jq -r '.versions | keys[-10:]' 2>/dev/null || echo "Failed to fetch"

echo ""
echo "=== react-native-live-audio-stream latest versions ==="
curl -s "https://registry.npmjs.org/react-native-live-audio-stream" | jq -r '.versions | keys[-10:]' 2>/dev/null || echo "Failed to fetch"

echo ""
echo "=== react-native-sound latest versions ==="
curl -s "https://registry.npmjs.org/react-native-sound" | jq -r '.versions | keys[-10:]' 2>/dev/null || echo "Failed to fetch"



Verify React Native 0.83.1 compatibility with native audio modules before release.

The upgrade from React Native 0.81.1 to 0.83.1 may introduce breaking changes for three audio-related native modules:

  • react-native-live-audio-stream (v1.1.1): High risk — published Aug 2021 and predates RN 0.83. No official support claim for RN 0.83 or New Architecture. Community reports indicate older audio-streaming modules frequently fail on newer RN versions due to native API changes.
  • react-native-sound (v0.13.0): Moderate risk — while the library supports modern RN, this pinned version may require API workarounds (community reports note removed or changed APIs such as Sound.setCategory).
  • react-native-audio-recorder-player (v3.6.14): Compatible with RN 0.83.1 (supports RN ≥ 0.61 via autolinking), though the package is deprecated in favor of react-native-nitro-sound.

Recommend testing audio record/playback functionality on both Android and iOS, or updating react-native-live-audio-stream to an actively maintained alternative if build/runtime errors occur.


Comment on lines +246 to +298
const handleSelectModel = useCallback((type: 'stt' | 'llm' | 'tts') => {
  setModelSelectionType(type);
  setShowModelSelection(true);
}, []);

/**
 * Get context for model selection
 */
const getSelectionContext = (type: 'stt' | 'llm' | 'tts'): ModelSelectionContext => {
  switch (type) {
    case 'stt': return ModelSelectionContext.STT;
    case 'llm': return ModelSelectionContext.LLM;
    case 'tts': return ModelSelectionContext.TTS;
  }
};

/**
 * Handle model selected from the sheet
 */
const handleModelSelected = useCallback(async (model: SDKModelInfo) => {
  setShowModelSelection(false);

  try {
    switch (modelSelectionType) {
      case 'stt':
        if (model.localPath) {
          const sttSuccess = await RunAnywhere.loadSTTModel(model.localPath, model.category || 'whisper');
          if (sttSuccess) {
            setSTTModel({ id: model.id, name: model.name, preferredFramework: LLMFramework.ONNX } as ModelInfo);
          }
        }
        break;
      case 'llm':
        if (model.localPath) {
          const llmSuccess = await RunAnywhere.loadModel(model.localPath);
          if (llmSuccess) {
            setLLMModel({ id: model.id, name: model.name, preferredFramework: LLMFramework.LlamaCpp } as ModelInfo);
          }
        }
        break;
      case 'tts':
        if (model.localPath) {
          const ttsSuccess = await RunAnywhere.loadTTSModel(model.localPath, model.category || 'piper');
          if (ttsSuccess) {
            setTTSModel({ id: model.id, name: model.name, preferredFramework: LLMFramework.PiperTTS } as ModelInfo);
          }
        }
        break;
    }
  } catch (error) {
    Alert.alert('Error', `Failed to load model: ${error}`);
  }
}, [modelSelectionType]);

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

git ls-files | grep -i "voiceassistant" | head -20



🏁 Script executed:

cat -n examples/react-native/RunAnywhereAI/src/screens/VoiceAssistantScreen.tsx | sed -n '240,310p'



🏁 Script executed:

cat -n examples/react-native/RunAnywhereAI/src/screens/VoiceAssistantScreen.tsx | sed -n '1,50p'



🏁 Script executed:

find . -name "model*" -type f -path "*/components/*" | head -20



🏁 Script executed:

find . -path "*/components/*" -name "*.tsx" -o -path "*/components/*" -name "*.ts" | grep -i model



🏁 Script executed:

cat -n ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx | head -150



🏁 Script executed:

cat -n ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx | sed -n '200,350p'



🏁 Script executed:

cat -n ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx | sed -n '350,550p'



🏁 Script executed:

cat -n ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx | sed -n '580,700p'



🏁 Script executed:

cat -n ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx | sed -n '700,850p'



🏁 Script executed:

cat -n ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx | wc -l



🏁 Script executed:

head -1 ./examples/react-native/RunAnywhereAI/src/components/model/ModelSelectionSheet.tsx



Add user feedback when model download is incomplete

When a model selection reaches handleModelSelected without localPath (edge case from race conditions or state inconsistency), the operation silently fails. Add an alert to inform the user:

    const handleModelSelected = useCallback(async (model: SDKModelInfo) => {
      setShowModelSelection(false);

      try {
+       if (!model.localPath) {
+         Alert.alert('Model not downloaded', 'Please download the model first.');
+         return;
+       }
        switch (modelSelectionType) {
          case 'stt':
            if (model.localPath) {
              const sttSuccess = await RunAnywhere.loadSTTModel(model.localPath, model.category || 'whisper');

This guards against edge cases where the model's availability state changes between sheet rendering and selection, providing clear user feedback instead of a silent failure.


Comment on lines 121 to 135
    try {
      // In React Native, permission handling depends on the platform
      // This is a placeholder - actual implementation would use
      // react-native-permissions or expo-permissions

      // For now, assume permission is granted
      // In production, integrate with actual permission APIs:
      // - iOS: Uses AVAudioSession
      // - Android: Uses RECORD_AUDIO permission
      if (Platform.OS === 'android') {
        const grants = await PermissionsAndroid.requestMultiple([
          PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
        ]);
        const granted = grants[PermissionsAndroid.PERMISSIONS.RECORD_AUDIO] === PermissionsAndroid.RESULTS.GRANTED;
        logger.info(`Android microphone permission: ${granted ? 'granted' : 'denied'}`);
        this.state = granted ? 'idle' : 'error';
        return granted;
      }

-     logger.info('Microphone permission granted');
+     // iOS: Permission is requested when starting recording
+     logger.info('Microphone permission granted (iOS)');
      this.state = 'idle';
      return true;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd -t f "AudioCaptureManager.ts"



🏁 Script executed:

wc -l sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



🏁 Script executed:

cat -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts | head -200



🏁 Script executed:

grep -n "requestPermission\|startRecording\|PermissionsAndroid\|react-native-permissions" sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



🏁 Script executed:

find . -name "package.json" -path "*/packages/core/*" | head -1 | xargs cat | grep -A 20 -B 5 "permission\|react-native"



🏁 Script executed:

sed -n '200,280p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



🏁 Script executed:

sed -n '215,250p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



iOS requestPermission() returns true without validating actual microphone permission

Lines 132–135: The method always returns true for iOS without checking whether the app actually has microphone permission. This creates an API inconsistency with Android (which uses PermissionsAndroid.requestMultiple()) and misleads callers into believing permission is granted when it may not be. The actual permission denial is only discovered when startRecording() calls NativeAudioModule.startRecording() and fails.

To align with Android's behavior and match the return value's semantics, consider using react-native-permissions or querying the native module's permission status before returning.
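Reduced to a sketch with the permission source injected — in the real code that source could be react-native-permissions' `request(PERMISSIONS.IOS.MICROPHONE)` or a native `checkMicrophonePermission` helper, both of which are suggestions from this review rather than confirmed SDK APIs:

```typescript
// Sketch: the iOS branch consults a real permission source instead of
// returning true unconditionally, mirroring the Android branch's
// state transitions and boolean result.
type CaptureState = 'idle' | 'error';

async function requestIOSMicPermission(
  queryPermission: () => Promise<boolean>,
  setState: (s: CaptureState) => void,
): Promise<boolean> {
  const granted = await queryPermission();
  setState(granted ? 'idle' : 'error'); // same semantics as Android
  return granted;
}
```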


Comment on lines 147 to 195
  async startRecording(onAudioData?: AudioDataCallback): Promise<void> {
    if (this.state === 'recording') {
      logger.warning('Already recording');
      return;
    }

-   this.audioDataCallback = onAudioData;
+   this.audioDataCallback = onAudioData ?? null;
    this.audioBuffer = [];
    this.recordingStartTime = Date.now();
    this.state = 'recording';

    logger.info('Starting audio recording...');
    EventBus.publish('Voice', { type: 'recordingStarted' });

    // Start audio level monitoring simulation
    // In production, this would come from actual audio stream
    this.startAudioLevelMonitoring();

    // In production, this would start actual audio recording using:
    // - expo-av
    // - react-native-audio-api
    // - or a custom native module

-   // For now, we emit a started event and rely on the native implementation
-   // to provide audio data through the callback
+   if (Platform.OS === 'ios') {
+     await this.startIOSRecording();
+   } else {
+     await this.startAndroidRecording();
+   }
  }

  /**
-  * Stop recording
+  * Stop recording and return recorded audio file path
   */
- stopRecording(): void {
+ async stopRecording(): Promise<{ path: string; durationMs: number }> {
    if (this.state !== 'recording') {
-     return;
+     throw new Error('Not recording');
    }

    logger.info('Stopping audio recording...');
    this.state = 'idle';
    this.stopAudioLevelMonitoring();

-   const duration = this.recordingStartTime
-     ? (Date.now() - this.recordingStartTime) / 1000
-     : 0;
+   const durationMs = this.recordingStartTime ? Date.now() - this.recordingStartTime : 0;
+   let path = '';

-   EventBus.publish('Voice', {
-     type: 'recordingStopped',
-     duration,
-   });
+   if (Platform.OS === 'ios') {
+     path = await this.stopIOSRecording();
+   } else {
+     path = await this.stopAndroidRecording();
+   }
+
+   EventBus.publish('Voice', { type: 'recordingStopped', duration: durationMs / 1000 });
+
+   this.audioDataCallback = null;
+   this.recordingStartTime = null;
+
+   return { path, durationMs };
  }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's locate and examine the AudioCaptureManager.ts file
find . -type f -name "AudioCaptureManager.ts" | head -20



🏁 Script executed:

# Get the file size to understand how to read it
wc -l sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



🏁 Script executed:

# Read the relevant section of the file around lines 147-195
sed -n '140,200p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



🏁 Script executed:

# Also check the overall structure of the file to see all methods
cat -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts | grep -E "(^\s*[0-9]+\s+(async |private |public |protected )?(start|stop)[A-Z])" | head -30



🏁 Script executed:

# Search for the startAndroidRecording and startIOSRecording method implementations
rg "startAndroidRecording|startIOSRecording" sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/ -A 10

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 4331


🏁 Script executed:

# Get the full implementation of startIOSRecording
sed -n '385,420p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1170


🏁 Script executed:

# Get the full implementation of startAndroidRecording
sed -n '420,451p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1045


🏁 Script executed:

# Search for any try/catch blocks around the startRecording method call sites
rg "startRecording" sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/ -B 3 -A 3

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 2628


🏁 Script executed:

# Let's read the complete startIOSRecording method
sed -n '375,410p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1109


🏁 Script executed:

# And the complete startAndroidRecording method
sed -n '410,445p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1153


🏁 Script executed:

# Check the exact line numbers for these methods
rg "private async start(iOS|Android)Recording" -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 134


🏁 Script executed:

# Get the line numbers for both methods more precisely
rg "private async start" -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 191


🏁 Script executed:

# Read the startIOSRecording method - let's find it first
rg -n "startIOSRecording" sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts | head -5

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 172


🏁 Script executed:

# Now read from line 200 onwards to find startIOSRecording
sed -n '200,250p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1400


🏁 Script executed:

# Read startAndroidRecording which is at line 249
sed -n '249,300p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1548


Wrap platform recording start in try/catch to prevent stuck recording state

If startIOSRecording() throws before its internal try-catch (e.g., missing NativeAudioModule), or if startAndroidRecording() throws at any point, the manager remains in a recording state with no way to recover. Add a try-catch block around the platform-specific calls to reset state and cleanup resources on failure.

🔧 Suggested fix
-    if (Platform.OS === 'ios') {
-      await this.startIOSRecording();
-    } else {
-      await this.startAndroidRecording();
-    }
+    try {
+      if (Platform.OS === 'ios') {
+        await this.startIOSRecording();
+      } else {
+        await this.startAndroidRecording();
+      }
+    } catch (error) {
+      this.state = 'error';
+      this.stopAudioLevelMonitoring();
+      this.audioDataCallback = null;
+      this.recordingStartTime = null;
+      throw error;
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

  async startRecording(onAudioData?: AudioDataCallback): Promise<void> {
    if (this.state === 'recording') {
      logger.warning('Already recording');
      return;
    }

    this.audioDataCallback = onAudioData ?? null;
    this.audioBuffer = [];
    this.recordingStartTime = Date.now();
    this.state = 'recording';

    logger.info('Starting audio recording...');
    EventBus.publish('Voice', { type: 'recordingStarted' });

    try {
      if (Platform.OS === 'ios') {
        await this.startIOSRecording();
      } else {
        await this.startAndroidRecording();
      }
    } catch (error) {
      this.state = 'error';
      this.stopAudioLevelMonitoring();
      this.audioDataCallback = null;
      this.recordingStartTime = null;
      throw error;
    }
  }

  /**
   * Stop recording and return recorded audio file path
   */
  async stopRecording(): Promise<{ path: string; durationMs: number }> {
    if (this.state !== 'recording') {
      throw new Error('Not recording');
    }

    logger.info('Stopping audio recording...');
    this.state = 'idle';
    this.stopAudioLevelMonitoring();

    const durationMs = this.recordingStartTime ? Date.now() - this.recordingStartTime : 0;
    let path = '';

    if (Platform.OS === 'ios') {
      path = await this.stopIOSRecording();
    } else {
      path = await this.stopAndroidRecording();
    }

    EventBus.publish('Voice', { type: 'recordingStopped', duration: durationMs / 1000 });

    this.audioDataCallback = null;
    this.recordingStartTime = null;

    return { path, durationMs };
  }
🤖 Prompt for AI Agents
In
`@sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts`
around lines 147-195: the startRecording method can leave the manager stuck in
'recording' if startIOSRecording() or startAndroidRecording() throws; wrap the
Platform.OS branch in a try/catch around the await of
startIOSRecording()/startAndroidRecording(), and on catch reset state to 'idle',
clear audioBuffer, set audioDataCallback = null, recordingStartTime = null, call
stopAudioLevelMonitoring(), publish a failure event (e.g.,
EventBus.publish('Voice',{type:'recordingFailed', error: err})) or log the
error, then rethrow the error so callers know it failed; update startRecording
to perform this cleanup whenever the platform-specific start call fails.

Comment on lines 249 to 297
+ private async startAndroidRecording(): Promise<void> {
+   const audioStream = getLiveAudioStream();
+   if (!audioStream) {
+     throw new Error('LiveAudioStream not available on Android');
+   }
+
-   return result;
- }
+   this.androidAudioChunks = [];
+
- /**
-  * Process incoming audio data from native
-  * Called by native audio module when data is available
-  */
- processAudioData(data: ArrayBuffer): void {
-   if (this.state !== 'recording') return;
+   audioStream.init({
+     sampleRate: this.config.sampleRate,
+     channels: this.config.channels,
+     bitsPerSample: this.config.bitsPerSample,
+     audioSource: 6, // VOICE_RECOGNITION
+     bufferSize: 4096,
+   });
+
-   this.audioBuffer.push(data);
+   audioStream.on('data', (data: string) => {
+     this.androidAudioChunks.push(data);
+
-   // Calculate audio level from the data
-   this.updateAudioLevel(data);
+     // Calculate audio level from chunk
+     const level = this.calculateAudioLevelFromBase64(data);
+     this.currentAudioLevel = level;
+
+     if (this.audioLevelCallback) {
+       this.audioLevelCallback(level);
+     }
+
-   // Forward to callback
-   if (this.audioDataCallback) {
-     this.audioDataCallback(data);
-   }
+     // Convert to ArrayBuffer and forward to callback
+     if (this.audioDataCallback) {
+       const buffer = this.base64ToArrayBuffer(data);
+       this.audioDataCallback(buffer);
+     }
+   });
+
+   audioStream.start();
+   logger.info('Android recording started');
+ }
+
+ private async stopAndroidRecording(): Promise<string> {
+   const audioStream = getLiveAudioStream();
+   if (audioStream) {
+     audioStream.stop();
+   }
+
+   // Create WAV file from chunks
+   const path = await this.createWavFileFromChunks();
+   this.androidAudioChunks = [];
+   return path;
+ }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f "AudioCaptureManager.ts" --exec wc -l {}

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 171


🏁 Script executed:

cat -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts | head -100

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 3769


🏁 Script executed:

cat -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts | sed -n '240,310p'

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 2536


🏁 Script executed:

# Search for getLiveAudioStream to understand what it returns
rg "getLiveAudioStream" -A 5 -B 5

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 9051


🏁 Script executed:

# Look for any removeListener or off patterns in the codebase
rg "removeListener|removeAllListeners|\.off\(" --type ts --type tsx | head -20

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 101


🏁 Script executed:

rg "react-native-live-audio-stream" -A 10 -B 2

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 13433


🏁 Script executed:

# Look for package.json to see the version of react-native-live-audio-stream
find . -name "package.json" -type f -exec grep -l "react-native-live-audio-stream" {} \;

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 124


🏁 Script executed:

# Check if there are any tests or examples using repeated sessions
rg "startAndroidRecording|start.*Recording" -A 3 -B 1 | head -50

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 4955


🏁 Script executed:

# Look for EventEmitter patterns or removeListener usage in the codebase
rg "\.off\(|removeListener|removeAllListeners" -B 2 -A 2

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1160


🏁 Script executed:

# Look for any .off() method usage in the codebase
rg "\.off\(" --type ts -B 2 -A 2

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 56


🏁 Script executed:

# Search for any imports or type definitions related to react-native-live-audio-stream
find . -name "*.ts" -type f -exec grep -l "LiveAudioStream" {} \; | xargs cat | grep -A 5 -B 5 "\.on\|\.off\|removeListener"

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 3203


🌐 Web query:

react-native-live-audio-stream API documentation removeListener off method

💡 Result:

Short answer: remove the listener by keeping the subscription returned from LiveAudioStream.on (or from addListener) and calling .remove() on it — don't call a global .off/.removeListener.

Example:

  • const sub = LiveAudioStream.on('data', cb);
  • ...
  • sub.remove();

Sources: package README (usage shows LiveAudioStream.on) and React Native EventEmitter subscription API (use returned subscription's remove()). [1][2]

References:

  • react-native-live-audio-stream README. [1]
  • How to remove RN EventEmitter listeners (addListener returns subscription with remove()). [2]

🏁 Script executed:

sed -n '249,297p' sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1440


🏁 Script executed:

# Check for all audioStream.on calls in both files
rg "audioStream\.(on|once|addListener)" -B 2 -A 5

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1967


🏁 Script executed:

sed -n '120,170p' sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1667


Store and remove the 'data' listener subscription to prevent callback duplication and memory leaks

Each call to startAndroidRecording() registers a new listener via audioStream.on('data', ...) without removing the previous one. Since getLiveAudioStream() returns a singleton, repeated recording sessions accumulate multiple listeners, causing duplicate callbacks and memory leaks.

Store the subscription returned from .on() and call .remove() to unsubscribe in stopAndroidRecording():

🔧 Suggested fix
  export class AudioCaptureManager {
    private state: AudioCaptureState = 'idle';
+   private androidDataSubscription: any = null;
    
    private async startAndroidRecording(): Promise<void> {
      const audioStream = getLiveAudioStream();
      if (!audioStream) {
        throw new Error('LiveAudioStream not available on Android');
      }

      this.androidAudioChunks = [];

      audioStream.init({
        sampleRate: this.config.sampleRate,
        channels: this.config.channels,
        bitsPerSample: this.config.bitsPerSample,
        audioSource: 6, // VOICE_RECOGNITION
        bufferSize: 4096,
      });

-     audioStream.on('data', (data: string) => {
+     this.androidDataSubscription = audioStream.on('data', (data: string) => {
        this.androidAudioChunks.push(data);

        // Calculate audio level from chunk
        const level = this.calculateAudioLevelFromBase64(data);
        this.currentAudioLevel = level;

        if (this.audioLevelCallback) {
          this.audioLevelCallback(level);
        }

        // Convert to ArrayBuffer and forward to callback
        if (this.audioDataCallback) {
          const buffer = this.base64ToArrayBuffer(data);
          this.audioDataCallback(buffer);
        }
      });

      audioStream.start();
      logger.info('Android recording started');
    }

    private async stopAndroidRecording(): Promise<string> {
      if (this.androidDataSubscription) {
        this.androidDataSubscription.remove();
        this.androidDataSubscription = null;
      }

      const audioStream = getLiveAudioStream();
      if (audioStream) {
        audioStream.stop();
      }

      // Create WAV file from chunks
      const path = await this.createWavFileFromChunks();
      this.androidAudioChunks = [];
      return path;
    }
📝 Committable suggestion


Suggested change

  private async startAndroidRecording(): Promise<void> {
    const audioStream = getLiveAudioStream();
    if (!audioStream) {
      throw new Error('LiveAudioStream not available on Android');
    }

    this.androidAudioChunks = [];

    audioStream.init({
      sampleRate: this.config.sampleRate,
      channels: this.config.channels,
      bitsPerSample: this.config.bitsPerSample,
      audioSource: 6, // VOICE_RECOGNITION
      bufferSize: 4096,
    });

    this.androidDataSubscription = audioStream.on('data', (data: string) => {
      this.androidAudioChunks.push(data);

      // Calculate audio level from chunk
      const level = this.calculateAudioLevelFromBase64(data);
      this.currentAudioLevel = level;

      if (this.audioLevelCallback) {
        this.audioLevelCallback(level);
      }

      // Convert to ArrayBuffer and forward to callback
      if (this.audioDataCallback) {
        const buffer = this.base64ToArrayBuffer(data);
        this.audioDataCallback(buffer);
      }
    });

    audioStream.start();
    logger.info('Android recording started');
  }

  private async stopAndroidRecording(): Promise<string> {
    if (this.androidDataSubscription) {
      this.androidDataSubscription.remove();
      this.androidDataSubscription = null;
    }

    const audioStream = getLiveAudioStream();
    if (audioStream) {
      audioStream.stop();
    }

    // Create WAV file from chunks
    const path = await this.createWavFileFromChunks();
    this.androidAudioChunks = [];
    return path;
  }
🤖 Prompt for AI Agents
In
`@sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts`
around lines 249 - 297, startAndroidRecording registers an event handler via
audioStream.on('data', ...) on a singleton LiveAudioStream but never removes it,
causing duplicate callbacks/memory leaks; fix by storing the subscription
returned by audioStream.on (e.g., set this.androidDataSubscription =
audioStream.on('data', ...)) inside startAndroidRecording and in
stopAndroidRecording check and call this.androidDataSubscription.remove() (or
unsubscribe equivalent) before clearing androidAudioChunks, then null out the
subscription; update startAndroidRecording and stopAndroidRecording to manage
this.androidDataSubscription lifecycle so repeated sessions don't accumulate
listeners.
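The subscription lifecycle described above can be sketched in isolation. `MiniEmitter` below is a hypothetical stand-in for the singleton LiveAudioStream, not the library itself; only the subscription-returning `on()` shape mirrors the React Native EventEmitter API:

```typescript
// Minimal sketch: store the subscription from on('data') and remove() it on
// stop, so repeated sessions against a singleton stream never double-deliver.
type Listener = (chunk: string) => void;

interface Subscription {
  remove(): void;
}

// Hypothetical stand-in for the native audio stream singleton.
class MiniEmitter {
  private listeners = new Set<Listener>();

  on(_event: 'data', cb: Listener): Subscription {
    this.listeners.add(cb);
    return { remove: () => this.listeners.delete(cb) };
  }

  emit(chunk: string): void {
    for (const cb of this.listeners) cb(chunk);
  }
}

const stream = new MiniEmitter();
const received: string[] = [];

// Session 1: keep the subscription handle...
let sub: Subscription = stream.on('data', (c) => received.push(c));
stream.emit('chunk-1');
// ...and remove it when recording stops.
sub.remove();

// Session 2: a fresh listener; the old one no longer fires.
sub = stream.on('data', (c) => received.push(c));
stream.emit('chunk-2');
sub.remove();

console.log(received); // each chunk delivered exactly once
```

Without the `remove()` calls, the second session would accumulate two listeners and deliver 'chunk-2' twice, which is exactly the duplication the review flags.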

Comment on lines +346 to +380
private async playFileAndroid(filePath: string): Promise<void> {
  const SoundClass = getSound();
  if (!SoundClass) {
    throw new Error('react-native-sound not available');
  }

  return new Promise((resolve, reject) => {
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    this.currentSound = new SoundClass(filePath, '', (error: any) => {
      if (error) {
        this.state = 'error';
        reject(error);
        return;
      }

      this.playbackDuration = this.currentSound.getDuration();
      this.currentSound.setVolume(this.volume);

      this.currentSound.play((success: boolean) => {
        if (this.currentSound) {
          this.currentSound.release();
          this.currentSound = null;
        }

        if (success) {
          this.handlePlaybackComplete();
          resolve();
        } else {
          this.state = 'error';
          reject(new Error('Playback failed'));
        }
      });
    });
  });
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the getSound() function and understand the context
cd sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession && \
grep -r "getSound" --include="*.ts" --include="*.tsx" -B 3 -A 5

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1052


🏁 Script executed:

# Look for the Sound import and see how it's used elsewhere in the file
cd sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession && \
cat -n AudioPlaybackManager.ts | head -100

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 3404


🏁 Script executed:

# Check if react-native-sound is in package.json and what version
find . -name "package.json" -type f -exec grep -l "react-native-sound" {} \;

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 124


🌐 Web query:

react-native-sound library release() method documentation memory leak error handling

💡 Result:

Summary — react-native-sound release() and avoiding memory leaks

Key facts

  • The library exposes release() to free the native audio player for a Sound instance. Call release() when you no longer need that Sound instance (e.g., after stop() or when the component unmounts). [1]
  • Initialization and playback provide callbacks you should use to detect errors and completion (constructor onError, play(onEnd)). Always check those and call release() in error/finish paths. [1]

Practical recommendations (minimal checklist)

  • Reuse instances when possible for repeated short sounds; avoid creating a new Sound for every event. If you must create many, call stop() then release() as soon as playback is finished. [1][2]
  • In React components: release Sound in cleanup — componentWillUnmount or a useEffect return cleanup. Example: create sound → play → onEnd or onError → sound.release(); also release in unmount if still present. This prevents "setState on unmounted component" warnings and native resource leaks. [1]
  • Handle errors: in the Sound constructor callback check error and call release() (or avoid creating the instance). In play(onEnd) check success flag and release on failure or after success if you won’t reuse the instance. [1]
  • Android notes: older Android issues (premature MediaPlayer collection / crashes) have been addressed in releases — keep react-native-sound up to date and set appropriate category/stream if needed. If you see crashes after many plays, consider reusing instances or switching to a pool-based approach. [2][3]
  • If you need very high-frequency short sounds on Android, consider a SoundPool-based approach (or a different library) because creating many MediaPlayer instances is heavy. If you still see leaks, profile native memory with Android Studio/Xcode to confirm where allocations persist. [2][3]

References

  • react-native-sound API notes (release(), callbacks, usage). [1]
  • react-native-sound GitHub — releases and fixes (MediaPlayer/premature collection fix). [2]
  • Reports / StackOverflow threads about crashes when playing many files and recommended release/unload patterns. [3]

If you want, I can: (a) show a small React hook example that safely loads/plays/releases a Sound instance, or (b) search for a specific issue/PR if you have a memory leak trace or stack dump.


Release the Sound instance on load failure.

The constructor callback error path does not release the allocated Sound instance, causing a memory leak on repeated failures. The play() callback properly releases in both success and error paths (lines 366–371), but the same cleanup is missing in the constructor error handler.

Suggested fix
       this.currentSound = new SoundClass(filePath, '', (error: any) => {
         if (error) {
           this.state = 'error';
+          this.currentSound.release();
+          this.currentSound = null;
           reject(error);
           return;
         }
🤖 Prompt for AI Agents
In
`@sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts`
around lines 346-380: in playFileAndroid, the constructor callback error path
currently rejects without releasing the allocated Sound instance, leaking
resources; update the constructor error handler inside playFileAndroid (where
this.currentSound is assigned via new SoundClass(...)) to call
this.currentSound.release() and set this.currentSound = null before setting
this.state = 'error' and rejecting, mirroring the cleanup performed in the play
callback so the Sound instance is always released on load failure.
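The release-on-every-path rule from the query result can be sketched with a stub. `FakeSound` and its `triggerLoad` test hook are hypothetical stand-ins, not react-native-sound's actual API; only the constructor-callback / play-callback / release shape mirrors the library:

```typescript
// Sketch: release() must run on the load-failure path, not just after play().
type LoadCallback = (error: Error | null) => void;

// Hypothetical stand-in for react-native-sound's Sound class.
class FakeSound {
  released = false;
  private onLoad: LoadCallback;

  constructor(_path: string, _basePath: string, onLoad: LoadCallback) {
    this.onLoad = onLoad;
  }

  // Test hook standing in for the async native load completing.
  triggerLoad(error: Error | null): void {
    this.onLoad(error);
  }

  play(onEnd: (success: boolean) => void): void {
    onEnd(true);
  }

  release(): void {
    this.released = true;
  }
}

function playFile(path: string, onError: (e: Error) => void): FakeSound {
  const sound = new FakeSound(path, '', (error) => {
    if (error) {
      sound.release(); // cleanup on load failure too
      onError(error);
      return;
    }
    sound.play(() => sound.release()); // cleanup after playback
  });
  return sound;
}

const errors: Error[] = [];

const ok = playFile('a.wav', (e) => errors.push(e));
ok.triggerLoad(null); // successful load -> play -> release
console.log(ok.released); // true

const bad = playFile('missing.wav', (e) => errors.push(e));
bad.triggerLoad(new Error('load failed')); // failed load -> release
console.log(bad.released); // true
```

Dropping the `sound.release()` call in the error branch is the leak the review describes: `bad` would stay unreleased on every failed load.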

Comment on lines +205 to +233
const fileName = `recording_${Date.now()}.wav`;
const filePath = `${fs.CachesDirectoryPath}/${fileName}`;
currentRecordPath = filePath;
audioChunks = [];
progressCallback = callbacks?.onProgress ?? null;

audioStream.init({
  sampleRate: AUDIO_SAMPLE_RATE,
  channels: 1,
  bitsPerSample: 16,
  audioSource: 6, // VOICE_RECOGNITION
  bufferSize: 4096,
});

audioStream.on('data', (data: string) => {
  audioChunks.push(data);

  if (progressCallback) {
    const elapsed = Date.now() - recordingStartTime;
    const audioLevel = calculateAudioLevelFromBase64(data);
    progressCallback(elapsed, audioLevel);
  }
});

audioStream.start();
isRecording = true;
recordingStartTime = Date.now();

logger.info(`Android recording started: ${filePath}`);

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd "RunAnywhere.*Audio" -t f

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 159


🏁 Script executed:

find . -type f -name "*RunAnywhere*" | grep -i audio

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 161


🏁 Script executed:

ls -la sdk/runanywhere-react-native/packages/core/src/Public/Extensions/ 2>/dev/null || find . -type d -name Extensions 2>/dev/null | head -5

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1059


🏁 Script executed:

cat -n ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts | sed -n '200,235p'

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1278


🏁 Script executed:

sed -n '205,233p' ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 869


🏁 Script executed:

wc -l ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 165


🏁 Script executed:

awk 'NR>=205 && NR<=233 {print NR": "$0}' ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1014


🏁 Script executed:

sed -n '180,250p' ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 2051


🏁 Script executed:

python3 << 'EOF'
with open('./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts', 'r') as f:
    lines = f.readlines()
    # Print lines 190-250 for context
    for i in range(189, min(250, len(lines))):
        print(f"{i+1}: {lines[i]}", end='')
EOF

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 160


🏁 Script executed:

python3 << 'EOF'
with open('./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts', 'r') as f:
    content = f.read()
    lines = content.split('\n')
    
    # Find the Android audio recording section
    for i, line in enumerate(lines):
        if 'Platform.OS === \'android\'' in line or 'audioStream.init' in line:
            start = max(0, i - 10)
            end = min(len(lines), i + 40)
            for j in range(start, end):
                print(f"{j+1}: {lines[j]}")
            print("\n--- Next section ---\n")
            break
EOF

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 160


🏁 Script executed:

node << 'EOF'
const fs = require('fs');
const content = fs.readFileSync('./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts', 'utf-8');
const lines = content.split('\n');
for (let i = 204; i < 233 && i < lines.length; i++) {
  console.log(`${i+1}: ${lines[i]}`);
}
EOF

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1014


🏁 Script executed:

awk 'NR==205,NR==233 {print NR": "$0}' ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1014


🏁 Script executed:

head -233 ./sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts | tail -29 | awk '{printf NR+204": %s\n", $0}'

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1014


Initialize recordingStartTime before registering the stream data event handler.

The on('data') handler can fire immediately after start() is called. With recordingStartTime set after start(), the first elapsed value will be inaccurate because the handler uses a stale or uninitialized recordingStartTime value. Move the timestamp assignment before the handler registration.

Proposed fix
-  audioStream.on('data', (data: string) => {
+  recordingStartTime = Date.now();
+  audioStream.on('data', (data: string) => {
     audioChunks.push(data);

     if (progressCallback) {
       const elapsed = Date.now() - recordingStartTime;
       const audioLevel = calculateAudioLevelFromBase64(data);
       progressCallback(elapsed, audioLevel);
     }
   });

   audioStream.start();
   isRecording = true;
-  recordingStartTime = Date.now();
📝 Committable suggestion


Suggested change

  const fileName = `recording_${Date.now()}.wav`;
  const filePath = `${fs.CachesDirectoryPath}/${fileName}`;
  currentRecordPath = filePath;
  audioChunks = [];
  progressCallback = callbacks?.onProgress ?? null;

  audioStream.init({
    sampleRate: AUDIO_SAMPLE_RATE,
    channels: 1,
    bitsPerSample: 16,
    audioSource: 6, // VOICE_RECOGNITION
    bufferSize: 4096,
  });

  recordingStartTime = Date.now();
  audioStream.on('data', (data: string) => {
    audioChunks.push(data);

    if (progressCallback) {
      const elapsed = Date.now() - recordingStartTime;
      const audioLevel = calculateAudioLevelFromBase64(data);
      progressCallback(elapsed, audioLevel);
    }
  });

  audioStream.start();
  isRecording = true;

  logger.info(`Android recording started: ${filePath}`);
🤖 Prompt for AI Agents
In
`@sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts`
around lines 205-233: the on('data') handler for audioStream uses
recordingStartTime to compute elapsed time but recordingStartTime is assigned
only after audioStream.start(), causing the first elapsed value to be wrong;
move the assignment of recordingStartTime to before registering the
audioStream.on('data') handler (i.e., set recordingStartTime = Date.now()
immediately after configuring audioStream and before calling
audioStream.on('data')), then proceed to start the stream with
audioStream.start() and set isRecording = true as before.
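The ordering bug above generalizes: any state a listener closes over must be initialized before the stream can emit. A minimal, self-contained sketch (a synthetic emitter stands in for LiveAudioStream; nothing here is the SDK's actual API):

```typescript
// Sketch of the ordering fix: the 'data' handler reads recordingStartTime,
// so it must be assigned before the stream can start emitting.
type DataHandler = (chunk: string) => void;

class FakeStream {
  private handlers: DataHandler[] = [];
  on(_event: 'data', handler: DataHandler): void {
    this.handlers.push(handler);
  }
  start(): void {
    // A real audio stream may emit as soon as start() returns.
    this.handlers.forEach((h) => h('AAAA'));
  }
}

let recordingStartTime = 0;
const elapsedValues: number[] = [];

const stream = new FakeStream();
recordingStartTime = Date.now(); // set BEFORE registering the handler
stream.on('data', () => {
  elapsedValues.push(Date.now() - recordingStartTime);
});
stream.start();
// With the assignment first, the first elapsed value is small and non-negative;
// assigning only after start() would compute elapsed against 0 (the epoch).
```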

Comment on lines +263 to +315
// Android: Stop LiveAudioStream and create WAV file
const audioStream = getLiveAudioStream();
if (audioStream) {
audioStream.stop();
}
isRecording = false;

const uri = currentRecordPath || '';
logger.info(`Android recording stopped, processing ${audioChunks.length} chunks`);

// Create WAV file from chunks
await createWavFileFromChunks(uri, audioChunks);

// Clean up
audioChunks = [];
currentRecordPath = null;
progressCallback = null;

return { uri, durationMs };
}

/**
* Cancel recording without saving
*/
export async function cancelRecording(): Promise<void> {
if (audioLevelInterval) {
clearInterval(audioLevelInterval);
audioLevelInterval = null;
}

if (!isRecording) return;

if (Platform.OS === 'ios' && NativeAudioModule) {
try {
await NativeAudioModule.cancelRecording();
} catch {
// Ignore
}
} else {
const audioStream = getLiveAudioStream();
if (audioStream) {
audioStream.stop();
}

// Delete partial file
if (currentRecordPath) {
const fs = getRNFS();
try {
await fs?.unlink(currentRecordPath);
} catch {
// File may not exist
}
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts | head -300

Repository: RunanywhereAI/runanywhere-sdks



🏁 Script executed:

# Find the startRecording function to see if listeners are added
rg -n "startRecording" sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts -A 50



🏁 Script executed:

# Check for listener registration patterns
rg -n "\.on\(" sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts -B 2 -A 2



🏁 Script executed:

# Check package.json for react-native-live-audio-stream version
rg -n "react-native-live-audio-stream" sdk/runanywhere-react-native/ --type json



🏁 Script executed:

# Find package.json files
find . -name "package.json" -type f | head -10



🏁 Script executed:

# Search for react-native-live-audio-stream in dependencies
find . -name "package.json" -exec grep -l "react-native-live-audio-stream" {} \;



🏁 Script executed:

# Check if there are any removeListener or off patterns used elsewhere
rg -n "removeListener|\.off\(" sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts



🏁 Script executed:

# Look for removeAllListeners usage in the codebase
rg -n "removeAllListeners" sdk/runanywhere-react-native/ --type ts



🏁 Script executed:

cat ./examples/react-native/RunAnywhereAI/package.json | grep -A 2 -B 2 "react-native-live-audio-stream"



🏁 Script executed:

# Check if there's any documentation or import statements for EventEmitter patterns
rg -n "EventEmitter\|removeListener\|\.off\(" sdk/runanywhere-react-native/packages/core/src/Public/Extensions/ --type ts



🏁 Script executed:

# Look at the full cancelRecording to see complete context
sed -n '287,320p' sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts



🌐 Web query:

react-native-live-audio-stream 1.1.1 removeListener API EventEmitter

💡 Result:

Short answer: react-native-live-audio-stream v1.1.1 uses React Native's event API (LiveAudioStream.on / NativeEventEmitter). EventEmitter.removeListener was deprecated — instead keep the subscription returned by addListener()/on and call .remove() on it (or call removeAllListeners). See sources.

What to change (example)

  • Old (deprecated)
    LiveAudioStream.on('data', onData);
    ...
    LiveAudioStream.removeListener('data', onData); // deprecated / warning
  • New (recommended)
    const sub = LiveAudioStream.on('data', onData); // subscription returned
    ...
    sub.remove(); // remove this listener

If the library doesn't return the subscription from LiveAudioStream.on, store the EmitterSubscription returned by NativeEventEmitter.addListener and call .remove() on it, or call LiveAudioStream.removeAllListeners('data') as a fallback.

Sources: react-native-live-audio-stream README (usage example, v1.1.1). [1] React Native changelog/docs: NativeEventEmitter no longer implements removeListener — use subscription.remove(). [2]

References
[1] react-native-live-audio-stream (GitHub / README, repo / npm listing v1.1.1).
[2] React Native changelog / docs noting deprecation of EventEmitter.removeListener and recommendation to use subscription.remove().


🌐 Web query:

react-native-live-audio-stream removeAllListeners method documentation

💡 Result:

Short answer: The react-native-live-audio-stream package does not document its own removeAllListeners API. It exposes events via the React Native native-module/event-emitter pattern (see repo README), so you should remove listeners using the standard React Native EventEmitter/subscription APIs (e.g. call subscription.remove() or EventEmitter.removeAllListeners(eventName) from React Native) or stop/destroy the stream to stop events. [1][2]

Sources:

  • react-native-live-audio-stream (README / API overview). [1]
  • React Native EventEmitter typings (removeAllListeners(eventType?: string) signature). [2]

Remove listeners when stopping/canceling recording to prevent duplicate chunks and memory leaks.

The startRecording() function adds a data listener at line 219 (audioStream.on('data', ...)). Without removing it, subsequent recordings accumulate listeners on the same stream instance, causing duplicate data chunks and memory leaks.

Apply the fix to both stopRecording() (line 266) and cancelRecording() (line 306):

🛠️ Proposed fix
  if (audioStream) {
    audioStream.stop();
+   audioStream.removeAllListeners?.('data');
  }

For react-native-live-audio-stream v1.1.1, removeAllListeners('data') is the correct approach to remove all listeners for the 'data' event.

🤖 Prompt for AI Agents
In
`sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+Audio.ts`
around lines 263-315, startRecording adds a 'data' listener to the live stream
via audioStream.on('data', ...) and you must remove those listeners when ending
recording to avoid duplicated chunks/memory leaks; in both stopRecording() and
cancelRecording() locate the live stream via getLiveAudioStream() and, before
(or immediately after) calling audioStream.stop(), call
audioStream.removeAllListeners('data') (react-native-live-audio-stream v1.1.1)
to remove the data listeners, then proceed with stopping/cleanup
(currentRecordPath unlink, clearing intervals, resetting
audioChunks/progressCallback/isRecording).
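The accumulation failure mode is easy to reproduce with any singleton emitter. A hedged sketch (a hand-rolled emitter stands in for the getLiveAudioStream() singleton; in the real library, removeAllListeners comes from React Native's event-emitter layer):

```typescript
// Stand-in for the LiveAudioStream singleton: listeners accumulate across
// start/stop cycles unless they are explicitly removed.
class SingletonStream {
  private listeners: Array<() => void> = [];
  on(_event: 'data', fn: () => void): void {
    this.listeners.push(fn);
  }
  removeAllListeners(_event: 'data'): void {
    this.listeners = [];
  }
  emit(_event: 'data'): void {
    this.listeners.slice().forEach((fn) => fn());
  }
}

const stream = new SingletonStream();
let chunksReceived = 0;

function startRecording(): void {
  stream.on('data', () => {
    chunksReceived += 1;
  });
}

function stopRecordingLeaky(): void {
  // Forgets to remove the 'data' listener -- the bug under review.
}

function stopRecordingFixed(): void {
  stream.removeAllListeners('data');
}

// Two leaky sessions: one emit now hits two listeners (duplicate chunks).
startRecording();
stopRecordingLeaky();
startRecording();
stream.emit('data');
const leakyCount = chunksReceived;

// Reset and repeat with the fix: each emit hits exactly one listener.
stream.removeAllListeners('data');
chunksReceived = 0;
startRecording();
stopRecordingFixed();
startRecording();
stream.emit('data');
const fixedCount = chunksReceived;
```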

Fix "Cannot read property 'publish' of undefined" error caused by
circular dependency between VoiceSession modules and EventBus.

Changes:
- AudioCaptureManager.ts: lazy-load EventBus via safePublish helper
- AudioPlaybackManager.ts: lazy-load EventBus via safePublish helper
- VoiceSessionHandle.ts: lazy-load EventBus via safePublish helper

The circular dependency chain was:
VoiceSession modules -> EventBus -> SDKLogger -> ... -> VoiceSession

By lazy-loading EventBus at runtime instead of import time, we break
the circular dependency and ensure EventBus is fully initialized
before being accessed.
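The lazy-load pattern the commit describes can be sketched generically. This is a hypothetical safePublish, not the SDK's actual helper (in the real files the factory would be a require('.../EventBus') inside the function body); the point is that the dependency is resolved on first use, not at module-evaluation time:

```typescript
// Hypothetical sketch: resolve the event bus at call time, not import time,
// so a partially initialized module graph cannot yield
// "Cannot read property 'publish' of undefined".
interface EventBus {
  publish(topic: string, payload: unknown): void;
}

let cachedBus: EventBus | null = null;
let busFactory: (() => EventBus) | null = null; // stands in for the module loader

function registerBusFactory(factory: () => EventBus): void {
  busFactory = factory;
}

const published: Array<[string, unknown]> = [];

function safePublish(topic: string, payload: unknown): void {
  if (!cachedBus) {
    // Lazy resolution: if the bus is not available yet, drop the event
    // instead of crashing.
    if (!busFactory) return;
    cachedBus = busFactory();
  }
  cachedBus.publish(topic, payload);
}

// At "import time" the bus does not exist yet -- this call is a safe no-op.
safePublish('Voice', { type: 'early' });

// Later, once initialization completes, the bus becomes available.
registerBusFactory(() => ({
  publish: (topic, payload) => {
    published.push([topic, payload]);
  },
}));
safePublish('Voice', { type: 'late' });
```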
@ellipsis-dev ellipsis-dev bot left a comment

Important

Looks good to me! 👍

Reviewed 074a491 in 2 minutes and 2 seconds. Click for details.
  • Reviewed 249 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts:35
  • Draft comment:
    Duplicate event publishing logic: the safePublish (with lazy loading of EventBus) is defined here. Consider moving this common logic to a shared utility module to avoid code duplication.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The comment is suggesting a refactor to move code to a shared utility. However, I can only see this code in one location in the current file. The comment says "duplicate event publishing logic" but doesn't show where else this duplication exists. If the duplication is in other files, this violates the rule "Ignore cross-file issues. Only think about the file you are reviewing." Without seeing evidence of duplication within this file itself, this appears to be either a cross-file concern or an incorrect observation. The diff shows this is new code being added, not code that already exists elsewhere in this file. Perhaps the reviewer has knowledge of other files in the codebase that have similar lazy-loading patterns for EventBus, making this a legitimate duplication concern. The comment might be valid if this pattern exists elsewhere in the codebase. Even if there is duplication in other files, the rules explicitly state to "Ignore cross-file issues. Only think about the file you are reviewing." Without evidence of duplication within this specific file, and given that this is new code being added, the comment requires cross-file context to be valid and should be deleted. This comment should be deleted because it appears to be a cross-file concern. There's no evidence of duplication within this file itself, and the rules explicitly state to ignore cross-file issues and only focus on the file being reviewed.
2. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts:19
  • Draft comment:
    Duplicate safePublish function detected. Centralize the event bus handling logic into a shared module to reduce redundancy and ease maintenance.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The comment suggests there's a duplicate, but I only see one definition of safePublish in this file (lines 19-29). The diff shows this is a new addition to handle circular dependency issues. To verify if this is truly a duplicate, I would need to see other files in the codebase, which violates the "ignore cross-file issues" rule. The comment is asking to refactor this into a shared module, but without evidence that this pattern exists elsewhere in THIS file or THIS diff, I cannot confirm the duplication. The comment appears to be speculative about code organization across multiple files, which I cannot verify from this diff alone. Could the automated tool have detected this same pattern in another file that was changed in the same PR? Maybe there's context from other files in the PR that shows duplication. However, I'm instructed to only consider the current file and ignore cross-file issues. Even if there is duplication across files, the rules explicitly state to "ignore cross-file issues" and "only think about the file you are reviewing." Without seeing duplication within THIS file or THIS diff, I cannot verify the comment's claim. The comment requires knowledge of other files to validate. The comment claims duplication but provides no evidence within this file or diff. There is only one safePublish function definition visible in this file. Verifying this comment would require examining other files, which violates the cross-file rule. Without strong evidence in the current diff, this comment should be deleted.
3. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts:495
  • Draft comment:
    The arrayBufferToBase64 utility is duplicated here (similar to one in AudioCaptureManager). Consider extracting these conversion utilities to a shared utility module.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts:53
  • Draft comment:
    The safePublish function (with lazy EventBus loading) is duplicated here as well. Consider consolidating this functionality into a common shared module to avoid redundancy.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The comment is suggesting a refactoring to avoid code duplication. However, this is a cross-file issue - the comment implies that similar code exists in other files ("duplicated here as well"), but I can only see this one file in the diff. According to the rules, I should "Ignore cross-file issues. Only think about the file you are reviewing." Additionally, the comment is about code quality/refactoring, which can be good, but only if actionable and clear. Since I cannot see the other files where this is supposedly duplicated, I cannot verify if this is actually duplicated or if consolidation makes sense. This requires context from other files to properly evaluate. The comment might be valid if there truly is duplication across multiple files. Without seeing the other files, I cannot definitively say this is wrong. The suggestion to consolidate could be a reasonable code quality improvement if the duplication exists. While the comment might be technically correct about duplication existing elsewhere, the rules explicitly state to ignore cross-file issues and only focus on the file being reviewed. Since I cannot see evidence of duplication within this diff alone, and understanding whether this refactoring is appropriate requires seeing other files, this comment should be deleted per the guidelines. This comment should be deleted because it's a cross-file issue that requires context from other files to evaluate. The rules explicitly state to ignore cross-file issues and only review the file at hand.

Workflow ID: wflow_38sHE0kpiSWGSrrR

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts (1)

143-176: Temporary WAV files are never cleaned up.

The play() method creates WAV files in the cache directory (line 476: ${RNFS.CachesDirectoryPath}/${fileName}) but these files are never deleted after playback completes. Over time, this can accumulate significant disk usage, especially in apps with frequent TTS usage.

Consider deleting the temp file in handlePlaybackComplete() or after playback ends in the platform-specific methods.

Suggested approach

Track the current file path and clean it up after playback:

 export class AudioPlaybackManager {
   private state: PlaybackState = 'idle';
   // ...
+  private currentFilePath: string | null = null;
   
   // In playFile():
   async playFile(filePath: string): Promise<void> {
     this.playbackStartTime = Date.now();
     this.state = 'playing';
+    this.currentFilePath = filePath;
     // ...
   }
   
   // In handlePlaybackComplete() or stop():
   private handlePlaybackComplete(): void {
+    this.cleanupTempFile();
     // ...
   }
   
+  private async cleanupTempFile(): Promise<void> {
+    if (this.currentFilePath?.includes('tts_')) {
+      try {
+        const RNFS = require('react-native-fs');
+        await RNFS.unlink(this.currentFilePath);
+      } catch { /* ignore */ }
+      this.currentFilePath = null;
+    }
+  }
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts (1)

387-394: Redundant audioCapture.cleanup() call.

When cleanup() is called, it invokes stop() (line 388) which already calls this.audioCapture.cleanup() (line 291). Then cleanup() calls this.audioCapture.cleanup() again on line 389, resulting in a double cleanup.

🔧 Suggested fix
 cleanup(): void {
   this.stop();
-  this.audioCapture.cleanup();
   this.audioPlayback.cleanup();
   this.eventListeners = [];
   this.eventCallback = null;
   logger.info('VoiceSessionHandle cleaned up');
 }
🤖 Fix all issues with AI agents
In
`sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts`:
- Around line 334-338: The code uses atob/btoa directly in AudioCaptureManager
(e.g., the loop over this.androidAudioChunks and three other spots) which can
break on RN engines without those globals; add a small cross-platform base64
utility (e.g., decodeBase64/encodeBase64) that first tries global atob/btoa,
then falls back to a polyfill or native package like react-native-quick-base64
or Buffer-based implementation, export it from a shared util and replace all
direct atob/btoa calls in AudioCaptureManager (and the three other usages) with
decodeBase64/encodeBase64 so decoding/encoding works consistently across
Hermes/JSC.

In
`sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts`:
- Around line 423-427: The code uses atob in AudioPlaybackManager (the
binaryString = atob(audioBase64) call) which may be undefined on some Hermes/JSC
bundles; add a runtime compatibility check and fallback: before calling atob (in
the method that decodes audioBase64) verify typeof atob !== 'undefined' and if
undefined either throw a clear error or use a polyfill (e.g., import decode from
'base-64' and assign global.atob = decode at startup), ensuring the bytes
conversion logic (the Uint8Array creation and charCodeAt loop) only runs after a
valid decoder is available.
♻️ Duplicate comments (6)
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts (1)

369-374: Memory leak on Sound constructor failure.

The constructor error path rejects without releasing the allocated Sound instance. When the callback receives an error, this.currentSound has already been assigned but is never cleaned up before rejection.

Fix per previous review
       this.currentSound = new SoundClass(filePath, '', (error: any) => {
         if (error) {
           this.state = 'error';
+          if (this.currentSound) {
+            this.currentSound.release();
+            this.currentSound = null;
+          }
           reject(error);
           return;
         }
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts (4)

358-365: Consider defining RNFS as a peer dependency with proper import.

Dynamic require disables static analysis and tree shaking.


151-165: iOS requestPermission() returns true without validating actual microphone permission.

This creates an API inconsistency with Android and misleads callers into believing permission is granted when it may not be. The actual permission denial is only discovered when startRecording() fails.


191-195: Wrap platform recording start in try/catch to prevent stuck recording state.

If startIOSRecording() or startAndroidRecording() throws, the manager remains in a recording state with no way to recover. This was flagged in a previous review.


295-311: Store and remove the 'data' listener subscription to prevent callback duplication and memory leaks.

Each call to startAndroidRecording() registers a new listener without removing the previous one. Since getLiveAudioStream() returns a singleton, repeated recording sessions accumulate multiple listeners.

sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts (1)

484-488: 50ms VAD monitoring interval may cause performance issues on lower-end devices.

This was noted in a previous review. Consider making the interval configurable or using a longer default (e.g., 100ms) if audio level calculations prove CPU-intensive.
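One way to act on this suggestion is a config field with a conservative default and a lower clamp. The names below (VoiceSessionConfigSketch, vadIntervalMs) are illustrative, not the SDK's actual API:

```typescript
// Illustrative only: a configurable VAD monitoring interval with a 100 ms
// default, clamped so callers cannot go below the old hardcoded 50 ms.
interface VoiceSessionConfigSketch {
  vadIntervalMs?: number; // hypothetical field name
}

const DEFAULT_VAD_INTERVAL_MS = 100;
const MIN_VAD_INTERVAL_MS = 50;

function resolveVadInterval(config: VoiceSessionConfigSketch): number {
  const requested = config.vadIntervalMs ?? DEFAULT_VAD_INTERVAL_MS;
  return Math.max(MIN_VAD_INTERVAL_MS, requested);
}
```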

🧹 Nitpick comments (6)
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts (2)

333-352: Polling interval lacks a timeout safeguard.

The 100ms polling loop (lines 333-352) will run indefinitely if NativeAudioModule.getPlaybackStatus() consistently reports isPlaying: true due to a bug or stuck state. Consider adding a maximum duration check based on playbackDuration to prevent the interval from running forever.

Suggested safeguard
+          const maxDuration = (this.playbackDuration ?? 300) + 5; // 5 second buffer
+          const startTime = Date.now();
           const checkInterval = setInterval(async () => {
             if (this.state !== 'playing') {
               clearInterval(checkInterval);
               resolve();
               return;
             }
+            
+            // Safety timeout
+            if ((Date.now() - startTime) / 1000 > maxDuration) {
+              clearInterval(checkInterval);
+              this.handlePlaybackComplete();
+              resolve();
+              return;
+            }

             try {
               const status = await NativeAudioModule.getPlaybackStatus();

261-267: Volume changes don't apply to iOS playback.

setVolume() only updates this.currentSound which is Android-specific. Volume changes during iOS playback won't take effect. If NativeAudioModule supports volume control, consider adding the iOS path for consistency.

   setVolume(volume: number): void {
     this.volume = Math.max(0, Math.min(1, volume));
-    if (this.currentSound) {
+    if (Platform.OS === 'ios' && NativeAudioModule?.setVolume) {
+      NativeAudioModule.setVolume(this.volume).catch(() => {});
+    } else if (this.currentSound) {
       this.currentSound.setVolume(this.volume);
     }
     logger.debug(`Volume set to ${this.volume}`);
   }
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts (1)

287-293: Consider extracting magic number to named constant.

audioSource: 6 would be clearer as a named constant like AUDIO_SOURCE_VOICE_RECOGNITION = 6.

♻️ Suggested refactor
+// Android AudioSource constants
+const AUDIO_SOURCE_VOICE_RECOGNITION = 6;
+
 private async startAndroidRecording(): Promise<void> {
   // ...
   audioStream.init({
     sampleRate: this.config.sampleRate,
     channels: this.config.channels,
     bitsPerSample: this.config.bitsPerSample,
-    audioSource: 6, // VOICE_RECOGNITION
+    audioSource: AUDIO_SOURCE_VOICE_RECOGNITION,
     bufferSize: 4096,
   });
sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/VoiceSessionHandle.ts (3)

31-67: Imports are placed after function definitions, which is unconventional.

Lines 63-66 import modules after the safePublish function definition (lines 53-62). While this works due to hoisting, it deviates from standard practice and may cause confusion. Consider moving all imports to the top of the file.

♻️ Suggested organization
 import { SDKLogger } from '../../Foundation/Logging/Logger/SDKLogger';
 import { AudioCaptureManager } from './AudioCaptureManager';
+import { AudioPlaybackManager } from './AudioPlaybackManager';
+import * as STT from '../../Public/Extensions/RunAnywhere+STT';
+import * as TextGeneration from '../../Public/Extensions/RunAnywhere+TextGeneration';
+import * as TTS from '../../Public/Extensions/RunAnywhere+TTS';

 // Lazy-load EventBus to avoid circular dependency issues...
 // ... safePublish function ...

-import { AudioPlaybackManager } from './AudioPlaybackManager';
-import * as STT from '../../Public/Extensions/RunAnywhere+STT';
-import * as TextGeneration from '../../Public/Extensions/RunAnywhere+TextGeneration';
-import * as TTS from '../../Public/Extensions/RunAnywhere+TTS';

420-459: Consider simplifying the switch statement with a direct mapping.

The current approach is explicit but verbose. Since most cases just forward the event data with a prefixed type, this could be simplified.

♻️ Alternative approach
// Simpler approach - publish directly with type prefix
safePublish('Voice', { ...event, type: `voiceSession_${event.type}` });

However, the current explicit approach provides better control over exactly which fields are published for each event type, which may be intentional for the public API contract.


566-573: Consider making LLM generation parameters configurable.

maxTokens: 500 and temperature: 0.7 are hardcoded. For flexibility, consider adding these to VoiceSessionConfig.

Comment on lines +334 to +338
for (const chunk of this.androidAudioChunks) {
const decoded = Uint8Array.from(atob(chunk), c => c.charCodeAt(0));
decodedChunks.push(decoded);
totalLength += decoded.length;
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Check the file exists and read the relevant sections
head -n 450 sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts | tail -n 120



🏁 Script executed:

# Search for atob/btoa usage in the entire file
rg -n "atob|btoa" sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts



🏁 Script executed:

# Check for any base64 imports or utility functions
rg -n "import.*base64|Buffer|decode|encode" sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts -A 2



🌐 Web query:

React Native support for atob and btoa native globals

💡 Result:

Short answer: Recent Hermes releases ship atob/btoa in the JS runtime, but plain React Native does not guarantee them across engines — so you should detect and polyfill when needed. Use a small runtime check and either a JS polyfill (base-64, js-base64) or a native/JSI implementation (react-native-quick-base64) for best performance/consistency. [1][2]

Minimal examples

  • JS polyfill (works everywhere):
    import { encode as btoaPoly, decode as atobPoly } from 'base-64';
    if (!global.atob) global.atob = atobPoly;
    if (!global.btoa) global.btoa = btoaPoly; [1]

  • Native/JSI (faster, optional; installs globals via shim()):
    import 'react-native-quick-base64'; // or import { shim } from 'react-native-quick-base64'; shim();
    // btoa/atob available globally on older engines; package notes they’re deprecated if Hermes provides them natively. [2]

If you want, tell me your RN version and JS engine (Hermes/JSC) and I’ll recommend the exact polyfill.


atob() and btoa() are not guaranteed to be available in React Native without a polyfill.

React Native does not provide these globals uniformly across JavaScript engines. While Hermes (newer) includes them, React Native with JSC or older configurations will throw ReferenceError: atob is not defined at runtime. The code uses these functions in 4 locations (lines 335, 411, 430, 444) without any polyfill.

Add a base64 polyfill to ensure cross-engine compatibility:

🔧 Suggested fix
+// At top of file
+import { encode as base64Encode, decode as base64Decode } from 'base-64';
+
+// Ensure globals are available
+if (!global.atob) global.atob = base64Decode;
+if (!global.btoa) global.btoa = base64Encode;

Alternatively, use react-native-quick-base64 for native performance:

+import 'react-native-quick-base64'; // Automatically shims atob/btoa
🤖 Prompt for AI Agents
In
`sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioCaptureManager.ts`
around lines 334-338, the code uses atob/btoa directly in AudioCaptureManager
(e.g., the loop over this.androidAudioChunks and three other spots) which can
break on RN engines without those globals; add a small cross-platform base64
utility (e.g., decodeBase64/encodeBase64) that first tries global atob/btoa,
then falls back to a polyfill or native package like react-native-quick-base64
or Buffer-based implementation, export it from a shared util and replace all
direct atob/btoa calls in AudioCaptureManager (and the three other usages) with
decodeBase64/encodeBase64 so decoding/encoding works consistently across
Hermes/JSC.
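The shared utility this prompt describes might look like the sketch below: decodeBase64/encodeBase64 prefer the atob/btoa globals when present and fall back to Buffer. The fallback choice is an assumption for illustration; in a React Native bundle you would substitute the 'base-64' package or react-native-quick-base64 for the Buffer branch:

```typescript
// Sketch of a cross-engine base64 utility: try the atob/btoa globals first,
// then fall back to Buffer (Node). In RN, swap the Buffer branch for the
// 'base-64' package or react-native-quick-base64.
function decodeBase64(base64: string): Uint8Array {
  const atobFn: ((s: string) => string) | undefined = (globalThis as any).atob;
  if (atobFn) {
    const binary = atobFn(base64);
    const bytes = new Uint8Array(binary.length);
    for (let i = 0; i < binary.length; i++) {
      bytes[i] = binary.charCodeAt(i);
    }
    return bytes;
  }
  const BufferImpl = (globalThis as any).Buffer;
  if (BufferImpl) {
    return new Uint8Array(BufferImpl.from(base64, 'base64'));
  }
  throw new Error('No base64 decoder available; install a polyfill');
}

function encodeBase64(bytes: Uint8Array): string {
  const btoaFn: ((s: string) => string) | undefined = (globalThis as any).btoa;
  if (btoaFn) {
    let binary = '';
    for (let i = 0; i < bytes.length; i++) {
      binary += String.fromCharCode(bytes[i]);
    }
    return btoaFn(binary);
  }
  const BufferImpl = (globalThis as any).Buffer;
  if (BufferImpl) {
    return BufferImpl.from(bytes).toString('base64');
  }
  throw new Error('No base64 encoder available; install a polyfill');
}
```

With such a utility exported from a shared module, the four direct atob/btoa call sites in AudioCaptureManager collapse into decodeBase64/encodeBase64 calls.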

Comment on lines +423 to +427
const binaryString = atob(audioBase64);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) {
bytes[i] = binaryString.charCodeAt(i);
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts | sed -n '420,430p'



🌐 Web query:

Hermes JavaScript engine atob support availability 2025

💡 Result:

Short answer: Hermes includes native atob/btoa implementations in recent releases, but availability depends on the exact Hermes build bundled with your React Native / Expo SDK — some apps still need a polyfill. See React Native docs (Hermes is the bundled engine), the Hermes GitHub discussion noting atob/btoa work, and package READMEs that note recent Hermes provides btoa/atob. [1][2][3]

If you need a specific check: run !!global.atob in your app (or console.log(typeof global.atob)). If it's undefined, add a polyfill (e.g. base-64 or react-native-quick-base64) until your RN/Hermes version includes it. [1][3]

Sources:

  • React Native docs — Using Hermes. [1]
  • Hermes GitHub discussion: "Non-ECMAScript JS APIs requested in Hermes" (notes atob/btoa implementation work). [2]
  • react-native-quick-base64 README (notes recent Hermes exposes btoa/atob). [3]

🌐 Web query:

React Native atob btoa global functions Hermes polyfill

💡 Result:

Short answer

  • Newer Hermes builds expose atob/btoa natively, so you often don't need a polyfill anymore. (github.com) [1].
  • If you need a polyfill (older Hermes, JSC, or to ensure cross‑platform consistency), install a base64 library and set globals at app startup:

Example (index.js):
import {decode, encode} from 'base-64'
if (!global.btoa) global.btoa = encode
if (!global.atob) global.atob = decode

This pattern is widely used and recommended in React Native threads and docs. (stackoverflow.com) [2][3].

Options

  • base-64 (npm) + global assignment — simple, JS-only. (stackoverflow.com) [2].
  • react-native-quick-base64 — native/JSI implementation; notes that recent Hermes already provides atob/btoa. Use if you want native performance or backward compatibility. (github.com) [1].
  • react-native-polyfill-globals — package that polyfills many globals (including atob/btoa) if you want a single import. (npmjs.com) [4].

Recommendation

  • If your app runs on a recent Hermes, try without a polyfill first. If you see "Can't find variable: atob/btoa" in release builds or on other engines, add the simple base-64 global polyfill (or react-native-quick-base64 for native/JSI). (github.com)

Sources
[1] react-native-quick-base64 GitHub (notes about Hermes). (github.com)
[2] StackOverflow: polyfill pattern using base-64. (stackoverflow.com)
[3] Various posts/threads showing same global polyfill approach (e.g., Axios/React Native discussion). (bleepingcoder.com)
[4] react-native-polyfill-globals (npm). (npmjs.com)

atob support varies across Hermes/React Native versions—add a runtime check for compatibility.

Modern Hermes (bundled with recent React Native versions) supports atob natively. However, older Hermes builds and non-Hermes engines (JSC) may not. To ensure cross-version compatibility, either:

  1. Add a runtime check before use:

    if (typeof atob === 'undefined') {
      // handle error or use polyfill
    }
  2. Or polyfill at app startup using base-64 package:

    import { decode } from 'base-64'
    if (!global.atob) global.atob = decode

This ensures the code works reliably across all supported environments.
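
The runtime check and the byte conversion can be combined in one guarded helper. A minimal TypeScript sketch, assuming the decode path described above; the `base64ToBytes` name and the error message are illustrative, not part of the SDK:

```typescript
// Hypothetical helper: decode a base64 payload into bytes, failing fast
// on engines (older Hermes builds, some JSC bundles) where atob is missing.
function base64ToBytes(audioBase64: string): Uint8Array {
  if (typeof atob === "undefined") {
    // Callers can instead install a polyfill at app startup, e.g.
    // global.atob = require("base-64").decode
    throw new Error("atob is unavailable; add a base64 polyfill at startup");
  }
  const binaryString = atob(audioBase64);
  const bytes = new Uint8Array(binaryString.length);
  for (let i = 0; i < binaryString.length; i++) {
    bytes[i] = binaryString.charCodeAt(i);
  }
  return bytes;
}
```

On a runtime with native atob (recent Hermes, Node 16+), `base64ToBytes("aGk=")` returns the two bytes of "hi" (104, 105); on an engine without atob it throws a clear error instead of crashing with "Can't find variable: atob".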

🤖 Prompt for AI Agents
In
`@sdk/runanywhere-react-native/packages/core/src/Features/VoiceSession/AudioPlaybackManager.ts`
around lines 423 - 427, The code uses atob in AudioPlaybackManager (the
binaryString = atob(audioBase64) call) which may be undefined on some Hermes/JSC
bundles; add a runtime compatibility check and fallback: before calling atob (in
the method that decodes audioBase64) verify typeof atob !== 'undefined' and if
undefined either throw a clear error or use a polyfill (e.g., import decode from
'base-64' and assign global.atob = decode at startup), ensuring the bytes
conversion logic (the Uint8Array creation and charCodeAt loop) only runs after a
valid decoder is available.

Google Play requires all apps targeting Android 15+ to support 16KB
page sizes starting November 2025. This commit ensures all native
libraries are properly aligned.

Changes:
- Add -Wl,-z,max-page-size=16384 linker flag to all CMakeLists.txt
- Upgrade NDK to r28 which builds with 16KB alignment by default
- Add ANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES CMake flag
- Configure APK packaging with useLegacyPackaging=false
- Add android.bundle.pageAlignSharedLibs=true for AAB builds

Affected components:
- runanywhere-commons (core, llamacpp, onnx, whispercpp backends)
- runanywhere-react-native packages (core, llamacpp, onnx)
- Sample app build configuration

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 9bab0da in 1 minute and 20 seconds.
  • Reviewed 269 lines of code in 11 files
  • Skipped 0 files when reviewing.
  • Skipped posting 12 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. examples/react-native/RunAnywhereAI/android/app/build.gradle:37
  • Draft comment:
    Ensure '-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON' is truly needed with the new packaging configuration for Android 15+ compliance.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
2. examples/react-native/RunAnywhereAI/android/app/build.gradle:79
  • Draft comment:
    Verify that setting 'useLegacyPackaging = false' does not affect existing behavior on all target devices.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
3. examples/react-native/RunAnywhereAI/android/build.gradle:7
  • Draft comment:
    NDK version bumped to 28.0.13004108; confirm that all native modules are compatible with this update.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
4. examples/react-native/RunAnywhereAI/android/gradle.properties.example:29
  • Draft comment:
    The flag 'android.bundle.pageAlignSharedLibs=true' is added; ensure it aligns with your Gradle version and build requirements.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
5. sdk/runanywhere-commons/CMakeLists.txt:98
  • Draft comment:
    Linker flag for 16KB page alignment is added; consider defining a shared CMake variable to avoid duplication across modules.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
6. sdk/runanywhere-commons/src/backends/llamacpp/CMakeLists.txt:147
  • Draft comment:
    The linker option '-Wl,--gc-sections -Wl,-z,max-page-size=16384' is applied; consider refactoring this into a common variable.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
7. sdk/runanywhere-commons/src/backends/onnx/CMakeLists.txt:185
  • Draft comment:
    16KB page alignment linker flag is duplicated here; consider centralizing the flag in a shared configuration.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
8. sdk/runanywhere-commons/src/backends/whispercpp/CMakeLists.txt:172
  • Draft comment:
    Linker options for 16KB alignment in the JNI target are set; consider using a common approach to avoid duplicate flags.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
9. sdk/runanywhere-commons/src/jni/CMakeLists.txt:59
  • Draft comment:
    The 16KB page alignment linker flag is added; consider abstracting common linker options to reduce redundancy.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
10. sdk/runanywhere-react-native/packages/core/android/CMakeLists.txt:114
  • Draft comment:
    16KB page alignment is enforced via target_link_options; ensure this configuration remains consistent across all modules.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
11. sdk/runanywhere-react-native/packages/llamacpp/android/CMakeLists.txt:130
  • Draft comment:
    The target_link_options for 16KB alignment are applied; consider centralizing similar settings to reduce duplication.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
12. sdk/runanywhere-react-native/packages/onnx/android/CMakeLists.txt:198
  • Draft comment:
    16KB page alignment linker option is set; verify its consistency with other backend modules and consider refactoring common flags.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None

Workflow ID: wflow_t7wOTfzNHLsSlxDa

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
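
Several of the Ellipsis draft comments above suggest centralizing the repeated 16KB linker flag rather than restating it in each CMakeLists.txt. One hedged way to sketch that is an interface target shared across modules; the `ra_page_align` name and file path are hypothetical, not from the repo:

```cmake
# Hypothetical shared snippet (e.g. cmake/PageAlign.cmake), included once
# from the top-level CMakeLists.txt. target_link_options requires CMake 3.13+.
add_library(ra_page_align INTERFACE)
target_link_options(ra_page_align INTERFACE "-Wl,-z,max-page-size=16384")

# Each shared library then opts in instead of repeating the raw flag:
# target_link_libraries(runanywhere_core PRIVATE ra_page_align)
```

Linking the interface target propagates the flag to every consumer, so a future change (for example a different page size) is made in one place.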


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@examples/react-native/RunAnywhereAI/android/build.gradle`:
- Line 7: The project gradle now pins ndkVersion = "28.0.13004108" which
conflicts with CI that installs NDK 27.0.12077973 (kotlin-sdk-release,
react-native-sdk-release, flutter-sdk-release, commons-release,
backends-release); either update CI to install NDK 28.0.13004108 or change the
ndkVersion back to "27.0.12077973" in android/build.gradle (look for the
ndkVersion property) so the example matches the CI toolchain, and verify whether
Flutter modules that still reference NDK 25.2.9519653 should be aligned as well
(update CI or module pins consistently).
🧹 Nitpick comments (2)
sdk/runanywhere-react-native/packages/core/android/CMakeLists.txt (1)

8-12: Avoid duplicating max-page-size linker flags.

You already apply the flag per-target at Line 114; consider keeping only the per-target option to avoid passing the same flag twice.

♻️ Proposed refinement
-# =============================================================================
-# 16KB Page Alignment for Android 15+ (API 35) Compliance
-# Required starting November 1, 2025 for Google Play submissions
-# =============================================================================
-set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,-z,max-page-size=16384")
+# 16KB page alignment is applied per-target below to avoid duplicate flags
sdk/runanywhere-react-native/packages/onnx/android/CMakeLists.txt (1)

197-199: Consider removing redundant linker flag.

The target_link_options here duplicates the 16KB page alignment already set globally via CMAKE_SHARED_LINKER_FLAGS on line 12. Both apply -Wl,-z,max-page-size=16384 to the same target.

This redundancy is harmless but could be removed for cleaner configuration. If kept intentionally for explicitness (ensuring the flag persists even if the global is later removed), consider adding a brief comment explaining the intent.

♻️ Optional: Remove redundant flag
 # HAS_ONNX is always defined since ONNX backend is required
 target_compile_definitions(${PACKAGE_NAME} PRIVATE HAS_ONNX=1 HAS_RACOMMONS=1)
-
-# 16KB page alignment - MUST be on target for Android 15+ compliance
-target_link_options(${PACKAGE_NAME} PRIVATE -Wl,-z,max-page-size=16384)

  compileSdkVersion = 36
  targetSdkVersion = 36
- ndkVersion = "27.1.12297006"
+ ndkVersion = "28.0.13004108"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Check for other NDK version pins in gradle files
rg -n "ndkVersion|android\.ndkVersion" --type gradle --type kotlin

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 679


🏁 Script executed:

# Search for gradle configuration files
git ls-files | grep -E "(gradle|build)" | head -20

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1133


🏁 Script executed:

# Check the root gradle.properties and build files
cat gradle.properties 2>/dev/null || echo "No root gradle.properties"

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 407


🏁 Script executed:

# Look for CI configuration
git ls-files | grep -E "(\.github|\.circleci|\.gitlab|\.travis|\.jenkins)" | head -20

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 759


🏁 Script executed:

# Read the React Native app build.gradle to confirm inheritance
cat examples/react-native/RunAnywhereAI/android/app/build.gradle | head -30

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 919


🏁 Script executed:

# Check the full context of the file under review
cat examples/react-native/RunAnywhereAI/android/build.gradle | head -20

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 573


🏁 Script executed:

# Search CI workflows for NDK configuration or version support
rg "ndk|NDK" .github/workflows/ -A 2

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 3298


🏁 Script executed:

# Check if Flutter SDK build files reference NDK version explicitly (why 25.x)
head -50 sdk/runanywhere-flutter/packages/runanywhere/android/build.gradle

Repository: RunanywhereAI/runanywhere-sdks

Length of output: 1436


Update CI workflows to use NDK 28.0.13004108 or pin gradle to matching version.

The ndkVersion "28.0.13004108" pin conflicts with CI: all release workflows (kotlin-sdk-release, react-native-sdk-release, flutter-sdk-release, commons-release, backends-release) explicitly install NDK 27.0.12077973. This will cause build failures. Either update the CI workflows to install NDK 28, or revert the gradle pin to 27.0.12077973 to match CI. Additionally, Flutter SDK modules still use NDK 25.2.9519653; confirm whether this example should remain isolated or align with the wider repo toolchain.

🤖 Prompt for AI Agents
In `@examples/react-native/RunAnywhereAI/android/build.gradle` at line 7, The
project gradle now pins ndkVersion = "28.0.13004108" which conflicts with CI
that installs NDK 27.0.12077973 (kotlin-sdk-release, react-native-sdk-release,
flutter-sdk-release, commons-release, backends-release); either update CI to
install NDK 28.0.13004108 or change the ndkVersion back to "27.0.12077973" in
android/build.gradle (look for the ndkVersion property) so the example matches
the CI toolchain, and verify whether Flutter modules that still reference NDK
25.2.9519653 should be aligned as well (update CI or module pins consistently).

@sanchitmonga22 sanchitmonga22 merged commit c99df00 into main Jan 20, 2026
11 of 16 checks passed