PhoneClaw uses OpenRouter to access various AI models for natural language processing, code generation, and automation tasks. This guide explains how to configure and switch between models.

Available Models

PhoneClaw comes with two pre-configured models that you can select from within the app:

Gemini 2.0 Flash

  • Model ID: google/gemini-2.0-flash-001
  • Provider: Google
  • Best For: Fast response times, general-purpose tasks
  • Cost: Paid (via OpenRouter)

Llama 4 Maverick (Free)

  • Model ID: meta-llama/llama-4-maverick:free
  • Provider: Meta
  • Best For: Cost-effective automation, experimentation
  • Cost: Free tier available

Selecting a Model in the App

You can change the AI model directly from the PhoneClaw interface:
  1. Launch PhoneClaw on your Android device
  2. Look for the “Model:” button in the main interface
  3. Tap the button to open the model selection dialog
  4. Select your preferred model from the list
  5. The app will confirm your selection with voice feedback
Your model selection is saved automatically and persists across app restarts.

Model Configuration Details

The model selection is implemented in MainActivity.kt:1229-1284:
private fun getOpenRouterModelOptions(): List<OpenRouterModel> {
    return listOf(
        OpenRouterModel("google/gemini-2.0-flash-001", "Gemini 2.0 Flash"),
        OpenRouterModel("meta-llama/llama-4-maverick:free", "Llama 4 Maverick (Free)")
    )
}

Storage Location

The selected model is stored in SharedPreferences:
  • Key: openrouter_model
  • Storage: AgentsBasePrefs preference file
  • Default: google/gemini-2.0-flash-001
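As a rough sketch, reading and writing this preference might look like the following. The helper names are illustrative, not PhoneClaw's actual API; the file name, key, and default value come from the details above.

```kotlin
import android.content.Context

// Illustrative helpers only; PhoneClaw's actual accessors may differ.
private const val PREFS_FILE = "AgentsBasePrefs"
private const val KEY_MODEL = "openrouter_model"
private const val DEFAULT_MODEL = "google/gemini-2.0-flash-001"

fun readSelectedModel(context: Context): String =
    context.getSharedPreferences(PREFS_FILE, Context.MODE_PRIVATE)
        .getString(KEY_MODEL, DEFAULT_MODEL) ?: DEFAULT_MODEL

fun saveSelectedModel(context: Context, modelId: String) {
    context.getSharedPreferences(PREFS_FILE, Context.MODE_PRIVATE)
        .edit()
        .putString(KEY_MODEL, modelId)
        .apply()  // asynchronous write; persists across app restarts
}
```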

Adding Custom Models

To add additional OpenRouter models, you can modify the source code:
  1. Open app/src/main/java/com/example/universal/MainActivity.kt
  2. Locate the getOpenRouterModelOptions() function (around line 1229)
  3. Add new models to the list:
private fun getOpenRouterModelOptions(): List<OpenRouterModel> {
    return listOf(
        OpenRouterModel("google/gemini-2.0-flash-001", "Gemini 2.0 Flash"),
        OpenRouterModel("meta-llama/llama-4-maverick:free", "Llama 4 Maverick (Free)"),
        // Add your custom model here
        OpenRouterModel("anthropic/claude-3-opus", "Claude 3 Opus"),
        OpenRouterModel("openai/gpt-4-turbo", "GPT-4 Turbo")
    )
}
  4. Rebuild and reinstall the app
Make sure the model IDs exactly match OpenRouter’s naming. Check OpenRouter’s model documentation for the current list of available models.

OpenRouter API Integration

PhoneClaw communicates with OpenRouter’s API to generate code and responses. The integration includes:
  • HTTP Client: OkHttp (configured in dependencies)
  • Request Timeout: 30 seconds for connection and read operations
  • Content Type: JSON (application/json)
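A client matching the settings listed above might be configured as follows. This is a sketch of the configuration, not PhoneClaw's actual client setup:

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import java.util.concurrent.TimeUnit

// JSON content type for request bodies
val JSON = "application/json".toMediaType()

// OkHttp client with the 30-second timeouts described above
val client: OkHttpClient = OkHttpClient.Builder()
    .connectTimeout(30, TimeUnit.SECONDS)  // connection timeout
    .readTimeout(30, TimeUnit.SECONDS)     // read timeout
    .build()
```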

API Request Flow

  1. User provides voice command or text input
  2. PhoneClaw captures current screen context
  3. Request is sent to OpenRouter with selected model
  4. Generated code/response is executed or displayed
  5. Results are tracked in generation history
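The request in step 3 follows OpenRouter's OpenAI-compatible chat-completions format. A minimal sketch of assembling the body (the message layout and escaping helper here are illustrative; field names follow OpenRouter's API):

```kotlin
// Minimal JSON escaping; a real app should use a JSON library instead
// of string assembly to handle all edge cases robustly.
fun escapeJson(s: String): String =
    s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n")

// Build a chat-completions request body carrying the selected model,
// the captured screen context, and the user's command.
fun buildRequestBody(modelId: String, userPrompt: String, screenContext: String): String = """
    {
      "model": "${escapeJson(modelId)}",
      "messages": [
        {"role": "system", "content": "Screen context: ${escapeJson(screenContext)}"},
        {"role": "user", "content": "${escapeJson(userPrompt)}"}
      ]
    }
""".trimIndent()
```

The body is POSTed to `https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <key>` header.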

Model Performance Considerations

Response Time

  • Gemini 2.0 Flash: Optimized for speed (~1-3 seconds)
  • Llama 4 Maverick: May be slower but cost-effective

Token Limits

Be aware of token limits when sending large screen captures or complex prompts. OpenRouter enforces per-model token limits.
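One common guard is to trim large screen-capture text before sending it. The sketch below uses a rough chars-per-token heuristic (about 4 characters per token for English text; actual tokenization varies by model):

```kotlin
// Trim text to a character budget derived from a token budget,
// using a ~4 chars/token heuristic. Appends a marker when truncated.
fun truncateToTokenBudget(text: String, maxTokens: Int, charsPerToken: Int = 4): String {
    val maxChars = maxTokens * charsPerToken
    return if (text.length <= maxChars) text
    else text.take(maxChars) + "\n[truncated]"
}
```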

Rate Limiting

OpenRouter may rate-limit requests based on your account tier. Implement appropriate delays between automation tasks.
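A simple way to space out requests is exponential backoff when the API returns HTTP 429. The delay schedule below is an illustrative choice, not PhoneClaw's built-in behavior:

```kotlin
// Illustrative backoff schedule: 1s, 2s, 4s, ... capped at 30s.
fun backoffMillis(attempt: Int, baseMillis: Long = 1_000L, capMillis: Long = 30_000L): Long {
    val delay = baseMillis shl attempt.coerceAtMost(20)  // 2^attempt, clamped to avoid overflow
    return delay.coerceAtMost(capMillis)
}

// Usage between automation tasks (pseudocode):
// repeat(maxRetries) { attempt ->
//     if (sendRequest()) return
//     Thread.sleep(backoffMillis(attempt))
// }
```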

Troubleshooting

Model Not Responding

  1. Check your OpenRouter API key is valid
  2. Verify you have sufficient API credits
  3. Ensure your device has internet connectivity
  4. Check the logs for API error messages

Model Selection Not Saving

  1. Ensure the app has storage permissions
  2. Check that SharedPreferences is initialized correctly
  3. Try clearing app data and reconfiguring

Using Free Models

The Llama 4 Maverick free model is great for testing but may have usage limits. Monitor your OpenRouter dashboard for quota information.

Generation History

PhoneClaw tracks all model-generated code in the Generation History tab:
  • View past commands and generated code
  • Timestamp for each generation
  • Unique ID for tracking
  • Stored locally in SharedPreferences
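A history entry could be modeled as below. The field names are illustrative and PhoneClaw's actual record shape may differ, but the list above implies a command, generated code, timestamp, and unique ID:

```kotlin
import java.util.UUID

// Illustrative record for one generation-history entry.
data class GenerationEntry(
    val command: String,                                // the user's command
    val generatedCode: String,                          // model output
    val id: String = UUID.randomUUID().toString(),      // unique ID for tracking
    val timestampMillis: Long = System.currentTimeMillis()
)
```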

Best Practices

  1. Start with Free Models: Test your automation workflows with free models before committing to paid options
  2. Monitor Usage: Keep track of your OpenRouter usage and costs
  3. Match Model to Task: Use faster models for simple tasks, more powerful models for complex automation
  4. Test Generation Quality: Compare different models to find the best balance of speed, cost, and accuracy

Environment Variables

For advanced users deploying PhoneClaw programmatically, you can set the model via build configuration:
// In app/build.gradle.kts, inside android { defaultConfig { ... } }
// (on AGP 8+ this also requires buildFeatures { buildConfig = true })
buildConfigField("String", "DEFAULT_MODEL", "\"google/gemini-2.0-flash-001\"")

Next Steps