The Agents application is an advanced multi-agent AI system that intelligently routes requests to specialized agents and subgraphs for comprehensive assistance. The system features a compound agent architecture with XML-based routing, code execution capabilities, and multi-step research workflows. The Agents application helps users by:
- Providing intelligent assistance through a unified compound agent system.
- Executing code in secure Daytona sandbox environments.
- Performing comprehensive data science workflows with multi-agent collaboration.
- Generating detailed research reports and educational content.
- Conducting advanced financial analysis with real-time data.
- Automatically routing queries to appropriate specialized subgraphs.
- Supporting voice input for natural interaction.
The basic process of the Agents application is described below.
- Enhanced agent processing
  - User submits a query via text or voice input.
  - The compound agent system uses XML-based routing to determine the best approach.
  - Queries are processed through the main agent or routed to specialized subgraphs.
- Intelligent subgraph routing
  - The system automatically determines if queries require specialized subgraph processing.
  - Available subgraphs include: Financial Analysis, Deep Research, Data Science, and Code Execution.
  - Multi-agent collaboration within subgraphs supports complex workflows.
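The routing decision can be pictured as a small XML document that the agent emits and the executor parses. Below is a minimal sketch assuming a hypothetical `<route>` schema and the subgraph names listed above; the actual tag names and identifiers in the codebase may differ.

```python
# Hypothetical sketch of XML-based routing; the <route>/<subgraph>/<query>
# schema is illustrative, not the application's actual format.
import xml.etree.ElementTree as ET

SUBGRAPHS = {"financial_analysis", "deep_research", "data_science", "code_execution"}

def parse_routing_decision(xml_text: str) -> dict:
    """Parse a routing decision emitted by the agent as XML."""
    root = ET.fromstring(xml_text)
    # Fall back to the main agent when no subgraph is specified.
    target = root.findtext("subgraph", default="main_agent").strip()
    query = root.findtext("query", default="").strip()
    if target != "main_agent" and target not in SUBGRAPHS:
        raise ValueError(f"Unknown subgraph: {target}")
    return {"target": target, "query": query}

decision = parse_routing_decision(
    "<route><subgraph>deep_research</subgraph>"
    "<query>Impact of AI on supply chains</query></route>"
)
print(decision["target"])  # deep_research
```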
- Tool and data integration
  - Dynamic tool loading based on user context and permissions.
  - Integration with external APIs, databases, and knowledge sources.
  - Secure code execution and file generation in the Daytona sandbox.
- Real-time response generation
  - WebSocket-based streaming for real-time updates and agent reasoning.
  - Structured responses with metadata for appropriate UI rendering.
  - File artifacts (PDF, HTML, images, CSV) automatically processed and displayed.
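The artifact-display step can be sketched as a simple extension-to-renderer lookup. The renderer names here are illustrative placeholders, not the application's actual UI components.

```python
# Sketch: route a generated file artifact to a UI renderer by extension.
# Renderer names are illustrative assumptions.
from pathlib import Path

RENDERERS = {
    ".pdf": "pdf_viewer",
    ".html": "html_preview",
    ".png": "image_viewer",
    ".jpg": "image_viewer",
    ".csv": "table_viewer",
}

def pick_renderer(filename: str) -> str:
    """Return the inline renderer for an artifact, or fall back to a download link."""
    return RENDERERS.get(Path(filename).suffix.lower(), "download_link")

print(pick_renderer("report.PDF"))  # pdf_viewer
print(pick_renderer("model.pkl"))   # download_link
```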
- Adaptive user interaction
  - Frontend intelligence automatically detects agent behaviors and adapts the UI.
  - Agent reasoning panel shows real-time thought processes.
  - Continuous learning from interactions to improve future responses.
- API access
  - The application provides a REST API for programmatic access to its features, such as deep research and data science workflows.
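A hedged sketch of what a programmatic call might look like. The `/api/research` endpoint path, the payload fields, and the bearer-token header are assumptions for illustration, not the documented API contract.

```python
# Illustrative sketch of building a REST request to the backend.
# Endpoint path, payload shape, and auth header are assumptions.
import json
import urllib.request

def build_research_request(base_url: str, api_key: str, query: str) -> urllib.request.Request:
    payload = json.dumps({"query": query, "workflow": "deep_research"}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/research",     # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_research_request("http://localhost:8000", "sk-example", "EV market trends")
print(req.full_url)  # http://localhost:8000/api/research
```

Send it with `urllib.request.urlopen(req)` once the backend from the installation steps below is running.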
Note: View the Agent Reasoning panel on the right side of the application to see the real-time thought output. The Daytona Sidebar automatically opens when code execution is detected.
Ensure that you install the following prerequisites.
- Python 3.11 (this exact version is required)
- Redis (via Docker or Homebrew)

# Run Redis with Docker
docker run -p 6379:6379 redis/redis-stack:latest

# Install and start Redis with Homebrew on macOS
brew install redis-stack
brew services start redis-stack  # or run redis-stack-server directly
Get the following API keys to set up the Agents application.
- SambaNova API key (required)
- Serper API key for web search (required)
- Exa API key for company data (required)
- Tavily API key for deep research capabilities (required)
- Daytona API key for secure code execution sandbox (required)
- Clerk for authentication (you'll need both publishable and secret keys)
- Hume API key and secret key for voice interaction capabilities (required for voice features)
- LangSmith API key for usage tracking and monitoring (optional)
Note: The system supports multiple LLMs, including SambaNova's DeepSeek V3, Llama 3.3 70B, Llama Maverick, and DeepSeek R1 models.
You can set up and run the application in two ways: a cloud-hosted version or a locally hosted version.
The cloud-hosted version runs on SambaNova Cloud, so there is no need to install dependencies locally.
- Go to the Agents application login page.
- Sign in using Clerk authentication (you will receive an email with login instructions).
- Once you log in, go to Settings and add the API keys.
- Start using the application to enhance workflows, conduct research, execute code, and gain actionable insights.
Follow the steps below to install the frontend for the Agents application.
Note: Run the following commands from the `/frontend/sales-agent-crew/` directory.
- Install the Vue.js dependencies.
yarn install
- Run a local development server.
yarn dev
- Create a production build.
yarn build
Follow the steps below to install the backend for the Agents application.
Note: Run the following commands from the `/backend/` directory.
- Install the Python dependencies. Create and activate a virtual environment and install the project dependencies inside it (the commands below use uv, which creates the .venv for you). Make sure to use Python 3.11.

# Install uv first
pip install uv

cd backend
uv sync
source .venv/bin/activate
- Run the application.
If you are running on macOS, export the following variables first:

export DYLD_LIBRARY_PATH="/opt/homebrew/lib:$DYLD_LIBRARY_PATH"
export PKG_CONFIG_PATH="/opt/homebrew/lib/pkgconfig:$PKG_CONFIG_PATH"

Then run the backend server with the following command:
uvicorn agents.api.main:app --reload --host 127.0.0.1 --port 8000 --no-access-log
Note: For the frontend environment variables, go to the `/frontend/sales-agent-crew/` directory.
- Create a `.env` file with the following variables.

VITE_API_URL=/api
VITE_WEBSOCKET_URL=ws://localhost:8000
VITE_AUTH0_DOMAIN=your_auth0_domain
VITE_AUTH0_CLIENT_ID=your_auth0_client_id
VITE_AUTH0_AUDIENCE=your_auth0_audience
Note: For the backend environment variables, go to the `/backend/` directory.
- Create a `.env` file with the following required variables.

# Authentication
AUTH0_DOMAIN=your-auth0-domain.auth0.com
AUTH0_AUDIENCE=your-auth0-api-audience

# Core API keys (can be user-provided or environment-based)
SERPER_KEY=your_serper_api_key
EXA_KEY=your_exa_api_key

# Research and deep analysis
TAVILY_API_KEY=your_tavily_api_key        # Required for Deep Research subgraph
TAVILY_API_KEY_1=your_second_tavily_key   # Optional: additional key for rotation
TAVILY_API_KEY_2=your_third_tavily_key    # Optional: additional key for rotation

# Code execution
DAYTONA_API_KEY=your_daytona_api_key      # Required for secure code execution sandbox

# Voice interaction
HUME_API_KEY=your_hume_api_key            # Required for voice features
HUME_SECRET_KEY=your_hume_secret_key      # Required for voice features
HUME_CONFIG_ID=your_hume_config_id        # Required: voice configuration ID from the Hume platform
VOICE_MODE_ENABLED=true                   # Set to "false" to disable voice mode features

# OAuth integrations (optional - the app works without these, just without the specific connectors)
GOOGLE_CLIENT_ID=your_google_client_id
GOOGLE_CLIENT_SECRET=your_google_client_secret
NOTION_CLIENT_ID=your_notion_client_id
NOTION_CLIENT_SECRET=your_notion_client_secret
ATLASSIAN_CLIENT_ID=your_atlassian_client_id
ATLASSIAN_CLIENT_SECRET=your_atlassian_client_secret

# System configuration
ENABLE_USER_KEYS=true                     # Set to "false" to use only environment API keys
REDIS_MASTER_SALT=your_redis_encryption_salt  # For encrypting user data

# Optional: tracking and monitoring
LANGSMITH_API_KEY=your_langsmith_api_key  # Optional: usage tracking and monitoring

# Optional: MLflow integration
MLFLOW_TRACKING_ENABLED=false             # Set to "true" to enable MLflow tracking
MLFLOW_TRACKING_URI=your_mlflow_uri       # Required if MLflow is enabled

# Optional: usage tracking
LANGTRACE_API_KEY=your_langtrace_api_key  # Optional: usage tracking
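The `TAVILY_API_KEY_1`/`TAVILY_API_KEY_2` entries hint at key rotation across multiple Tavily keys. A minimal sketch of that idea, collecting whichever keys are set and cycling through them round-robin; the variable names match the `.env` example, but the rotation logic itself is illustrative, not the application's actual implementation.

```python
# Sketch of round-robin rotation over optional Tavily API keys.
# Env var names match the .env example; rotation logic is illustrative.
import itertools
import os

def load_tavily_keys() -> list:
    names = ["TAVILY_API_KEY", "TAVILY_API_KEY_1", "TAVILY_API_KEY_2"]
    # Keep only the keys that are actually configured.
    return [os.environ[n] for n in names if os.environ.get(n)]

# Example values (in practice these come from your .env file).
os.environ["TAVILY_API_KEY"] = "key-a"
os.environ["TAVILY_API_KEY_1"] = "key-b"
os.environ.pop("TAVILY_API_KEY_2", None)

rotation = itertools.cycle(load_tavily_keys())
print(next(rotation), next(rotation), next(rotation))  # key-a key-b key-a
```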
- Start the FastAPI backend server.

# From the project root
cd backend
uvicorn agents.api.main:app --reload --host 127.0.0.1 --port 8000 --no-access-log
- Start the Vue.js frontend development server.

# From the project root
cd frontend/sales-agent-crew/
yarn dev
- Open your browser and navigate to http://localhost:5174/.
You can access the settings modal to configure the API keys mentioned in the prerequisites section. The system supports both user-provided API keys and environment-based configuration.
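The "user-provided or environment-based" key resolution described above can be sketched as a simple precedence rule: prefer a key from the user's settings, fall back to the environment, and let an `ENABLE_USER_KEYS`-style flag force environment-only mode. The function name and settings shape here are illustrative assumptions.

```python
# Sketch of key resolution: user-provided keys first, environment fallback.
# Function name and the user_keys dict shape are illustrative.
import os

def resolve_api_key(name: str, user_keys: dict, enable_user_keys: bool = True):
    """Return the user's key for `name` if allowed and present, else the env var."""
    if enable_user_keys and user_keys.get(name):
        return user_keys[name]
    return os.environ.get(name)

os.environ["SERPER_KEY"] = "env-key"
print(resolve_api_key("SERPER_KEY", {"SERPER_KEY": "user-key"}))         # user-key
print(resolve_api_key("SERPER_KEY", {}))                                 # env-key
print(resolve_api_key("SERPER_KEY", {"SERPER_KEY": "user-key"}, False))  # env-key
```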
- Sign up for a Clerk account at clerk.com.
- Create a new application in the Clerk dashboard.
- Get your publishable key and secret key.
- Configure your JWT issuer URL.
- Add these values to your environment variables as shown above.
To enable voice interaction capabilities, you need to set up a Hume AI account and configure a voice profile:
- Sign up for a Hume AI account at platform.hume.ai.
- Navigate to the API Keys section in your Hume dashboard.
- Create a new API key and secret key.
- Go to the EVI Configurations section to create a voice configuration:
- Create a new configuration
- Configure voice settings (voice type, language, etc.)
- Customize system prompts and behavior as needed
- Save and copy the Configuration ID
- Add the API key, secret key, and configuration ID to your backend `.env` file as shown in the environment variables section above.
For detailed instructions on creating and customizing voice configurations, refer to the Hume AI Documentation.
The application can integrate with Google, Notion, and Atlassian services through optional connectors. Note: These are optional - the app will work without them, just without the specific connector features. If you want to enable these integrations, create OAuth applications for each service:
- Go to the Google Cloud Console.
- Create a new project or select an existing one.
- Navigate to APIs & Services > Credentials.
- Click Create Credentials > OAuth 2.0 Client ID.
- Configure the OAuth consent screen if prompted.
- Select Web application as the application type.
- Add authorized redirect URIs for your application.
- Copy the Client ID and Client Secret.
- Add these to your backend `.env` file as `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET`.
For more details, see Google OAuth 2.0 Setup Guide.
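For orientation, this is roughly the authorization URL a Google OAuth 2.0 web client builds from the Client ID and redirect URI configured above. The values are placeholders; the endpoint and parameter names follow Google's standard web-server OAuth flow.

```python
# Sketch of constructing a Google OAuth 2.0 authorization URL.
# client_id and redirect_uri values are placeholders.
from urllib.parse import urlencode

def google_auth_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",       # authorization-code flow
        "scope": " ".join(scopes),
        "access_type": "offline",      # request a refresh token
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = google_auth_url(
    "your_google_client_id",
    "http://localhost:8000/oauth/google/callback",  # must match an authorized redirect URI
    ["openid", "email"],
)
print(url.split("?")[0])  # https://accounts.google.com/o/oauth2/v2/auth
```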
- Go to Notion Developers.
- Click New integration or Create new integration.
- Fill in the integration details (name, logo, etc.).
- Under Capabilities, configure the permissions your application needs.
- Under OAuth Domain & URIs, add your redirect URIs.
- Copy the OAuth client ID and OAuth client secret.
- Add these to your backend `.env` file as `NOTION_CLIENT_ID` and `NOTION_CLIENT_SECRET`.
For more details, see Notion Authorization Guide.
- Go to the Atlassian Developer Console.
- Click Create > OAuth 2.0 integration.
- Fill in the app details and permissions.
- Add your callback URL under Authorization callback URL.
- Configure the required scopes for your application (Jira, Confluence, etc.).
- Copy the Client ID and Client Secret.
- Add these to your backend `.env` file as `ATLASSIAN_CLIENT_ID` and `ATLASSIAN_CLIENT_SECRET`.
For more details, see Atlassian OAuth 2.0 Guide.
If you want to track usage and monitor the application's performance:
- Sign up for a LangTrace account.
- Add your LangTrace API key to the backend `.env` file.
- The application will then log traces automatically (tracing is disabled by default).
The system is built on a compound agent architecture with intelligent routing and specialized subgraphs:
- EnhancedConfigurableAgent: Main orchestrator with XML-based routing
- XML Agent Executor: Decision-making engine that routes between tools and subgraphs
- Dynamic Tool Loading: User-specific and static tools with caching
- WebSocket Streaming: Real-time execution updates and agent reasoning
- Financial Analysis: Comprehensive financial reporting with crew-based analysis
- Deep Research: Multi-step research workflows with user feedback
- Data Science: End-to-end data science workflows with multiple specialized agents
- Code Execution: Secure Daytona sandbox for code execution and file generation
This application is built with:
- Vue 3 + Composition API with intelligent UI adaptation
- Vite
- TailwindCSS
- Clerk for authentication
- LangGraph for agent workflows
- FastAPI backend with Redis caching
- WebSocket for real-time communication
The stack is designed to offer high performance and scalability for both frontend and backend needs. See the frontend and backend technology stack listed in the table below.
| Category | Technologies used |
|---|---|
| Frontend | Vue 3 (Composition API), Vite, TailwindCSS, Clerk |
| Backend | FastAPI, Redis, LangGraph, WebSocket |
This section describes the agents and feature capabilities of the application.
The core agent system provides:
- Intelligent routing: XML-based decision making for optimal tool and subgraph selection.
- Multi-LLM support: SambaNova's DeepSeek V3, Llama 3.3 70B, and Llama Maverick, plus other models powered by SambaNova.
- Dynamic tool loading: User-specific tools with 5-minute caching and graceful fallbacks.
- Real-time streaming: WebSocket-based execution updates with agent reasoning.
- Secure execution: All code execution happens in isolated Daytona sandbox environments.
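The 5-minute tool cache mentioned above can be sketched as a small per-user TTL cache that reloads tools on a miss or after expiry. The class shape and the loader hook are assumptions for illustration, not the application's actual implementation.

```python
# Sketch of a per-user tool cache with a 5-minute TTL.
# Structure and loader hook are illustrative assumptions.
import time

class ToolCache:
    def __init__(self, ttl_seconds: float = 300.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock              # injectable for testing
        self._entries = {}              # user_id -> (expires_at, tools)

    def get(self, user_id, loader):
        """Return cached tools for a user, reloading on miss or expiry."""
        now = self.clock()
        entry = self._entries.get(user_id)
        if entry and entry[0] > now:
            return entry[1]
        tools = loader(user_id)         # graceful fallback could wrap this call
        self._entries[user_id] = (now + self.ttl, tools)
        return tools

calls = []
def fake_loader(uid):
    calls.append(uid)
    return ["search", "code_exec"]

cache = ToolCache(ttl_seconds=300)
cache.get("u1", fake_loader)
cache.get("u1", fake_loader)  # second call is served from the cache
print(len(calls))  # 1
```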
The General assistant agent helps with:
- Answering basic questions and queries.
- Providing explanations and clarifications.
- Offering technical support.
- Assisting with general research tasks.
- Quick factual information about companies, products, and current events.
Example queries for general assistance are listed below.
- "What's the difference between supervised and unsupervised learning?"
- "Can you explain how REST APIs work?"
- "What are the best practices for data visualization?"
- "How do I optimize database queries?"
- "What is Tesla's current stock price?"
- "What's the latest news on Apple?"
The application uses the Daytona Code Sandbox and Data Science Subgraph for:
- Secure code execution: Python code runs in isolated Daytona sandbox environments.
- File generation: Automatic creation and display of PDFs, HTML, images, CSV files.
- Data science workflows: Multi-agent collaboration for complex data analysis projects.
- Machine learning: Model development, predictive analytics, statistical modeling.
- Data visualization: Automatic chart and graph generation.
- Hypothesis testing: Scientific approach to data analysis with validation.
Example queries for code execution and data science are listed below.
- "Create a machine learning model to predict customer churn using this dataset"
- "Generate a Python script to analyze sales trends and create visualizations"
- "Build a statistical model to analyze the relationship between variables"
- "Create a data cleaning pipeline for this CSV file"
- "Develop a predictive analytics dashboard with interactive charts"
- "Perform hypothesis testing on this experimental data"
For research queries, the application uses the Deep Research Subgraph to:
- Generate comprehensive multi-perspective research reports.
- Create structured educational content with citations.
- Conduct multi-step research workflows with user feedback.
- Provide in-depth analysis on complex topics.
- Include relevant sources and academic references.
Example queries for research and content generation are listed below.
- "Generate a comprehensive report on quantum computing applications in cryptography"
- "Create an in-depth analysis of CRISPR gene editing in modern medicine"
- "Research the relationship between AI and neuromorphic computing with sources"
- "Provide a thorough investigation of blockchain's impact on supply chain management"
- "Analyze the latest developments in fusion energy research with academic citations"
- "Create a detailed market research report on the EV industry trends"
For financial queries, the application uses the Financial Analysis Subgraph to:
- Analyze company financial performance with comprehensive reporting.
- Track market trends and competitive positioning.
- Evaluate stock performance and valuation metrics.
- Generate investment insights and risk assessments.
- Monitor industry-specific metrics and comparisons.
- Provide real-time financial data and analysis.
Example queries for financial analysis and market research are listed below.
- "Provide a comprehensive financial analysis of Tesla including competitors and risk assessment"
- "Analyze the semiconductor industry performance this quarter with market trends"
- "Compare cloud revenue growth between Microsoft Azure and AWS with technical analysis"
- "Evaluate Apple's financial health considering recent product launches and market position"
- "Create a detailed investment report on major EV manufacturers with risk analysis"
- "Analyze the fintech industry trends and top performers with financial metrics"
The application includes several intelligent automation features:
- Daytona sidebar: Automatically opens when code execution is detected
- Agent type detection: Routes responses to appropriate UI components
- File artifact handling: Inline preview for images, PDFs, HTML, CSV files
- Real-time updates: Live execution logs and status tracking
- XML-based decisions: Sophisticated routing between tools and subgraphs
- Context awareness: System understands query intent and complexity
- Subgraph selection: Automatic determination of best workflow for the task
- Tool orchestration: Dynamic loading and execution of appropriate tools
The application allows you to make queries using audio input. Simply click the microphone icon to start speaking. It also offers:
- Automatic speech-to-text transcription
- Hands-free operation for convenience
Additional features of the application are listed below.
- Secure API key management: encrypted storage with user-specific keys
- Chat history tracking: persistent conversation storage with Redis
- Results export functionality: download generated files and reports
- Real-time agent reasoning: live thought process display
- Secure code execution: isolated Daytona sandbox environments
- Multi-format file support: PDF, HTML, CSV, and images with inline preview
- Real-time financial data: live market data and analysis
- Multi-agent collaboration: specialized agents working together
- WebSocket streaming: real-time updates and responsive UI
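The WebSocket streaming feature can be pictured as a stream of JSON events consumed on the client side. Below is a sketch assuming an illustrative event schema with "type" and "data" fields; this is not the application's actual wire format.

```python
# Sketch of consuming streamed agent events; the {"type", "data"} event
# schema is an illustrative assumption, not the real wire format.
import json

def iter_events(messages):
    """Yield (type, data) pairs from raw JSON messages, skipping malformed ones."""
    for raw in messages:
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue                     # tolerate malformed frames
        yield event.get("type", "unknown"), event.get("data")

stream = [
    '{"type": "reasoning", "data": "Routing to data_science subgraph"}',
    'not json',
    '{"type": "artifact", "data": {"file": "chart.png"}}',
]
types = [t for t, _ in iter_events(stream)]
print(types)  # ['reasoning', 'artifact']
```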
- Configure API keys
  - Open Settings.
  - Enter your API keys (SambaNova, Serper, Exa, Tavily, Daytona).
  - Keys are securely encrypted and stored per user.
- Start using the system
  - Type your query or use voice input.
  - The system automatically determines the best approach (main agent vs. specialized subgraphs).
  - Watch real-time agent reasoning in the sidebar.
- Code execution and data science
  - Upload datasets or ask for code generation.
  - The Daytona sidebar automatically opens for code execution.
  - Generated files (PDFs, charts, data) appear with inline preview.
- View and export results
  - Research reports are displayed as structured documents.
  - Financial analysis is shown with charts and metrics.
  - Export functionality is available for all generated content.
  - Save important conversations and artifacts.
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a new pull request

